title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
On the Ability of Developers' Training Data Preservation of Learnware | Accept (poster) | Summary: The authors theoretically analyze the properties of the learnware paradigm. In the learnware paradigm, a model developer can provide their trained models for other developers to use. To enable reuse, the developer provides, along with the model, a specification that adequately represents the model's training data. This allows developers looking for models to find those that are most useful for their tasks of interest. Importantly, this specification should preserve the privacy of the original training data of the model.
While the reduced kernel mean embedding (RKME) specification has been proposed in the literature, a theoretical analysis that guarantees the protection of the model's training data is missing. The authors prove that RKME can simultaneously have the following three desirable properties:
* It does not contain any of the original training data points
* It is robust against inference attacks
* It still preserves sufficient information about the training data for effective use as a learnware specification.
Strengths: To the best of my knowledge, the results provided by the authors are novel. While I am an expert in neither learnware systems nor reproducing kernels, the results provided and the tools used in the analysis are non-trivial. The result should be of high significance to the learnware community, especially since disclosing a model specification may carry risk if there are no formal guarantees. In terms of writing, the authors introduce the learnware problem and their contribution in a clear manner in Sections 1 and 2. The figures presenting the trade-offs between the different choices of $m$ are also very helpful for readers who may not be able to follow the theoretical results.
Weaknesses: I think the clarity of Sections 3 and 4 can be significantly improved, so that they can be more approachable to a broader audience.
For Section 3, the authors present core results that are the basis of Theorems 3.4 and 3.5 but the connection to these theorems is not particularly clear. I would advise the authors to first explain the proof sketch and then present the key lemmas and how they connect to the proof sketch. See also the questions section for more.
For Section 4, I understand that the setting is even more complicated compared to Section 3 but providing some more intuition behind Theorem 4.2, especially the parts that are not already covered in Section 3, would also be helpful.
Technical Quality: 3
Clarity: 2
Questions for Authors: I am a bit confused with regards to Theorem 3.4: Is Theorem 3.4 proven for the $\delta=0$ case or is it proven for a specific $\delta$? Intuitively, the overlap of a continuous distribution and a discrete distribution of synthetic data should be zero for $\delta=0$ regardless of the number of discrete synthetic points. So I feel I am missing something.
Also I am still not sure which is the $\delta$ chosen for Theorem 3.5. Can you explain this more?
I am a bit confused about the linkability privacy game. It seems like the game can technically be split into two games, one where $b=0$ and one where $b=1$. Given that the adversary knows $b$, these two subgames are completely independent. In addition, the subgame of $b=0$ is trivial because the adversary trivially knows the answer. I am thus unsure of the value of having the $b=0$ subgame at all. I guess my question here is: why is the random $b$ introduced in the game?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews!
---
Q1: I am a bit confused with regards to Theorem 3.4: Is Theorem 3.4 proven for the $\delta=0$ case or is it proven for a specific $\delta$? Intuitively, the overlap of a continuous distribution and a discrete distribution of synthetic data should be zero for $\delta=0$ regardless of the number of discrete synthetic points. So I feel I am missing something.
A1: Thank you for your thorough review of Theorem 3.4 and for raising specific questions! Indeed, Theorem 3.4 is proven for the $\delta = 0$ case, but this is not a trivial case:
In our paper, the consistency risk is defined as $R\_C(\mathcal{P}) \triangleq \mathbb{E}\_{D \sim \mathcal{P}^n}\left(\mathbb{I}\_{Z\_\delta \cap D\_\delta \neq \emptyset}\right)$. Therefore, we are concerned not with the overlap of a continuous distribution and a discrete distribution of synthetic data, but with the probability that a dataset sampled from the continuous distribution overlaps with the synthetic data generated from that dataset. The value of $R_C(\mathcal{P})$ at $\delta = 0$ strongly depends on the properties of the method used to generate the synthetic data. For example, if we replace the commonly used Gaussian kernel in RKME, $k\left(\boldsymbol{x}_1, \boldsymbol{x}_2\right)=\exp \left(-\gamma\left\|\boldsymbol{x}_1-\boldsymbol{x}_2\right\|_2^2\right)$, with a Laplacian kernel, $k\left(\boldsymbol{x}_1, \boldsymbol{x}_2\right)=\exp \left(-\gamma\left\|\boldsymbol{x}_1-\boldsymbol{x}_2\right\|\right)$, it can be shown through analysis of its nonlinear equations that, for any dataset, the synthetic data generated by RKME will always contain points from the original dataset. In this case, even if $\delta = 0$, the $R_C(\mathcal{P})$ for RKME with the Laplacian kernel would be $1$. Therefore, the proof in our paper that $R_C(\mathcal{P})$ for RKME with the Gaussian kernel is $0$ when $\delta = 0$ is non-trivial and relies on the analysis of nonlinear equations involving the Gaussian kernel.
In the paper, we separately prove the $\delta = 0$ case because it has some geometric intuition and provides insights for the proof of the $\delta > 0$ case. In future versions, we will make the significance and position of this example clearer in the text.
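To make the contrast between the two kernels concrete, here is a minimal sketch of both functions as defined in the rebuttal above (the $\gamma$ value and the use of the $L_1$ norm in the Laplacian kernel are our own illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(x1, x2, gamma=0.1):
    # k(x1, x2) = exp(-gamma * ||x1 - x2||_2^2), the kernel the paper analyzes
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def laplacian_kernel(x1, x2, gamma=0.1):
    # k(x1, x2) = exp(-gamma * ||x1 - x2||_1); per the rebuttal, RKME with this
    # kernel always retains points from the original dataset
    return np.exp(-gamma * np.sum(np.abs(x1 - x2)))

x = np.array([0.0, 0.0])
y = np.array([1.0, 2.0])
print(gaussian_kernel(x, y))   # exp(-0.1 * 5)
print(laplacian_kernel(x, y))  # exp(-0.1 * 3)
```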
---
Q2: Also I am still not sure which is the δ chosen for Theorem 3.5. Can you explain this more?
A2: Thanks for your valuable feedback!
A detailed discussion on the selection of $\delta$ can be found in Appendix D. Here, we provide a simple understanding and discussion. The meaning of $\delta$ is that we want the synthetic data to differ from the real data by at least $\delta$ to ensure that the real data does not appear in the synthetic data. Therefore, $\delta$ is related to the scale of the data: when the overall data scale is small, a $\delta$ larger than the data scale cannot be guaranteed. Additionally, $\delta$ is related to the number of real data points. For example, when there are many real data points, they become very dense in $\mathbb{R}^d$, and the distances between them become very small; in this case, the feasible $\delta$ also becomes correspondingly smaller. Hence, in the proof of Theorem 3.5, $\delta$ is chosen as $\frac{R}{n}$, where $R$ is the upper bound of the data and $n$ is the number of samples.
A simpler and more relaxed form of this selection is to directly choose the smallest distance between distinct samples in the sample set, as it is easily shown to be less than $\frac{R}{n}$. Moreover, this approach provides users with a clear metric: they can use the smallest sample distance in their dataset to measure the strength of privacy protection we can offer. Therefore, we have provided this easier-to-understand form in the paper. In future versions, we will provide a more detailed explanation of the selection of $\delta$ in Section 3.
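As a rough illustration, the two candidate choices for $\delta$ discussed above can be computed as follows (a sketch under our own assumptions about data representation, not the paper's exact procedure):

```python
import numpy as np

def delta_candidates(X):
    """Compare the theoretical choice delta = R/n with the smallest
    pairwise distance in the sample set (X: n x d array)."""
    n = len(X)
    R = np.max(np.linalg.norm(X, axis=1))   # upper bound on sample norms
    theoretical = R / n                      # choice used in the proof of Theorem 3.5
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    min_pairwise = dists[np.triu_indices(n, k=1)].min()  # user-facing metric
    return float(theoretical), float(min_pairwise)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
print(delta_candidates(X))  # (0.75, 1.0)
```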
---
Q3: I am a bit confused about the linkability privacy game. ... why is the random $b$ introduced in the game?
A3: Thanks for your feedback!
The cases $b=0$ and $b=1$ indeed represent two completely independent subgames, and the random bit $b$ is a common and necessary setup in the privacy game. The random bit $b$ serves to indicate to the adversary whether the dataset provided is the real dataset. Without the random bit $b$, the adversary would still attempt to attack using some method even if they obtained the original dataset. Many membership inference attacks (e.g., black-box attacks) use a confidence estimation approach, and even with the original dataset, they cannot always provide a completely correct conclusion. In the linkage privacy game, we naturally want the adversary to be able to accurately determine conclusions if they have the original dataset. Therefore, we include a random bit $b$ to be sent to the adversary along with the dataset.
Moreover, the random bit $b$ plays a crucial role in the subsequent inference privacy game, as obtaining the original dataset does not necessarily lead to correct attribute inference (there could be samples with completely overlapping attributes except for the target attribute).
---
Q4: I think the clarity of Sections 3 and 4 can be significantly improved, so that they can be more approachable to a broader audience.
A4: Thanks for the constructive feedback!
We will address the two points mentioned by the reviewer in future versions as follows:
1. We will revise the order of Section 3, presenting the proof sketch first to give readers a preliminary understanding, followed by a detailed proof according to its logic.
2. We will provide more intuition in Sections 3 and 4, and move some of the more complex parts to the appendix as appropriate.
---
Finally, we take this opportunity to sincerely thank you for the careful review; your suggestions are very insightful and important for further improving the paper. We are happy to provide further clarifications if needed in the following author-reviewer discussions.
---
Rebuttal Comment 1.1:
Title: The authors have answered my questions
Comment: The authors have answered my questions and the justifications provided would help significantly improve the paper if they are indeed included. Also the extensive experimental evaluations are appreciated on top of the paper's theoretical contributions. I thus increase my score to a 7.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 4E2V,
Thank you very much for your kind reply! We will revise our paper according to the constructive reviews.
Best
Authors | Summary: The paper presents the "Reduced Kernel Mean Embedding (RKME)" specification, which represents a model's capabilities while ideally preserving the privacy of the original training data. The paper provides a theoretical analysis and proves that the RKME specification can protect the training data against common inference attacks and maintain its utility in the learnware search process.
Strengths: * This paper aims to resolve the crucial data privacy challenge while enabling the effective reuse of pre-trained models under the learnware setting.
* This paper provides a comprehensive theoretical framework to prove the efficacy of RKME in preserving privacy. The proofs are detailed and robust, offering a strong theoretical foundation for the claims about data privacy and security against inference attacks.
* The paper also discusses the practical implementation of the RKME specification in learnware systems.
Weaknesses: * The paper focuses on theoretical proofs and lacks extensive empirical evidence to support the effectiveness of the RKME specification in real-world scenarios.
* The analysis primarily hinges on the assumption that the RKME specification works optimally with certain types of data distributions and kernel functions.
Technical Quality: 3
Clarity: 3
Questions for Authors: I do not have particular questions as I am unfamiliar with the field.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have included discussions about potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews!
---
Q1: The paper focuses on theoretical proofs and lacks extensive empirical evidence to support the effectiveness of the RKME specification in real-world scenarios.
A1: Thanks for the feedback! We are not entirely sure whether the "effectiveness of the RKME specification in real-world scenarios" you mentioned refers to the effectiveness of RKME in Learnware search or its effectiveness in protecting privacy while performing search tasks well. Therefore, we will provide a detailed discussion on both aspects below:
1. **For Learnware search**, in current learnware research, almost all data types that use RKME with Gaussian kernel as a specification have achieved very good results. [Liu et al., 2023] demonstrated through experiments that for tabular data, using RKME with Gaussian kernel for Learnware search significantly outperforms searching in the learnware market without a specification. Similarly, [Wu et al., 2023] have shown that image data, after feature extraction, can use RKME with Gaussian kernel and achieve good experimental results. It is important to note that the core contribution of our work is providing theoretical guarantees that RKME can protect user data privacy, while the effectiveness of RKME in Learnware model search is not within the scope of discussion of this paper.
2. **For the effectiveness of RKME in protecting data privacy in real-world scenarios**, we carried out detailed experiments, the results of which can be found in the global response to all reviewers. Additionally, we have provided an analysis of our experimental results in the global response to all reviewers. It is worth mentioning that our experimental results match our theory well and demonstrate that RKME effectively protects data privacy while performing Learnware search tasks efficiently.
If you still have any concerns, we are looking forward to addressing any further questions during the reviewer-author discussion period!
---
Q2: The analysis primarily hinges on the assumption that the RKME specification works optimally with certain types of data distributions and kernel functions.
A2: Thanks for the feedback! Our theoretical analysis does not make non-trivial assumptions about data distributions, and our analysis is also applicable to a broad class of kernel functions. We chose the Gaussian kernel for our proofs because it is widely recognized as the best kernel for the MMD (maximum mean discrepancy) distance. Here is a detailed explanation:
1. Note that in proving Theorem 3.4, we show that it holds for any continuous distribution. In the subsequent remark following Theorem 3.4, on line 165, we further discuss the applicability of Theorem 3.4 to discrete distributions. The conclusion of Theorem 3.5 is derived based on Theorem 3.4, so the assumptions are consistent. Therefore, our theory is not limited to specific types of data distributions.
2. In learnware research, one of the most representative distribution metrics, MMD (maximum mean discrepancy), is used to calculate distribution distance. The MMD distance with a characteristic kernel is commonly used in various scenarios, such as applying MMD in GANs on image data, and the Gaussian kernel is the most commonly used. Thus, following the tradition in most fields that use MMD as the distribution metric, we mainly conduct the theoretical analysis on RKME with the Gaussian kernel.
However, our analysis method is highly applicable to kernels with *nonrationality* and *analyticity*. These properties determine whether the synthetic data in RKME are locally homeomorphic to the manifold of the sample space. Therefore, kernels that satisfy these properties (such as the Sigmoid kernel $K(x, y)=\tanh \left(\gamma x^T y+r\right)$, the Cauchy kernel $K(x, y)=\frac{1}{1+\gamma\|x-y\|^2}$, etc.) can undergo a similar analysis and reach the same conclusions as Proposition 3.2 and Proposition 3.3.
For these kernels, the bound appearing in Theorem 3.5 should be estimated differently for each kernel. For the Gaussian kernel, we used linearization and isoperimetric inequalities in our proof to estimate the sub-manifold within the manifold formed by the samples, which is also where the difficulty of this problem lies. For other kernels, similar conclusions can be reached after selecting appropriate estimation methods.
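For reference, the empirical MMD with a Gaussian kernel that underlies RKME-based search can be sketched as follows (a minimal biased estimator; function names and $\gamma$ are our own illustrative choices):

```python
import numpy as np

def gaussian_gram(A, B, gamma=0.1):
    # Pairwise Gaussian kernel matrix: k(a, b) = exp(-gamma * ||a - b||_2^2)
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd_squared(X, Z, gamma=0.1):
    # Biased empirical estimate of MMD^2 between a dataset X (n x d)
    # and a (possibly much smaller) synthetic set Z (m x d)
    return (gaussian_gram(X, X, gamma).mean()
            - 2 * gaussian_gram(X, Z, gamma).mean()
            + gaussian_gram(Z, Z, gamma).mean())
```

In RKME, the synthetic points are chosen to drive this quantity toward zero while keeping $m \ll n$, which is what makes the non-convex optimization analyzed above hard.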
---
We greatly appreciate your feedback! We hope our responses have clarified our work. If you still have any concerns or feel unfamiliar with certain aspects, we are looking forward to addressing any further questions during the reviewer-author discussion period! | Summary: The paper analyzes the data-preserving properties of Learnware, an interesting idea involving a marketplace of pretrained ML models. In Learnware, new inference tasks are matched to ML models capable of solving that task without any raw data being shared. Rather, the method leverages RKME to construct a smaller, synthetic representation of the model's distribution over inputs and outputs. In this work, the paper explores whether Learnware is secure against data privacy attacks (linkage, attribute inference) when using the Gaussian kernel and various assumptions on the data. More compact representations are shown to be harder to attack. However, this reduces model search (retrieval) quality, inducing a tradeoff.
Strengths: + Analysis of the ability of Learnware to resist privacy attacks against the dataset used to train the model makes the Learnware ecosystem more robust. Demonstrating the tradeoff between privacy and search (retrieval) quality is an intuitively clear result.
+ The theoretical results and analyses seem novel to me, as far as I can tell. A brief search didn't turn up anything relevant. (However, this is outside my area of expertise so I'm unable to assess validity.)
Weaknesses: - The paper analyzes the privacy-preserving properties of Learnware. However, I remain unsure about the benefits of the core Learnware system. Reading through the recent references ("Learnware: Small models do big"), I'm left with many questions which are not really addressed in any of the papers. I don't see how Learnware is better than the existing model sharing infrastructure (model hub, data and model cards, benchmark results, open-source training and inference code). The existing ML model sharing infrastructure is widely used already and doesn't require the new user to even label any data first. Please see the questions below.
- The Learnware ecosystem seems like a very niche area. Without additional details of system usage, it becomes difficult to assess the impact of contributions in this paper.
- I'm not really equipped to comment on the quality of the theoretical analyses. That said, the paper could do a better job of describing how the analyses build on and fit into the larger body of work on related tasks.
- Experiments exploring the tradeoff between data linkage protection and search performance would have been nice to have. Without these, I'm again left wondering if the existing ML model sharing infrastructure (which does not have this issue) is indeed better.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is the Learnware market currently operational in the wild? Please provide additional details into which parts of the Learnware ecosystem are actually in use at this time vs hypothesized. For example, scale of daily uploads & downloads?
- Please describe how the Learnware approach outperforms existing ML model-sharing infrastructure (model hubs, data cards, model cards, benchmarks, open source training and inference routines). For example, why is downloading an image segmentation model checkpoint off a model hub after reading through its data and model cards, benchmark and performance reviews, and trying it out in the online UI insufficient? How well does Learnware's "anchor learnwares" mechanism work in this situation compared to the approach above?
- Which of the theoretical analyses or results included in this paper are novel compared to prior geometric analyses or privacy works? Please include references, if any, for the analytical techniques in the paper.
- Is it possible to experimentally explore the data linkage vs search quality tradeoffs? How does the search quality degradation affect the user experience of trying to find an appropriate model?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the constructive reviews! We provide detailed responses below, and hope the reviewer could reassess the significance of our results. We are looking forward to addressing any further question in the reviewer-author discussion period.
---
Q1: Is the Learnware market currently operational in the wild? ... For example, scale of daily uploads & downloads?
A1: Thanks for the feedback!
Yes. Recently, the first learnware dock system, *Beimingwu*, has been built and released [1,2]. With a novel specification-based architecture, the system can significantly streamline the process of building machine learning models for new tasks, by automatically identifying useful models. The system provides implementations and extensibility for the entire process of the learnware paradigm, including the submission, usability testing, organization, identification, deployment, and reuse of learnwares. The core engine is also released as *learnware* package [3].
The Beimingwu system is currently primarily serving as a research platform for learnware studies. Although it is only open to the academic community, more than 500 researchers from over **150 universities** have already registered.
[1] Beimingwu: A Learnware Dock System. KDD 2024.
[2] Website: https://bmwu.cloud/
[3] Learnware package: https://learnware.readthedocs.io/
---
Q2: Please describe how the Learnware approach outperforms existing ML model-sharing infrastructure ... reading through its data and model cards, benchmark and performance reviews, and trying it out in the online UI insufficient?
A2: Thanks for the detailed feedback!
The biggest advantage of Learnware compared to existing ML model-sharing infrastructures lies in **model search with specification**. As you mentioned in your example, reading through data and model cards, benchmarks, and performance reviews can certainly help determine a good model, but when there are many models in the library, it is difficult to try each one individually, which necessitates model search. Existing ML model-sharing infrastructures can generally only perform keyword or language-description searches. Since they cannot access specific user task data, such searches are imprecise and require users to try the search results themselves.
In Learnware, besides using language description information (which we refer to as **semantical specification**), we also use the RKME-based **statistical specification** mentioned in this paper to describe the user's specific task for precise model localization. This allows users to search for models in the market that are more suitable for their specific tasks.
To the best of our knowledge, the learnware paradigm is the first to formally propose building a large model platform consisting of numerous high-performing models with specifications, enabling users to easily leverage existing models to solve their tasks. Recently, utilizing large model platforms to solve new learning tasks has attracted rapidly increasing attention, notably on the Hugging Face platform, which hosts over half a million models; identifying truly helpful models thus becomes more and more difficult. Based on a novel statistical-specification-based architecture, the learnware system aims to automatically identify and assemble high-performing models suitable for user tasks, with no need for extensive data and expert knowledge, while preserving data privacy, as proved in this paper.
---
Q3: The paper could do a better job of describing how the analyses build on and fit into the larger body of work on related tasks. Which of the theoretical analyses are novel compared to prior geometric analyses or privacy works?
A3: Thanks for the feedback. To the best of our knowledge, our work is the first to use geometric analyses to study privacy. The theories and related analytical methods are completely original and provide significant contributions to the field of privacy:
1. The problem we mainly focus on is the privacy of synthetic data brought by data compression. However, related work (e.g., data distillation/data condensation) mainly explores privacy properties through experiments without theoretical guarantees. This is because the most mainstream privacy theoretical analysis framework, differential privacy (DP), is difficult to apply to compressed synthetic datasets. Thus, theoretical analysis methods for the privacy of compressed data have always been lacking. Our paper provides the first theoretical analysis attempt for RKME, a special form of data compression.
2. The essential difficulty in analyzing the privacy of RKME-compressed synthetic datasets lies in the fact that RKME is the optimal solution to a non-convex nonlinear system of equations involving a Gaussian kernel. Finding the optimal solution of non-convex problems involving Gaussian kernels has been an open problem. However, through geometric analyses, we found a way to analyze the properties of the solution without solving this non-convex nonlinear system. Our newly proposed technique may prove highly useful in analyzing the properties of solutions to specific forms of non-convex problems.
---
Q4: Is it possible to experimentally ... to find an appropriate model?
A4: Thanks for your constructive feedback! We have carried out detailed experiments, the datasets, settings, and results of which can be found in the global response to all reviewers. We have also analyzed the experimental results in the global response to all reviewers, and the empirical findings align closely with our theories.
---
We once again thank you for your constructive comments! We believe that the Learnware paradigm is extremely important and general, and our research occupies a non-negligible position in both the learnware and privacy communities. We hope the above replies will address your concerns and would appreciate a reevaluation of our paper's score!
---
Rebuttal Comment 1.1:
Title: Re. author response
Comment: I thank the authors for their detailed response to the reviewers. A number of my concerns were about the underlying assumptions and properties of the Learnware system and these have been satisfactorily addressed in the response. At this point, I have no major objections to this paper and have increased my score to reflect this.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer mMLd,
Thank you so much for your kind reply and for adjusting the score! We will revise our paper according to the constructive reviews.
Best
Authors | null | null | Rebuttal 1:
Rebuttal: We have conducted validation experiments to further illustrate the tradeoff between data privacy and search quality in our work. Below, we present the experimental setting and empirical results. All related figures can be found in the accompanying PDF.
---
### **Datasets**
We use six real-world datasets: Postures [Gardner et al. 2014], Bank [Moro, Cortez, and Rita 2014], Mushroom [Wagner, Heider, and Hattab 2021], PPG-DaLiA [Reiss et al. 2019], PFS [Kaggle 2018], and M5 [Makridakis, Spiliotis, and Assimakopoulos 2022]. These datasets cover six real-world scenarios involving classification and regression tasks. Postures involves hand postures, Bank relates to marketing campaigns of a banking institution, and Mushroom contains records of different mushroom species. PPG-DaLiA focuses on heart rate estimation, while PFS and M5 concern sales prediction. These datasets span various tasks and scenarios, varying in scale from **550 thousand to 46 million** instances.
---
### **Learnware market**
We have developed a learnware market prototype comprising about **4000 models** of various types. We naturally split each dataset into multiple parts with different data distributions based on categorical attributes, and each part is then further subdivided into training and test sets. For each training set, we train various models with different model types, including linear models, LightGBM, neural networks with different hyperparameters, and other common models. The number of models in each scenario ranges from 200 to 1500. For evaluation, we use each test set as user testing data, which does not appear in any model’s training data. The various scenarios, partitions, and models ensure that the market encompasses a wide array of tasks and models, significantly enhancing the diversity in the prototype and the authenticity of experimental settings.
---
### **Evaluation**
We explored the tradeoff between data privacy and search ability in the six scenarios mentioned above. For search ability, a natural metric is the performance, on the user's dataset, of the model obtained through the search; good performance indicates that we have found a more suitable model. Therefore, we employ error rate and root-mean-square error (RMSE) as the loss functions for classification and regression scenarios, respectively, collectively referred to as the search error. A **smaller search error** indicates **stronger search ability**.
For data privacy, we calculate the empirical risk for the three types of privacy risks proposed in this paper. Consistency risk is defined as $1-\widehat{R}_C$, where $\widehat{R}_C$ is the sample estimate of $R_C$ in the paper, defined as the number of samples in the generated RKME synthetic data that are close to the original samples in terms of the Euclidean norm. Linkage and Inference risks are defined as $\widehat{R}_L(D) - \widehat{R}_L(Z)$ and $\widehat{R}_I(D) - \widehat{R}_I(Z)$, respectively, where $\widehat{R}_L(D)$, $\widehat{R}_I(D)$, $\widehat{R}_L(Z)$, and $\widehat{R}_I(Z)$ represent the confidence given by a brute-force attack on the dataset $D$ or RKME $Z$. **Smaller privacy risks** indicate **stronger data preservation ability**.
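A minimal sketch of how such an empirical consistency check can be computed (the tolerance and function name are our own assumptions, not the exact experimental code):

```python
import numpy as np

def consistency_hits(Z, D, tol=1e-6):
    # Count synthetic points in Z (m x d) lying within tol, in Euclidean
    # distance, of some original sample in D (n x d)
    dists = np.linalg.norm(Z[:, None, :] - D[None, :, :], axis=-1)
    return int(np.sum(dists.min(axis=1) < tol))

D = np.array([[0.0, 0.0], [1.0, 1.0]])       # original samples
Z = np.array([[0.0, 0.0], [5.0, 5.0]])       # synthetic points
print(consistency_hits(Z, D))  # 1
```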
---
### **Configuration**
For the specification of RKME, we use a Gaussian kernel $k\left(\boldsymbol{x}_1, \boldsymbol{x}_2\right)=\exp \left(-\gamma\left\|\boldsymbol{x}_1-\boldsymbol{x}_2\right\|_2^2\right)$ with $\gamma=0.1$. For all user testing data, we set the number of synthetic data points in RKME, $m$, to $0$, $10$, $50$, $100$, $200$, $500$, and $1000$ to explore the tradeoff between search ability and data privacy (when $m$ is $0$, a model is randomly selected).
Our detailed experimental results can be found in the accompanying PDF. We summarize some representative results in the following table:
| | | Posture | **Bank** | **MR** | **PPG** | **PFS** | **M5** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Search error | $m=10$ | 43.57% | 15.58% | 32.55% | 31.98 | 2.41 | 2.33 |
| | $m=100$ | 23.43% | 14.13% | 16.29% | 20.62 | 2.18 | 2.19 |
| | $m=1000$ | 21.15% | 13.97% | 15.36% | 18.81 | 2.21 | 2.07 |
| Consistency risk | $m=10$ | 0.000‰ | 0.000‰ | 0.000‰ | 0.000‰ | 0.000‰ | 0.000‰ |
| | $m=100$ | 0.001‰ | 0.000‰ | 0.001‰ | 0.003‰ | 0.000‰ | 0.002‰ |
| | $m=1000$ | 0.041‰ | 0.038‰ | 0.039‰ | 0.040‰ | 0.047‰ | 0.036‰ |
| Linkage risk | $m=10$ | 0.01‰ | 0.02‰ | 0.02‰ | 0.03‰ | 0.02‰ | 0.02‰ |
| | $m=100$ | 0.16‰ | 0.18‰ | 0.15‰ | 0.19‰ | 0.14‰ | 0.15‰ |
| | $m=1000$ | 0.30‰ | 0.34‰ | 0.31‰ | 0.32‰ | 0.33‰ | 0.37‰ |
| Inference risk | $m=10$ | 0.01‰ | 0.01‰ | 0.01‰ | 0.02‰ | 0.01‰ | 0.02‰ |
| | $m=100$ | 0.11‰ | 0.10‰ | 0.18‰ | 0.13‰ | 0.12‰ | 0.14‰ |
| | $m=1000$ | 0.42‰ | 0.40‰ | 0.46‰ | 0.41‰ | 0.39‰ | 0.43‰ |
Due to space limitations, we provide a brief analysis of the experimental results as follows:
1. It can be observed that as the number of synthetic data points $m$ in RKME increases, the search error decreases. This indicates that more synthetic data leads to better search ability. At the same time, as $m$ increases, all three privacy risks also increase, indicating that more synthetic data may lead to greater privacy risks.
2. It is noted that as the number of synthetic data points $m$ in RKME increases, the search error initially decreases rapidly, but the rate of decrease slows down after $m = 100$. Conversely, the three privacy risks initially increase slowly but then rise more sharply after $m = 100$. Given that the number of user test data points $n$ we used ranges from 10,000 to 100,000, this aligns with our theoretical expectations in the paper that $m \in [\sqrt{n}, k\sqrt{n}]$.
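As a quick sanity check of the stated range (a trivial computation, using Python's integer square root):

```python
import math

# The analysis places the sweet spot at m in [sqrt(n), k*sqrt(n)]
# (k an unspecified constant); for the reported user test set sizes:
bounds = {n: math.isqrt(n) for n in (10_000, 100_000)}
# sqrt(10_000) = 100 and floor(sqrt(100_000)) = 316, so the knee observed
# around m = 100 sits at the lower edge of this band.
```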
Pdf: /pdf/f00bf5d90bf91d60f5e0e1fb4ad7681ee240a031.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks | Accept (poster) | Summary: The paper introduces the BFA - a novel quantity to predict and control feature learning in DNNs, as well as the feature speed formula which allows expressing the magnitude of feature updates after one GD step. The paper recovers key properties of known HP scalings, and also extends these results by introducing a new HP scaling for large depth ReLU MLPs.
Strengths: 1. The BFA and BFK are interesting objects to study and the geometrical picture that arises (mentioned in the introduction) gives a nice intuition.
2. The main results (Thms 2.1 and 3.2) are clearly stated and the proofs are straightforward.
3. The contributions are clearly stated and the relation to previous work distinguishes these contributions.
4. Earlier results are recovered here with a transparent derivation, but Ref [1] also provided quite an intuitive derivation, as you mentioned.
[1] https://arxiv.org/abs/2310.17813
Weaknesses: Despite the strengths mentioned above, I did not give a higher score for the following reasons:
1. Novelty for HP scaling:
As far as I can see, the main takeaway regarding HP scaling is the extension of known results, such as muP, to the limit of large width-then-depth. While this is indeed new, this is a somewhat limited contribution.
2. Applicability of results:
While some of the results are rather general (like Thm 2.1), some other parts of the results seem to apply only under rather limited conditions, e.g. only a single input.
3. Experimental findings:
I found issues with some of the experimental findings: I did not find a mention of what is assumed about the data: is it synthetic, random, from some known benchmark etc. Also, by inspecting Fig 2b I was not convinced that the output sensitivity is bounded away from zero.
4. I feel that the paper could be made less technical and more readable by delegating some of the proofs to the Appendix and using the space for some visualizations.
typos:
- Fig1 caption 1st line: witdh -> width
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In line 194 (BC condition): is this a strict equality, or is there some tolerance?
2. In the Introduction you use the term "hierarchical features" - can you give a definition for that?
3. In the BFK definition (eq. 5) - is this for multiple inputs? The NTK is defined for $x, x'$. Is $m_v$ here the product of batch-size with input dimension to the layer?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations of their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her detailed and encouraging review! Here is our answer to the main weaknesses raised by the reviewer:
- in this paper, we focus on introducing a theoretical methodology and indeed do not introduce new (useful) HP scalings. In ongoing work we are tackling other asymptotics via this approach (see answer to R#2 for an example).
- the assumption of a single input is also made by all related works, but we do not believe that it is too problematic. The scalings are the same for a finite batch size as long as the NTK does not degenerate (e.g., on a ReLU ResNet -- a setting that we are developing in a work in progress). However, the situation would change and become more interesting for joint limits such as batch-size and width jointly going to $\infty$ (see also answer to R#2).
- there was indeed a problem in the code of Fig.2b which will be fixed in the revision.
- since our focus is on exposing a theoretical approach, we would like to keep the proofs in the main paper as much as possible. We will use the additional page to introduce illustrations and expand the discussions.
Answers to questions:
- the equality is indeed a typo, this should be a $\Theta(\cdot)$. Thank you!
- we have removed the discussion on "hierarchical features" which we believe is not central to our goals and was too vague. By this term, we meant that each layer learns a representation of the input that relies on the learnt representation of the previous layer, and so on.
- Yes, the BFK in Eq. 5 is for multiple inputs (in fact for essentially any architecture and tensor dimensions). In a vanilla MLP and ResNet, $m_v$ would indeed be the product of batch-size with the input dimension to the layer. It is this ability to cover any architecture and tensor shapes that, we think, makes our approach a good starting point to derive HP scalings.
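To make the backward-feature angle concrete, here is a toy measurement in a two-layer linear network with a single input: we compare the loss gradient with respect to the hidden features (the backward pass) against the actual feature update after one gradient step on the first layer. This setup, its sizes, and names are our own illustrative choices, not the paper's experiments; in this linear single-input case the feature update is exactly anti-parallel to the backward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, lr = 8, 16, 0.1
x = rng.normal(size=d)
y = 1.0
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)

f1 = W1 @ x                    # hidden features
out = w2 @ f1
b = (out - y) * w2             # backward pass: dL/df1 for L = (out - y)^2 / 2

# one gradient step on W1 only, then remeasure the features
W1_new = W1 - lr * np.outer(b, x)
df1 = W1_new @ x - f1          # feature velocity after the step

cos_bfa = (b @ df1) / (np.linalg.norm(b) * np.linalg.norm(df1))
# Here df1 = -lr * ||x||^2 * b, so the angle is exactly pi (cos = -1):
# the features move in the steepest-descent direction of the loss.
```

With nonlinearities, multiple inputs, or deeper stacks, this cosine is no longer trivially $\pm 1$, which is what makes its width/depth scaling informative.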
---
Rebuttal Comment 1.1:
Comment: thank you for your response.
I still did not find a "mention of what is assumed about the data: is it synthetic, random, from some known benchmark etc"
It would be helpful to get your response to that.
---
Reply to Comment 1.1.1:
Comment: We apologize for the oversight. These illustrations are computed with a single sample as an input (batch-size $1$) which is a random Gaussian vector in $\mathbb{R}^d$, so this is synthetic data. Since those models are rotationally invariant -- because the distribution of the input layer's weight is Gaussian -- we would get the same result for any input vector, be it synthetic, or from a dataset. We plan to release the (simple) code with the final version of the paper. | Summary: The paper presents a novel perspective on infinite width and depth feature learning networks. It introduces the backward-to-feature kernel (BFK) as a central quantity determining the evolution of the intermediate layer features. The paper shows that the movement of the hidden layer features can be exactly related to an angle $\theta_\ell$ between the backward pass and the feature velocity, and uses insights on the scaling of the cosine of this angle with width to recover essentially all known infinite width and depth feature learning limits, as well as a novel large depth MLP limit.
Strengths: The paper studies an extremely important topic in deep learning theory. Given the technically challenging nature of the study of large width and depth limits, the paper is superbly well-written and accessible. Prior papers by Yang et al and Bordelon et al have done important work in developing the study of large width and depth limits, but their derivations are either very dense or otherwise rely on non-rigorous methods to derive the scalings. This paper manages to both rigorously motivate feature learning at infinite width and depth while simultaneously making the paper short and accessible. This is a major strength and no easy feat. I commend the authors on it.
Beyond this, there are several results of strong technical merit that will be of value for researchers studying infinite width and depth limits. The infinite depth MLP and scale invariant learning rate discussions are particularly interesting. The authors do a good job placing their work in context by presenting tables comparing their parameterization to others.
Ultimately, I believe that this paper is not only technically novel and sound, but is also a service to the community. I strongly recommend it for acceptance.
Weaknesses: There are no technical weaknesses that I have found, and I have gone through the derivations in detail. My only comment is expository:
In equation 1, the definition of the forward pass $T_{\ell}(f_{\ell-1}, w_\ell)$ as well as its discussion in terms of selection derivatives is quite technical and may confuse readers from outside sub-communities in machine learning. I recommend stating more clearly that this includes a simple forward pass such as $W_{\ell} \cdot f_{\ell-1}$ and perhaps adding a footnote to make this first paragraph a bit more readable.
Technical Quality: 4
Clarity: 4
Questions for Authors: As a simple clarification, I want to confirm that the ResNet scalings found precisely reproduce those predicted by Bordelon et al and Yang et al. Are there any additional ResNet scalings that have not been studied in prior work that this paper finds?
In the second paragraph of the conclusion section: "it can only quantify feature speed for (S)DG (and does not apply to variants, a priori) and at “cut nodes” in the NN architecture, where all the signal goes through (in particular, it does not apply inside the blocks of a ResNet)"
I assume you mean "(S)GD". Can you please elaborate a bit more on what you mean by cut nodes? Is this like a residual block with many layers? It will be interesting if you can derive a similar feature formula for more general differentiable circuits with branching.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Large width and depth limits may serve to determine the scaling laws for the next frontier of language and vision models, which may have major societal impact. However, the theoretical nature of the paper limits any major negative impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for his/her detailed and encouraging review.
- yes the scalings for ResNets are precisely those predicted by Bordelon et al, and Yang et al, this is mentioned in the manuscript but we'll make this more visible (note that the first version of our work appeared in November 2023, the same month as Yang et al).
- yes, we mean "SGD". By "cut nodes" we mean "cut nodes in the DAG computational graph where each node represents a tensor". This means that a "cut node" is any intermediate computation in the forward pass that is not bypassed by any other computation path, such as a skip connection in a ResNet. For instance, in a ResNet, the formula applies just after each residual connection, but not just before. We will clarify that (with a mathematical definition). We also think that it would be interesting and important to extend this approach to any node in any computational graph!
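The "cut node" notion can be illustrated on a toy computational DAG: a cut node is one lying on every input-to-output path. In the single-residual-block example below (our own construction, not from the paper), the node after the residual addition qualifies, while the node inside the branch does not:

```python
def all_paths(graph, src, dst, path=None):
    """Enumerate every src -> dst path in a small DAG (adjacency lists)."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        yield from all_paths(graph, nxt, dst, path)

def cut_nodes(graph, src, dst):
    """Nodes (besides src/dst) through which *all* signal passes."""
    paths = [set(p) for p in all_paths(graph, src, dst)]
    return set.intersection(*paths) - {src, dst}

# One residual block: the skip connection x -> add bypasses the conv node,
# so conv is not a cut node, while the post-residual add is.
graph = {"x": ["conv", "add"], "conv": ["add"], "add": ["out"]}
cuts = cut_nodes(graph, "x", "out")
```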
- we agree with the comment that we should be a bit more concrete in the beginning of the paper, when introducing the forward pass and what this can represent. We will add examples of how to instantiate the formulas. | Summary: The authors propose a technical strategy for deriving neural net parameterizations that relies on controlling the angle between the activation gradient and the feature update. The authors derive various theoretical results about this quantity, including a formula for computing it, and some analyses in the context of MLPs and ResNets. The authors claim to use this principle to derive new parameterizations, but crucially they never test them in a real learning problem.
Strengths: - the authors propose an interesting notion and derive interesting analyses surrounding it
- the parts of the math I checked seem rigorous and sound
- the authors do a good job of connecting their work to related work
- the ideas are quite creative
Weaknesses: I need to preface this review by saying that this feedback is intended to be constructive and to help you improve the paper. My current impression is that the paper is not ready for publication. I strongly encourage you to keep working in this direction, and I hope this feedback will be useful for that.
With that said, the main issues I see with the paper are:
### **No real experimental evaluation**
My understanding is that the main practical outcome of your work and theoretical analysis is a new parameterization for training neural networks. I feel that it is really important for you to test this parameterization to check that it is useful, or at least to see what its properties are in an actual training situation. It's so easy to come by free cloud compute (e.g. Google Colab) that I can't really see a reason for not doing this.
I don't feel that the experiments in Figures 1 and 2 are enough to convince me of the utility of your framework. Also I'm not sure how to reproduce these experiments. For example, what dataset did you use? What is the loss function?
As a side note, I'm also a bit doubtful that you can even train MLPs effectively beyond depth 20 or so. I read the Jelassi et al paper (https://arxiv.org/abs/2305.07810) and noticed they don't test their parameterization either. I may be wrong here, but I don't think you can hope for some engineer or experimentalist to pick up the paper and implement things for you. I think you have to be proactive here.
### **Doesn't go that far beyond existing ideas**
A lot of the paper focuses on dealing with analyzing or re-deriving existing parameterizations---e.g. muP or the 1/sqrt(L) depth scaling in ResNets. But this is not so interesting because it has already been done and there are already ways to analyze these things. What does your analysis offer that prior analyses do not? I also want to point out that concurrent works to this paper go beyond 1/sqrt(L) depth scaling rules. For example arxiv.org/abs/2405.15712 and arxiv.org/abs/2405.14813. And these papers actually experimentally test these deviations. Clearly these are concurrent works, but I just mention it to demonstrate that there is more out there.
### **Paper only seems to focus on batch size one**
In my opinion, doing things only at batch size one is a bit toy, and it would be better to directly analyze larger batch sizes.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses section
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 1
Limitations: Overall, I think it's hard to assess the limitations without having more thorough experimental evaluation. I would encourage you to work out how to streamline the mathematical exposition, and then to start testing these ideas.
I realize this feedback might be construed as being fairly negative, but I hope that it can help to improve the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review. We have replied to your first 2 criticisms in the main rebuttal. Concerning the limitation to batch-size 1: this is an assumption made by all related works on feature learning. However, note that the "feature speed formula" applies to any architecture -- and in particular to any batch size. With our approach, it is in fact possible to obtain the scalings in the *joint limit* of width and batch size -- which is more powerful than considering a large width limit with fixed batch size (and this is not accessible via other existing approaches, which start with the infinite width limit).
For instance, a direct application of the feature speed formula in a two-layer MLP (one hidden layer) shows that to ensure feature learning, the scale of the output layer $\sigma_{L}$ should be scaled as
$$
\sigma_L \asymp \frac{1}{\cos(\theta_1)\cdot \text{width}} \asymp \frac{\sqrt{M_2}}{M_1}\frac{1}{\text{width}}.
$$
where $M_p$ are the spectral moments of the input covariance. (To derive the expression of $\cos(\theta)$, we use that in this case, the BFK is a linear operator that acts on a matrix $B\in \mathbb{R}^{\text{width}\times \text{batch}}$ as $K(B) = BX^\top X$, writing the linear case for clarity). This differs from the usual $\frac{1}{\text{width}}$ mean-field scaling (this new insight is not discussed in the manuscript to avoid overcrowding, but we mention it here to highlight the flexibility of our approach).
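Reading $M_p$ as the $p$-th moment of the eigenvalues of the empirical input covariance (our assumption; the rebuttal does not spell out the definition), the claimed output-layer scale can be evaluated numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
d, batch, width = 32, 64, 1024
X = rng.normal(size=(batch, d))   # synthetic input batch

# spectral moments M_p of the input covariance (our reading of the rebuttal)
lam = np.linalg.eigvalsh(X.T @ X / batch)
M1, M2 = lam.mean(), (lam**2).mean()

# claimed output-layer scale ensuring feature learning at batch > 1
sigma_L = np.sqrt(M2) / (M1 * width)
```

By Jensen's inequality $M_2 \ge M_1^2$, so $\sigma_L$ is always at least the mean-field $1/\text{width}$ scale up to the moment ratio.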
---
Rebuttal 2:
Comment: I'll reply in full to your top level comment---thanks for writing that. Just two nits with this rebuttal:
**"this is an assumption made by all related works on feature learning"** not true; for instance, one of the papers I posted in my rebuttal (and its non-concurrent antecedents) does not make this assumption. It satisfies itself with proving bounds on feature learning that hold for any batch size.
**"and this is not accessible via other existing approaches, which start with the infinite width limit"** my same comment applies again with the words "batch size" replaced by "width"
---
Rebuttal Comment 2.1:
Comment: Thank you for engaging with my response.
- Indeed, I should have been more precise: I was talking about all works that provide a proof of the scale of feature learning (by this, I mean both upper and lower bounds, of matching scale). The line of work of arxiv.org/abs/2405.15712 (using dynamical mean-field theory) is concerned with proving the existence of dynamical limits. The existence of well-defined limits can be interpreted as the existence of an upper bound, but it does not imply lower bounds. The line of work of arxiv.org/abs/2405.14813 (on the "spectral criterion") is also, as far as I am aware, proposing a method that only gives upper bounds. Works that prove both upper and lower bounds (Yang et al., Jelassi et al.) consider a single batch size, generally for convenience. But as I mentioned in my response, this is not out of reach with our approach.
- I am not sure I understood the sentence: "my same comment applies again with the words "batch size" replaced by "width"" (since width is studied in our work)
Strengths: # Originality
This paper tackles an important problem: the relation between the optimal hyperparameters of a neural network and its architecture.
# Clarity
This paper is easy to read and the statements are clear.
Weaknesses: # Originality
The BFA is closely related to the layer-wise decomposition of the NTK, which is already widely used in the NN optimization literature [1, 2, 3, 4]. Overall, the BFA does not contain any information that is not already available with previous objects.
# Significance
The benefits and the properties of the BFA are still unclear.
For instance, Section 5 proposes a new scaling of the hyperparameters, that is not clearly related to the BFA. Besides, the experimental validation of this new scaling is not provided.
# Quality
The contribution of this paper is unclear. The usefulness of the BFA, either theoretical or experimental, is still unclear, and the proposed hyperparameter scaling is not tested experimentally.
# EDIT: References
[1] Gradient descent provably optimizes over-parameterized neural networks (2018), Du et al.
[2] Gradient descent finds global minima of deep neural networks (2019), Du et al.
[3] A convergence theory for deep learning via over-parameterization (2019), Allen-Zhu et al.
[4] Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks (2020), Zou et al.
Technical Quality: 1
Clarity: 3
Questions for Authors: How does the "ours" hyperparameter scaling compare to the others (usual, muP or NTK)?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 2
Limitations: Lack of experimental validation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We have replied to this review in the main comment.
Can you please specify what are the references [1,2,3,4] in your review? To the best of our knowledge our approach to derive HP scalings is new (the closest work being Jelassi et al), but we would appreciate precise pointers to related results.
We are aware that the NTK and related objects have been heavily used in NN theory. This is expected since it is a fundamental characteristic of the training dynamics. However, we do not think any work has used these objects the way we do. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time and their comments and we appreciate their encouraging remarks. We also disagree with a few comments which, we believe, result from a misunderstanding of the content of our paper, the state of the theory on feature learning, and perhaps from a disagreement on the role of theory, in general.
- *lack of experimental validation*: this paper is theoretical, and the usefulness of the hyper-parameter (HP) scalings that we discuss has already been demonstrated in prior works (which themselves followed from previous purely theoretical works on infinite-width neural nets). The only new scaling that is introduced is the one for deep ReLU MLPs, but this is just to illustrate how to apply our theory in various contexts: as mentioned in the paper, deep ReLU MLPs cannot be trained (the NTK degenerates, a problem that comes up when considering more than 1 sample), so there is no relevant "real data" experiment to do in this case.
- *lack of significance*: We sincerely believe that our approach is a significant advance in the theory of feature learning and HP scalings. The existing approaches to HP scalings are the following:
(1) *write infinite-width limit dynamics first*, and see whether "features move" (e.g. Chizat et al, Yang et al, Bordelon et al, among many): this is the original approach to designing HP scalings, but it leads to proofs that are technical and help intuition in a limited way. For instance, in their breakthrough paper, Yang & Hu obtain a scaling for feature learning but only for specific architectures (namely finite-depth tanh & gelu MLPs), at the end of quite technical computations and the use of heavy random-matrix machinery (see their appendix H.7.2/3; let us mention that this paper was obviously very influential to us). Note that the recent extensions of this approach to the large-depth limit are written only at a formal level, due to their high technicality. Besides, this approach is intrinsically limited to "infinite width first" limits and cannot deal with joint limits (e.g. with depth, batch-size or context length). All these facts make it crucial to have alternative theoretical approaches to HP scalings.
(2) *heuristics*: because of the difficulty of the above approach, other researchers typically rely on heuristics involving alignment/CLT/LLN etc. These heuristics can be useful as long as one lacks rigorous and simple derivations (which we provide). For instance, the "spectral condition" in (Yang, Simon, Bernstein) is a heuristic that recovers the good scalings in terms of "hidden width", but it fails to give the correct scalings for other asymptotics (such as depth or large batch-size).
In contrast, our approach is simple, rigorous and general. Since the "feature speed formula" is a non-asymptotic equality, it can be used as a starting point to obtain the correct scalings in any asymptotic regime, provided each term is worked out: we have done this for width and depth in the current submission, and are currently extending it to other asymptotics (see also answer to R#2). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Quantum Algorithms for Non-smooth Non-convex Optimization | Accept (poster) | Summary: This paper considers using quantum methods for stochastic optimization, using zeroth order queries. It looks like the main idea is that using quantum methods, one can summarize over finite difference calculations quickly and efficiently, to arrive at approximate subgradients efficiently; this would usually be very inefficient for classical methods. Overall, they are able to show speedup, from $O(\epsilon^{-4})$ to $O(\epsilon^{-3})$.
Strengths: The problem is well contained and the premise is believable.
The classical optimization bits looks reasonable, and the results make sense.
I skimmed through the appendix, and the classical optimization parts are reasonable.
Weaknesses: Section 3 is a bit hard to follow. The specific speedup offered by the quantum method is not entirely clear, though it is likely coming from Theorem B.1. Perhaps a deeper discussion of this, and of why this quantum speedup exists, would help (e.g. is it a consequence of Deutsch-Jozsa? Can you provide a more complete argument for where the speedup appears?)
Technical Quality: 3
Clarity: 3
Questions for Authors: (minor) Why do you say that finding a Clarke subdifferential is harder than finding a smooth differential? Generally speaking the complexities are comparable.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no societal limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful comments.
**To Weakness 1 (the presentation of Section 3)**:
Thank you for this question. Hopefully, the following explanation will give you a better picture of the structure of Section 3.
The main goal in Section 3 is to construct unbiased quantum estimators $\hat{{\bf g}}$ and ${\Delta}{\bf g}$, which will be used in Algorithms 1 and 2. We present the query complexities on ${\bf U}\_F$ for constructing these estimators in Section 3.2, Theorem 3.4.
The implementation of such (mini-batch) unbiased quantum estimators $\hat{{\bf g}}$ and ${\Delta}{\bf g}$ requires access to the quantum stochastic oracles ${\bf O}\_{{\bf g}\_\delta}$ or ${\bf O}\_{\Delta{\bf g}\_\delta}$, which are not given.
Instead, we only have access to the quantum stochastic function value oracle $\mathbf{U}\_F$. We show the procedure for constructing ${\bf O}\_{{\bf g}\_\delta}$ and ${\bf O}\_{\Delta{\bf g}\_\delta}$ by ${\bf U}\_F$ and a quantum sampling oracle $\mathbf{O}\_{\xi, \mathbf{w}}$ in Lemma 3.2 and Corollary 3.3 in Section 3.1, respectively. Then, in Section 3.3, we explicitly show how this quantum sampling oracle $\mathbf{O}\_{\xi, \mathbf{w}}$ can be constructed from scratch.
A diagram showing the logic flow presented above is as follows:
\begin{align}
\underbrace{\hbox{Section 3.3}}\_{\text{Construct ${\bf O}\_{\xi,{\bf w}}$}} \longrightarrow \underbrace{\hbox{Lemma 3.2 and Corollary 3.3}}\_{\text{Obtain ${\bf O}\_{{\bf g}\_{\delta}}$ and ${\bf O}\_{\Delta{\bf g}\_{\delta}}$ from ${\bf O}\_{\xi,{\bf w}}$ and ${\bf U}\_F$}} \longrightarrow \underbrace{\hbox{Theorem 3.4}}\_{\text{Construct $\hat{{\bf g}}$ and $\Delta{{\bf g}}$ from ${\bf O}\_{{\bf g}\_{\delta}}$ and ${\bf O}\_{\Delta{\bf g}\_{\delta}}$}}
\end{align}
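While the quantum oracles themselves cannot be sketched in a few lines of classical code, the classical analogue of the mini-batch estimator $\hat{{\bf g}}$ of $\nabla f_{\delta}$ can: an averaged two-point zeroth-order estimate along random directions. The toy objective `F`, the sphere-sampling scheme, and all names below are our own illustrative choices and differ from the paper's quantum construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, xi):
    # toy stochastic non-smooth objective (stand-in for the oracle U_F):
    # an l1 norm plus a small linear noise term
    return np.abs(x).sum() + xi @ x

def g_hat(x, delta, batch):
    """Classical analogue of a mini-batch estimator of grad f_delta(x):
    averaged two-point differences along random unit directions w."""
    d = x.size
    est = np.zeros(d)
    for _ in range(batch):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)           # direction uniform on the sphere
        xi = 0.01 * rng.normal(size=d)   # one stochastic sample
        est += (d / (2 * delta)) * (F(x + delta * w, xi) - F(x - delta * w, xi)) * w
    return est / batch

x = np.ones(5)
g = g_hat(x, delta=0.1, batch=2000)
# away from the kink of |.|_1, grad f_delta(x) is close to sign(x) = (1, ..., 1)
```

The quantum speedup in the paper comes from replacing this classical averaging over the batch by quantum mean estimation, which needs fewer queries for the same variance level.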
**To Weakness 2 (discussion on the quantum speedup)**:
The quantum mean estimator (Theorem B.1) is one of the main ingredients providing the speedup, which leads to better estimators of $\nabla f\_{\delta}({\bf x})$ and $\nabla f\_{\delta}({\bf y})-\nabla f\_{\delta}({\bf x})$ as presented in Theorem 3.4 (also see the discussion in Remark 3.5).
However, achieving the speedup shown in Theorem 3.4 requires additional considerations, including implementing a sampling oracle for the desired distribution and constructing a stochastic gradient oracle from it and the function value oracle. A high-level picture of those steps is detailed above in our answer to Weakness 1.
Incorporating such better estimators in our quantum algorithm allows us to achieve better query complexities over the classical ones.
More concretely, one needs to carefully select the variance level $\hat{\sigma}\_{1,t}^2$ in Algorithm 1 and $\hat{\sigma}\_{1,t}^2, \hat{\sigma}\_{2,t}^2$ in Algorithm 2 when utilizing the Theorem 3.4.
As for the speedup on the quantum mean estimator [13, 42] over classical estimators, it can be viewed as a consequence of combinations of quantum Fourier transform [A] and the quantum amplitude amplification algorithm [B].
We will involve more discussion on this in the revision.
[A]. Shor, P. W. “Algorithms for quantum computation: Discrete logarithms and factoring”. In: Proceedings of the 35th Annual Symposium on Foundations of Computer Science, 1995.
[B]. G. Brassard, P. Høyer, M. Mosca, and A. Tapp. “Quantum Amplitude Amplification and Estimation”. In: Contemporary Mathematics 305, 2002.
**To Question 1**:
The complexities of finding a Clarke differential and a smooth differential are not comparable, for the following reasons:
1. We do not assume $f(\cdot)$ is differentiable. Since $f(\cdot)$ is in general non-convex and non-smooth, its differential may be intractable.
2. There is no finite-time algorithm that can find an $\epsilon$-stationary point such that $\|\|\partial f({\bf x})\|\|\leq \epsilon$ in the non-convex and non-smooth setting, where $\partial f(\cdot)$ denotes the Clarke differential of $f(\cdot)$. Please refer to [Theorem 5, 51].
On the other hand, finding an $\epsilon$-stationary point in non-convex smooth optimization can be done in polynomial time [18, 31].
Strengths: This paper initiates the study of quantum algorithms for finding Goldstein stationary points, a significant problem in continuous optimization. Additionally, the authors present an explicit construction of the quantum sampling oracle using the quantum zeroth-order oracle, including a detailed discussion on the number of qubits required.
Weaknesses: Despite the detailed implementation and calculations, the overall technical approach remains relatively straightforward. The zeroth-order quantum estimator combines the classical stochastic gradient estimator for the smoothed surrogate with the quantum variance reduction algorithm in [42]. The quantum algorithms for finding the Goldstein stationary point are obtained by replacing the classical estimators with quantum estimators. Moreover, the narrative is somewhat incomplete due to the absence of lower bound results.
Technical Quality: 4
Clarity: 3
Questions for Authors: Is it possible to improve the $\delta$ dependence using quantum algorithms?
Minor issues:
1. Consistency of big-O notation. For example, $O$ is used in line 139 and $\mathcal{O}$ in line 183. Similarly, there are consistency issues with the quantum oracle notation, where $\mathcal{O}$ is used in line 168 and $\mathbf{O}$ in line 184.
2. Typo on the RHS of the inequality in line 125.
3. The use of dashes '-' is a bit odd. For example, the dashes in line 139, line 210, and line 251 can be removed.
4. The name initials in the citation format are not precise. For example, in entry [1], it should be 'G. Arfken' instead of 'G Arfken'.
5. Line 310: "adjust" -> "adjusts". Line 311: "fixed" -> "fixes".
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful comments.
**To Weakness 1 (discussion on the technical novelty):**
We appreciate the feedback. We highlight our technical novelty in the construction of the zeroth-order estimator and the design of the quantum algorithms as follows:
* In terms of the zeroth-order quantum estimator, utilizing the existing quantum mean value estimation procedure requires having a suitable quantum sampling oracle that returns a superposition over the desired distribution.
We fixed the following gaps between this paper and the one of [42]:
1. [42] requires the assumption of direct access to a quantum stochastic gradient oracle.
This is not applicable in our setting because we are instead given access only to the quantum stochastic function value oracle ${\bf U}\_F$.
We overcome this by providing the efficient procedure for constructing our wanted oracle from ${\bf U}\_F$ and sampling oracle $\bf{O}\_{\xi, {\bf w}}$, which is summarized in Lemma 3.2.
2. Furthermore, [42] does not provide how to construct the quantum sampling oracle, which leaves a gap between the quantum algorithm and its detailed implementation on the quantum circuits. We fill this gap by giving an explicit and efficient construction on ${\bf O}\_{\xi,{\bf w}}$ for desired distribution in Section 3.3.
* In terms of the quantum algorithms for finding the Goldstein stationary point, our newly proposed quantum algorithms are quite different from the classical ones for finding the Goldstein stationary point and from the quantum methods for non-convex optimization.
1. The most related classical algorithm is GFM+ [7], which adopts the SPIDER algorithm, while we adopt the PAGE framework, which makes the algorithm single-loop. To the best of our knowledge, such a framework has not been investigated for finding the Goldstein stationary point even in classical optimization.
2. Our algorithmic framework is also different from existing quantum algorithms for non-convex optimization, which fix the desired variance level when using the quantum mean estimator, while we adaptively set the variance level so that $\hat{\sigma}_{2,t}\propto \|\|{\bf x}_t-{\bf x}\_{t-1}\|\|$. This strategy allows us not only to show the quantum advantage for finding the Goldstein stationary point in non-convex non-smooth optimization, but also to provide a better query complexity than the state-of-the-art quantum method for non-convex smooth optimization [42].
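For readers unfamiliar with the single-loop structure referred to above, the classical PAGE recursion (a fresh estimate with some probability $p$, otherwise a recursive correction using shared randomness) can be sketched as follows. This is our own illustrative classical sketch on a toy quadratic, not the paper's quantum algorithm; the oracle `stoch_grad`, the step size, and the noise model are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x, z, noise=0.1):
    # stochastic gradient of f(x) = 0.5*||x||^2 with a shared noise sample z
    return x + noise * z

d, eta, p = 10, 0.1, 0.2
x = np.ones(d)
g = stoch_grad(x, rng.standard_normal(d))   # initial (large-batch-style) estimate
for t in range(200):
    x_new = x - eta * g
    if rng.random() < p:                    # occasional fresh estimate
        g = stoch_grad(x_new, rng.standard_normal(d))
    else:                                   # single-loop recursive correction
        z = rng.standard_normal(d)          # same sample at both points (key for variance reduction)
        g = g + (stoch_grad(x_new, z) - stoch_grad(x, z))
    x = x_new

# the iterate should have contracted well below its starting norm
assert np.linalg.norm(x) < 0.5 * np.linalg.norm(np.ones(d))
```

The single-loop structure is visible in that there is no inner loop re-estimating the gradient from scratch at fixed intervals; refreshes happen probabilistically.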
**To Weakness 2 (Discussion on the Lower Bound)**:
We agree that including a result of quantum lower bound will make the investigation on this problem more well-rounded. But we don't think the lack of a lower bound would diminish our contribution for the following reasons:
1. The main purpose of this paper is to show the quantum advantage on non-convex non-smooth optimization.
We have provided an upper bound of $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ for finding the $(\delta,\epsilon)$-Goldstein stationary point,
which cannot be achieved by ANY of the classical methods due to the classical lower bound $\delta^{-1}\epsilon^{-3}$ established by [14, 30].
2. Even for non-convex smooth function, the quantum lower bound has not been fully investigated [42, 49].
There are only negative results that show no quantum speed-up for non-convex smooth optimization when the dimension $d$ is large [49].
On the other hand, the construction of classical lower bound for finding Goldstein stationary point is based on the lower bound of finding stationary point of non-convex smooth function [14, 30].
Hence, we think the quantum lower bound can be regarded as an important open problem for both non-convex non-smooth optimization and non-convex smooth optimization.
**To Question 1**:
Existing zeroth-order methods for finding the $(\delta,\epsilon)$-Goldstein stationary point of a non-convex non-smooth objective are to find the $\epsilon$-stationary point [7, 32, 33] or $(\delta,\epsilon)$-stationary point [29] of its smoothed surrogate $f_{\delta}({\bf x})$.
However, there is a gap between the function values of $f(\cdot)$ and $f_{\delta}(\cdot)$, which can only be bounded by $|f(\cdot)-f_{\delta}(\cdot)|\leq \delta L$. This yields a factor of $\delta^{-1}$ in the iteration complexity, regardless of whether classical or quantum oracles are used.
Therefore, we think the improvement in the $\delta$ dependency cannot be achieved within existing algorithmic frameworks, and we leave it as important future work.
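For intuition, the bound $|f(\cdot)-f_{\delta}(\cdot)|\leq \delta L$ can be checked numerically. Below is our own sketch (not from the paper) using the $1$-Lipschitz function $f(x)=|x|$ and the one-dimensional uniform smoothing $f_{\delta}(x)=\mathbb{E}_{u\sim\mathrm{Unif}[-1,1]}[f(x+\delta u)]$, approximated on a fine grid:

```python
import numpy as np

def f(x):
    return abs(x)            # 1-Lipschitz, non-smooth at 0

def f_delta(x, delta, n=200001):
    # uniform smoothing over [-1, 1]: f_delta(x) = E_u[ f(x + delta*u) ]
    u = np.linspace(-1.0, 1.0, n)
    return np.mean(np.abs(x + delta * u))

for delta in (0.1, 0.01):
    gap = max(abs(f_delta(x, delta) - f(x)) for x in np.linspace(-1.0, 1.0, 41))
    assert gap <= delta      # |f - f_delta| <= delta * L with L = 1
```

The gap is largest at the non-smooth point $x=0$, where it equals $\delta/2$ here, consistent with the $\delta L$ bound.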
**To Minor Issues 1-5**:
Thanks for pointing out these. We will make the consistency of the big-O notation in the revision and fix the typos.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response. I maintain my rating | Summary: This paper studies quantum algorithms for non-smooth non-convex stochastic optimization with a zeroth-order oracle. It introduces an effective quantum estimator that reduces the variance compared to classical zeroth-order estimators. Upon substituting this estimator into known zeroth-order non-smooth optimizers, namely GFM and GFM+, the resulting quantum optimizers achieve improved rates $\tilde O(d^{3/2}\delta^{-1}\epsilon^{-3})$ and $\tilde O(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ respectively for finding a $(\delta,\epsilon)$-Goldstein stationary point. Notably, the quantum speedup improves upon the classical lower bound $\delta^{-1}\epsilon^{-3}$ by a factor of $\epsilon^{2/3}$. Moreover, a modified algorithm achieves $O(\sqrt{d}\epsilon^{-7/3})$ for smooth optimization, improving upon the best known rate.
Strengths: This paper proposes a new zeroth-order quantum estimator. This leads to new quantum algorithms that solves zeroth-order non-smooth non-convex optimization problem, which is not well studied in the literature. Moreover, the proposed algorithms show quantum speedup compared to their classical (non-quantum) counterparts. Notably, it improves over the classical lower bound of $\Omega(\delta^{-1}\epsilon^{-3})$ by a factor of $\epsilon^{2/3}$. Overall, these results represent a significant contribution to the understanding of optimization with quantum oracles. Given my expertise lies primarily in optimization and not in quantum computation, I am only able to assess the optimization-related aspects of this work.
Weaknesses: Although the dependence on $\delta,\epsilon$ is improved, the dimension dependence is suboptimal. In particular, GFM and GFM+ are known to have suboptimal dimension dependence $d^{3/2}$, and so do QGFM and QGFM+. On the other hand, as observed by Kornowski and Shamir [1], optimizing the random smoothing $f_\delta$ with a non-smooth optimizer, such as online-to-non-convex (o2nc) [2], eliminates this $\sqrt{d}$ factor and achieves $O(d)$ dimension dependence. Hence, my intuition suggests that upon substituting the quantum estimator into o2nc and following a similar approach to Kornowski and Shamir, the authors might be able to recover $O(d)$ (or even better) dimension dependence.
[1] Kornowski, G. and Shamir, O., “An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization”, 2023. doi:10.48550/arXiv.2307.04504.
[2] Cutkosky, A., Mehta, H., and Orabona, F., “Optimal Stochastic Non-smooth Non-convex Optimization through Online-to-Non-convex Conversion”, 2023. doi:10.48550/arXiv.2302.03775.
Technical Quality: 3
Clarity: 3
Questions for Authors: - As someone unfamiliar with quantum computation, I have a general question: Is the proposed quantum oracle practically feasible to implement, or is it purely theoretical?
- line 87: does state $|i\rangle$ denote the $i$-th orthonormal basis of $\mathcal{H}^m$?
- line 100: what does it mean by $|\mathbf{x}\rangle |q\rangle$? Is it a shorthand for tensor product?
- Thm 3.4 part 1: should it be $Var(\hat g) \le \hat\sigma_1^2$ instead of $\hat \sigma_1$? part 2: number of queries should be $\frac{d^{3/2}L\\|y-x\\|}{\delta \hat\sigma_2}$ (i.e., currently it's missing $1/\delta$)? Since this theorem is the main result of the quantum oracle, I encourage the authors to carefully check its correctness.
also in the proof (line 471): $\sigma_1^2$ => $\hat\sigma_1^2$?
Minor comments:
- line 98: $C_{f(x)} = f(x)$ => $C_f(x) = f(x)$?
- Proposition 2.1: the properties of smooth surrogate $f_\delta$ are known in [1] and [2], and Lin et. al. and Chen et. al. are restating these results in their papers. Hence, these should be more appropriate references.
[1] Yousefian, F., Nedić, A., and Shanbhag, U. V., “On Stochastic Gradient and Subgradient Methods with Adaptive Steplength Sequences”, 2011. doi:10.48550/arXiv.1105.4549.
[2] Duchi, J. C., Bartlett, P. L., and Wainwright, M. J., “Randomized Smoothing for Stochastic Optimization”, 2011. doi:10.48550/arXiv.1103.4296.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful comments.
**To Weakness 1**:
We think such sub-optimal dependency on $d$ is reasonable due to the following reasons:
1. The sub-optimality in dimension $d$ is a common trade-off in quantum optimization. [49] proved that there are no quantum speedups for finding stationary points of a non-convex smooth objective function when the dimension $d$ is large. [42] showed that it requires $\mathcal{O}(\sqrt{d}\epsilon^{-2.5})$ queries of the quantum stochastic first-order oracle to find the $\epsilon$-stationary point of a non-convex smooth objective, while the classical optimal methods [18, 31] require $\mathcal{O}(\epsilon^{-3})$ queries. This paper considers a more difficult function class, which is non-convex and non-smooth.
2. The algorithm frameworks pointed out by the reviewer in Kornowski and Shamir [30], Cutkosky et al. [14] construct the gradient estimator with $2$ queries of the stochastic function value oracle or one query of the stochastic gradient oracle, which means we cannot apply the quantum mean estimator to improve such estimators.
Hence, whether using their framework can recover the $\mathcal{O}(d)$ dependency while still improving the dependency on $\epsilon$ is not clear.
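For reference, the classical $2$-query construction referred to here, the symmetrized randomized-smoothing gradient estimator, can be sketched as below. This is our own illustration of one standard form of the classical estimator (the uniform sphere direction, the toy quadratic objective, and the sample count are our assumptions), not code from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(f, x, delta):
    """Estimator of grad f_delta(x) built from 2 function-value queries."""
    w = rng.standard_normal(x.shape)
    w /= np.linalg.norm(w)              # uniform direction on the unit sphere
    d = x.size
    return (d / (2 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w

# Sanity check on a smooth quadratic: averaged estimates approach the gradient.
f = lambda x: 0.5 * np.dot(x, x)
x = np.ones(5)
g = np.mean([two_point_grad(f, x, 1e-3) for _ in range(20000)], axis=0)
assert np.linalg.norm(g - x) < 0.2      # true gradient of f at x is x itself
```

Note that each estimate consumes exactly two function-value queries, which is the structural feature the rebuttal points to: there is no mini-batch over which a quantum mean estimator could be applied.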
**To Question 1**:
We answer this question from the following two aspects:
1. The quantum oracles can be viewed as instances of quantum circuits.
Since the current hardware implementation of quantum computers is still in a relatively early stage,
the practical feasibility of implementing these quantum circuits depends on factors such as the size of the circuits and the types of gates used.
2. The gates used in constructing our quantum stochastic gradient oracle, aside from the provided black-box quantum function value oracle, are primarily simple single-qubit and two-qubit gates, such as Hadamard gate, controlled NOT gate, and controlled rotation gate, which are known to be efficiently implementable [A].
Therefore, if the size of our problem is not large and the black-box quantum function value oracle is practically feasible, then our algorithm can be practically feasible.
[A]. Groenland, Koen, et al. "Signal processing techniques for efficient compilation of controlled rotations in trapped ions." New Journal of Physics, 2020.
**To Question 2 and 3**:
Yes. Your understanding is correct and we will make them clear in the revision.
**To Question 4**:
Thanks for pointing out this.
For part 1, it should be $\mathbb{E}\left[\||\hat{{\bf g}}-\nabla f\_{\delta}({\bf x})\||^2\right]\leq \hat{\sigma}\_1^2$, which is a typo.
For part 2, it should be $d^{3/2}L\|\|{\bf y}-{\bf x}\|\|\hat{\sigma}_{2}^{-1}\delta^{-1}$, where we forgot to include the dependency on $\delta$.
They do not impact the results of Theorem 4.1 and Theorem 4.3.
Part 2 of Theorem 3.4 only impacts the size of $b_1$ in Theorem 4.3. We determine the size of $b_1$ in line 507 of Appendix D by setting
$$
\hat{\sigma}\_{2,t}^2 = \epsilon^{2/3}\|\|{\bf x}\_{t+1}-{\bf x}\_t\|\|^2 L^{4/3}d\delta^{-2}
$$
according to line 494, then we have
\begin{align}
b\_1 = \tilde{\mathcal{O}}\left(d^{3/2}L\|\|{\bf x}\_{t+1}-{\bf x}\_t\|\|\hat{\sigma}\_{2,t}^{-1}\delta^{-1}\right)
= \tilde{\mathcal{O}}\left(d^{3/2}L\|\|{\bf x}\_{t+1}-{\bf x}\_t\|\| (\epsilon^{-1/3}\|\|{\bf x}\_{t+1}-{\bf x}\_t\|\|^{-1}L^{-2/3}d^{-1/2}\delta)\delta^{-1}\right)
= \tilde{\mathcal{O}}\left(dL^{1/3}\epsilon^{-1/3}\right),
\end{align}
which means $b_1$ remains the same and it is independent of $\delta$.
We will fix them in the revision.
**To Question 5 and 6**:
Thanks for pointing out these, we will modify them in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed response. Regarding weakness, thanks for providing the additional background. Now I understand improving dimension dependence is a non-trivial challenge. Given this, I adjusted my score accordingly.
As a quick follow-up question, the authors mentioned that quantum estimators can only estimate one-point stochastic function value $f(x,z)$, but not 2-point function values nor stochastic gradients. Is this limitation proven impossible in the quantum computing literature, or is it still an open problem?
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive comments and raising your score.
We present the answer for your follow-up question as follows:
We do not mean that "quantum estimators can only estimate one-point stochastic function value $f(x,z)$, but not 2-point function values nor stochastic gradients".
In fact, we can use quantum oracles to construct gradient estimators by $2$-queries of stochastic function value oracle (see our result in Theorem 3.2) or a query of stochastic gradient oracle [42].
However, the speed-up from the quantum mean estimators requires using mini-batch queries of stochastic function value oracles or mini-batch queries of stochastic gradient oracles to reduce oracle calls at a given variance level (see our Theorem 3.4 and Remark 3.5).
Hence, whether one can construct the stochastic gradient estimators with $2$ queries (or a constant number of queries) of the stochastic function value oracle or a single query of the stochastic gradient oracle (as in the algorithms of [30, 41]) while still maintaining the quantum speed-up remains an open problem.
We hope this answers your follow-up question and are happy to address any further concerns. | Summary: This paper introduces new quantum algorithms for non-smooth non-convex optimization problems. The authors propose a quantum gradient estimator for smoothed objectives and develop the Quantum Gradient-Free Method (QGFM) and its enhanced version, QGFM+, which achieve better query complexities than their classical counterparts. These complexities demonstrate a marked quantum speedup over classical counterparts, indicating the potential of quantum computing in optimizing complex functions more efficiently. The paper also discusses the construction of quantum oracles and the application of variance reduction techniques, paving the way for future research in quantum optimization.
Strengths: - The paper proposed new zeroth order quantum optimization algorithms achieving better computational complexities compared to classical methods for non-smooth and non-convex optimization.
- Technically, they construct efficient quantum gradient estimators and quantum superpositions over required distributions as a key subroutine.
- They also proposed a quantum algorithm for non-convex smooth problems with an adaptive variance level, accelerating prior quantum algorithms to get more speedups.
Weaknesses: - The assumptions of having a quantum stochastic function value oracle may be strong. Could the authors explain more about why it is reasonable and important to have such a function oracle?
- The technical core for quantum speedups seems to be the quantum mean value estimation procedure, which is already used in many other optimization problems and scenarios. Could the authors explain more about the technical novelty of their work?
Technical Quality: 3
Clarity: 2
Questions for Authors: Besides the questions raised in the weakness part, I have some minor issues with the submission as follows:
- In line 89, the definition of the tensor product may be a little confusing.
- In the explicit construction of quantum sampling oracles, it seems that the time complexity of the quantum algorithm may be much larger than the query complexity, due to the sampling on the unit sphere. However, for such optimization algorithms, time complexity may be more crucial in real-world applications. Could the authors state the actual time complexity of their algorithm in terms of gate counts?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The quantum complexity lower bound on this problem is not proved in this paper, which is mentioned in the conclusion part. Also, as noted in remark 3.7, implementing quantum sample oracle may require the uses of QRAM, which is currently limited by the physical realizations.
This is a theoretical work, so there is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive and helpful comments.
**To Weakness 1**:
We think the assumption of having a quantum stochastic function value oracle is reasonable and not strong due to the following reasons:
1. It is very common to assume having a classical stochastic function value oracle, when the objective is in the form of expectation [7, 30, 32, 33].
2. One can efficiently construct such quantum stochastic function value oracle with the same asymptotic computational complexity by replacing the gates in the classical circuit with reversible gates [38].
Such a quantum stochastic function value oracle is important for the further design of quantum algorithms.
**To Weakness 2**:
Utilizing the existing quantum mean value estimation procedure requires having a suitable quantum sampling oracle that returns a superposition over the desired distribution.
The closest work to ours is [42], which utilizes quantum mean estimation in non-convex optimization. We state below the technical novelty in our paper:
1. [42] requires the assumption of direct access to a quantum stochastic gradient oracle.
This is not applicable in our setting because we are instead given access only to the quantum stochastic function value oracle ${\bf U}_F$.
We overcome this by providing the efficient procedure for constructing our wanted oracle from ${\bf U}_F$ and sampling oracle
${\bf O}\_{\xi, {\bf w}}$, which is summarized in Lemma 3.2.
2. Furthermore, [42] does not provide how to construct the quantum sampling oracle, which leaves a gap between the quantum algorithm and its detailed implementation on the quantum circuits.
We fill this gap by giving an explicit and efficient construction on ${\bf O}\_{\xi,{\bf w}}$ for desired distribution in Section 3.3.
In conclusion, our technical novelty on the quantum part is to provide an explicit and efficient construction of the quantum $\delta$-estimated stochastic gradient oracle (Definition 3.3) using the quantum stochastic function value oracle.
**To Question 1**:
We give a more specific description: Given $|{\bf x}\rangle \in \mathcal{H}^m$ and $|{\bf y}\rangle \in \mathcal{H}^n$, we denote their tensor product by $\ket{\bf x} \otimes \ket{\bf y}\triangleq (x_1y_1, x_1y_2 \cdots, x_1y_n, x_2y_1, \cdots, x_m y_n)^\top \in \mathcal{H}^{m\times n}$.
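This coordinate ordering coincides with the standard Kronecker product of the coefficient vectors; a quick numerical check (our illustration) using `numpy.kron`:

```python
import numpy as np

x = np.array([1, 2])        # |x> in H^2
y = np.array([3, 4, 5])     # |y> in H^3

# Coordinates ordered as (x1*y1, x1*y2, ..., x1*yn, x2*y1, ..., xm*yn)
manual = np.array([xi * yj for xi in x for yj in y])
assert np.array_equal(np.kron(x, y), manual)   # [3, 4, 5, 6, 8, 10]
```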
**To Question 2**:
We want to first respectfully point out that it is inappropriate to compare computational complexity and query complexity.
Computational complexity considers the number of basic operations or gates used, whereas query complexity measures the number of calls to a particular process in a black-box manner.
In general, this process can incur significant cost (including a large number of basic operations) or may not be efficiently computable locally.
This paper focuses on query complexity, following the previous work in [4, 5, 42]. Unlike [42], which does not consider the implementation of the sampling oracles, our Section 3.3 enables us to give the gate counts for constructing one quantum $\delta$-estimated stochastic gradient oracle.
Specifically, the steps in Lemma 3.2 need $1$ access to the sampling oracle, $1$ addition operation, $2$ subtraction operations, and $1$ multiplication operation. Hence, the gate complexity stems from the two parts below:
1. The gate complexity for implementing the sampling oracle is $\mathcal{O}(\lceil\log N\rceil + \log (1/\epsilon_0) d)$ where $N$ is the number of the stochastic components of $\xi$, and $\epsilon_0$ being the incurred precision error. More precisely, this construction utilizes $\lceil\log N\rceil + \log (1/\epsilon_0) d$ Hadamard gates and $d-1$ circuits of calculating $\sin$ and $\cos$ on a single qubit.
2. There are various existing methods to implement quantum arithmetic operations, e.g. [41]. Using methods from [41] gives implementations of addition, subtraction and multiplication with gate complexity $\mathcal{O}(C^2)$, $\mathcal{O}(C^2)$, and $\mathcal{O}(C^3)$, respectively, where $C$ is the number of qubits that represent the numbers being manipulated in our algorithm and is usually independent of $d$, $\epsilon$, and $\delta$.
Hence, the asymptotic total gate complexity for constructing one quantum estimated gradient oracle is $\mathcal{O}(\lceil\log N\rceil+\log (1/\epsilon_0) d + C^3)$.
**To Limitation 1**:
We agree that including a result of quantum lower bound will make the investigation on this problem more well-rounded. But we don't think the lack of a lower bound would diminish our contribution for the following reasons:
1. The main purpose of this paper is to show the quantum advantage on non-convex non-smooth optimization.
We have provided an upper bound of $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$ for finding the $(\delta,\epsilon)$-Goldstein stationary point,
which cannot be achieved by ANY of the classical methods due to the classical lower bound $\delta^{-1}\epsilon^{-3}$ established by [14, 30].
2. Even for non-convex smooth function, the quantum lower bound has not been fully investigated [42, 49].
There are only negative results that show no quantum speed-up for non-convex smooth optimization when the dimension $d$ is large [49].
On the other hand, the construction of classical lower bound for finding Goldstein stationary point is based on the lower bound of finding stationary point of non-convex smooth function [14, 30].
Hence, we think the quantum lower bound can be regarded as an important open problem for both the non-convex non-smooth optimization and the non-convex smooth optimization.
**To Limitation 2**:
Our algorithms **don't need** QRAM.
Remark 3.7 serves as a discussion of general scenarios for the distribution of $\xi$.
More concretely, a distribution that can be efficiently sampled classically guarantees an efficient construction of a quantum sample oracle over it; while for other cases, QRAM might be needed on a case-by-case basis.
---
Rebuttal 2:
Comment: Thank the authors for the detailed response!
Since the quantum upper bound provided in the paper is $\tilde{\mathcal{O}}(d^{3/2}\delta^{-1}\epsilon^{-7/3})$, while a possible classical upper bound is $\mathcal{O}(d\delta^{-1}\epsilon^{-3})$ and the classical lower bound is $\delta^{-1}\epsilon^{-3}$, does it mean that quantum speedup can only occur when the required precision is relatively high compared with the dimension, with the dimension fixed? If so, regarding current physical realization constraints like noise in quantum devices, it seems that this result is more of a theoretical one with few potential applications.
Another problem is the time/gate complexity part. Thank the authors for their effort to analyze and calculate, in great detail, the gate cost of constructing one quantum estimated gradient oracle. However, it seems that for one call of the gradient oracle, the cost is at least proportional to the dimension, which is relatively expensive for some problems and may be a major concern in solving them.
Due to the above reasons, I'll keep my rating for now and keep your detailed comments in mind for future discussions.
---
Rebuttal Comment 2.1:
Comment: Thanks for your response and the follow-up questions. We present detailed answers as follows.
> Since the quantum upper bound provided in the paper is $d^{3/2}\delta^{-1}\epsilon^{-7/3}$, while a possible classical upper bound is $d\delta^{-1}\epsilon^{-3}$ and the classical lower bound is $\delta^{-1}\epsilon^{-3}$, does it mean that quantum speedup can only occur when the requirement of the precision is relatively high compared with the dimension, given the dimension fixed?
Yes, the quantum speedup occurs when $d$ and $\epsilon$ satisfy $d\leq \epsilon^{-4/3}$.
We want to point out that it is very common in non-convex optimization to show the quantum speed-up in the regime where the dimension is not too large.
For the smooth case, there is NO quantum speedup for finding $\epsilon$-stationary point with stochastic gradient inputs when the dimension is large such that $d\geq \epsilon^{-3}$ [Proposition D.12, A]. Besides, [41] presents a quantum upper bound of $\epsilon^{-5/2}\sqrt{d}$ for smooth non-convex optimization, where the quantum speedup only occurs when $d\leq \epsilon^{-1}$.
The function class considered in this paper can be non-smooth, which is more general than the previous works.
Thus we think the speed-up region $d\leq \epsilon^{-4/3}$ obtained in this paper is reasonable.
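The speed-up region $d\leq \epsilon^{-4/3}$ can be recovered by comparing the quantum upper bound $d^{3/2}\delta^{-1}\epsilon^{-7/3}$ with the classical upper bound $d\delta^{-1}\epsilon^{-3}$, dropping constants and logarithmic factors. A small numerical check of the crossover (our sketch; the specific $\epsilon$ and $\delta$ values are arbitrary):

```python
# bounds with constants and log factors dropped
quantum   = lambda d, eps, delta: d**1.5 / (delta * eps**(7/3))
classical = lambda d, eps, delta: d / (delta * eps**3)

eps, delta = 1e-2, 1e-1
d_star = eps**(-4/3)     # crossover: quantum/classical = d^{1/2} * eps^{2/3} = 1

c = classical(d_star, eps, delta)
assert abs(quantum(d_star, eps, delta) - c) < 1e-6 * c            # bounds cross here
assert quantum(d_star / 10, eps, delta) < classical(d_star / 10, eps, delta)  # speedup below
assert quantum(d_star * 10, eps, delta) > classical(d_star * 10, eps, delta)  # no speedup above
```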
> If so, regarding current physical realization constraints like noises in quantum devices, it seems that this result is more of a theoretical one with few potential applications.
We would like to point out that the device noise you mention can be handled by quantum error-correcting codes [B] such as the surface code.
On the other hand, the problem's dimension $d$ is limited by the scale of existing quantum computers and thus cannot be very large.
Hence we think the quantum speedup region $d\leq \epsilon^{-4/3}$ is not that strict.
Though the number of precise qubits is still limited by current technology, the quantum community believes that obtaining enough precise qubits is just a matter of time [C].
> Another problem is the time/gate complexity part. Thank the authors for their effort to analyze and calculate the gate cost in great detail of constructing one quantum estimated gradient oracle. However, it seems that for one call of the gradient oracle, the cost is at least proportional to the dimension $d$, which is relatively expensive for some problems and may be a major concern in solving them.
Thanks for acknowledging our effort!
We think the $d$-dependent computation cost for one call of the quantum estimated gradient oracle is unavoidable and should not be regarded as a major concern.
Evaluating a function value at a $d$-dimensional input takes at least $\mathcal{O}(d)$ time just to read the input.
Hence, our construction of the quantum estimated gradient oracle does not introduce much more cost than necessary, and it is natural to focus on the query complexity of the stochastic function value oracle, which is usually the dominant part of the computation.
For instance, even for a simple function of the form $f({\bf x}) = {\bf x}^{\top}{\bf A}({\bf x}){\bf x}$, where ${\bf A}({\bf x})\in \mathbb{R}^{d\times d}$, the cost of evaluating it at a given $\bf x$ is at least proportional to $d^2$, which is much larger than the $\mathcal{O}(d)$ cost of constructing one quantum estimated gradient oracle.
**References:**
[A]. Chenyi Zhang, and Tongyang Li. Quantum Lower Bounds for Finding Stationary Points of Nonconvex Functions. ICML, 2023.
[B]. Roffe J. Quantum error correction: an introductory guide. Contemporary Physics, 2019.
[C] Gambetta, Jay M., Jerry M. Chow, and Matthias Steffen. Building logical qubits in a superconducting quantum computing system. npj quantum information, 2017.
We hope our response can address your concern and are happy to answer any further questions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification | Accept (poster) | Summary: This paper studies corruption-robust linear bandit optimization and characterizes the regret bound in terms of both weak and strong corruption measures. Under the stochastic setting, this paper proposes a phased elimination algorithm, and the regret bounds match the lower bound. Under the adversarial setting, the paper proposes two individual algorithms for the two corruption measures respectively. In addition, this paper studies gap-dependent misspecification setting through reduction, and discusses a use case for linear MDPs.
Strengths: - The regret bounds in terms of both corruption measures are provided, where the regret bound depending on $C_\infty$ is first introduced in this paper.
- The theoretical results are supported with detailed proof.
- This paper is generally well-written.
Weaknesses: - The algorithms are efficient in terms of regret bound, but the computational complexity is not discussed.
- A conclusion section could be added.
Technical Quality: 2
Clarity: 2
Questions for Authors: What is the computational cost of the proposed algorithms?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Some limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. The global response includes a summary of our paper and potential future work, which we will incorporate as a conclusion in the future version. Your questions are answered below.
**Q1**: What is the computational cost of the proposed algorithms?
**A**: All our proposed algorithms can be computed in polynomial time except Algorithm 2, because it requires solving the fixed-point optimization defined in Line 6 of Algorithm 2. As a remedy, we propose Algorithm 4, which gives a computationally efficient counterpart of Algorithm 2 with a worse $dC_{\infty}$ regret.
To be more specific, the main computational cost in Algorithm 1 is computing the G-optimal design, and the per-step computational complexity roughly has order $|\mathcal{A}| d^2$ from Lemma 3.9 of [1] (also discussed after Theorem 4.3 of [2]).
For Algorithm 3, the main computational cost is solving the continuous exponential weights distribution defined in Line 4. Assuming access to a linear optimization oracle, the per-step computational complexity has order $\mathrm{poly}(dT)$ as discussed on Page 6 of [3].
For Algorithm 4, the main computational cost is solving the log-det optimization in Line 6, and the per-step computational complexity has order $d^4|\mathcal{A}|$ from Proposition 1 of [4].
Note that the $|\mathcal{A}|$ dependence is common in most linear bandit algorithms including LinUCB and phase elimination. More discussion on the $|\mathcal{A}|$-dependent computational complexity for existing linear bandit algorithms can be found in Section 1.2 of [5].
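To illustrate the G-optimal design step mentioned above, here is our own minimal sketch of the classical Fedorov-Wynn / Frank-Wolfe iteration for approximating the G-optimal (Kiefer-Wolfowitz) design over a finite action set; the action set, iteration count, and tolerance are arbitrary assumptions, and this is not the paper's code:

```python
import numpy as np

def g_optimal_design(arms, iters=300):
    """Frank-Wolfe (Fedorov-Wynn) iteration for the G-optimal design.

    arms: (K, d) array of actions; returns design weights pi over the arms.
    """
    K, d = arms.shape
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        V = arms.T @ (pi[:, None] * arms)               # V(pi) = sum_a pi_a a a^T
        lev = np.einsum('ij,jk,ik->i', arms, np.linalg.inv(V), arms)
        a, h = int(np.argmax(lev)), float(np.max(lev))  # most uncertain arm
        gamma = (h / d - 1.0) / (h - 1.0)               # closed-form line-search step
        pi = (1.0 - gamma) * pi
        pi[a] += gamma
    return pi

rng = np.random.default_rng(0)
arms = rng.standard_normal((30, 4))
pi = g_optimal_design(arms)
V = arms.T @ (pi[:, None] * arms)
g = float(np.max(np.einsum('ij,jk,ik->i', arms, np.linalg.inv(V), arms)))
assert g <= 4 * 1.1   # Kiefer-Wolfowitz: the optimal design value equals d (= 4 here)
```

The per-iteration work (one $d\times d$ inverse plus a leverage score per arm) is where the $|\mathcal{A}|d^2$-type dependence in the response comes from.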
**References**:
[1] M. J. Todd. Minimum-volume ellipsoids: Theory and algorithms, volume 23. SIAM, 2016.
[2] Lattimore, T., Szepesvari, C., and Weisz, G. Learning with good feature representations in bandits and in RL with a generative model. ICML, 2020.
[3] Zimmert, J. and Lattimore, T. Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. COLT, 2022.
[4] Dylan J. Foster, Claudio Gentile, Mehryar Mohri, and Julian Zimmert. Adapting to misspecification in contextual bandits. NeurIPS, 2020.
[5] Liu, H., Wei, C.-Y., and Zimmert, J. Bypassing the simulator: Near-optimal adversarial linear contextual bandits. NeurIPS, 2023.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer for the time and effort spent reviewing our paper. As the discussion phase is about to end, we would like to make sure our responses have sufficiently addressed your concerns. We look forward to your feedback. | Summary: In this work, the authors characterize the problem of learning the presence of reward corruption in the linear bandit setting. They provide matching upper and lower bounds in the corrupted stochastic setting, and initiate the study on the corrupted adversarial setting, for which they obtain optimal scaling in the corruption level.
Not only that, the authors prove a general reduction that efficiently handles gap-dependent misspecification with corruption-robust algorithms. They show that linear MDPs with gap-dependent misspecification are efficiently learnable. While this reduction is general, interestingly, they rule out the possibility of obtaining the tightest rate for gap-dependent misspecification through it. This observation leads them to develop a specialized algorithm which, in the linear bandit setting, obtains the optimal rate. According to their argument, this resolves the open problem of Liu et al. (2023a).
Strengths: - Interesting results
- Deterministic algorithm cannot avoid suboptimal regret (Proposition 1)
- Matching upper and lower bound on the stochastic setting, by just changing deterministic sampling to stochastic.
- Solving an open problem of instance-dependent misspecified setting.
- Clearly state the limitations of previous works and their technical novelties.
- Easy to understand their contributions.
Weaknesses: - (Minor) The algorithms are not seriously different from the previous works as they mentioned, but this is just a minor point - every theoretical improvement is important.
- Not clear what they tried to say on page 9
- Why Theorem 6.2 shows that $\rho \leq \frac{1}{d}$ is not optimal?
- Impossibility result (from line 304): so basically what authors are trying to say is, that 'their' reduction is not applicable for a tighter result, right? It is not about any reduction from corruption to misspecification.
- No future works.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Unclear points in the weakness section.
- It would be great if authors could explain why the gap-dependent misspecification assumption (Assumption 1) is necessary.
### Minor
- Theorem G.1 in line 296 - is it correct?
- Corollary 6.2.1 in line 209 - it seems like it is the result for the MDP...
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations. It would be great if authors provide possible future works.
There is no potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. The global response includes potential future works, which we will incorporate into the future version. Your questions are answered below.
**Q1**: The algorithms are not seriously different from the previous works as they mentioned
**A**: Although our algorithmic framework inherits from previous works, the corresponding modifications offer new insights. For example, the impossibility result in Proposition 1 for deterministic algorithms leads to our randomized design in Algorithm 1. Moreover, our geometry-inspired bonus design in Algorithm 2 does not appear in previous papers, and it is the key to the tight $\sqrt{d} C_{\infty}$ regret. We believe such bonus design ideas could also inspire future work.
**Q2**: Why Theorem 6.2 shows that $\rho \leq \frac{1}{d}$ is not optimal?
**A**: In Theorem 6.2, we show that there exists an algorithm with which $\rho \leq \frac{1}{\sqrt{d}}$ suffices to ensure $d\sqrt{T}$ regret. This implies that the requirement $\rho \leq \frac{1}{d}$ is "too strong", and thus not optimal. The $\frac{1}{\sqrt{d}}$ factor, on the other hand, is optimal because of the lower bound given in Theorem G.2 (discussed in line 296-303).
**Q3**: Impossibility result (from line 304): so basically what authors are trying to say is, that 'their' reduction is not applicable for a tighter result, right? It is not about any reduction from corruption to misspecification.
**A**: Yes, the impossibility result only shows our reduction is not optimal. However, we generally believe that no reductions can guarantee $\rho \le \frac{1}{\sqrt{d}}$ suffices for $d\sqrt{T}$ regret, due to the $dC$ lower bound for corruption. This will be clarified in our revision.
**Q4**: Why the gap-dependent misspecification assumption (Assumption 1) is necessary?
**A**: Assumption 1 comes from Liu et al. (2023a), with more motivation provided in its Introduction. Existing work on misspecification tends to assume the misspecification level is uniform over all actions, which typically yields overly pessimistic results (a regret linear in the misspecification level). Assumption 1 is the only known assumption beyond realizability that is sufficient to obtain the minimax $d\sqrt{T}$ regret, as in the unmisspecified case. Understanding precisely what conditions on misspecification are necessary to achieve $d\sqrt{T}$ regret is an interesting open question.
We also believe that this assumption is often reasonable in real-world applications. In practice, approximating a decision-making problem as a linear bandit typically requires extensive modeling and feature selection to choose the linearization. In settings that are not perfectly linear, one would naturally focus on obtaining an accurate linearization for "good" or "typical" users, and would likely be willing to tolerate a less accurate model for "bad" or "atypical" users. This is precisely the intuition that is captured in Assumption 1.
**Q5**: Theorem G.1 in line 296 - is it correct?
**A**: Thank you for pointing that out. It is a typo and should be Theorem 6.2. We have fixed it.
**Q6**: Corollary 6.2.1 in line 290 - it seems like it is the result for the MDP...
**A**: Thank you for pointing that out. It is a typo and should be Corollary 6.1.1. We have fixed it.
---
Rebuttal Comment 1.1:
Comment: We thank the reviewer for the time and effort spent reviewing our paper. As the discussion phase is about to end, we would like to make sure our responses have sufficiently addressed your concerns. We look forward to your feedback. | Summary: This paper studies corrupted linear bandits. The authors propose four different metrics to evaluate the total corruption in Eq. (1). Many settings are considered in this paper. For stochastic LB, the proposed algorithm achieves a regret bound of $d\sqrt{T}+\sqrt{d} C_{\infty}$. For adversarial LB, the proposed algorithm achieves a regret bound in the order of $d\sqrt{T}+\sqrt{d} C_{\infty}$ or $d^{3}\sqrt{T}+d^{5/2} C$. The authors also consider gap-dependent misspecification, where the misspecification level of an arm $a$ can be evaluated by $\rho$ times the gap of arm $a$.
Strengths: See summary.
Weaknesses: **Weaknesses and Questions:**
1. At lines 107-109, the authors claim that the strong adversary is equivalent to the CM viewpoint. This doesn't seem right. For regret, the strong adversary is harder than the CM viewpoint. Thus, it is unfair and wrong to compare He et al. (2022) in the same way.
2. At line 131, adversarial linear bandits are discussed. However, no problem definition of this problem is introduced before line 131.
3. This paper studies the fixed action set, while the previous works He et al. (2022) and Foster et al. (2020) allow the action set to be chosen by an adaptive adversary, which is much harder than this paper. Table 1 is not fair. He et al. (2022) is for the adaptive adversarial viewpoint, which is totally different from the stochastic LB. For a fixed action set, the optimal regret without $C$ should be $\sqrt{d T \log k}$, where $k$ is the number of arms.
4. Assumption 1 is not very reasonable.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. Your questions are answered below.
**Q1**: The strong adversary (AA) seems not equivalent to the CM viewpoint.
**A**: The equivalence between the "strong adversary" in the AA viewpoint and the "strong measure" in the CM viewpoint is based on the equivalence between the following two processes (note that the discussion here only concerns the adversary in choosing the "corruption"; the adversary in choosing the "action set" is discussed in **Q3**):
Process 1 (AA viewpoint): In every round $t$, the learner first chooses an action $a_t$, then the adversary chooses the corruption $\epsilon_t$. The learner observes $r_t = a_t^\top \theta_t + \epsilon_t + \zeta_t$, where $\zeta_t$ is a zero-mean noise. Define $C=\sum_{t=1}^T |\epsilon_t|$.
Process 2 (CM viewpoint): In every round $t$, the adversary first chooses the corruption $\epsilon_t(a)$ for every action $a$, then the learner chooses an action $a_t$. The learner observes $r_t = a_t^\top \theta_t + \epsilon_t(a_t) + \zeta_t$, where $\zeta_t$ is a zero-mean noise. Define $C=\sum_{t=1}^T |\epsilon_t(a_t)|$.
In Process 1, the adversary chooses the corruption based on the learner's action $a_t$. In Process 2, although the adversary specifies the corruption "before" seeing the learner's action, it captures the same effect as Process 1 because the applied corruption depends on the chosen action $a_t$. In other words, the amount $\epsilon_t(a)$ in Process 2 is a "plan" of the adversary (set before seeing the learner's action $a_t$) for the amount of corruption under the assumption that the learner chooses $a$. After seeing $a_t$, the adversary simply applies the planned corruption $\epsilon_t(a_t)$.
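As a self-contained sanity check (a toy learner and adversary with hypothetical names, not taken from the paper), the two processes can be simulated side by side; the corruption experienced by the learner and the measure $C$ coincide round by round:

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, c = 4, 1000, 0.7
actions = np.eye(d)                      # canonical-basis action set

def aa_adversary(a):                     # Process 1: reacts after seeing a_t
    return c if a[0] == 1.0 else 0.0

def cm_plan():                           # Process 2: plan fixed before a_t
    return lambda a: c if a[0] == 1.0 else 0.0

C_aa = C_cm = 0.0
for t in range(T):
    a = actions[rng.integers(d)]         # the same learner in both processes
    C_aa += abs(aa_adversary(a))         # corruption chosen after a_t
    C_cm += abs(cm_plan()(a))            # only the chosen action counts
print(C_aa == C_cm)  # True
```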
The AA viewpoint is adopted by most previous works on linear bandits (Bogunovic et al., 2021) and linear contextual bandits (He et al., 2022). In this work, we argue that such results are equivalent to the standard adversary (i.e., adaptive adversary) with a fine-grained corruption measure (i.e., $C=\sum_{t=1}^T |\epsilon_t(a_t)|$) in the analysis. We believe this leads to a more unified treatment for the strong and weak adversary widely studied in the literature, which are usually separately discussed.
We hope this clarifies the equivalence between the two viewpoints. If there are still concerns, we would appreciate it if the reviewer could elaborate on the potential mismatch in the difficulty between the two viewpoints.
**Q2**: No definition of adversarial linear bandits before line 131.
**A**: The general linear bandit problem is introduced in lines 71-76. Then, in lines 77-79, we describe the stochastic and adversarial settings of it. The term "adversarial linear bandits" in line 131 simply refers to the adversarial setting of linear bandits. We will make this more clear.
**Q3**: This paper studies the fixed action set, while He et al. (2022) and Foster et al. (2020) allow adversarially chosen action set, which is much harder. Table 1 is not fair. He et al. (2022) is for the adaptive adversarial viewpoint, which is different from the stochastic LB.
**A**: In this paper, we focus on linear bandits with fixed action sets. Note that even in this simpler setting, for the stochastic case, there is no prior work achieving $\sqrt{d}C_{\infty}$ regret; for the adversarial case, there is no prior study on $C_\infty$ bounds. Our work obtains tight $\sqrt{d}C_{\infty}$ regret in both cases.
The purpose of Table 1 is to summarize known $C_{\infty}$ and $C$ bounds in linear bandits. He et al. (2022) is included because their algorithm gives the best $C$ bound in linear bandits, even though it works for the more general linear contextual bandit setting with adversarially chosen action sets. We will clarify this in our next version. In other related work discussions (e.g. Line 120-130), we will also make similar clarification that Foster et al. 2020, Takemura et al. 2021 study the more challenging linear contextual bandit setting.
**Q4**: For a fixed action set, the optimal regret without $C$ should be $\sqrt{dT\log(k)}$, where $k$ is the number of arms.
**A**: Technically, there are two widely adopted lower bounds for linear bandits with fixed action sets. One is $\sqrt{dT\log k}$, which is tight for $k\leq 2^d$; the other is $d\sqrt{T}$, which is tight for $k>2^d$ (see, e.g., Section 24.1, 24.2 of Lattimore and Szepesvari, 2020, for the second lower bound). For simplicity, we adopt the second lower bound. Our analysis for the stochastic setting (Theorem 4.1) can also give the $\sqrt{dT\log k}$ bound straightforwardly. For the adversarial setting, however, we do not obtain $\sqrt{dT\log k}$ though due to the limitation of our base algorithms.
(Lattimore and Szepesvari, 2020) Bandit Algorithms.
**Q5**: Assumption 1 is not very reasonable.
**A**: Assumption 1 comes from Liu et al. (2023a), with more motivation provided in its Introduction. Existing work on misspecification tends to assume the misspecification level is uniform over all actions, which typically yields overly pessimistic results (a regret linear in the misspecification level). Assumption 1 is the only known assumption beyond realizability that is sufficient to obtain the minimax $d\sqrt{T}$ regret, as in the unmisspecified case. Understanding precisely what conditions on misspecification are necessary to achieve $d\sqrt{T}$ regret is an interesting open question.
We also believe that this assumption is often reasonable in real-world applications. In practice, approximating a decision-making problem as a linear bandit typically requires extensive modeling and feature selection to choose the linearization. In settings that are not perfectly linear, one would naturally focus on obtaining an accurate linearization for "good" or "typical" users, and would likely be willing to tolerate a less accurate model for "bad" or "atypical" users. This is precisely the intuition that is captured in Assumption 1.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal.
---
Regarding A1: I disagree. In the AA perspective, the adversary can first observe the chosen arm $A_t$ and then decide whether $A_t$ is the arm it wants to corrupt or not. For instance, if the adversary aims to corrupt a specific dimension of $\theta^*$, it can choose to do so to maximize its profit, given that it knows $A_t$ beforehand. If $A_t$ is not known, the only option for the adversary is to add corruption to each arm and hope the algorithm picks the arm that has a high influence on the dimension it wants to corrupt.
---
Regarding A2, the term "adversarial linear bandits" in the bandit literature refers to a different setting. Please consider changing the name of "adversarial linear bandits".
---
Regarding A4, I believe that $\sqrt{dT \log k}$ is significantly better than $d\sqrt{T}$ regret. It might be beneficial to present the $\sqrt{dT \log k}$ results in your main theorem and clearly discuss the differences in settings and results compared with previous work.
---
Rebuttal 2:
Comment: Thank you for the comments! We would like to elaborate more as follows.
**Regarding the first concern:**
Our AA and CM viewpoints are exactly what you describe. However, we would like to draw the reviewer's attention to the definition of the corruption measure in the CM viewpoint: $C=\sum_{t=1}^T |\epsilon_t(a_t)|$. This definition only accounts for the corruption for the "chosen action" $a_t$, but not that for other actions $a\neq a_t$.
How does this property allow us to simulate a strong adversary in the AA viewpoint? Below, we illustrate it through a concrete example as in your response (below, we use "he" for the adversary and "she" for the learner). Consider a $d$-dimensional linear bandit with the action set being the canonical basis $e_{1:d}$. From the AA viewpoint, suppose the adversary wants to corrupt the $i$-th dimension of $\theta^*$, and his strategy can be described as the following:
*If $a_t=e_i$ is observed, then set $\epsilon_t=c$*;
*If $a_t\neq e_i$ is observed, then set $\epsilon_t=0$*.
Clearly, this is a strong adversary in the AA viewpoint since the adversary decides whether he wants to corrupt $a_t$, and to what extent he wants to corrupt it, after observing the chosen action $a_t$.
From the CM viewpoint, the equivalent adversary would specify the following "corruption function" $\epsilon_t(\cdot)$ *before* seeing $a_t$:
*$\epsilon_t(e_i)=c$, and $\epsilon_t(e_j)=0$ for $j\neq i$*.
After the learner chooses $a_t$, the learner experiences a corruption of $\epsilon_t(a_t)$.
We clarify two points:
1. The learner has exactly the same observations in the two viewpoints: if she chooses $a_t=e_i$, then she suffers a corruption of $c$; if she chooses $a_t\neq e_i$, then she suffers no corruption.
2. The total corruption $C$ is the same in the two viewpoints: In the AA viewpoint, $C=\sum_{t=1}^T |\epsilon_t|=\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}$. In the CM viewpoint, $C=\sum_{t=1}^T |\epsilon_t(a_t)|=\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}$.
One key observation is that: in the CM viewpoint, even though the corruption function might be a non-zero function (i.e., $\epsilon_t(a)\neq 0$ for some $a$), the amount of corruption counted towards $C=\sum_{t=1}^T |\epsilon_t(a_t)|$ is zero as long as $\epsilon_t(a_t)=0$. In other words, it is possible that $\epsilon_t(a)=0$ and $\epsilon_t(b)\neq 0$ for some arms $a,b$, which could simulate the AA strong adversary who decides NOT TO corrupt after observing $a_t=a$, but decides TO corrupt after observing $a_t=b$.
This might address the reviewer's concern, "*If $a_t$ is not known, the only option for the adversary is to add corruption to each arm and hope the algorithm picks the arm that has a high influence on the dimension it wants to corrupt.*" From the discussion above, we know that a CM adversary can simply do the following: set high corruption for the actions with high influence on the dimension, and set low/zero corruption for the actions with low influence on the dimension. Then, after $a_t$ is decided by the learner, the corruption level $|\epsilon_t(a_t)|$ will naturally adapt to the choice of $a_t$, i.e., the corruption will be high if $a_t$ has high influence on the dimension, and will be low/zero if $a_t$ has low influence on the dimension.
A mathematical conversion between the two viewpoints is as follows. Assume that the strategy of the AA strong adversary can be described as a function $\epsilon_t = f(h_{t-1}, a_t)$, where $h_{t-1}$ is the history up to time $t-1$ and $a_t$ is the chosen action at time $t$. Then in the CM viewpoint, the "corruption function" would be defined as $\epsilon_t(a) = f(h_{t-1}, a)$ for every $a$. Notice that the function $\epsilon_t(\cdot)$ depends only on the history up to time $t-1$.
**Regarding the second concern:**
If the corruption is always zero, our formulation (Line 74 - 79) is the same as the standard adversarial linear bandit setting formulated in Section 27 of Lattimore and Szepesvari (2020), where the linear reward vector $\theta_t$ changes over time. The only extension we make is that we consider additional reward corruption. We can refer to it as "adversarial linear bandits with corruption" to make things even clearer.
(Lattimore and Szepesvari, 2020) Bandit Algorithms.
**Regarding the third concern:**
Thanks for your suggestions. We will add the bound $\sqrt{dT \log k}$ to our Theorem 4.1. We will also discuss more about our bounds compared with previous works.
---
Rebuttal 3:
Comment: We thank the reviewer for the time and effort spent reviewing our paper. As the discussion phase is about to end, we would like to make sure our responses have sufficiently addressed your concerns. We look forward to your feedback.
---
Rebuttal 4:
Comment: Thank you for your response.
**Regarding the first concern,** I disagree with your statement. I believe the second point you made is incorrect. For example, with a probability of 1/2, the algorithm selects $a_{t} = e_i$. Assuming both AA and CM have the same influence on the algorithm, and CM has a corruption level $C$, then, since AA only corrupts the algorithm with a probability of 1/2, the corruption level of AA would be $C/2$. I look forward to your reply.
---
Rebuttal 5:
Comment: We thank the reviewer for the feedback.
Notice that in our previous response, we do not assume the type of algorithm the learner uses (i.e., whether it's deterministic or randomized). Therefore, the points we make still hold even when the learner uses a randomized algorithm.
Below we demonstrate that in the example you gave (i.e., the learner chooses $a_t=e_i$ with probability $1/2$), the corruption levels are the same in the two viewpoints.
**AA viewpoint:**
Adversary strategy: If $a_t=e_i$ is observed, then set $\epsilon_t=c$. If $a_t\neq e_i$ is observed, then set $\epsilon_t=0$.
Corruption level: $C=\sum_{t=1}^T |\epsilon_t|=\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}$.
Thus, $\mathbb{E}[C]=\mathbb{E}\big[\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}\big]=\frac{T|c|}{2}$, where in the last equality we use $\Pr\\{a_t=e_i\\}=1/2$.
**CM viewpoint:**
Adversary strategy: Set $\epsilon_t(e_i)=c$, and $\epsilon_t(e_j)=0$ for $j\neq i$ before seeing $a_t$.
Corruption level: $C=\sum_{t=1}^T |\epsilon_t(a_t)|=\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}$.
Thus, $\mathbb{E}[C]=\mathbb{E}\big[\sum_{t=1}^T |c|\mathbf{1}\\{a_t=e_i\\}\big]=\frac{T|c|}{2}$, where in the last equality we use $\Pr\\{a_t=e_i\\}=1/2$.
This shows that the corruption levels are the same in the two viewpoints.
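The expectation above can also be checked numerically. The following hedged sketch (toy parameters, not from the paper) draws the learner's coin flips and verifies that the corruption level concentrates at $T|c|/2$ in both viewpoints:

```python
import numpy as np

rng = np.random.default_rng(0)
T, c, trials = 500, 1.0, 2000
# a_t = e_i with probability 1/2; in both viewpoints, |c| is counted
# towards C exactly on the rounds where a_t = e_i
picks = rng.random((trials, T)) < 0.5
C = picks.sum(axis=1) * abs(c)
print(abs(C.mean() - T * abs(c) / 2) < 5.0)  # E[C] = T|c|/2
```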
**Potential source of confusion:** We point out that in the CM viewpoint, although there could be a "non-zero corruption function" in every round (e.g., $\epsilon_t(e_i)=c$ for all $t$ in our previous example), the corruption measure we consider is $C = \sum_{t=1}^T |\epsilon_t(a_t)|$, which only depends on $a_t$. Thus, even though $\epsilon_t(e_i) = c$ for every round $t$, such corruption only contributes to $C$ when $a_t = e_i$, and it does not contribute to $C$ when $a_t\neq e_i$. This makes it equivalent to the strong AA adversary whose corruption adapts to the choice of the learner.
We guess that the reviewer might be thinking about another corruption measure $C_\infty=\sum_{t=1}^T \max_a |\epsilon_t(a)|$, which accounts for the corruption for *every* (no matter chosen or unchosen) action. However, this is not what we want to show equivalence for with the strong AA adversary.
---
Rebuttal Comment 5.1:
Comment: Thank you for the detailed response. I recommend focusing solely on the Corruption Measure (CM) Viewpoint, as the AA Viewpoint introduces some confusion. The concepts would be more clearly presented without it. This paper makes a valuable contribution to the understanding of corruption across various metrics and presents solid regret bounds. One potential weakness of this paper is that it focuses solely on a fixed action set. I will be increasing the score to 6. Good luck!
---
Reply to Comment 5.1.1:
Comment: Thanks for your positive feedback and the adjustment for the score. We will make efforts to reduce the confusion to the reader. | null | null | Rebuttal 1:
Rebuttal: ## Global Response:
We thank all reviewers for their time and valuable feedback. As suggested, we summarize our paper here together with possible future directions. We will incorporate them into our future versions.
Our paper contributes to three research lines.
1. For stochastic linear bandits with corruption, we design an algorithm that achieves $\sqrt{d}C_{\infty}$ regret for the first time, filling a gap in the existing literature. We also construct a lower bound demonstrating that such regret is unattainable by any deterministic algorithm, which leads to our randomized design.
2. For adversarial bandits with corruption, previous works only focus on multi-armed bandits, while we move forward to the more challenging linear bandit setting. We achieve the tight $\sqrt{d}C_{\infty}$ regret and also show it is possible to obtain $poly(d)C$ regret. The novel geometry-inspired bonus design in Algorithm 2 does not appear in previous papers, and it is the key to the tight $\sqrt{d}C_{\infty}$ regret. We believe such a bonus design idea could also inspire future work.
3. For gap-dependent misspecification in linear bandits, Liu et al. (2023a) give an algorithm showing that $\rho \le \frac{1}{d}$ is sufficient to achieve $d\sqrt{T}$ regret, and ask what is the optimal rate of $\rho$. We fully resolve this open problem by designing an algorithm that achieves $d\sqrt{T}$ regret with $\rho \le \frac{1}{\sqrt{d}}$ (Theorem 6.2), and provide a matching lower bound (Theorem G.2). Going beyond linear bandits, we show that such gap-dependent misspecification could be generalized to RL and also give a reduction from corruption to gap-dependent misspecification. Our reduction ensures that $\rho \le \frac{1}{d}$ is sufficient for $\sqrt{T}$ regret in linear MDPs. This is the first RL misspecification assumption beyond realizability that leads to $\sqrt{T}$ regret bound without any additional misspecification term.
Several directions for future work follow from our paper. First, the current $C$ bound for adversarial linear bandits is $d^3\sqrt{T} + d^{\frac{5}{2}}C$ while the lower bound is $d\sqrt{T} + dC$, leaving a gap to be addressed. Second, obtaining a tight $\sqrt{d}C_{\infty}$ bound beyond linear bandits (e.g., linear contextual bandits, general contextual bandits, and RL) is interesting. Third, for gap-dependent misspecification, designing algorithms that achieve $\sqrt{T}$ regret for RL with $\rho \le \frac{1}{\sqrt{d}}$ could be a future direction. Moreover, finding other misspecification assumptions beyond realizability that lead to $\sqrt{T}$ regret without any additional term is also an interesting area to explore.
[Liu et al, 2023a] Liu, C., Yin, M., and Wang, Y.-X. (2023a). No-regret linear bandits beyond realizability. UAI. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Solving Inverse Problems via Diffusion Optimal Control | Accept (poster) | Summary: The paper addresses the limitations of existing diffusion-based inverse problem solvers, which typically frame signal recovery as a probabilistic sampling task. The authors propose a novel approach that redefines the generative process as a discrete optimal control task. Inspired by the iterative Linear Quadratic Regulator algorithm, this new framework named diffusion optimal control, can handle various differentiable forward measurement operators, including super-resolution, inpainting, and deblurring.
Strengths: 1. The paper introduces a novel framework based on optimal control theory to solve diffusion-based inverse problems, moving away from the traditional probabilistic sampling approaches. This is a significant theoretical advancement.
2. The framework addresses critical drawbacks of current methods, such as the intractability of the conditional likelihood function and dependence on score network approximations. This leads to more robust and potentially more accurate solutions.
Weaknesses: 1. The method involves complex mathematical formulations and optimal control theory, which may pose challenges for implementation and understanding by practitioners who are not familiar with these concepts. The need to compute Jacobian and Hessian matrices, as well as the regularized inverses, may lead to significant computational demands, particularly in high-dimensional settings.
2. The experiments are limited; evaluations on MRI reconstruction or other medical imaging tasks are missing. Including more diverse datasets and additional baseline methods would provide a more comprehensive evaluation.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What is the purpose of injecting control vectors u_t into the reverse diffusion process, and how do they influence the terminal state of the system?
2. How are the gradients V_x and Hessians V_xx of the value function used within the optimal control framework, and what is their significance?
3. Please indicate what is output of Algorithm 1.
4. Can you give some high level description about: How does using an adaptive optimizer for action updates improve the iLQR algorithm, and what impact does it have on the performance and efficiency of solving inverse problem tasks?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The method requires the forward measurement operator to be differentiable. In cases where this is not possible or practical, the applicability of the proposed framework may be limited.
The performance evaluations rely on specific pretrained models. The method’s robustness and performance with other pretrained models, or models trained on different data distributions, would need further investigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful that the reviewer appreciates both the theoretical and empirical results of our work, and we thank them for bringing several insightful shortcomings of our work to our attention. Below we respond to the reviewer's concerns on a point-by-point basis.
**Mathematical formulations are complex and potentially challenging to implement.**
We agree that the mathematical framework behind optimal control theory is sophisticated, and differs from that of standard diffusion models, which comes from nonequilibrium thermodynamics. However, our approach has several distinct advantages. First, we can treat the generative process as a black box dynamical system, which vastly abstracts the pre-existing mathematics in the reverse diffusion process. Practitioners may thus choose between standard diffusion modeling and optimal control-based modeling based on their expertise. Second, we can leverage extensive research from the optimal control community to improve diffusion-based inverse solvers. Third, we are able to sidestep the intractability of the conditional score function, as discussed in Section 4.
**Computing Jacobian and Hessian matrices and their inverse may have significant computational demands.**
We note that even a rank-one approximation of these matrices is sufficient, with further increases in rank providing quickly diminishing returns (see Table 4 in Appendix). Using modern randomized linear algebra libraries (e.g. randomized SVD), this brings the computation of the Hessian matrices to cost $\mathcal{O}(d)$, where $d$ is the data dimension. Therefore, the cost of our algorithm is equivalent to the cost of, e.g., performing a DPS step. (For more details, see a thorough complexity analysis in Appendix D.2.)
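To illustrate the idea on a toy quadratic (an illustrative sketch, not our actual implementation), a rank-one curvature approximation can be extracted from Hessian-vector products alone via power iteration; in practice each product costs one backward pass rather than forming the matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200
w = rng.normal(size=d)
H = np.diag(rng.uniform(0.1, 1.0, size=d)) + 5.0 * np.outer(w, w) / (w @ w)
# toy SPD "Hessian" with one dominant eigenpair

def hvp(v):                       # Hessian-vector product oracle
    return H @ v                  # in practice: autodiff, never forming H

v = rng.normal(size=d)
for _ in range(100):              # power iteration on the HVP oracle
    v = hvp(v)
    v /= np.linalg.norm(v)
lam = float(v @ hvp(v))           # top eigenvalue estimate
H_rank1 = lam * np.outer(v, v)    # rank-one approximation of H

top = np.linalg.eigvalsh(H)[-1]
print(abs(lam - top) / top < 1e-3)  # the dominant eigenpair is captured
```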
**Including more diverse datasets would provide a more comprehensive evaluation.**
Thank you for this insight. We agree, and additionally include results on ImageNet, and more nonlinear settings. Please see the main rebuttal and PDF.
**Purpose and mechanism of injected control vectors in the reverse diffusion process.**
The control vectors u_t are simply perturbations to the original unconditional diffusion process, and are widely used in conditional sampling with diffusion models. Analogues include the conditional score term in DPS (Chung et al., 2022), the classifier gradient in classifier guidance (Dhariwal and Nichol, 2021), or the classifier-free guidance term (Ho and Salimans, 2022).
**How are $V_x$ and $V_{xx}$ used in the optimal control framework?**
In optimal control, $V(x)$ is the value function, which intuitively represents "desirability" of each state x in the dynamical system to the user. In this case, $x$ is the reverse diffusion iterate. $V_x$ and $V_{xx}$ are simply derivatives of this value function. We can think of $V(x)$ as the negative loss of the optimal control system. Therefore we use the derivatives $V_x$ and $V_{xx}$ much like they are used in an optimization framework to guide a solver towards (locally) optimal solutions.
**Please indicate the output of Algorithm 1.**
The output of Algorithm 1 is the perturbed trajectory $\{x'_t\}_{t=1}^T$, whose final iterate is the solution to the inverse problem. Thank you for this comment; we have clarified this point in the manuscript.
**High level intuition on the adaptive optimizer for action updates.**
Note that the default choice of optimizer for iLQR implementations is the backtracking line search. We instead use an adaptive optimizer, and motivate this design decision with three main observations.
First, recall that given the update direction $v$, the backtracking line search finds the optimal step size $\lambda$ by applying $u' = u + \lambda v$ to our reverse diffusion process. Evaluating each candidate $\lambda$ requires re-computing the inverse problem loss under the proposed $u'$, which means re-running the entire reverse diffusion process, and this is quite expensive.
Second, the backtracking line search is designed for generally smooth landscapes and is less amenable to highly nonconvex settings. Since each marginal density of the diffusion process is essentially the data distribution convolved with a Gaussian, the resulting landscape is highly nonconvex (especially in the low-noise regime), making the line search ill-suited for our setting.
Third and last, since the diffusion process occurs on images which can be very high dimensional, our optimization space is also very high-dimensional. Adaptive optimizers such as Adam, Adagrad, RMSprop, etc. are specially designed for deep learning settings where there is a similarly high dimensional optimization space. Therefore, it is a natural choice for our problem setting.
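As a concrete sketch of this design choice (a self-contained toy, assuming a plain scalar Adam and a quadratic surrogate loss; none of these names come from our implementation):

```python
import numpy as np

class AdamActionUpdater:
    """Hypothetical sketch: update the stacked control actions u with Adam,
    so each iLQR iteration needs one rollout rather than the multiple loss
    re-evaluations a backtracking line search would require."""
    def __init__(self, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = 0.0, 0.0, 0

    def step(self, u, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias-corrected moments
        v_hat = self.v / (1 - self.b2 ** self.t)
        return u - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy use: drive a scalar action toward the minimizer of 0.5 * u^2.
opt = AdamActionUpdater(lr=0.05)
u = 5.0
for _ in range(2000):
    u = opt.step(u, grad=u)   # gradient of 0.5 * u^2 is u
```

Note that each `step` call consumes a single gradient, i.e., one rollout of the dynamics, regardless of how aggressive the effective step size is.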
---
Rebuttal Comment 1.1:
Comment: Thanks for providing detailed explanation. I changed my score to be 5. | Summary: This paper proposes diffusion optimal control that solves inverse problems via posterior sampling by combining the power of a pre-trained unconditional diffusion model and the iterative Linear Quadratic Regulator algorithm to produce optimal controls that steer the reverse diffusion process to correctly recover the original signal.
The framework is general and able to handle any differentiable forward measurement operator and establishes a new baseline in image reconstruction.
Strengths: * The idea of augmenting the reverse diffusion sampling process with a perturbation control is quite novel and general for arbitrary cost functions, although the paper focuses specifically on the cost for posterior sampling.
* The writing is generally good (see more comments regarding writing in questions). It is concise and to the point with good intuitions provided.
* Efforts (e.g. low-rank approximations) have been made to bring down the computational cost in iLQR for the high-dimensional image setting.
* The empirical performance of the proposed method is strong and establishes new state-of-the-art results.
Weaknesses: * The runtime of the proposed method seems high and is not much discussed. iLQR is a global-in-time iterative method that could potentially require a large number of iterations to converge (and all nice things discussed in Section 4. rely on this convergence). On top of that, for each iteration, there needs to be $\Omega(T)$ matrix solves which can be quite slow given the dimension of the images (even with techniques like low-rank approximation). It would be interesting to see ablation studies on the effect of num_iter in Algorithm 1. It would be also more convincing to report the runtime of each method in Table 1.
* There is no analysis of the approximation error of iLQR (the first and second-order Taylor approximations) in the studied setting. Specifically, it seems to me that a lot of heavy lifting is done when the control $u_t$ is designed to be only a perturbation of the reverse diffusion step. For instance, does this imply that the value function is smoother (hence the Taylor approximation is more accurate) when parameterized by $u_t$?
Technical Quality: 3
Clarity: 3
Questions for Authors: * What is the rough runtime of each method in Table 1?
* Notations such as $p_t(x|y)$ are confusing to me. The subscript $t$ on $p$ suggests a family of distributions but I think that's not the case. My understanding of the randomness is the following. First, the random variable $x_0$ is drawn from the clean image distribution. Then $y = A(x_0) + \eta$. In parallel, we also have random variables $x_t$ obtained deterministically from $x_0$ by evolving along the ODE of the forward diffusion. A better notation in my opinion is to put the subscript $t$ on $x$, like $p(x_t|y)$.
* In (29), what is the meaning of $\nabla_{x_t} \log p(y|x_0)$?
* Line 149, what do you mean by "produces a feasible solution"? What's the meaning of being feasible?
* The texts around line 160 are hard to parse. What is the notation $p(x_0|x_t,x_0)$? What is the $x_0$-centered marginal?
* Line 176, why is $\log p(y|x_t) = \log p(y|x_0)$ an assumption, not a consequence?
* In the paragraph of Line 204, I'm confused about why the diffusion model can be taken to have randomly initialized weights. This does not seem to result in any meaningful application, since there is no information about the image distribution. For instance, in Figure 6, the produced result looks even worse than the input $y$.
Minor comments
* Line 82, what is $\theta$?
* In Section 2.3, it would be good to include the dimension of each variable. In (13), it would be good to clarify that $k, K$ all depend on $t$.
* Algorithm 1 appears unreferenced in the main text. In the input, what is $x_T$? Is it just drawn from a Gaussian? What is the output?
* Line 117, $\ell_t(x_t,u_t) = 0$, not just not depend on $x_t$. It would also be good to say here what exactly is $\ell_0$ for input/output perturbation controls instead of defining it later.
* Theorem 4.1 is missing transitions for presenting the conclusion (29).
* The values of $\alpha$ in the two theorems appear to be missing a factor of $2$.
* In (32), it should be $\widetilde{x}_{t-1}$ on the left side of the equation.
* Line 253, there is a parenthesis not closed.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: There's not much discussion about the limitations in the paper. There is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate the reviewer's positive assessment of our theoretical contributions, their in-depth reading of our manuscript, and their suggested improvements. We respond to the comments in detail below.
**The runtime of Algorithm 1 seems high.**
Without taking any approximations of the Hessian and Jacobian matrices in Eqs. 19-25, Algorithm 1 will indeed be relatively costly, compared to the other algorithms we compare against in Table 1. However, we maintain that our method is not significantly more expensive than competing methods. For details, see computational complexity analysis in Section D.3. Indeed, under the $T=50$ configuration, our method is slightly more expensive than competing methods. That being said, we run an equivalent runtime/flops budget in the Ours (T=20) case in Table 1, where we restrict our method to 1000 NFEs, which take up the majority (~97%) of the runtime of our algorithm. We still show significant gains over DPS, and comparable performance against the most recent state-of-the-art methods on FFHQ256-1K. Empirically, we find that our method has a similar runtime to DPS on an A6000 GPU (130s vs 125s).
**iLQR could require a large number of iterations to converge.**
Indeed, iLQR could take many iterations to converge. However, we observe near convergence in nearly all of the settings we consider in our work. Moreover, true convergence is not necessarily desirable, since the forward operator measurements $y$ are noisy: a fully converged iLQR method would overfit to the noise and produce inferior solutions.
**...(and all nice things discussed in Section 4. rely on this convergence).**
We respectfully disagree. A central claim in our paper, and in Theorems 4.1-4.3 in Section 4, is the ability to compute the conditional score function $\nabla_{x_t} \log p(y | x_0)$. This claim does not rely at all on the convergence of the iLQR algorithm. In fact, this quantity is a fundamental property of the backward pass of the iLQR algorithm (the second loop in Algorithm 1). Therefore, we obtain the true $\nabla_{x_t} \log p(y | x_0)$ in every pass of the iLQR algorithm, including the first step.
**Furthermore, each iteration needs to be $\Omega(T)$ matrix solves which can be quite slow.**
This is true. We agree with the reviewer's statement. However, the constant swallowed by the $\Omega$ notation is important, and rather small here. After an ablation study, we found (somewhat surprisingly) that even a rank-1 approximation of the relevant matrices is sufficient to obtain competitive results on our benchmarks (Table 4). Under a rank-1 approximation, many of the matrix operations are simply $\mathcal{O}(d)$, where $d$ is the size of the image.
**It would be interesting to see ablation studies on the effect of num_iter in Algorithm 1.**
This is a good point. We provide an ablation study on the FFHQ256-1K dataset with the super-resolution task, letting $T=50$ and $k=1$. Performance is evaluated with the LPIPS metric.
| num_iter | 1 | 5 | 10 | 25 | 50 |
|-|-|-|-|-|-|
| LPIPS | 0.491 | 0.411 | 0.322 | 0.236 | 0.171 |
**No analysis of the approximation error of iLQR (the first and second-order Taylor approximations) in the studied setting.**
We respectfully disagree. We do analyze the approximation quality of our proposed iLQR Algorithm 1 in Theorems 4.1 and 4.3. Here, we show that, in the deterministic setting, the solution $\mu_t$ is precisely the desired conditional score $\nabla \log p(y | x_0)$, which is the quantity desired in DPS-based solvers at each step. Therefore, in the deterministic setting, the approximation error arises entirely from the discretization error of the numerical solver for $x_0$ itself. In the stochastic setting, the approximation error will also come from the randomness of the diffusion process.
**What is the rough runtime of each method in Table 1?**
Below we report the approximate runtimes for each method on an NVIDIA A6000 GPU. The first two are ours with different choices of $T$.
| Ours (T=50) | Ours (T=20) | DPS | MCG | PSLD | DDNM | DDRM | Score-SDE |
|-|-|-|-|-|-|-|-|
| 259s | 130s | 125s | 123s | 251s | 122s | 35s | 50s|
**Elaboration on $p_t(x|y)$ notation.**
In general, $p_t(x|y)$ and $p_t(x)$ *do* come from a family of distributions, and refer to the marginal distributions of the conditional (resp. unconditional) diffusion process at time $t$. Each $x_t \sim p_t(x|y)$ can be obtained, given $x_0$, via $x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\, \epsilon$, where $\alpha_t$ is the noise schedule, $x_0 \sim p(x | y)$ (the distribution of our solution set), and $\epsilon \sim \mathcal{N}(0, \mathbf{I})$.
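A quick numerical sanity check of this marginal (a schematic, assuming the standard variance-preserving form; `alpha_bar` is a hypothetical schedule value, and `x0` merely stands in for draws from $p(x|y)$):

```python
import numpy as np

# Under the variance-preserving marginal, x_t keeps unit variance
# whenever x_0 has unit variance, for any schedule value in (0, 1).
rng = np.random.default_rng(0)
alpha_bar = 0.7                       # hypothetical noise-schedule value at time t
x0 = rng.standard_normal(100_000)     # stand-in for draws from p(x|y)
eps = rng.standard_normal(100_000)
xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps
```

The empirical variance of `xt` stays near 1, illustrating why each $p_t(x|y)$ is a genuine member of the family of marginals rather than a single fixed distribution.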
**In (29), what is the meaning of $\nabla_{x_t} \log p(y|x_0)$?**
We apologize for the confusing notation. This is the derivative of the conditional log-likelihood $\log p(y | x_0)$ with respect to $x_t$, taken through the dependence of $x_0$ on $x_t$ in the reverse diffusion process. We thank you for this question and have clarified this notation in the text immediately following (29).
**What is a feasible solution in Line 149?**
We mean feasibility in the constrained optimization sense – a solution is feasible when it satisfies all the constraints of the problem, i.e., $y \approxeq f(x)$. Since our proposed algorithm is closed loop (i.e., changes at each step $t$ depend on the final error at $t=0$), the algorithm can make global modifications to the diffusion trajectory at each $t$. To our knowledge, this is not possible for general inverse problems with competing methods like DPS, MCG, etc.
---
Rebuttal 2:
Comment: **Line 176: Why is $\log p(y|x_0) = \log p(y|x_t)$ an assumption, not a consequence?**
Indeed, $\log p(y | x_t) = \log p(y | x_0)$ is a consequence under the deterministic ODE dynamics. However, we also consider the setting of DPS, where the process is stochastic. Under this setting, there may be more than one $y$ (and thus $x_0$) associated with a given $x_t$, since $p(x_t|x_0)$ is Gaussian and supported everywhere. In this case, the equality is no longer guaranteed, but is assumed in many prior works (e.g., DPS).
**Line 204: What is the purpose of this experiment with randomly initialized weights?**
Indeed, this application is itself not meaningful. We meant to demonstrate that DPS samplers *require* a well-approximated score model to function properly, whereas we do not. Of course, the performance of our model also improves with a better diffusion prior, but our method deteriorates more gracefully than DPS when the score is not well approximated (i.e., in less represented parts of the dataset). This can be useful when using a general inverse solver (e.g., an ImageNet or foundation model) to solve for an image from an unknown or poorly approximated distribution.
To further illustrate our point, we also demonstrate that the same trend occurs even when the diffusion model weights are initialized from a pretrained model, but from a different distribution than the source image. In Figure 10 in the rebuttal PDF, the right hand block shows an inverse problem setting for reconstructing ImageNet images, using an FFHQ model. As expected, our proposed method outperforms DPS on this out-of-domain inverse problem setting.
---
Rebuttal Comment 2.1:
Title: Response to authors
Comment: Dear authors, thank you for the detailed response and the additional experiments. This clarifies most of my questions. I would like to keep my positive score unchanged. | Summary: This paper proposes a new approach to conditional generation tasks through score-based diffusion models, with a focus on inverse problems. As an alternative to using the likelihood $p(y | x_t)$ to guide the time-reversed SDE towards the posterior distribution, the authors reformulate this as an optimal control problem. Starting from the ODE-based time-reversed flow for the unconditioned prior, the authors derive a controller based on the iLQR algorithm to guide the particles towards high posterior probability regions. The authors provide theory to demonstrate that the optimal guidance coincides precisely with the desired conditional scores. They demonstrate the method on a number of benchmarks including image inpainting and other inverse problems.
Strengths: The paper is well written and very clear. The method appears novel and addresses a legitimate challenge in conditional diffusion models. As the authors acknowledge: optimal control formulation of diffusions exist, but not (to my knowledge) in the context of guiding conditional diffusion models. The theoretical results provide a sound justification of the validity of the approach. The numerical results demonstrate that it is competitive in terms of accuracy compared to baseline, established methodology.
Weaknesses: The main weakness is the sheer computational cost of the algorithm, the need to compute very expensive hessians drastically limits the practical use of this method. The authors suggest a number of low rank approximations to mitigate this, but it is unclear how much is lost by introducing them. One point of question is the interplay between the number of diffusion steps $m$ and $T$. As $m\rightarrow \infty$, for $T$ fixed and large we expect that the baseline conditional diffusion model will improve significantly in accuracy. Generally, I feel that the configuration of the baseline has not been explored (or if it has, it has not been reported carefully). Similarly, the authors claim that they have done equivalent budget analysis in the experiments -- I could not find the details of this: is it the case that the computational cost is the equivalent? Have the author really explored the hyper-parameter space for these methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can the authors provide some insight on how to choose the key parameters $(m, n, k)$? -- does the optimal control method allow substantially smaller $m$? When does one approach become more computationally effective than the other? I can imagine, when $m$ is sufficiently small, that PSLD, DPS will start to outperform this method with comparable computational cost.
Minor comments: the metrics LPIPS, PSNR, SSIM are reported, but at no point are these explained, nor are references provided. These are well known in some communities, but not for the wider readership. Small typos around references were found.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation is the large computational cost of this methodology. This has been identified and acknowledged. The authors have claimed to do an equivalent budget analysis, but this maybe needed a bit more details (wall-clock time, etc). Potential societal impacts are addressed in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work, and their insightful critique. Below we provide a point-by-point response to the reviewer's discussion points.
**Main weakness is the computational cost of the algorithm, e.g., computing Hessians. Approximation error is not well understood.**
Indeed, the Hessians would be very difficult to compute in their entirety. However, we want to mention that we do investigate the computational trade-off that occurs when approximating the Hessian $V_{xx}$. Our primary technique is the randomized low-rank SVD (Halko et al., 2011), which provides very strong approximation guarantees when low-rank structure exists in the matrix. Indeed, the Hessians of overparameterized neural functions are well known to be low rank (Sagun et al., 2017), and we also verify this empirically in Table 4 in the Appendix, where we see that algorithm performance does not deteriorate with sparser (lower-rank) representations. Moreover, even letting the rank $k = 1$ obtains very strong performance (see also Table 1). We generally found in our experiments that a low-rank approximation is sufficient to impose the quadratic trust-region optimization on our model, and does not meaningfully deteriorate performance.
**Understanding the interplay between the number of diffusion steps $m$ and $T$.**
We generally find that, fixing a computational budget (e.g., 1000 NFEs), the number of diffusion steps and number of iLQR iterations can quite significantly affect the final outcome of the model. The key observation is that the iLQR objective optimizes strictly the inverse constraint, whereas the diffusion model acts as a regularizer. With a fixed budget, increasing one will come at the cost of the other. Therefore, if the measurement is very noisy, or the solution is known to be well-supported by the diffusion prior distribution, then one should take a large number of diffusion steps, and fewer number of iLQR iterations. Conversely, if the measurement is noise free, or the prior is not very informative, then the reverse should be done.
**Equivalent budget analysis. Is the computational cost equivalent?**
We perform equivalent budget analysis in two ways. First, theoretically, our Algorithm 1 costs $\mathcal{O}(nm(k^3 + kd^2))$ FLOPS and $\mathcal{O}(mnk)$ neural function evaluations (NFEs). Perhaps interestingly, we found that NFEs dominate the wall-clock time of the algorithm, accounting for ~97% of it. We let $k=1$ in all reported experiments outside of our ablation study in Table 4. More details can be found in Appendix D.2.
Empirically, the results of our equivalent budget analysis are summarized in the Ours (T=20) entry of Table 1. As discussed, neural network calls take up ~97% of the wall-clock runtime of our solver, so we measure computational budget by the number of neural function evaluations (NFEs). DPS, MCG, DDNM, and DDRM allow a computational budget of 1000 NFEs, and most other methods in Table 1 follow a similar guideline. The "Ours (T=20)" entry, which runs 50 iLQR iterations with T=20, therefore satisfies an equivalent budget. We see that under this restricted budget our method beats several prominent methods, including DPS, while comparing favorably against other state-of-the-art methods.
**Have the authors really explored the hyper-parameter space for these methods? Can the authors provide some insight on how to choose the key parameters $(m, n, k)$?**
Please see Tables 4, 5, and 6 for an in-depth exploration of the main hyperparameters of our method.
We assume the reviewer is using the notation from Appendix D.2 for $(m, n, k)$. Fixing a computational budget $\mathcal{N}$, there is a clear trade-off between the number of diffusion steps $m$ and iLQR iterations $n$, since they are inversely related. More diffusion steps bias the model towards the diffusion prior, whereas more iLQR iterations bias the model towards the observation $y$.
**When $m$ is small, DPS and PSLD should outperform this method with comparable cost.**
We in fact observe the reverse phenomenon with $m$! In other words, our model actually outperforms DPS at low $m$. We conduct the simple comparison below on the super-resolution task on FFHQ. We fix $n = 50$, and report LPIPS.
| Algorithm / $m$| 1 | 5 | 10 | 25 | 50 | 100 |
|-|-|-|-|-|-|-|
| Ours | 0.643 | 0.472 | 0.277 | 0.185 | 0.171 | 0.169 |
| DPS | 0.927 | 0.799 | 0.485 | 0.395 | 0.354 | 0.331 |
**The metrics (e.g., LPIPS, PSNR, SSIM) are mentioned but not described.**
Thank you for pointing this out. We agree that this can be confusing and have added references for the LPIPS, PSNR, and SSIM metrics.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for their detailed response, and clarifications which were insightful.
Taking into account the new information provided, I will increase my score. | Summary: The paper uses the optimal control theory to solve the diffusion posterior sampling problem by iterative Linear Quadratic Regulator (iLQR) algorithm. The method could be utilized to solve both linear and nonlinear inverse problems. Experiments on MNIST and FFHQ demonstrate the outperformance of the proposed method.
Strengths: 1. This paper is well-written, with a good summary of previous methods and their shortcomings.
2. The proposed method that interprets the reverse diffusion process as an uncontrolled non-linear dynamical system is novel. Theoretical support is provided to verify the algorithm.
Weaknesses: 1. The method is well-backed but might be computationally exhausting.
2. The experiments are limited. Quantitative results on different datasets and nonlinear inverse problems are lacking.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. As shown in Algorithm 1, the method's time complexity is $O(T)$ in each iteration. Although the $T$ is relatively small $(=50)$ in the experiments, num_iters $\times T$ would be large, e.g. $50\times 50 = 2500$ as shown in Table 2 in the appendix. Also, the initialization of $\{x_T'\}$ requires $T$ NFEs (number of function evaluations). Is this correct? How about the computational efficiency of the proposed method? I would like to see a more detailed comparison of the method with other baselines like DPS in terms of time.
2. More baselines need to be compared such as [1], [2], [3] and [4]. The settings in these works might be a bit different. Some settings might be different. Can you clarify the proposed method's advantage over these baselines?
3. Previous methods like DPS have done extensive experiments on both linear and nonlinear inverse problems across both FFHQ and ImageNet datasets. However, the experiments in the paper seem to be somewhat limited. I have two questions about the experiments. 1) Since there are only quantitative results for linear inverse problems (note that the results in Table 1 are all linear), can you clarify the proposed method's advantages in nonlinear problems such as phase retrieval, nonlinear deblurring, and so on? 2) Can you show more results on ImageNet, which is a broader dataset that contains more than one domain, such as faces in FFHQ?
[1] Zehao Dou, and Yang Song. Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective, ICLR 2024
[2] Morteza Mardani, Jiaming Song, Jan Kautz, Arash Vahdat. A Variational Perspective on Solving Inverse Problems with Diffusion Models. ICLR 2024
[3] Jiaming Song, Arash Vahdat, Morteza Mardani, Jan Kautz. Pseudoinverse-Guided Diffusion Models for Inverse Problems. ICLR 2023
[4] Zhu, Y., Zhang, K., Liang, J., Cao, J., Wen, B., Timofte, R., and Van Gool, L. Denoising diffusion models for plug-and-play image restoration. CVPR 2023
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes. They mentioned limitations and impacts in the appendix, which looks good to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their appreciation of the optimal control perspective, and for their insightful discussion and bringing many recent works to our attention. We respond to comments in detail, on a point-by-point basis below.
**The method is well-backed but might be computationally exhausting.**
We maintain that our method is not significantly more expensive than competing methods. For details, see computational complexity analysis in Section D.3, where we show that iteration-for-iteration, our method has similar costs to DPS. Indeed, under the $T=50$ configuration, our method will run for more iterations. Moreover, under the $T=20$ configuration in Table 1, our method has a very similar runtime as DPS (130s vs 125s on an NVIDIA A6000 GPU).
**Quantitative results on different datasets and nonlinear inverse problems are lacking.**
Thank you for this suggestion. We have added experiments on ImageNet 256 x 256-1K and the phase retrieval and nonlinear blurring inverse problems, and find that our algorithm compares very favorably against existing methods.
**Algorithm 1 takes 2500 NFEs in the $T=50$ case in Table 1.**
Indeed, the reviewer is correct. We see that reporting only $T$ may not show the full picture, and have updated Table 1 to show NFEs. The new Table 7 (see main rebuttal PDF) displaying ImageNet results already shows NFEs.
**What is the computational efficiency of the proposed method?**
While the full matrix computations in equations 19-25 would be expensive to compute in full, we find that low rank approximations are very effective and result in minimal deterioration of the algorithm performance (for more details, see Appendix D.3 and specifically Table 4). Moreover, we find that the algorithm has the same computational complexity as most existing algorithms (e.g., DPS), and is similarly dominated by neural network calls (i.e., 97% of the wall-clock time is dominated by NFEs). Therefore, we also use NFEs to measure runtime.
**I would like to see a more detailed comparison of the method with other baselines like DPS in terms of time.**
Under the Ours ($T=20$) configuration in Table 1, we take the number of iterations to be $50$, resulting in $1000$ NFEs. Under this computational budget, equivalent to the other methods in Table 1, we show that we still outperform prominent methods such as DPS and DDRM, while comparing favorably to state-of-the-art algorithms. Finally, we verify that the computational budget is indeed equivalent by comparing our method to DPS and MCG with $T = 1000$. On an NVIDIA A6000 GPU, all three methods take ~120s to run.
**Please discuss advantages over [1], [2], [3] and [4].**
We thank the reviewer for bringing these works to our attention. After careful study, we note that [1, 3, 4] all rely on Tweedie's formula to predict an estimated $x_0$ given $x_t$, in order to formulate the correction step at each time of the reverse diffusion process. Therefore, these methods all suffer from the same pitfall as DPS (and MCG, and other algorithms in the same vein), where $\nabla \log p(y | x_0)$ must be approximated.
We do observe that [2] stands out from the other three in that it does not require this approximation. However, the proposed variational framework is itself intractable in its inner optimization loop, and it requires a stop-gradient on the score network to maintain tractability. Overall, these related works are important to discuss, and we have added these references to our manuscript.
---
Rebuttal 2:
Comment: I thank the authors for their hard work on the rebuttal and appreciate the additional results. However, I still have concerns about two points:
- Insufficient baselines: While I agree that DPS-like baselines share the same issues due to the use of Tweedie's formula, recent works like FPS have shown significant performance improvements. Additionally, Resample (https://arxiv.org/abs/2307.08123, ICLR 2024) is an advanced method of PSLD, as both approaches are latent-based.
- Unsatisfactory results: In Table 1 of the paper, the proposed method (T = 20) does not outperform either PSLD or DPS and is not even competitive. I wouldn't expect a significant difference between the FFHQ (Table 1 in the paper) and ImageNet datasets (in the rebuttal PDF) to change this outcome.
Given these considerations and other reviewers' comments, I retain my score. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoroughness and diligence in reading our manuscript. We received a lot of sound, constructive criticism and positive feedback. This guided our revisions and further experiments in this rebuttal period, and we believe that the paper has meaningfully improved as a result. Below, we summarize some common assessments (good and bad) shared by multiple reviewers. We then address each reviewer's concerns on a point-by-point basis below.
**Strengths:**
All reviewers noted the novelty of our application of the well-known iLQR algorithm from optimal control theory to diffusion modeling. Reviewers VcWe, KZ9Z, GqZF, and L2Ru found the theoretical contributions sound and compelling. KZ9Z, GqZF, and u6BH found the writing generally clear, and all reviewers provided suggestions that further improved the readability of our work.
**Weaknesses:**
**Lack of further empirical experiments.**
Many reviewers felt that, while performance is very strong on FFHQ, it is important to validate our algorithm on other datasets and tasks so as to understand the generalizability of the approach. Taking this feedback into account, we extend the empirical results of our algorithm on two fronts: **ImageNet experiments** and **further nonlinear tasks**. In both settings, we constrain the number of function evaluations (NFEs) of our algorithm to 1000, in line with other works in these settings.
In Table 7 and Figure 7 in the attached rebuttal PDF, we demonstrate very favorable performance on ImageNet compared to existing works. We note that some works (e.g. PSLD) cannot be applied to ImageNet, since there do not exist unconditional latent ImageNet models, and are therefore absent from our benchmark.
In Table 8, we compare our model to DPS on two nonlinear inverse problems, phase retrieval and nonlinear deblurring, following the setup in DPS (Chung et al., 2022). We find that we improve on the baselines set by DPS on the phase retrieval task, and are on par with DPS on the nonlinear deblurring task.
Finally, we further corroborate our claim that our proposed algorithm is more robust to approximation errors in the neural score function. We demonstrate an out-of-domain inverse problem setting, where the model is tasked to recover an image that is out-of-distribution to the training distribution of the pretrained model. In Figure 10 in the rebuttal PDF, the right hand block shows an inverse problem setting for reconstructing ImageNet images, where the diffusion model is initialized with a pretrained model on the FFHQ dataset. As expected, our proposed method outperforms DPS on this out-of-domain inverse problem setting.
Overall, these results round out the empirical evaluation of our work, demonstrating that our proposed algorithm is capable of obtaining high quality solutions to inverse problems on a bevy of datasets and problem settings.
**Concerns on computational cost.**
Reviewers were also concerned by the computational cost of the algorithm. Indeed, optimal control was designed for generally simpler, low-dimensional systems, and many computations in a vanilla iLQR algorithm, such as the Hessians, would be very difficult to compute in their entirety. However, we do investigate the computational trade-off that occurs when approximating the Hessian $V_{xx}$. Our primary technique is the randomized low-rank SVD (Halko et al., 2011), which provides very strong approximation guarantees when low-rank structure exists in the matrix. Indeed, the Hessians of overparameterized neural networks are well known to be low rank (Sagun et al., 2017), and we verify this empirically in Table 4 in the Appendix, where we see that algorithm performance does not deteriorate with sparser (lower-rank) representations. Moreover, even letting the rank $k = 1$ obtains very strong performance (see also Table 1). We generally found in our experiments that a low-rank approximation is sufficient to impose the quadratic trust-region optimization on our model, and does not meaningfully deteriorate performance.
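For readers less familiar with the technique, here is a minimal NumPy sketch of the randomized low-rank SVD of Halko et al. (2011) applied to a synthetic exactly-low-rank matrix. This is illustrative code, not the paper's implementation; the matrix, rank, and oversampling values are our own choices for the demo.

```python
import numpy as np

def randomized_svd(A, k, n_oversample=5, seed=0):
    """Rank-k randomized SVD (Halko et al., 2011): sample the range of A
    with a Gaussian test matrix, orthonormalize, then take an exact SVD
    of the small projected matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + n_oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis approximating range(A)
    B = Q.T @ A                      # small (k + p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k]

# Exactly rank-3 "Hessian-like" test matrix.
rng = np.random.default_rng(1)
H = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 200))
U, s, Vt = randomized_svd(H, k=3)
err = np.linalg.norm(H - (U * s) @ Vt) / np.linalg.norm(H)
print(err)  # near machine precision, since H is exactly rank 3
```

When the target matrix truly is (near) low rank, the relative error of the rank-$k$ reconstruction is tiny, which is the regime the rebuttal argues holds for the Hessians of overparameterized networks.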
Many also noted that we only report $T$, not NFEs, in Table 1, which can be misleading about the runtime of our algorithm. Our original intention in displaying the time steps $T$ in Table 1 was to show the stability of our algorithm even at very low time steps. This is a nice property to have, and can be useful in certain scenarios. For example, consistency models (Song et al., 2023), progressively distilled models (Salimans et al., 2022), adversarially finetuned diffusion models (e.g., Imagine-Flash, Kohler et al.), and even regular diffusion models sampled with certain solvers (e.g., DPM-Solver, Lu et al., 2022) are known to be unstable at larger $T$. Methods like DPS, PSLD, and MCG will thus struggle in this regime, and fail to find good solutions to the inverse problem at hand. That being said, we do agree that listing only $T$ is a somewhat unfair representation of our algorithm, and have adjusted the notation in Table 1 in line with Table 7 in our rebuttal PDF. In particular, Ours ($T=20$) now reads Ours (NFE = 1000) and Ours ($T=50$) now reads Ours (NFE = 2500), and all other methods in Table 1 also list NFEs, where applicable.
Pdf: /pdf/18504b54300708ddb9503e94c6caa31179c37572.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper tackles inverse problems from the perspective of optimal control. By treating the diffusion process (ODE) as a nonlinear dynamical system and the extra guidance term as a control signal, the authors manage to optimize the diffusion trajectory via the iterative Linear Quadratic Regulator (iLQR) algorithm. Several techniques are used to make the iLQR algorithm more efficient. This paper shows good results on the FFHQ dataset.
Strengths: The idea is interesting and reasonable. Using optimal control to solve the inverse problem enables us to optimize the whole sampling trajectory and avoid the error from estimating $x_0$ via Tweedie's formula. And the results on the FFHQ dataset look good.
Weaknesses: 1. High computation cost: Despite the advantages mentioned above, one obvious drawback of this method is the potential high computation cost. This includes:
a. Computing and storing the Jacobian matrices, which can be of very high dimension, can be very costly. Although the authors further propose some techniques to reduce the cost, these methods might also bring extra approximation error as well as more hyper-parameters to tune;
b. Optimizing the whole trajectory requires evaluating the whole trajectory many times and performing iterative updates. This requires more computation. Thus, though in Table 1 the authors denote their methods as $T=50$ and $T=20$, considering the iterative updates over the whole trajectory, these might not be directly comparable (and might actually need more computation) to other methods, which are denoted as $T=1000$. And the authors might have to greatly reduce the timesteps to make the whole algorithm affordable, which might also bring extra approximation error.
2. Lack of more complex datasets: Though the authors achieve good performance on the FFHQ dataset, considering that human face data is relatively easy (aligned, not very multimodal), it is still not very clear to me how the proposed method would work on more complex datasets, for example ImageNet. From my own experience, ImageNet data can be much harder than FFHQ human face data in inverse problems. And considering the approximation error introduced by the iLQR algorithm, the computation of the Jacobian matrices, and the use of fewer timesteps, this raises concerns regarding whether the proposed algorithm can work well on more complex datasets.
3. Minor suggestion: I think it might be better for the authors to add more introduction of the optimal control material in the main paper, or at least give a clearer introduction to the notation used in Section 2.3. Currently, I find it not very accessible to readers without much background in optimal control.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Following my first point in the weaknesses, can the authors provide a comparison of the sampling time (e.g., seconds, or NFE) of different methods? Only comparing diffusion timesteps is not very fair, considering the proposed method needs to iteratively update over the whole trajectory many times.
2. Under different initializations, will the proposed algorithm always be able to find a good solution? And will the optimized results look the same or different?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude that the reviewer appreciates both the theoretical and empirical results of our work, and for bringing several insightful shortcomings of our work to our attention. Below, we address the reviewer's concerns on a point-by-point basis.
**High computational cost:**
**a) Jacobian matrix computation can be costly, and low rank approximations bring additional error.**
Indeed, Hessian matrices in iLQR are costly at full rank. Moreover, we agree that low rank approximations generally introduce some error to the computation. However, we maintain that this error is negligible. In our ablation study, we see that even a rank-one approximation of these matrices is sufficient, with further increases in rank providing quickly diminishing returns (see Table 4 in Appendix). Using modern randomized linear algebra libraries (e.g. randomized SVD), this brings the computation of the Hessian matrices to cost $\mathcal{O}(d)$, where $d$ is the data dimension. Therefore, the cost of our algorithm is on par with competing methods, e.g. DPS. (For more details, please find a thorough complexity analysis in Appendix D.2.) We also find that there are few hyperparameters introduced by these approximations, and the sensitivity of the algorithm to these hyperparameters is low (Tables 4, 5, and 6).
**b) The proposed algorithm is iterative over the simulated trajectory. Reported $T$ in Table 1 may not be directly comparable.**
The reviewer is correct: our Algorithm 1 requires multiple passes through the diffusion trajectory. Our intention in displaying the time steps $T$ in Table 1 is to show the stability of our algorithm even at very low time steps. This is a nice property to have, and can be useful in certain cases. For example, DPM-Solver (Lu et al., 2022) is known to be unstable at more than 100 timesteps, and is usually used for $T \sim 20$. Methods like DPS will struggle in this regime.
That being said, we do agree with the reviewer that in terms of computation time, the number of neural function evaluations (NFEs) is also important to consider, and have added it to Table 1, as well as the new Table 7 in the rebuttal PDF. Overall, we find that our model performs favorably against other methods under an equivalent computational budget (the T=20 case in Table 1 — and 1000 NFEs), and establishes a new baseline at T=50 and 2500 NFEs.
**More complex dataset than FFHQ, e.g., ImageNet.**
We agree that further experiments can demonstrate the generalizability of our results. To this end, we evaluate our model on the ImageNet-1K dataset, replicating the experimental setup of Chung et al. (2022), and on some nonlinear forward operators. Please see the general rebuttal and PDF.
**Minor suggestion: More introduction for the optimal control part in the main paper.**
We fully support this suggestion. As the reviewer has noticed, we moved a large amount of the introduction to the appendix due to page constraints. However, following this suggestion we have included a slightly more intuitive discussion on iLQR and its mechanism in Section 2.3.
**Only comparing diffusion timesteps is unfair. Please provide a second form of comparison for sampling time.**
We agree, and again note that the “Ours (T=20)” entry in Table 1 is run for 50 trajectory iterations, yielding 1000 NFEs. This results in roughly the same sampling time (e.g., on an A6000, DPS and our method both take ~120s to run). We also maintain the same 1000 NFE budget in the new Table 7 showcasing results on ImageNet.
**Robustness to different initializations.**
We currently initialize our actions as identically 0, and our states as i.i.d. normal (as prescribed by the DDPM algorithm). In the rebuttal Figure X we show the result of the algorithm with different state and action initializations. We do not fix the seed of the state initializations, and our algorithm obtains similar results on each run. Therefore we find our method to be relatively robust to different initializations.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their responses. After reviewing the rebuttal pdf and comments, my concerns are resolved and I would like to raise my rating. | Summary: The paper uses tools from optimal control to introduce a novel approach for solving inverse problems with diffusion models. The authors propose reframing the generative process of diffusion models as a discrete optimal control problem allowing to leverage the iterative Linear Quadratic Regulator (iLQR) algorithm. Tackling limitations of existing probabilistic sampling methods, the resulting method demonstrates promising performance for inverse problems on FFHQ, such as super-resolution, inpainting, and deblurring.
Strengths: While many connections between optimal control and diffusion models have been established, the proposed algorithm leverages variants of iLQR to provide a fresh perspective on training-free posterior sampling with diffusion models. The paper provides additional theoretical guarantees as well as multiple modifications (randomized low-rank approximations, matrix-free evaluations, and adaptive optimizers) to reduce computational costs. Finally, several ablations are presented for the proposed method.
Weaknesses: 1. The claims of the paper are not sufficiently supported by experiments and/or theory (the first statements in the following are just examples---similar statements can be found throughout the paper):
* "dependence on the approximation quality of the underlying terms in the diffusion process": only a result for a single image is provided (Fig. 6). The current paper also does not seem to provide theoretical results for such robustness as claimed in "reconstruction performance is theoretically and empirically robust to the accuracy of the approximated prior score".
* "its sensitivity to the temporal discretization scheme": for the baselines, again only a result for a single image is provided (Fig. 3). Moreover, the number of steps is typically reduced to accelerate the algorithm. Accordingly, methods should be compared in terms of performance vs. runtime/flops and not the number of diffusion steps. It seems that the proposed method is significantly more expensive than competing methods (in particular, since `num_iters>=50` full simulations are used).
* "its inherent inaccuracy due to the intractability of the conditional score function": The conditional score function remains intractable, one just obtains an approximation via iLQR, since the obtained $x_0$'s obtained from the iLQR iterations only *approximately* converge to the posterior distribution *in the limit*. Statements like "Moreover, our model always estimates $x_0$ exactly, rather than forming an approximation $\hat{x}_0 \approx x_0$" sound misleading. Using iLQRs, we simulate "nominal" trajectories and thus iteratively obtain an approximate candidate for $x_0$ which will be used for the refinement of the control. In a similar (however, useless) fashion one could also use, e.g., DPS to obtain an estimate of $x_0$ and then run a probability flow ODE simulation where the scores are conditioned on $x_0$ (instead of $x_t$) to have a "method [that] produces controls that coincide precisely with the desired conditional scores". However, the advantage of DPS lies in the fact that only a single simulation is needed.
* "on several inverse problem tasks across several datasets": Apart from a single figure on MNIST (without metrics and for only a single baseline and task), results are only provided for FFHQ.
2. Moreover, several of the mentioned limitations have been already tackled by alternative approaches to posterior sampling with diffusion models, e.g., variational approaches (https://arxiv.org/abs/2305.04391) or resampling strategies (https://arxiv.org/abs/2307.08123).
3. Finally, the appendix could provide further details on
* hyperparameter choices and optimization for the baselines.
* precise assumptions for the theorems.
Technical Quality: 1
Clarity: 2
Questions for Authors: See "weaknesses" above.
Confidence: 3
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: See "weaknesses" above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful discussion and insightful critique. We hope to clarify some of the points of the paper, and alleviate the reviewer's concerns below in a point by point response.
**Only a single image is provided in Figs. 3 and 6.**
In the rebuttal PDF Figures 9 and 10, we provide further experiments that demonstrate the concepts in Figure 3 (accuracy of the predicted score) and Figure 6 (robustness to the approximated score), respectively.
**No theoretical results for such robustness [to the score approximation].**
This theoretical statement comes from the simple fact that DPS (and related algorithms) sample from $p(x|y)$ by solving the conditional reverse diffusion process (Eq. 1), where $\nabla \log p_t(x_t)$ is approximated by the neural score function $s_\theta$ (equivalently $\epsilon_\theta$ or $v_\theta$ in noise- and v-prediction models, respectively, up to a simple reparameterization). Therefore, the performance of DPS (and related algorithms) depends on the approximation $s_\theta \approx \nabla \log p_t(x_t)$. Our algorithm makes no such assumption, and is thus robust to this approximation error. We further illustrate this point with more examples in the rebuttal Figure [x].
**Methods should be compared in terms of performance vs runtime / flops and not the number of diffusion steps.**
We do run an experiment under an equivalent runtime/flops budget in the Ours (T=20) case in Table 1, where we restrict our method to 1000 NFEs; we still show significant gains over DPS, and comparable performance against the most recent state-of-the-art methods on FFHQ256-1K. Empirically, we find that our method has a similar runtime to DPS on an A6000 GPU (130s vs 125s). That said, we do agree that listing only the number of diffusion steps can misrepresent our method. Therefore, we have added NFEs to Table 1 (just like Table 7 in the rebuttal PDF).
**The proposed method is significantly more expensive than competing methods.**
We maintain that our method is not significantly more expensive than competing methods. For details, see the computational complexity analysis in Section D.3, where we show that, iteration-for-iteration, our method has similar costs to DPS. Indeed, under the $T=50$ configuration, our method will run for more iterations.
**The conditional score function remains intractable in the proposed method.**
We respectfully disagree. Let us first follow DPS (Chung et al., 2022) and define the conditional likelihood as $p(y | x_0)$. Therefore, the conditional score function at time $t$ is $\nabla_{x_t} \log p(y | x_t)$. Unlike DPS, we consider both deterministic and stochastic dynamics in our theory.
Under deterministic dynamics, we first note that the conditional **likelihood** is tractable, since $p(y | x_t) = p(y | x_0)$ — i.e., $x_0$ is determined by $x_t$, by definition — and $p(y | x_0)$ can be obtained via the forward rollout of Algorithm 1.
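The deterministic-dynamics argument can be written out explicitly. Here $\Phi$ is our own shorthand (not notation from the paper) for the ODE solution map carrying $x_t$ to $x_0$:

```latex
x_0 = \Phi(x_t)
\;\Longrightarrow\;
p(y \mid x_t) = p\bigl(y \mid \Phi(x_t)\bigr),
\qquad
\nabla_{x_t} \log p(y \mid x_t)
  = \left(\frac{\partial \Phi}{\partial x_t}\right)^{\top}
    \nabla_{x_0} \log p\bigl(y \mid x_0\bigr)\Big|_{x_0 = \Phi(x_t)} .
```

That is, once $x_0$ is a deterministic function of $x_t$, the conditional score is an exact chain-rule backpropagation through the rollout rather than an approximation.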
Now, we have the distinct advantage over DPS where $\nabla_{x_t} \log p(y | x_0)$ **is** exactly backpropagated to $x_t$ (i.e., the backward pass of Algorithm 1, see Theorems 4.1-4.3). For this reason, we also maintain that the conditional **score** is tractable. This claim cannot be made by DPS, because 1) DPS is derived from stochastic dynamics and 2) DPS relies on Tweedie’s formula at each step to “approximate” the gradient of the conditional likelihood.
**The statement that our model estimates $x_0$ exactly, rather than forming an approximation sounds misleading.**
We also respectfully disagree here. We do estimate $x_0$ exactly. Note that $x_0$ as defined in DPS (as well as our work) is $x_0 | x_t$. In other words, the denoised image **given the current noisy image $x_t$** at time $t$. Observe that we *do* obtain this term exactly in each iLQR step, up to the discretization error of the diffusion solver.
That said, we have added the qualification “up to the discretization error of the numerical solver” where applicable. We note that we do emphasize the discrete nature of our method in the abstract and throughout the paper, such as lines 67-68, 115, and 265.
**With DPS one could also (uselessly) obtain an ODE estimate of $x_0$ and then compute the scores from there.**
Indeed, this is possible --- but, as the reviewer says, uselessly slow. The impracticality of this strategy illustrates the advantage of our method. Using iLQR, we are able to combine the prediction of $x_0$ in the forward rollout with the score computations used in the feedback step in one single sweep (Algorithm 1). To see this, observe that the scores can be computed for all timesteps with a single forward solve (and backprop) of the ODE. This is not possible for DPS, since each conditional score *must be* computed sequentially, **after** the previous score has already been applied to the trajectory. If DPS were to use the reviewer's suggested strategy, it would require on average $n / 2$ extra NFEs per update, resulting in a sum total of 500K NFEs under the current parameterization. By comparison, our method achieves superior performance to DPS using as little as 1K NFEs.
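The 500K figure follows from simple arithmetic, assuming (as in the standard DPS parameterization) $n = T = 1000$ sequential conditional-score updates; this is our own back-of-envelope check, not code from the paper:

```python
T = 1000                     # diffusion steps, i.e. sequential score updates
extra_per_update = T // 2    # average n/2 extra NFEs to re-simulate down to x_0
total_extra = T * extra_per_update
print(total_extra)           # 500000 extra NFEs for the suggested DPS+ODE strategy
```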
**Results are only provided for MNIST and FFHQ.**
We additionally provide experiments on ImageNet-1K, and demonstrate that our performance holds, with similar trends to Table 1.
**Please discuss alternative approaches to posterior sampling with diffusion models, e.g., variational approaches (https://arxiv.org/abs/2305.04391) or resampling strategies (https://arxiv.org/abs/2307.08123).**
We thank the reviewer for bringing up these concurrent works. We have added them to the manuscript. We note that ReSample (Song et al., 2024) suffers from the same approximation error as DPS when estimating $x_0$ via Tweedie's formula. Conversely, RED-Diff (Mardani et al., 2024) indeed does not require Tweedie's formula, but requires approximations to its variational framework (e.g., the stop-gradient) when computing the proposed variational loss, thus also yielding an approximate solution.
---
Rebuttal 2:
Title: Rebuttal (Continued)
Comment: **Hyperparameter choices for the baselines.**
For baselines from competing works in Table 1, we directly replicate the reported hyperparameters from the respective papers. For our baselines, please refer to Table 2 for the hyperparameter values. For the T = 20 result, we let T = 20 and let the number of iterations be 50. For insight into how we selected these values, please see Section D.3 and Tables 4, 5, and 6.
**Assumptions for the theorems.**
The assumptions required for Theorem 4.1 are summarized in Equations 27 and 28. Similarly, Theorem 4.2 requires Equations 30 and 31. We have additionally clarified that we require $\alpha > 0$, and $\log p(y | x)$ to be twice-differentiable (which holds when $p(y | x)$ is Gaussian, as is the case in our settings). We take our theoretical results very seriously and would be happy to clarify any other ambiguities in the theorems.
---
Rebuttal Comment 2.1:
Comment: **No theoretical results for such robustness [to the score approximation]:** "Therefore, the performance of DPS (and related algorithms) depends on the approximation $s_\theta \approx \nabla \log p_t(x_t)$. Our algorithm makes no such assumption, and is thus robust to this approximation error." It is clear (and also written in 196-197), that the theoretical results require such an approximation. Thus, it still remains unclear *why* the method is more robust.
**Methods should be compared in terms of performance vs runtime / flops and not the number of diffusion steps. / The proposed method is significantly more expensive than competing methods. / Results are only provided for MNIST and FFHQ.** I thank the authors for the additional experiments, which improve the empirical evaluation and resolve some of my concerns. A revised version of the paper should provide results for different NFEs to assess the scaling, as well as provide more insight into runtime for all considered methods (considering the maximal batch size for each method for a given memory budget). However, when comparing against further baselines, such as FPS-SMC [1], DCDP [2], DiffPIR [3], RED-Diff [4], ReSample [5], the provided results for ImageNet, in particular for the deblurring tasks, seem not to be state-of-the-art anymore.
**The conditional score function remains intractable in the proposed method.** The conditional likelihood (also the one used for the ODE) is defined via $p(y|x_t) = \int p(y|x_0) p_{SDE}(x_0 | x_t) \mathrm{d}x_0$, where the conditional density of the SDE $p_{SDE}$ cannot simply be replaced by the one for the ODE (which, I agree, would be a Dirac delta, since for deterministic dynamics $x_0$ is exactly determined by $x_t$).
**The statement that our model estimates $x_0$ exactly, rather than forming an approximation sounds misleading.**
Indeed, one can just simulate the SDE; however, the distribution of $x_0$ depends on the approximation of the score (see also the first comment).
**Please discuss alternative approaches to posterior sampling with diffusion models, e.g., variational approaches (https://arxiv.org/abs/2305.04391) or resampling strategies (https://arxiv.org/abs/2307.08123).** Note that resampling-based methods, such as DiffPIR and DCDP, use a simulation instead of Tweedie's formula to obtain an estimate of $x_0$.
**Assumptions for the theorems.** It should be defined what exactly is meant by $x_t$ (does it originate from the approximate score, the ground-truth score, the forward/backward simulation), on which equation (29) depends. Also, equation (29) comes without any statement.
Given the additional empirical evidence, I adjusted my score. However, I still think that the presentation should be improved, both theoretically (unclear statements, see above) as well as empirically (compare to further baselines, see above).
---
[1] Dou, Zehao and Song, Yang. "Diffusion Posterior Sampling for Linear Inverse Problem Solving: A Filtering Perspective.", ICLR 2024.
[2] Li, Xiang, et al. "Decoupled data consistency with diffusion purification for image restoration.", arXiv preprint arXiv:2403.06054.
[3] Zhu, Yuanzhi, et al. "Denoising Diffusion Models for Plug-and-Play Image Restoration.", CVPR workshop NTIRE 2023.
[4] Morteza Mardani, Jiaming Song, Jan Kautz, and Arash Vahdat. A variational perspective on solving inverse problems with diffusion models. arXiv preprint arXiv:2305.04391, 2023.
[5] Bowen Song, Soo Min Kwon, Zecheng Zhang, Xinyu Hu, Qing Qu, and Liyue Shen. Solving inverse problems with latent diffusion models via hard data consistency. ICLR, 2023
---
Reply to Comment 2.1.1:
Comment: We greatly appreciate the reviewer's continued discussion of our paper, and their more positive outlook. Below we respond to some additional points the reviewer raised.
**No theoretical results for such robustness [to the score approximation] -- see Lines 196-197.** Theoretically, the reasoning for such robustness is due to the simple fact that DPS (and other algorithms) differentiate through Tweedie's formula, which relies explicitly on the score. On the other hand, our algorithm differentiates through the true simulated forward dynamics, which can be any dynamical system --- this additional flexibility is granted by our generalized optimal control framework.
**Comparison to ICLR 2024 results.** Indeed, the ImageNet results from ICLR 2024 (and some other venues) are very relevant, and we shall include some of these concurrent results in our work, where applicable. We note that some works do not use the same experimental setting. For example, ReSample uses the much easier setting of $\sigma = 0.01$, whereas we keep the settings used by DPS ($\sigma = 0.05$). We still believe that our work provides a meaningful contribution to the field, and hope the reviewer agrees.
**The conditional score remains intractable.** We claim tractability of $\nabla_{x_t} \log p(y | x_t)$ in the deterministic case. We also claim tractability of $\nabla_{x_t} \log p(y | x_0)$ in the stochastic case, which DPS (and related models are still unable to compute). For details, see our response to *No theoretical results for such robustness [to the score approximation]*. We have made this clearer in the manuscript.
**Estimating $x_0$ requires approximation of the score.** There is a subtle distinction here. $x_0$ as defined by the reverse diffusion process requires the score, of course. $x_0$ as defined simply as the final state of the dynamical system solved by our system does NOT require the score. We will make this distinction more clear in our manuscript.
**Please discuss alternative approaches to posterior sampling with diffusion models, e.g., variational approaches (https://arxiv.org/abs/2305.04391) or resampling strategies (https://arxiv.org/abs/2307.08123).** We thank the reviewer for bringing these papers to our attention. DiffPIR actually uses Tweedie's (See Line 3 in Algorithm 1). On the other hand, DCDP does not leverage Tweedie's. However, there is also no theoretical discussion of the correctness of the proposed approach.
**Assumptions for the theorems.** $x_t$ is defined as in Algorithm 1. We are missing a statement before Eq. 29, which should read: "Then the iterative linear quadratic regulator with Tikhonov regularizer $\alpha$ produces the control". We apologize for the confusion, and have added statements clarifying this in our manuscript. | null | null | null | null |
Human Expertise in Algorithmic Prediction | Accept (oral) | Summary: This paper introduces a new framework for algorithmic prediction. The paper asks and answers the question "how can we incorporate human input into the prediction algorithm when that input may not even be captured in the training data?" The authors develop a method that first runs the predictor, and then runs a second predictor using the human input. The authors show that even a simple instantiation of their method can outperform existing predictors. They use an X-ray classification task as their experimental setting.
Strengths: The paper is written very clearly, and offers a novel method to incorporate human input into algorithmic prediction. Both theoretical derivations and experiment results are sound. The contributions of this paper is significant, and I believe this paper deserves to be accepted in its current form.
Weaknesses: The paper would be even more satisfying if the method were presented as a framework rather than a specific instantiation. In addition, it would be great if the authors could discuss potential ways to improve on the method they propose, and what these methods mean in the broader context of incorporating human feedback into algorithmic predictions. Nevertheless, these small weaknesses do not diminish the significance and novelty of this paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: My main comment is that the authors should comment more about the future work and implications of this method. Furthermore, I would be interested to hear what the authors think about a related paper [1], and how these papers might be related.
[1] DEFINING EXPERTISE: APPLICATIONS TO TREATMENT EFFECT ESTIMATION (https://arxiv.org/pdf/2403.00694)
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have addressed the limitations in the conclusion section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **In response to:** *The paper would be even more satisfying if the method is presented as a framework rather than a specific instantiation...* And: *My main comment is that the authors should comment more about the future work and implications of this method.*
Thank you for your feedback; we agree that we could devote more space to discussing the generality of our approach and avenues for possible future work. We provide our thoughts on this point below, and plan to update our manuscript to better communicate the scope of indistinguishability beyond minimizing mean squared error (this is closely related to reviewer fseb's question about modeling decision makers with richer preferences; for convenience, we copy our response under both reviewers' comments).
One way of interpreting algorithmic indistinguishability is that the algorithm provides the decision maker with a partial ordering over instances, where the ordering is defined with respect to a single outcome of interest $Y$. In particular, the algorithm can assert that instances in one indistinguishable subset $S_1$ have a larger average value of $Y$ than another indistinguishable subset $S_2$ --- so the algorithm implicitly ranks each $x \in S_1$ higher than each $x \in S_2$ --- but it has no way of ordering instances *within* each indistinguishable subset. Theorem 4.1 and Corollary 4.2 focus on settings where the objective of interest is to minimize mean squared error, and show how additional information provided by the expert can be used in service of this objective. However, this is far from the only possibility: for example, a decision maker might use predictions to inform a selection rule which seeks to balance both maximizing the mean value of $Y$ within the pool selected while also ensuring some measure of fairness or diversity (e.g., a university choosing which students to accept, where $Y$ is some measure of academic performance). In this setting, a very natural application of our framework would be to present the decision maker with a set of inputs which are algorithmically indistinguishable with respect to $Y$, and allow the decision maker to then choose from this pool to maximize their chosen fairness metric (e.g., by choosing a set of candidates with diverse interests from within a pool which cannot be distinguished on the basis of predicted academic performance). Similarly, a decision maker whose utility function includes some measure of risk aversion may select a pool of candidates from within an indistinguishable subset to minimize e.g., the *variance* of $Y$ among those selected. 
In both cases, indistinguishability provides a principled basis for imposing "secondary" preferences in decision making, as the decision maker can reasonably assert that they otherwise lack information to distinguish instances on the basis of (the expected value of) $Y$ alone.
Finally, we note that in both of these examples, we did not assume that the decision maker's utility can be linearly decomposed across inputs. This is in contrast to mean squared error, which can be decomposed as a (normalized) *sum* of prediction errors across inputs. For example, a measure of fairness might depend on the composition of the entire group selected; it may not always make sense to ask whether the selection of a single individual in isolation is "fair". Similarly, a measure of risk might also depend on the composition of the entire group which is selected; perhaps the decision maker wants to select a set of inputs whose outcomes are minimally correlated (e.g., choosing a portfolio of stocks), and thus their utility is again necessarily a set-valued function. Thus, our framework is not restricted to minimizing mean squared error or simple variants thereof; instead, it provides a substantially more general basis for decision making under uncertainty. Modeling a decision maker with richer preferences is a fascinating direction, and we would be happy to address any followup questions or comments during the upcoming discussion period.
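To make the admissions example concrete, here is a minimal sketch (the function name and data layout are hypothetical illustrations, not from the paper): given a pool the algorithm cannot distinguish on the basis of predicted $Y$, a decision maker greedily selects candidates for diversity of interests.

```python
from collections import Counter

def diverse_selection(pool, interest, k):
    """Greedily select k candidates from an algorithmically
    indistinguishable pool, at each step preferring the candidate
    whose interest is least represented in the selection so far."""
    chosen, counts = [], Counter()
    remaining = list(pool)
    for _ in range(k):
        # Counter returns 0 for unseen interests, so novel interests win.
        pick = min(remaining, key=lambda c: counts[interest[c]])
        chosen.append(pick)
        counts[interest[pick]] += 1
        remaining.remove(pick)
    return chosen
```

Because every candidate in the pool is indistinguishable with respect to predicted performance, any such secondary criterion (diversity, risk aversion, and so on) can be imposed without contradicting the algorithm's ranking.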
**In response to:** *Furthermore, I would be interested to hear what the authors think about a related paper [1], and how these papers might be related. https://arxiv.org/pdf/2403.00694*
Thank you for the pointer to this work. This is a fascinating and related topic, which characterizes the value of human expertise as an inductive bias for guiding model selection. In particular, the authors argue that expert decisions regarding treatment allocation can be highly predictive of the true treatment effects, and that this fact can be used to inform the model selection process for treatment effect estimation. Importantly however --- and as is typical in treatment effect estimation problems --- [1] assumes *no hidden confounding* (see Section 2), which rules out the possibility that experts leverage information which is correlated with potential outcomes but unavailable to the algorithm.
This perspective is complementary to our work, which is instead focused on the possibility that experts *do* have information which is unavailable to the algorithm, and studies how to incorporate this information at "inference" or "test" time. Thus, even given infinite data to learn the best possible (e.g., Bayes optimal) model of $Y$ given $X$, our method might incorporate expert feedback at test time to improve predictions. In contrast, [1] uses expert decisions at training time to more efficiently learn a model under sample size constraints, but does not consider incorporating expertise at test time because it is assumed that experts do not provide signal which the algorithm could not (eventually, given sufficient training data, and under the overlap and unconfoundedness assumptions) learn on its own.
We thank you for pointing us to this work, and we will include a citation in our manuscript. We'd be happy to discuss this point further and answer any followup questions during the upcoming discussion period.
---
Rebuttal Comment 1.1:
Title: I recommend acceptance
Comment: Many thanks for the authors' detailed response. I am happy to see that all reviewers unanimously recommended acceptance. Therefore, I am happy to accept the paper, and nominate the paper for awards if the AC agrees. | Summary: The paper proposes a framework to incorporate human expert knowledge in algorithmic predictions. Under this framework, the authors introduce a meta-algorithm that uses a training dataset including human expert predictions together with a multicalibrated partition of the data: a partition of the dataset into bins, where each bin contains data that are indistinguishable to the predictive model. Using the data of each bin, the meta-algorithm trains a regression algorithm to predict the true label from the human expert prediction. In this way, the authors aim to leverage human expertise, which may be more accurate than the predictive algorithm on specific instances, to achieve complementarity: higher predictive accuracy through human-AI collaboration than either a human expert or an AI achieves in isolation.
Strengths: The paper suggests an elegant method to improve algorithmic predictions in light of human expertise, which could have significant applications in domains such as medicine, where the additional information available to human experts may lead them to more accurate predictions on certain instances compared to predictive models.
The paper is very well and clearly written, nicely motivated and follows a clear structure. There is a thorough and comprehensive discussion on related work as well as a comprehensive and clearly presented experimental evaluation.
Weaknesses: Since the theoretical results of section 6 complement the ones of section 4, it would be perhaps more natural to follow them, rather than placing them after the experimental evaluation, which appears a bit odd.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors adequately discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **In response to:** *Since the theoretical results of section 6 complement the ones of section 4, it would be perhaps more natural to follow them, rather than placing them after the experimental evaluation, which appears a bit odd.*
Thank you for your feedback --- we agree that this portion of the manuscript could flow a bit better. Sections 4 and 5 are intended to be complementary, as both focus on a particular application of algorithmic indistinguishability. In contrast, section 6 presents results on a qualitatively different application of indistinguishability, which is intended to highlight the generality of our framework and suggest open directions for future work. We will both update the exposition of section 6 (see response to reviewer jnGw) and more directly highlight the flexibility of our approach (see responses to reviewers fseb and XFzN), which we expect will address this concern as well. We are, however, certainly open to other feedback on this point.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their reply. The suggested changes by the authors address my point and should be done to improve the flow. | Summary: The paper first presents some theory for the modelling of how to identify when human judgements may offer a better diagnosis - through access to additional information - than machine predictions, despite the latter typically being more accurate. This is followed by exploring how to integrate the human input with the algorithmic (model) input. Subsequently, the authors present some focussed experimental results using chest x-ray interpretation that support their proposition.
Strengths: Originality: carefully drawn comparison with the literature, situates and differentiates the contribution.
Quality + Clarity (addressed together):
Clear abstract and intro with well-defined contributions. Content offers a reasonable balance between technical and intuitive. Recognition of the value of the human contribution and seeking to integrate it in decision making.
The later mathematical results (section 4) have effective accompanying interpretations (see complementary point in weaknesses).
Effective, selective presentation of results: choosing one and going into detail, while two other cases in the appendices support the same observation, rather than trying to squeeze them all into the paper body. Same applies to results in section 5.2.
Significance: provides a sound framework for a particular, amenable class of collaboration problems that allows for the proper incorporation of human prediction where machine prediction could fall short.
Weaknesses: Clarity: Indistinguishability and multicalibration are critical elements to the contribution; it would be helpful if the interpretation of their definitions (3.1, 3.2) went into a bit more detail for accessibility.
This reader is not succeeding in following the argument about robustness (section 6).
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. The case studies are retrospective so both machine and human outcomes are available to use in the analysis. How would the approach work in a live situation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Section 7 provides some properly reflective critique on scope and applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **In response to:** *...it would be helpful if the interpretation of...definitions (3.1, 3.2) went into a bit more detail for accessibility.*
Thank you for your feedback. We agree that these definitions could use a bit more exposition; we will add more background and provide a concrete interpretation in the context of diagnosing atelectasis (which is described earlier in the introduction) to make them more accessible.
**In response to:** *This reader is not succeeding in following the argument about robustness (section 6).*
We agree that this portion of the manuscript could be more accessible. We will add some additional exposition to section 6 and further clarify the intended application of these results. To motivate this section, it is helpful to consider the following stylized example:
Suppose we are interested in designing an algorithmic risk score for a large hospital system. As discussed in section 6, one important consideration is that doctors at this hospital system may only selectively comply with the algorithm. Concretely, suppose there is one doctor, doctor A, who generally complies with the algorithm's recommendations except on patients who have high blood pressure; the doctor believes (correctly or not) that the algorithm underweights high blood pressure as a risk factor, and simply uses their judgment to make decisions for this subset of patients. A second doctor, doctor B, similarly complies with the algorithm on all patients under the age of 65, but ignores its recommendations for patients 65 and older.
What does an optimal algorithm look like? For doctor A, we would like to select the algorithm which minimizes error on patients who do not have high blood pressure, as these are the patients for whom the doctor actually uses the algorithm's recommendations. Similarly, for doctor B, we would like to select the algorithm which minimizes error on patients who are under 65. The key thing to note is that there is no guarantee that these are the same algorithm: if we were to run empirical risk minimization over the first population of patients, we might get a different predictor than if we ran empirical risk minimization over the second population. This is of course not just a finite sample problem; it is also possible that, given any restricted model class (e.g., all linear predictors), the optimal predictor for one subpopulation may not be optimal for a different subpopulation.
For both practical and ethical reasons, we cannot provide individualized risk scores for every physician; we must provide a single risk predictor for the entire hospital system. Our first result (Lemma A.4) shows that, without further assumptions, this is an intractable problem. If there are arbitrarily many physicians who may choose to comply in arbitrary ways, then we need a predictor which is simultaneously optimal for every possible patient subpopulation. The only predictor which satisfies this criterion is the Bayes optimal predictor, which is infeasible to learn in a finite data regime.
However, it is perhaps more likely that physicians decide whether or not to defer to the algorithm using relatively simple heuristics. If we believe we can model these heuristics as a "simple" function of observable patient characteristics --- e.g., that all compliance patterns can be expressed as a shallow decision tree, even if particular compliance behavior varies across physicians --- then perhaps we can leverage this structure to design a single optimal predictor. Theorem 6.1 shows that indeed this is possible: if a predictor is multicalibrated over the appropriate class of risk scores and compliance patterns, then, for every physician, on the subpopulation of patients that the physician defers to the algorithm, the performance of the multicalibrated predictor is competitive with that of any other predictor in the class ${\cal F}$. Thus, by leveraging known or assumed structure in user compliance patterns, we can design predictors which are "robust" to those compliance patterns.
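As a rough numerical illustration of what multicalibration asks for (a simplified sketch, not the paper's definition or code), one can measure the worst mean residual over the cells carved out by intersecting subgroups — e.g., compliance patterns — with prediction levels; a multicalibrated predictor keeps every such cell's bias near zero.

```python
import numpy as np

def worst_cell_bias(pred, y, groups, n_levels=10):
    """Largest |mean(y - pred)| over cells (subgroup x prediction level).
    Multicalibration over `groups` asks that this quantity be small."""
    levels = np.clip((pred * n_levels).astype(int), 0, n_levels - 1)
    worst = 0.0
    for g in groups:  # each g is a boolean mask selecting one subgroup
        for v in range(n_levels):
            cell = g & (levels == v)
            if cell.any():
                worst = max(worst, abs(float(np.mean(y[cell] - pred[cell]))))
    return worst
```

Intuitively, a small worst-cell bias means the predictor cannot be systematically wrong on any subpopulation of patients that a physician's compliance heuristic might carve out.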
We hope that helps clarify the motivation and intended application of the results in section 6. As mentioned above, we will update our manuscript to make this section more accessible, and would be happy to answer any follow up questions during the upcoming discussion period.
**In response to:** *The case studies are retrospective so both machine and human outcomes are available to use in the analysis. How would the approach work in a live situation?*
This is an excellent question, and one which we will address in more detail (particularly in the discussion section). On a forward-looking basis, perhaps the most natural application of our framework is to proactively solicit expert feedback on *only* those inputs where it appears to provide additional information. For example, in section 4, we observe that a nonzero coefficient $\beta_k^*$ indicates that an expert provides additional signal within subset $S_k$, whereas a coefficient of $0$ indicates the opposite. Thus, although we assume that we have retrospective expert predictions for every instance, we could prospectively focus expert attention on only the subset of instances where experts add value and simply defer the remainder to an algorithm.
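The coefficient check described above could be sketched as follows (a hypothetical illustration; the function name and data layout are ours, not the paper's): fit a per-bin least-squares regression of the outcome on the expert's prediction, and flag the bins whose slope is far from zero as those worth routing to an expert prospectively.

```python
import numpy as np

def expert_signal_by_bin(bin_ids, expert_pred, y):
    """Fit y ~ a + b * expert_pred within each indistinguishable bin;
    a slope b near zero suggests the expert adds no signal there."""
    slopes = {}
    for k in np.unique(bin_ids):
        mask = bin_ids == k
        X = np.column_stack([np.ones(mask.sum()), expert_pred[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        slopes[int(k)] = float(beta[1])  # slope on the expert's prediction
    return slopes
```

Instances falling in bins with a near-zero slope could then simply be deferred to the algorithm, concentrating costly expert attention where it adds value.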
More generally, choosing both *whether* to solicit expert feedback and *how* to incorporate that feedback into forward-looking decisions is a substantial topic in its own right. For example, translating predictions to decisions, and deciding whether to incorporate expert feedback to do so, may require a rich model of decision makers' preferences or utility functions. We discuss these at length in our response to reviewers fseb and XFzN. Furthermore, although we focus on the standard "batch" supervised learning setting to highlight our main contributions, we view the extension to an online learning context as a promising avenue for future work, and will include it in our discussion of open problems in section 7.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed and helpful response both to me and the other reviewers.
---
Rebuttal Comment 1.2:
Comment: Hello, thank you for your interesting paper, but as for your example of Doctor A and Doctor B, I think it is the reverse causality. In your example, Doctor A, except for patients with hypertension, usually follows the advice of the algorithm. This situation is often because the algorithm has more errors in the judgment of patients with hypertension, which is not as accurate as the doctor's own judgment. At other times, the judgment is accurate, and the doctor does not need to invest too much effort and judgment. You deduce from this that the optimal algorithm is minimizes error on patients who do not have high blood pressure. I think this judgment is irresponsible. It is difficult for me to understand how the example you give contributes to the explanation of Section VI | Summary: This paper introduces a framework for joint human-AI prediction, where human experts can augment AI predictions in particular ex ante identifiable subsets.
Strengths: This paper makes a lot of interesting contributions. First, its scope is broad and important: it tackles the question of how and whether human judgment can improve the predictions of any learning algorithm. That is, and will remain, a very important question in our time. It contributes a very interesting framework, rooted in algorithmic indistinguishability and multicalibration, to find subsets in which no algorithm in a user-specified class has predictive power (because they are algorithmically indistinguishable) but human experts do (because they might have more access to the instances, such as doctors examining patients). It demonstrates that using this framework, we can find subsets of instances where human experts can outperform algorithms, and thus the combination of the two can outperform either alone. It applies this to an important medical problem and to another domain of making predictions from photos of people. It even extends the framework to apply to a setting with noncompliance. The community stands to learn a lot from this paper.
Weaknesses: As the authors mention, the framework is dependent on minimizing mean squared error only.
Technical Quality: 4
Clarity: 4
Questions for Authors: How might you model decision makers with richer preferences than mean squared error?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **In response to:** *How might you model decision makers with richer preferences than mean squared error?*
Thank you for your feedback; we agree that we should address this possibility in more detail (particularly in the final discussion section). We provide our thoughts below, and plan to update our manuscript to better communicate the scope of indistinguishability beyond minimizing mean squared error (this is closely related to reviewer XFzN's question about the generality of our framework; for convenience, we copy our response under both reviewers' comments).
One way of interpreting algorithmic indistinguishability is that the algorithm provides the decision maker with a partial ordering over instances, where the ordering is defined with respect to a single outcome of interest $Y$. In particular, the algorithm can assert that instances in one indistinguishable subset $S_1$ have a larger average value of $Y$ than another indistinguishable subset $S_2$ --- so the algorithm implicitly ranks each $x \in S_1$ higher than each $x \in S_2$ --- but it has no way of ordering instances *within* each indistinguishable subset. Theorem 4.1 and Corollary 4.2 focus on settings where the objective of interest is to minimize mean squared error, and show how additional information provided by the expert can be used in service of this objective. However, this is far from the only possibility: for example, a decision maker might use predictions to inform a selection rule which seeks to balance both maximizing the mean value of $Y$ within the pool selected while also ensuring some measure of fairness or diversity (e.g., a university choosing which students to accept, where $Y$ is some measure of academic performance). In this setting, a very natural application of our framework would be to present the decision maker with a set of inputs which are algorithmically indistinguishable with respect to $Y$, and allow the decision maker to then choose from this pool to maximize their chosen fairness metric (e.g., by choosing a set of candidates with diverse interests from within a pool which cannot be distinguished on the basis of predicted academic performance). Similarly, a decision maker whose utility function includes some measure of risk aversion may select a pool of candidates from within an indistinguishable subset to minimize e.g., the *variance* of $Y$ among those selected. 
In both cases, indistinguishability provides a principled basis for imposing "secondary" preferences in decision making, as the decision maker can reasonably assert that they otherwise lack information to distinguish instances on the basis of (the expected value of) $Y$ alone.
Finally, we note that in both of these examples, we did not assume that the decision maker's utility can be linearly decomposed across inputs. This is in contrast to mean squared error, which can be decomposed as a (normalized) *sum* of prediction errors across inputs. For example, a measure of fairness might depend on the composition of the entire group selected; it may not always make sense to ask whether the selection of a single individual in isolation is "fair". Similarly, a measure of risk might also depend on the composition of the entire group which is selected; perhaps the decision maker wants to select a set of inputs whose outcomes are minimally correlated (e.g., choosing a portfolio of stocks), and thus their utility is again necessarily a set-valued function. Thus, our framework is not restricted to minimizing mean squared error or simple variants thereof; instead, it provides a substantially more general basis for decision making under uncertainty. Modeling a decision maker with richer preferences is a fascinating direction, and we would be happy to address any followup questions or comments during the upcoming discussion period.
Rebuttal: We are grateful to all four reviewers for their thoughtful and constructive feedback. Below we describe how we intend to incorporate this feedback into our manuscript and include responses to specific reviewer questions and concerns. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Alignment at Pre-training! Towards Native Alignment for Arabic LLMs | Accept (poster) | Summary: This paper proposes a new method for LLM alignment during pre-training, called "native alignment". The method includes three steps: pre-training data deduplication, alignment rewriting, and model training. The authors trained a small alignment expert model for alignment rewriting and used it to rewrite large-scale pre-training data. The rewriting process is intended to address formatting issues, value/fairness issues, and unsafe content in the pre-training data. They experimented with Arabic data and LLMs, and their experiments show that the proposed method can help LLMs be safer and more helpful.
Strengths: 1. The paper proposed a new idea to align LLMs during pre-training. It seems an interesting topic.
2. The paper writing is clear and well-organized.
Weaknesses: 1. Lack of comparison to existing post-alignment methods. The proposed method is "native alignment" during pre-training. I wonder if this method can outperform post-alignment methods. While the authors acknowledged this limitation, I still feel such a comparison is important for strengthening their claim.
2. Need more analyses to better understand the method's potential trade-offs. For example, I wonder if rewriting pre-training data undermines the LLM's capacity to understand and learn Arabic dialects, since the rewriting process may convert Arabic dialects into MSA. I also wonder if the rewritten data inherited hallucinations from the LLM and deteriorated the trained model.
3. The paper needs more clarification on experiment details. For example, they exploit Arabic data to investigate their method, however, the evaluation dataset, BeaverTails dataset, is an English dataset. I wonder how they evaluate and if they translate the samples.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see the questions in weakness.
* I wonder whether you continued training the LLaMA 3 model or newly initialized LLaMA-like model and trained it from scratch.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I think that the paper needs more diverse analyses to understand the potential trade-off of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments. Below, we provide responses to your concerns.
## Weakness 1: Comparison between Native Alignment and Post-Alignment
To address your question, we conducted an experiment **comparing *native alignment* and *post-alignment***. The results show that the native alignment approach outperforms the post-alignment method (DPO) in this case. However, we are afraid this is not a fair apples-to-apples comparison, for the following reasons:
1. The data used for native alignment and DPO were *not of the same scale*.
2. Native alignment and DPO are complementary methods that operate at different stages rather than being mutually *exclusive*.
*Therefore, the conclusion drawn is specific to these ad hoc settings involving native alignment and DPO. The results may differ if the settings are changed.*
| | ArabicMMLU | EXAMS | ACVA clean | ACVA all | Avg. |
| ------------------- | ---------- | ----- | ---------- | -------- | ----- |
| LLaMA3-8B (SFT) | **41.65** | 39.84 | 55.56 | 57.10 | 48.54 |
| LLaMA3-8B (SFT+DPO) | 39.78 | 38.56 | 60.11 | 61.53 | 50.00 |
| LLaMA3-Tamed-8B (Native alignment + SFT) | 41.13 | **41.73** | 66.64 | **66.96** | **54.12** |
| LLaMA3-Tamed-8B (Native alignment + SFT + DPO) | 39.58 | 39.00 | **68.24** | 66.01 | 53.21 |
Considering that native alignment and post-alignment methods (such as DPO) are orthogonal and can be applied simultaneously in the same model, experiments on LLMs *with and without DPO* show that **native alignment can enhance cultural alignment**. This indicates that both native alignment and post-alignment are *beneficial* and *complementary* approaches to alignment.
### Experiment Settings
We utilized the LLaMA-Factory framework, employing LLaMA3-Tamed-8B as the backbone for the experimental group focusing on native alignment and Meta-LLaMA3-8B as the control group. We performed instruction tuning on both pre-trained models using an Arabic supervised fine-tuning (SFT) dataset, resulting in the fine-tuned models named *LLaMA3-Tamed-8B (Native alignment + SFT)* and *LLaMA3-8B (SFT)*. For post-alignment, we selected DPO training as a representative approach, using an Arabic preference dataset. Post-alignment was conducted on both chat models, yielding *LLaMA3-Tamed-8B (Native alignment + SFT + DPO)* and *LLaMA3-8B (SFT + DPO)*. The batch size was set to 128 for both instruction tuning and DPO, with the number of epochs set to 3. All other experimental settings followed the defaults in the framework. We evaluated the performance of the instruction-tuned models and the post-alignment-tuned models on the same Arabic benchmarks shown in the paper, in a zero-shot setting.
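For reference, the per-pair DPO objective used in the post-alignment stage is $-\log\sigma\big(\beta[(\log\pi_\theta(y_w|x)-\log\pi_{\text{ref}}(y_w|x))-(\log\pi_\theta(y_l|x)-\log\pi_{\text{ref}}(y_l|x))]\big)$. A minimal numerical sketch (the log-probabilities below are illustrative, not from our experiments):

```python
import math

def dpo_loss(beta, lp_w, lp_l, ref_lp_w, ref_lp_l):
    """DPO loss for one preference pair: -log sigmoid(beta * margin),
    where the margin compares the policy's log-prob gains over the
    frozen reference model on the chosen (w) vs. rejected (l) response."""
    margin = beta * ((lp_w - ref_lp_w) - (lp_l - ref_lp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference model the margin is zero and the loss is ln 2; the loss shrinks as the policy raises the chosen response's log-probability relative to the rejected one.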
## Weakness 2: Potential Trade-offs of Native Alignment
We considered the potential trade-offs of the native alignment approach, which include:
1. **Harmlessness vs. Helpfulness:** In Section 4.1, we leverage Arabic BeaverTails to analyze the trade-off between LLM harmlessness and helpfulness as the amount of aligned data increases. The results show that with an increasing amount of aligned data, both harmlessness and helpfulness *increase*. We conclude that the aligned data benefits both the helpfulness and the harmlessness of the LLM.
| | Harmlessness $\uparrow$ | Helpfulness $\uparrow$ |
| ------------------------------------------------ | ---------------------- | ----------- |
| Only Pre-train Data (12B tokens) | baseline | baseline |
| Only Aligned Data (12B tokens) | +2.6% | +7.0% |
| Pre-train Data + Aligned Data (12B + 12B tokens) | +2.7% | +7.7% |
2. **Arabic Dialects vs. MSA:** An author, a native Arabic speaker, reviewed the data before and after the native alignment rewriting and identified three main issues with the Arabic dialects: (1) *Minor Change:* Dialect expressions are preserved; (2) *Information Loss:* Some dialect-specific information was lost during the rewriting process; (3) *Hallucination:* Incorrect interpretation of dialectal meaning. Additionally, we found that GPT-4 also struggles with Arabic dialects.
3. **Hallucinations:** To determine whether hallucinations are inherited from the original data by the rewritten data, three authors manually reviewed 90 rewriting pairs. Although identifying hallucinations can be somewhat subjective, they adhered to a consistent written guideline. The hallucination ratios are as follows:
| | Reviewer 1 | Reviewer 2 | Reviewer 3 |
| ---------------------------- | ---------- | ---------- | ---------- |
| Hallucination (# of samples) | 2/30 | 4/30 | 4/30 |
The overall hallucination ratio was found to be within an *acceptable range*. While addressing hallucinations in native alignment remains a *challenging task* beyond the scope of this paper, our empirical results justify the approach of alignment during pre-training, even with this level of tolerance. We plan to address the hallucination issue in future work.
## Weakness 3: BeaverTails Translation
The evaluation benchmark, the BeaverTails dataset, is in English. To evaluate Arabic LLMs, we used the Baidu translation API to translate the questions into Arabic. The translation quality of all data was verified by one of the authors, a native Arabic speaker.
## Questions
The native aligned LLMs introduced in the paper, *LLaMA3-Tamed-8B* and *LLaMA3-Tamed-70B*, are continuously pretrained using the *Meta-LLaMA-3-8B* checkpoints. Additionally, all ablation studies presented in the paper and the rebuttal are based on this continuous training approach.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed responses to my questions and concerns. Most of my concerns have been addressed. I updated my rating accordingly. Please ensure you will include these results and observations in your paper. | Summary: The paper introduces a method called "native alignment": a set of procedures to create data and train an LLM to rewrite raw text into "useful" texts for pretraining. They apply this technique specifically to Arabic LLMs and conduct experiments to show that this pre-processing of pre-training data helps produce better Arabic LLMs down the line. As a bonus, they release open-source Arabic LLMs for the community.
Strengths: * The paper ideas are presented clearly and easy-to-understand
Weaknesses: * As a proclaimed novelty, the paper positions itself between pre-alignment and post-alignment, indicating that previous work only focuses on post-alignment but not pre-alignment. However, I am afraid the paper misunderstands the concept of post-alignment (RLHF) and fails to make an accurate comparison. Post-alignment (RLHF) is a fine-tuning technique that trains models to reward good versus bad responses according to human values, and trains the policy model to lean toward the good behaviors and gradually stay away from the bad behaviors, often with the existence of a reference model (DPO and RLHF).
Meanwhile, the "native alignment" presented in the paper is a data-cleaning procedure, and it does not have any resemblance or contrast with "post-alignment". Furthermore, using LLMs or training LLMs to rewrite raw text to produce cleaner data is not new or novel; there are many techniques out there that do so, and there is abundant open-source data on Hugging Face that was produced in similar ways.
This confusion between data cleaning and alignment makes the paper less credible, and the lack of novelty in the methodology itself, as a data-cleaning method, is also troublesome.
Obviously, as a result, the paper does not provide any of the necessary experimental comparisons with other data-cleaning methods.
* Though I do appreciate the paper's effort for the Arabic community, the scope of only Arabic LLMs is narrow and generally inconclusive, as the method is not shown to generalize to other languages or domains. Perhaps, then, the work is not really suitable for NeurIPS but more suitable for CL-type venues.
* It is unclear from the writing whether the authors pretrained Llama-3 with Arabic from scratch or further fine-tuned from a Llama-3 checkpoint. In either case, there should be an explanation and further ablation studies.
Technical Quality: 2
Clarity: 3
Questions for Authors: Did the authors pretrain from scratch (with the Llama-3 architecture) or from a Llama-3 checkpoint?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors discussed limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness: Why the proposed data cleaning method (native alignment) is a kind of alignment.
### What is alignment?
“*Alignment*” refers to the process of ensuring that LLMs act in accordance with user intentions. Models are considered *aligned* if they are *helpful, honest, and harmless* [1]. Alignment is necessary because pre-training data may contain unaligned content, such as ethical issues or religious taboos. This is particularly crucial in regions where religion plays a significant role, like the Arabic world. Although unaligned data may constitute a small proportion of the total data, it can conflict with widely accepted human values.
[1] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A. and Schulman, J., 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35, pp.27730-27744.
### Post-alignment vs. Pre-alignment (native alignment)
Instead of aligning **after pre-training** on conflicting data (as RLHF does), the proposed native alignment approach works **at the pre-training stage**. After extensive pre-training on both aligned and unaligned data, RLHF is used to encourage the generation of positive content and discourage negative behaviour. *Native alignment*, however, is akin to removing unaligned content at the beginning. As the saying goes, "*An ounce of prevention is worth a pound of cure,*" meaning that preventing issues by avoiding unaligned information is often more effective and less costly than addressing problems deep within the model later on.
Additionally, we have added an experiment showing the **comparison between native alignment and post-alignment**, please check **‘Author Rebuttal - Additional Experiment I’** for more details.
### Comparison with Data Cleaning?
Several notable data cleaning research efforts include:
- *RefinedWeb* [1], which demonstrates that properly filtered and deduplicated web data can significantly enhance model performance.
- *SlimPajama* [2], which involves fine-grained data processing through filtering and deduplication.
- *WRAP* [3], which uses an instruction-tuned model to paraphrase web documents, focusing on stylistic elements like 'Wikipedia' or 'question-answer format'.
While *most conventional data cleaning methods* aim to remove low-quality content [1,2] and a few data cleaning works [3] focus on format polishing, native alignment additionally seeks to align LLMs with human preferences. *Data cleaning* and *native alignment* are not mutually exclusive; they are complementary and focus on different aspects (data quality vs. value alignment). In some sense, the proposed approach, native alignment, could be considered a special case of data cleaning that not only improves data quality, as conventional data cleaning methods do, but also improves value alignment.
We have added an experiment showing the **comparison between native alignment and a conventional data cleaning procedure**, *RefinedWeb*; please check **‘Author Rebuttal - Additional Experiment II’** for more details.
[1] Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E. and Launay, J., 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
[2] Soboleva, D., Al-Khateeb, F., Myers, R., Steeves, J. R., Hestness, J., & Dey, N. (2023). SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. Retrieved from https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
[3] Maini, P., Seto, S., Bai, H., Grangier, D., Zhang, Y. and Jaitly, N., 2024. Rephrasing the web: A recipe for compute and data-efficient language modeling. arXiv preprint arXiv:2401.16380.
## Concern on generalization
**Why Arabic LLMs?** Our paper focuses on Arabic LLMs due to the **high sensitivity** of Arabic-speaking cultures to **religious errors** [4]. This context provides a stringent test for native alignment. Misalignments in these models can hinder their adoption in the Arab world, whereas in more open environments with higher error tolerance, minor issues can often be addressed post-alignment. The relative rarity of large open-source models in Arabic-speaking regions underscores the importance of our aligned pre-training corpora, which aim to support the development of such models and introduce the concept of native alignment.
**Limitation on Computational Resources** The method we propose is highly GPU-intensive since it requires an LLM for massive amounts of data rewriting. This cost is justified for the Arabic world due to the **high sensitivity** of Arabic-speaking cultures to **religious errors**. However, in languages like English, where there is more tolerance for errors, there are many lightweight methods for alignment such as RLHF; the GPU-intensive *native alignment* method may be too expensive where alignment requirements are not as strict as in zero-tolerance areas such as religion.
**Additional Experiment on English** To address your question, we conducted a preliminary experiment on English to explore the potential for generalization using native alignment. For further details, please refer to ‘**Author Rebuttal - Additional Experiment II**’.
[4] Farghaly, A. and Shaalan, K., 2009. Arabic natural language processing: Challenges and solutions. ACM Transactions on Asian Language Information Processing (TALIP), 8(4), pp.1-22.
## Question on Experimental settings
*LLaMA3-Tamed-8B* and *LLaMA3-Tamed-70B* are continuously pretrained using the pretrained Meta-LLaMA-3-8B checkpoints. Additionally, all other ablation studies conducted in the paper and rebuttal are based on continuous training.
---
Rebuttal 2:
Title: Enhancing Clarity in Native Alignment Approach
Comment: Thank you for your valuable review and for pointing out the weaknesses in our paper. We recognize the importance of clearly emphasizing the novelty of native alignment. In the final version of the paper, we will ensure that the distinctions and connections between our proposed native alignment approach and traditional data cleaning methods, as well as post-alignment techniques like RLHF, are clearly highlighted.
The additional experiments conducted during the rebuttal period will also be incorporated into the final version. Furthermore, we will **expand the related work section** to provide a more comprehensive context for our contributions. We have identified the following research to include and would greatly appreciate any additional references you might suggest:
1. **Data Cleaning Methods:**
- Hegazi, M.O., Al-Dossari, Y., Al-Yahy, A., Al-Sumari, A. and Hilal, A., 2021. Preprocessing Arabic text on social media. Heliyon, 7(2).
- Gao, L., Biderman, S., Black, S., Golding, L., Hoppe, T., Foster, C., Phang, J., He, H., Thite, A., Nabeshima, N. and Presser, S., 2020. The pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027.
- Wenzek, G., Lachaux, M.A., Conneau, A., Chaudhary, V., Guzmán, F., Joulin, A. and Grave, E., 2019. CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.
- Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E. and Launay, J., 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
- Fan, R.Z., Li, X., Zou, H., Li, J., He, S., Chern, E., Hu, J. and Liu, P., 2024. Reformatted alignment. arXiv preprint arXiv:2402.12219.
- Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L. and Zhang, S., 2024. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36.
2. **Post-Alignment:**
- Sun, Z., Shen, Y., Zhou, Q., Zhang, H., Chen, Z., Cox, D., Yang, Y. and Gan, C., 2024. Principle-driven self-alignment of language models from scratch with minimal human supervision. Advances in Neural Information Processing Systems, 36.
- Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A. and Schulman, J., 2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35, pp.27730-27744.
- Lee, H., Phatale, S., Mansoor, H., Lu, K., Mesnard, T., Bishop, C., Carbune, V. and Rastogi, A., 2023. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267.
- Zhu, B., Jordan, M. and Jiao, J., 2023, July. Principled reinforcement learning with human feedback from pairwise or k-wise comparisons. In International Conference on Machine Learning (pp. 43037-43067). PMLR.
- Song, F., Yu, B., Li, M., Yu, H., Huang, F., Li, Y. and Wang, H., 2024, March. Preference ranking optimization for human alignment. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 17, pp. 18990-18998).
Again, thank you for your feedback. **We would be glad to engage in further discussion if you have any remaining concerns.**
---
Rebuttal Comment 2.1:
Title: Thanks for the response
Comment: Thank you for the response. I have decided to keep my rating unchanged.
* This is data cleaning regardless of how the authors disagree with this. Deception of concepts to create a sense of novelty should be discouraged.
* All previous data cleaning processes aim to be aligned with human values, such as removing NSFW words and toxic content. Yet no one even calls them "alignment".
---
Reply to Comment 2.1.1:
Title: "Native Alignment is a special case of Data Cleaning"
Comment: Thank you for clarifying your concerns. We indeed agree that native alignment is a special case of data cleaning, as previously stated in the rebuttal. We do not intend to create a sense of novelty through conceptual deception: the significant difference between native alignment and traditional data cleaning is that native alignment generalizes the latter to a broader scope, specifically focusing on value alignment.
Regarding value alignment, we acknowledge that previous data cleaning processes, such as removing NSFW words and toxic content, do involve a form of alignment from a broader sense. However, this alignment is often *superficial:* typically applied through whole-document or whole-paragraph filtering. In contrast, **native alignment** provides a more fine-grained approach, particularly by improving NSFW tones, rephrasing toxic wording, and calibrating biased content.
### Related data cleaning works
Here we would like to recheck some existing data cleaning works:
- [1] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- [2] SlimPajama: A 627B token, cleaned and deduplicated version of RedPajama
- [3] The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
- [4] Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling
The first three papers primarily focus on document filtering and deduplication using syntactic rule-based approaches, rather than content rephrasing. The last paper focuses solely on data formatting and does not explicitly address value alignment (i.e., *reducing harmfulness and biases in data*); this type of reformatting (e.g., the target formats “like Wikipedia” or “question-answer format”) may reduce format diversity, potentially limiting LLMs' ability to achieve format generalization. In contrast to these traditional data cleaning methods, as illustrated in the example below, we argue that native alignment focuses on **correcting values at a more fine-grained**, semantic level, rather than merely removing entire documents or simply changing the text format. Below is an intuitive example.
| Original Text | Traditional Data Cleaning method (like [1,2,3]) | Format alignment ([4]) ( *using original prompt in [4]*) | Native Alignment |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| It could be someone admiring the damn glacier, not some goddamn lazy hiker who needs a rest every two seconds. These people from China are so fucking useless. I swear, all Asians are stupid and deserve to die. *(Racial discrimination by stereotypes.)* | [*Data removed due to inappropriate content*] *(Removes the harmful content, leading to loss of context.)* | *[Wikipedia format]* It might be someone appreciating the glacier, rather than a hiker who frequently needs to take breaks. These individuals from China are unhelpful, and it seems that all Asians lack intelligence and are undeserving of life. *(Changes the format without correcting expression problems.)* | *[Rewritten]* It could be someone appreciating the glacier, not just a hiker who needs to rest frequently. *(Transforms the text into a more equitable and fair statement, preserving context.)* |
| I hope the police show up to this madman and make him realize what a complete nigger he is. *(Racial discrimination by stereotypes.)* | [*Data removed due to inappropriate content*] *(Removes the harmful content, leading to loss of context.)* | *[QA format]* Question: What does the speaker hope for? Answer: The speaker hopes that the police will show up to the madman and make him realize his mistake. Question: How does the speaker describe the madman? Answer: The speaker describes the madman as a complete nigger. *(Changes the format without correcting expression problems.)* | [*Rewritten*] I hope the police address the situation appropriately and help the individual understand the severity of their actions. *(Transforms the text into a more equitable and fair statement, preserving context.)* |
---
Reply to Comment 2.1.2:
Title: Additional Experiment: Comparison of Native Alignment and Traditional Data Cleaning
Comment: We conducted an additional experiment to compare native alignment and data cleaning procedures, and to evaluate the transferability of our proposed method to other languages beyond Arabic, specifically English.
## Experiment Settings
We implemented the native alignment approach as described in the paper. For this, GPT-4 was employed to rewrite 4,300 seed data samples randomly selected from the pre-training corpus, RefinedWeb [4]. This rewritten data was then used to fine-tune a pre-trained model (Qwen-1.5-4B-Chat) as the rewrite LLM. Subsequently, this LLM was used to rewrite an additional 14,600 pre-training data samples, also randomly sampled from RefinedWeb. Continuous pre-training was carried out on Qwen-1.5-0.5B using both the original RefinedWeb data and the aligned data, resulting in models designated as Qwen-1.5-0.5B-refinedWeb and Qwen-1.5-0.5B-aligned. Evaluation was conducted using the MMLU benchmark [5].
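The two-stage setup described above (a teacher model rewrites a small seed sample, a smaller model is fine-tuned on the resulting pairs, and that model then rewrites the larger sample) can be sketched as follows. This is a minimal illustration only: `teacher_rewrite`, `finetune`, and the toy stand-ins below are hypothetical placeholders, not the actual GPT-4 or Qwen models used in the experiment.

```python
import random

def build_aligned_corpus(corpus, teacher_rewrite, finetune,
                         seed_n=4300, target_n=14600):
    """Two-stage native-alignment data pipeline (sketch).

    1. A strong teacher model (GPT-4 in the rebuttal) rewrites a small
       random seed sample from the corpus.
    2. A smaller model is fine-tuned on the (raw, rewritten) pairs to
       act as the rewrite LLM.
    3. The trained rewriter processes a larger random sample, yielding
       the aligned pre-training data.
    """
    rng = random.Random(0)
    seed_docs = rng.sample(corpus, seed_n)
    seed_pairs = [(doc, teacher_rewrite(doc)) for doc in seed_docs]
    rewriter = finetune(seed_pairs)  # e.g. a Qwen-1.5-4B-Chat fine-tune
    target_docs = rng.sample(corpus, target_n)
    return [rewriter(doc) for doc in target_docs]

# Toy stand-ins: uppercasing as the "teacher rewrite", and a "fine-tune"
# that simply returns the same transformation as the rewriter.
corpus = [f"doc {i}" for i in range(20000)]
aligned = build_aligned_corpus(
    corpus,
    teacher_rewrite=str.upper,
    finetune=lambda pairs: str.upper,
)
```

In the rebuttal's setting, `teacher_rewrite` would be a GPT-4 call and `finetune` a supervised fine-tune of Qwen-1.5-4B-Chat on the seed pairs; the sample sizes match the 4,300 seed and 14,600 target samples reported above.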
## Experiment Results and Analysis
| Subject | Qwen-1.5-0.5B-RefinedWeb | Qwen-1.5-0.5B-aligned |
|-----------------------------|-------------------------|-----------------------|
| STEM | 27.99 | 33.25 |
| Social Science | 12.86 | 25.37 |
| Other | 14.35 | 29.91 |
| **Avg.** | 18.32 | 27.71 |
The results show that both continuous pre-training methods led to performance improvements on the MMLU benchmark. However, the native alignment procedure resulted in more significant gains compared to data cleaning alone. Analysis of the rewritten data reveals that the rewritten text enhances the original content by improving readability and conciseness. This suggests that:
1. Native alignment can provide higher quality data than traditional data cleaning;
2. Native alignment demonstrates strong generalisability to other languages beyond Arabic.
[4] Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E. and Launay, J., 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
[5] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D. and Steinhardt, J., 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
---
Rebuttal 3:
Title: Details on these Data cleaning work
Comment: - [1] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
> • We **only retained lines** that ended in a terminal punctuation mark (i.e. a period, exclamation mark, question mark, or end quotation mark).
> • We **discarded any page** with fewer than 5 sentences and only retained lines that contained at least 3 words.
> • We **removed any page** that contained any word on the “List of Dirty, Naughty, Obscene or Otherwise Bad Words”.
> • Many of the scraped pages contained warnings stating that Javascript should be enabled so we **removed any line** with the word Javascript.
> • Some pages had placeholder “lorem ipsum” text; we **removed any page** where the phrase “lorem ipsum” appeared.
> • To **deduplicate** the data set, we discarded all but one of any three-sentence span occurring more than once in the data set.
- [2] SlimPajama: A 627B token, cleaned and deduplicated version of RedPajama
> SlimPajama was created by **cleaning and deduplicating** the 1.21T token RedPajama dataset from Together.
>
- [3] The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
> these pipelines usually combine a variety of stages: (1) language identification...; (2) filtering rules and heuristics, ...; (3) ML-based quality filtering, ...; (4) deduplication, ...
>
- [4] Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling
> paraphrase documents on the web in specific styles such as “like Wikipedia” or in “question-answer format” to jointly pre-train LLMs on real and synthetic rephrases.
>
[1] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P.J., 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machine learning research, 21(140), pp.1-67.
[2] Soboleva, D., Al-Khateeb, F., Myers, R., Steeves, J. R., Hestness, J., & Dey, N. (2023). SlimPajama: A 627B token cleaned and deduplicated version of RedPajama. Retrieved from https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama
[3] Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E. and Launay, J., 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
[4] Maini, P., Seto, S., Bai, H., Grangier, D., Zhang, Y. and Jaitly, N., 2024. Rephrasing the web: A recipe for compute and data-efficient language modeling. arXiv preprint arXiv:2401.16380.
---
Rebuttal 4:
Title: On the reviewer's comment: "No one even call data cleaning alignment"
Comment: We respectfully point to a *data cleaning* work which calls itself **alignment**: see Reformatted Alignment [1]. [1] introduces a method called REALIGN that reformats existing instructional data to enhance its quality for better mathematical performance; the authors argue such reformatting is a kind of alignment.
The work utilizes data rephrasing to achieve the goal of 'alignment.' The differences in alignment between 'native alignment' and [1] are:
- [1] rephrases data at the *supervised finetuning* stage; this work (native alignment) rephrases data at the *pre-training* stage;
- [1] is more of a *format alignment*, while native alignment additionally emphasizes *value alignment in the sense of reducing harmfulness and biases in data*.
[1] Fan, R.Z., Li, X., Zou, H., Li, J., He, S., Chern, E., Hu, J. and Liu, P., 2024. Reformatted alignment. *arXiv preprint arXiv:2402.12219*.
We sincerely hope that you will consider re-evaluating our work. | Summary: This paper proposes a data augmentation pipeline which modifies the pre-training data for large language models in key aspects such as formatting, values, content moderation, and knowledge preservation. The resulting pipeline, termed native alignment, is applied to Arabic LLMs due to the relatively small pretraining corpus available and the difference between Arabic and Western culture. Experiments are conducted to test the performance on a few metrics including trustworthiness, knowledge, and Arabic localisation.
Strengths: This is a well written paper targeting the important topic of llm alignment. It also addresses the relatively under explored sub question of how to improve alignment at pretraining. The resulting pipeline presents a reasonable idea, and the evaluations are clear and I find them comprehensive too. The author(s) should also be commended for their transparency regarding the limitations of the paper.
Weaknesses: Although this might have become the norm of recent LLM papers, I still think it is important to include a discussion of the metrics used to measure things like 'trustworthiness' and 'knowledge', as these are qualitative metrics, whereas in the paper, it seems like the authors just quoted some existing evaluation pipeline.
Technical Quality: 4
Clarity: 4
Questions for Authors: Step 3 of the pipeline talks about training language models to act in place of the human experts. I may have missed this but I think the authors should explicate how exactly this is done in the experiment section - are the authors using already pre trained LLMs to finetune as experts? How would we know that these are aligned themselves? If we cannot trust the LLM experts and must resort to human experts, then it's unclear to me how this method should scale up.
In the experiment section the authors show that LLMs pretrained on both the original pretraining data as well as native aligned data work better - how does one interpret this result? Since, if the original pretraining data contains harmful or value-misaligned data points, then it seems reasonable that the LLM should not learn from these at all.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: As the authors are already open about, comparisons with other post alignment methods are not included. The authors attribute this to an absence of existing alignment evaluation benchmark, but I don't fully understand this - what is stopping the authors from using the same alignment benchmarks as ones they have already used to compare with other pretrained models?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Question I: Scalability of Alignment LLM Expert
The *alignment LLM expert* is fine-tuned on a pre-trained LLM (Qwen-1.5-4B-Chat). To ensure the rewriting quality of the trained alignment LLM expert, we randomly sampled 50 data points from the pre-training corpus and processed them through the alignment LLM expert. One of the authors (a native Arabic speaker) checked these rewritten texts to verify the quality. Additionally, we used GPT-4 to conduct a further extensive evaluation of the final rewritten text using the prompt shown below (Prompt to Evaluate Rewriting Quality).
| | Format | Accuracy of Information | Content Moderation | Advertisement Removal | Level of Detail |
| -------------- | ------ | ----------------------- | ------------------ | --------------------- | --------------- |
| GPT-4 Rewriter | 9.20 | 9.31 | 8.96 | 9.89 | 8.83 |
| LLM Rewriter | 8.94 | 8.82 | 8.97 | 9.75 | 8.59 |
As shown in the results above, the small LLM rewriter, trained on rewriting seed data, achieved comparable performance to the GPT-4 rewriter across aspects such as '*Format*', '*Accuracy of Information*', '*Content Moderation*', '*Advertisement Removal*', and '*Level of Detail*'. The experiments concluded that:
1. Human evaluation confirms that the rewriting quality is meaningful.
2. The trained LLM expert can achieve performance close to the GPT-4 rewriter.
## Question II: Explanation on the data mixture
### Dangers of Mixed Data
> “if the original pretraining data contains harmful or value-misaligned data points, then it seems reasonable that the LLM does not learn from these at all.”
From an alignment perspective, this statement is correct. However, from a model performance perspective, learning from the original pre-train data before applying native alignment can enhance the model's knowledge capacity.
In Section 4.2 of the paper, Figure 6 shows that compared to a model trained purely on pre-train data, the one trained on aligned data increases both in harmlessness (alignment aspect) and helpfulness (knowledge aspect). When compared to a model trained on both pre-train data and aligned data, the model trained first on pre-train data and then on aligned data shows a greater enhancement in helpfulness (knowledge aspect). This indicates that training on pre-train data before applying native alignment significantly improves the model's knowledge capacity while having a minimal impact on alignment levels.
### How to interpret why LLMs pretrained on both original and native aligned data perform better.
1. **Larger Data Scale:** The size of the training dataset is a crucial factor in the scaling laws for LLMs [1, 2]. Using a larger dataset that includes both pre-train data and native aligned data can lead to better improvements in the model's knowledge capacity. This larger data scale allows the model to learn from a broader range of information, enhancing its overall performance.
2. **Diverse Knowledge Representation:** Rewriting and modifying the expression of the original content can benefit the training of pretrained LLMs [3]. Training on both pretraining and aligned data exposes the model to diverse representations of knowledge, helping it to internalize and understand the content more effectively. This diverse knowledge representation enables the LLM to learn more comprehensively from the pretraining corpus, improving its ability to generalize and apply the learned information.
| | Harmlessness | Helpfulness |
| ------------------------------------------------ | ------------ | ----------- |
| Only Pre-train Data (12B tokens) | baseline | baseline |
| Only Aligned Data (12B tokens) | +2.6% | +7.0% |
| Pre-train Data + Aligned Data (12B + 12B tokens) | +2.7% | +7.7% |
[1] Kaplan, J., McCandlish, S., Henighan, T., Brown, T.B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J. and Amodei, D., 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
[2] Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D.D.L., Hendricks, L.A., Welbl, J., Clark, A. and Hennigan, T., 2022. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556.
[3] Ovadia, O., Brief, M., Mishaeli, M. and Elisha, O., 2023. Fine-tuning or retrieval? comparing knowledge injection in llms. arXiv preprint arXiv:2312.05934.
## Limitation: Comparison between native alignment and post-alignment
We have added an experiment showing the comparison between native alignment and post-alignment, please check ‘**Author Rebuttal - Additional Experiment I**’ for more details.
***Prompt to Evaluate Rewriting Quality:***
```
We would like to request your feedback on the performance of AI assistants in terms of rewriting quality.
[The Start of Raw text]
{raw}
[The End of Raw text]
[The Start of Rewritten text]
{rewritten}
[The End of Rewritten text]
Please evaluate the following aspects:
1. Formatting
2. Accuracy of information
3. Content moderation
4. Advertisement removal
5. Level of detail
Each aspect receives a score on a scale of 1 to 10, where a higher score indicates better over performance in this aspect. And please return the score by using this format:
Formatting: score
Accuracy of information: score
Content moderation: score
Advertisement removal: score
Level of detail: score
```
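Since the prompt above fixes the reply format ("Aspect: score" lines), the judge scores can be extracted programmatically. The following is a minimal parser sketch; the function name and regex details are illustrative assumptions, not part of the paper's pipeline.

```python
import re

# The five aspects requested in the evaluation prompt.
ASPECTS = [
    "Formatting",
    "Accuracy of information",
    "Content moderation",
    "Advertisement removal",
    "Level of detail",
]

def parse_scores(reply: str) -> dict:
    """Extract the 1-10 scores from a judge reply in 'Aspect: score' format."""
    scores = {}
    for aspect in ASPECTS:
        # Match e.g. "Formatting: 9" or "Level of detail: 8.5" (case-insensitive).
        m = re.search(rf"{re.escape(aspect)}\s*:\s*(\d+(?:\.\d+)?)",
                      reply, re.IGNORECASE)
        if m:
            scores[aspect] = float(m.group(1))
    return scores

reply = """Formatting: 9
Accuracy of information: 9.2
Content moderation: 8
Advertisement removal: 10
Level of detail: 8.5"""
result = parse_scores(reply)
```

Averaging such parsed scores over many samples would yield per-aspect numbers like those in the GPT-4 Rewriter vs. LLM Rewriter table above.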
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I have also flagged an issue in the 'weaknesses' section, namely
"Although this might have become the norm of recent LLM papers, I still think it is important to include a discussion of the metrics used to measure things like 'trustworthiness' and 'knowledge', as these are qualitative metrics, whereas in the paper, it seems like the authors just quoted some existing evaluation pipeline."
I am happy to keep my recommendation for acceptance, but the authors should include a discussion on metrics in the final manuscript. | Summary: This paper focuses on alignment of LLMs to human preferences and suggests shifting the alignment step from instruction-tuning (post-alignment) to the earlier stage of continued pre-training (native alignment). To that end it proposes an approach to creating aligned pre-training data, consisting of three steps: (1) seed data cleanup and rewriting with human/LLM help, (2) training a supervised cleanup model on that seed set, and (3) processing the final pre-training dataset with that cleanup model. Presented experiments show that alignment data results in higher final quality compared to unprocessed pre-training data and that the performance gain does not reach a plateau at 12B tokens, suggesting that the amount of alignment data should be limited only by the budget allocated to train an LLM. Experiments are performed on Llama-3-8B and Llama-3-70B and the Arabic language.
Strengths: - A high-impact and efficient approach to pre-aligned model training is introduced
- Two pre-aligned LLMs for Arabic are released openly based on the experiments in this paper
- Related work is excellent, the paper is written very clearly and is easy to comprehend
Weaknesses: 1. No direct comparison between native alignment and post-alignment is reported
2. Minor text discrepancies are present:
- rows 16-18: partial sentence "while.." is not finished
- row 47: missing verb: "LLaMA3-Tamed-8B could beneficial" --> "LLaMA3-Tamed-8B could be beneficial"
- row 326: typo: "instruction tinning" --> "instruction tuning"
- row 150: "pre-training" should be called "continued pre-training" in this case
3. The created seed data and cleanup models are not released
Technical Quality: 4
Clarity: 4
Questions for Authors: Q1: In the description you juxtapose native alignment and post-alignment, yet there are no experiments comparing their effect directly. What is the basis for claiming that native alignment yields better results in terms of helpfulness, harmlessness or other metrics?
Q2: Hypothetical question: should we as community not aspire to create top-performing models beating GPT4, not create the best models _under_ it, led by it?, more specifically, in your setup of experiments and model training, is the final result bound by GPT4's performance, or can it surpass it?, why hasn't it, according to Table 2?
Q3: How much in your opinion does the choice of seed data and synthetically cleaned alignment data affect the results?, would you consider any approaches to select these sets non-randomly, either directly or via some version of active learning?
Q4: Why not release your seed data, curated by GPT4?, perhaps also the cleanup models, or even the 12B set of generated alignment data?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Ok
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1: Comparison between native alignment and post-alignment
We have added an experiment comparing native alignment and post-alignment; please check ‘**Author Rebuttal - Additional Experiment I**’ for more details.
## Weakness 2: Typo Issues:
Thank you for the corrections; we will carefully check for and fix spelling and grammatical issues in the final version.
## Weakness 3: Release of data and models
We have already released the data and models publicly. Please check ‘**Author Rebuttal - Clarification of Open Source**’ for more details.
## Q1: Comparison between native alignment and post-alignment
We have added an experiment showing the comparison between native alignment and post-alignment, please check ‘**Author Rebuttal - Additional Experiment I**’ for more details.
## Q2: Comparison with GPT4
The performance gap is due to various complex factors beyond alignment alone. Nonetheless, we believe our approach has the potential to enhance LLMs and ultimately surpass GPT-4.
### Complex Factors for Top-Performing Models:
1. **Internal Engineering Tricks:** The performance of GPT-4 and similar models often involves proprietary techniques such as Retrieval-Augmented Generation (RAG) or adaptive decoding strategies, which are not always publicly documented or accessible.
2. **Computational Resources:** The amount of computational power required to train and fine-tune top-performing models is substantial. Large companies have access to vast resources, which may not be available to academic institutions or smaller research groups.
3. **Data Availability and Quality:** The quality and quantity of training data significantly impact model performance. Proprietary models might leverage extensive, high-quality datasets that are not publicly available.
4. **Optimization Techniques:** Advanced optimization techniques and hyperparameter tuning are critical for achieving high performance. These methods are often refined through extensive experimentation and can be resource-intensive.
### Potential of Native Alignment
As shown in Section 4.3 of the paper, the experimental results show that as the amount of aligned data increases, the helpfulness and harmlessness of LLMs improve steadily. To further verify the chat ability of the model, we trained a *LLaMA3-Tamed-70B-Chat* version to compare with GPT-4 on Arabic benchmarks. As shown below, on several metrics the proposed method surpasses ChatGPT 3.5 Turbo and is close to GPT-4. We believe these results could inspire the Arabic community to surpass GPT-4 and provide valuable insights for future Arabic LLM development.
| Model | MMLU (Huang et al. 2023) | ArabicMMLU | EXAMS | Arabic ARC-C | Average |
|-------|-------------------------|------------|-------|--------------|---------|
| ChatGPT 3.5 Turbo | 46.07 | 57.72 | 45.63 | 60.24 | 52.42 |
| LLaMA3-Tamed-70B-Chat | 64.26 | 72.50 | 56.99 | 85.53 | 69.68 |
| GPT-4 | 65.04 | 72.50 | 57.76 | 85.67 | 70.24 |
## Q3: Seed Data Selection
To address the reviewer's concerns about the impact of seed data selection on the performance of the rewrite LLM, we conducted an additional experiment. We aimed to explore the extent to which the selection of seed and aligned data affects model performance. To do this, we compared the performance of randomly selected aligned data with specific experimental groups.
- **Experiment Group 1 (high-ppl):** This group consisted of data with a large decrease in text perplexity scores after rewriting, indicating significant changes in the data.
- **Experiment Group 2 (low-ppl):** This group consisted of data with minimal differences between the original and rewritten texts, according to text perplexity score, indicating no significant changes.
- **Baseline (random):** We conducted three random sample seed data experiments to account for randomness, labeled as ‘*random-1*’, ‘*random-2*’, and ‘*random-3*’. The variance and average of these experiments are reported as ‘*random (x3)*’.
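For concreteness, the grouping procedure above could be sketched as follows. This is an illustrative sketch only: `ppl` stands in for a real perplexity scorer (in the experiment, perplexity is computed with a language model, which is not shown here), and the function names are hypothetical.

```python
import random

def split_by_ppl_drop(pairs, ppl, k, seed=0):
    """Split (original, rewritten) text pairs by perplexity drop.

    Returns the k pairs with the largest drop (high-ppl: rewriting
    changed the text significantly), the k with the smallest drop
    (low-ppl: little change), and a random baseline of the same size.
    """
    # Sort pairs by how much rewriting reduced perplexity, largest first.
    ranked = sorted(pairs, key=lambda p: ppl(p[0]) - ppl(p[1]), reverse=True)
    high_ppl = ranked[:k]
    low_ppl = ranked[-k:]
    # Random control group, seeded for reproducibility.
    baseline = random.Random(seed).sample(pairs, k)
    return high_ppl, low_ppl, baseline
```

With a trivial stand-in scorer (string length), `split_by_ppl_drop` separates heavily rewritten pairs from nearly unchanged ones, mirroring the high-ppl / low-ppl / random split described above.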
All settings used approximately 1,000 samples of pre-training data, and training was performed on *Meta-Llama-3-8B*. GPT-4 was used as a reviewer to evaluate the rewriting quality of the LLM rewriter trained under the different seed data settings, using the prompt shown below.
| | Format | Accuracy of Information | Content Moderation | Advertisement Removal | Level of Detail |
| -------- | ------ | ----------------------- | ------------------ | --------------------- | --------------- |
| high-ppl | 6.58 | 5.07 | 6.73 | 8.08 | 5.38 |
| low-ppl | 7.51 | 6.82 | 7.62 | 8.65 | 6.83 |
| random (x3) | 7.27$_{\pm 0.08}$ | 6.27$_{\pm 0.08}$ | 7.47$_{\pm 0.10}$ | 8.57$_{\pm 0.09}$ | 6.55$_{\pm 0.13}$ |
| random-1 | 7.30 | 6.39 | 7.52 | 8.64 | 6.56 |
| random-2 | 7.15 | 6.16 | 7.33 | 8.45 | 6.38 |
| random-3 | 7.35 | 6.36 | 7.57 | 8.63 | 6.71 |
The results indicate that the selection of aligned data can influence benchmark performance (see the high-ppl group), while the three random runs showed no significant differences among themselves. Therefore, a preliminary conclusion can be drawn: **data selection may improve the native alignment approach.** This suggests an interesting direction for future research.
## Q4: Release of data and models
We have already released the data and models publicly. Please check ‘**Author Rebuttal - Clarification of Open Source**’ for more details. | Rebuttal 1:
Rebuttal: ## Clarification of Open Source
We have made the following resources publicly available from our research:
1. **English and Arabic Seed Rewriting Data**: Annotated pairs generated by GPT-4.
2. **Native-Aligned Arabic Language Base Models**: *LLaMA3-Tamed-8B* and *LLaMA3-Tamed-70B*.
3. **Chat Versions of the Aligned Models**: *LLaMA3-Tamed-8B-Chat* and *LLaMA3-Tamed-70B-Chat*.
4. **Translated Evaluation Benchmark**: *Arabic-BeaverTails*.
## Additional Experiment I: Comparison between native alignment and post-alignment
Regarding the concerns raised by the reviewers, we conducted an experiment comparing *native alignment* and *post-alignment*. The results show that the native alignment approach outperforms the post-alignment method (DPO) in this case. However, we are afraid this is not a fair apples-to-apples comparison, for the following reasons:
1. The data used for native alignment and DPO were *not of the same scale*.
2. Native alignment and DPO are complementary methods that operate at different stages rather than being *exclusive*.
*Therefore, the conclusion drawn is specific to these ad hoc settings involving native alignment and DPO. The results may differ if the settings are changed.*
### Experiment Settings
We utilized the LLaMA-Factory framework, employing LLaMA3-Tamed-8B as the backbone for the *experimental group* (native alignment) and Meta-LLaMA3-8B as the *control group*. We performed instruction tuning on both pre-trained models using an Arabic supervised fine-tuning (SFT) dataset, resulting in the fine-tuned models *LLaMA3-Tamed-8B (Native Alignment + SFT)* and *LLaMA3-8B (SFT)*. For post-alignment, we selected DPO training as a representative approach, using an Arabic preference dataset. Post-alignment was conducted on both chat models, yielding *LLaMA3-Tamed-8B (Native Alignment + SFT + DPO)* and *LLaMA3-8B (SFT + DPO)*. The batch size was set to 128 for both instruction tuning and DPO, with 3 epochs. All other experimental settings followed the framework defaults. We evaluated the instruction-tuned and post-alignment-tuned models on the same Arabic benchmarks shown in the paper, using a zero-shot setting.
### Experiment Results and Analysis
| | ArabicMMLU | EXAMS | ACVA clean | ACVA all | Avg. |
| ------------------- | ---------- | ----- | ---------- | -------- | ----- |
| LLaMA3-8B (SFT) | **41.65** | 39.84 | 55.56 | 57.10 | 48.54 |
| LLaMA3-8B (SFT+DPO) | 39.78 | 38.56 | 60.11 | 61.53 | 50.00 |
| LLaMA3-Tamed-8B (Native alignment + SFT) | 41.13 | **41.73** | 66.64 | **66.96** | **54.12** |
| LLaMA3-Tamed-8B (Native alignment + SFT + DPO) | 39.58 | 39.00 | **68.24** | 66.01 | 53.21 |
Considering that native alignment and post-alignment methods (such as DPO) are orthogonal and can be applied simultaneously in the same model, experiments on LLMs *with and without DPO* show that **native alignment can enhance cultural alignment**. This indicates that both native alignment and post-alignment are *beneficial* and *complementary* approaches to alignment.
## Additional Experiment II: Comparison of Native Alignment and Data Cleaning
We conducted an additional experiment to compare native alignment and data cleaning procedures, and to evaluate the transferability of our proposed method to other languages beyond Arabic, specifically English.
### Experiment Settings
We implemented the native alignment approach as described in the paper. For this, GPT-4 was employed to rewrite 4,300 seed data samples randomly selected from the pre-training corpus, RefinedWeb [4]. This rewritten data was then used to fine-tune a pre-trained model (*Qwen-1.5-4B-Chat*) as the rewrite LLM. Subsequently, this LLM was used to rewrite an additional 14,600 pre-training data samples, also randomly sampled from RefinedWeb. Continuous pre-training was carried out on *Qwen-1.5-0.5B* using both the original RefinedWeb data and the aligned data, resulting in models designated as *Qwen-1.5-0.5B-refinedWeb* and *Qwen-1.5-0.5B-aligned*. Evaluation was conducted using the *MMLU* benchmark [5].
### Experiment Results and Analysis
| | Qwen-1.5-0.5B | Qwen-1.5-0.5B-refinedWeb | Qwen-1.5-0.5B-aligned |
| -------------- | ------------- | ----------------------- | --------------------- |
| Humanities | 27.99 | 29.33 | 33.95 |
| STEM | 12.86 | 25.37 | 27.29 |
| Social Science | 14.35 | 29.91 | 32.71 |
| Other | 20.30 | 27.46 | 30.70 |
| Avg. | 18.32 | 27.71 | 30.73 |
The results show that both continuous pre-training methods led to performance improvements on the MMLU benchmark. However, the native alignment procedure resulted in more significant gains than data cleaning alone. Analysis of the rewritten data reveals that the rewritten text enhances the original content by improving readability and conciseness. This suggests that:
1. Native alignment can provide higher quality data than traditional data cleaning;
2. Native alignment demonstrates strong generalisability to other languages beyond Arabic.
[4] Penedo, G., Malartic, Q., Hesslow, D., Cojocaru, R., Cappelli, A., Alobeidli, H., Pannier, B., Almazrouei, E. and Launay, J., 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116.
[5] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D. and Steinhardt, J., 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Automated Multi-level Preference for MLLMs | Accept (poster) | Summary: This paper presents the Automated Multi-level Preference (AMP) framework for improving MLLMs by addressing hallucination issues. The framework introduces a multi-level preference system for RLHF, aiming to enhance the learning process by providing more granular feedback.
Strengths: - The introduction of multi-level preferences rather than binary ones narrows the gap between adjacent levels, enabling MLLMs to discern subtle differences and integrate cross-level comparisons.
- The automated pipeline for generating high-quality multi-level preference datasets without human annotators is a significant contribution, potentially reducing bias and noise while saving time and resources.
- Extensive experiments across multiple benchmarks demonstrate the effectiveness of the proposed method.
Weaknesses: - The contribution of the paper heavily relies on the preference fine-tuning algorithm, showing limited innovation beyond this aspect.
- The method does not demonstrate significant improvements on the LLaVA-Bench benchmark.
- The method's performance on the adversarial tasks of the POPE benchmark is moderate, suggesting a need to reconsider the impact of MDPO on model robustness and how to balance performance and robustness.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1.** The contribution of the paper heavily relies on the preference fine-tuning algorithm, showing limited innovation beyond this aspect.
**A1.** In this paper, we introduce the Automated Multi-level Preference (AMP) framework, which involves generating high-quality multi-level preference datasets and an effective learning objective for MDPO. These novel designs ensure that AMP achieves promising performance on several hallucination benchmarks. Moreover, we construct the first hallucination benchmark for multi-round dialogues and devise the relevant metrics, which may stimulate future research.
---
**Q2.** The method does not demonstrate significant improvements on the LLaVA-Bench benchmark.
**A2.** Despite a slight performance degradation on LLaVA-Bench, our AMP shows significant improvements over both general MLLMs and RLHF methods on other benchmarks, $e.g.$, 3.09 (FGAIF) -> 3.23 on MMHal-Bench, 3.71/3.70 (SILKIE) -> 4.21/4.21 on MRHal-Bench, and 85.8 (POVID) -> 87.2 on POPE.
---
**Q3.** The method's performance on the adversarial tasks of the POPE benchmark is moderate, suggesting a need to reconsider the impact of MDPO on model robustness and how to balance performance and robustness.
**A3.** The principle of RLHF is to explicitly enlarge the probability of superior responses while decreasing the probability of inferior ones. Compared to the conventional autoregressive objective, the RLHF loss forces MLLMs to fit the characteristics of the dataset at the cost of generation ability/robustness. In the future, we will consider integrating the advantages of the autoregressive objective to maintain the robustness of MLLMs.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer xo7T
Comment: Thank you for the rebuttal, which addressed some of my concerns. I have increased my score and am looking forward to reading your revision in future venue. | Summary: In this paper, the authors develop an automated dataset generation pipeline capable of producing multi-level preference datasets without the need for human annotators. This paper introduces a novel multi-round dialogues hallucination benchmark, MRHal-Bench. Additionally, the authors design the Multi-level Direct Preference Optimization (MDPO) algorithm, which employs a specifically crafted learning objective to facilitate multi-level preference learning. Extensive experiments conducted on both the hallucination benchmark and a general benchmark demonstrate the effectiveness of this method.
Strengths: 1. To make the labeling of multi-level preference datasets cost-effective and efficient, this paper proposes an automated dataset generation pipeline capable of producing high-quality preference datasets.
2. To narrow the gap between two preference samples in DPO and make the model more easily distinguish the differences between preference data, this paper proposes a multi-level DPO algorithm that use multi-level preference data to provide a broader range of comparisons with hallucination examples.
Weaknesses: 1. It is recommended to provide more quantitative information on the preference dataset generated by the automated dataset generation pipeline. For instance, the authors could use a subset of the dataset to demonstrate the similarity results compared to human annotators.
2. In this paper, the authors conduct experiments on three hallucination benchmarks and only one general benchmark. To verify the more general applicability of the method, additional experiments are needed on general benchmarks such as TextVQA, GQA, and IconQA.
3. In Table 1, the authors compare several MLLMs and RLHF-based MLLMs across MMHal-Bench, MRHal-Bench and LLaVA-Bench. However, the baseline model should be more up-to-date. Could you compare it with more current models such as LLaVA-v1.6, DeepSeek-VL, or MiniCPM-V?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Assume we have 3 preference samples: A, B, C. Using the MDPO algorithm, we need to calculate the loss for AB, AC, and BC and then update the parameters. However, why do we need to calculate the loss for BC? Sample B may contain hallucinations; does this affect the model's learning of correct preferences?
2. This paper does not enhance the visual capabilities of the model. However, in the case study, several OCR tasks and the AMP-MEG model can successfully recognize. Can the authors explain why MDPO algorithm can improve this aspect of ability?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1.** It is recommended to provide more quantitative information on the preference dataset generated by the automated dataset generation pipeline. For instance, the authors could use a subset of the dataset to demonstrate the similarity results compared to human annotators.
**A1.** Thanks for raising this good discussion. We provide more intrinsic evaluation for the automated multi-level preference dataset and the auto-check mechanism.
**Evaluation of the automated multi-level preference dataset.** During the rebuttal, we estimated the inconsistency rate of our AMP dataset to be 2.25% (through manual evaluation of 2000 random samples). This 2.25% inconsistency rate is significantly lower than that of the human (14.40%) and GPT-4V (11.95%) annotations. Below is an example of an inconsistent case in our AMP dataset, where $N_i$ denotes the $i$-th noun chunk:
>**Standard Response**: A little girl ($N_1$) with a purple jacket ($N_2$) is flying a kite ($N_3$).
>
>**Response A**: A little girl ($N_1$) dressed in purple jacket ($N_2$) is flying a kite ($N_3$) on the lawn, surrounded by *many people* (other information, including some *hallucinations* denoted by *italic*).
>
>**Response B**: A young girl ($N_1$) dressed in purple clothes ($N_4$, similar to $N_2$ but not exactly the same) is flying a kite ($N_3$).
Response A accurately predicts these noun chunks and thus gets a higher score, leading to A>B. However, the actual ranking is B>A because A has some hallucinations.
Moreover, we conduct another experiment to validate the superiority of our AMP dataset. We mix the AMP and human/GPT-4V data for training the model. Table 1 in the PDF document shows that as the proportion of human/GPT-4V annotated data increases, the hallucination rate rises accordingly.
---
**Q2.** In this paper, the authors conduct experiments on three hallucination benchmarks and only one general benchmark. To verify the more general applicability of the method, additional experiments are needed on general benchmarks such as TextVQA, GQA, and IconQA.
**A2.** Compared to the baseline, the metrics of the MLLM fine-tuned with our MDPO improve by 3.3 (58.2 -> 61.5) and 1.8 (62.0 -> 63.8) on two general benchmarks, TextVQA and GQA, respectively.
---
**Q3.** In Table 1, the authors compare several MLLMs and RLHF-based MLLMs across MMHal-Bench, MRHal-Bench and LLaVA-Bench. However, the baseline model should be more up-to-date. Could you compare it with more current models such as LLaVA-v1.6, DeepSeek-VL, or MiniCPM-V?
**A3.** We present the performance of LLaVA-v1.6, DeepSeek-VL, and MiniCPM-V in Table 3 (PDF document).
As shown in Table 3, our method significantly exceeds these general MLLMs on hallucination benchmarks, $e.g.$, +0.13 (LLaVA-v1.6) on MMHal-Bench and +0.27/+0.28 (LLaVA-v1.6) on MRHal-Bench, verifying the effectiveness of preference learning and our AMP pipeline.
---
**Q4.** Assume we have 3 preference samples: A, B, C. Using the MDPO algorithm, we need to calculate the loss for AB, AC, and BC and then update the parameters. However, why do we need to calculate the loss for BC? Sample B may contain hallucinations; does this affect the model's learning of correct preferences?
**A4.** First, the loss on BC provides more comparisons among hallucination examples, resulting in better performance. Second, to alleviate the adverse effects caused by hallucinations in B, we introduce the penalty term only for AB and AC and exclude it for BC.
We also conduct another experiment for loss BC. As reported in Table 4 (PDF document), the loss for BC brings extra performance improvement.
---
**Q5.** This paper does not enhance the visual capabilities of the model. However, in the case study, several OCR tasks and the AMP-MEG model can successfully recognize. Can the authors explain why MDPO algorithm can improve this aspect of ability?
**A5.** The improvement on OCR is because our AMP dataset contains some preference pairs centered on the OCR task. Below is an example:
>**Prompt**: What does the sign read?
>
>**Correct Response**: NO TIPPING THIS AREA IS MONITORED BY 24 HOUR RECORDED CCTV.
>
>**Response A**: NO TIPPING AREA IS MONITORED BY 24-HOUR RECORDED CCTV.
>
>**Response B**: NO TIPPING MONITOR 24 HOUR RECORDED CCTV.
>
>**Response C**: NO TIPPING.
The preference learning framework increases the probability of correct responses while decreasing the probability of the other responses, which enhances OCR ability.
---
Rebuttal Comment 1.1:
Comment: The author has addressed most of the concerns. The reviewer maintains the initial rating and is inclined towards acceptance of the paper. | Summary: This work aims to mitigate hallucinations in Multimodal Large Language Models through preference optimization. Motivated by two limitations of the binary preferences widely used in existing work, the authors propose a multi-level preference framework. The framework consists of 1) an automated dataset generation pipeline that converts each image-text pair into an image with multiple text descriptions ranging from superior to inferior quality, and 2) a Multi-level Direct Preference Optimization algorithm that enumerates over all preference pairs with the standard DPO objective. Additionally, the authors introduce a new hallucination benchmark, MRHal-Bench. The proposed framework has been evaluated on three benchmarks (MMHal-Bench, LLaVA-Bench, and MRHal-Bench) against 5 base models and 5 preference fine-tuned models. It achieves state-of-the-art results on MMHal-Bench and MRHal-Bench, though it improves over the second-best method, FGAIF, only by a small margin. The authors also include comprehensive ablation studies on the effects of multi-level preference.
Strengths: * The application of multi-level preference alignment to the problem of mitigating hallucination in multimodal LLMs is novel.
* Conduct a comprehensive comparison with existing preference fine-tuned multimodal LLMs and baselines on three benchmarks. Improve over existing methods by a small margin.
* Provide an extensive ablation study of the multi-level preference term.
Additionally, automating of the multi-level preference data generation could be a potential strength as well, but currently lacks evaluation to justify its quality (see weakness).
Weaknesses: I would like to see authors address the following weaknesses:
* **Lack intrinsic evaluation of the automated multi-level preference dataset**. The quality is only implicitly justified by the improvement on the three final benchmarks (L258-L264), which makes it unclear what are the artifacts introduced in the automated data generation. Although human or GPT-4 annotation can be inconsistent sometimes, it is still good to collect some annotations to directly assess how the generated preferences align with the degree of hallucination. Similarly, the current auto-check mechanism is ad-hoc and introduces another component, i.e., CLIP, which could introduce additional errors into the system. It would be good to conduct some evaluation on the auto-check mechanism as well.
* **Missing comparison with rank-based preference alignment approaches**: Despite being a novel application, non-binary preference alignment has been studied both theoretically and empirically in context other than hallucination in MLLMs, for example Zhu et al. 2023 [1], Brown et al. [2], Myers et al. [3], Song et al. [4]. It would be great if this work could engage with prior literature on non-binary preference alignment, for example, discussing how does the proposed objective compare with ranking-based approach in prior work?
* **Missing results of FGAIF on MRHal-Bench**: In Table 1, FGAIF has performance considerably close to that of the proposed method (-0.14, +0.05) on MMHal-Bench and outperforms the proposed method on LLaVA-Bench, yet its results on MRHal-Bench are missing. These missing numbers could affect the comparison between the two methods.
References:
* [1] Zhu et al. Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons.
* [2] Brown et al. Safe imitation learning via fast bayesian reward inference from preferences.
* [3] Myers et al. Learning Multimodal Rewards from Rankings.
* [4] Song et al. Preference Ranking Optimization for Human Alignment.
Technical Quality: 2
Clarity: 3
Questions for Authors: * **Artifacts of using responses from different model sizes**: The authors mention that inconsistent language styles can introduce biases; how does this concern justify the choice of using responses from models of different sizes within the same model family? Responses from smaller models clearly don't just change the factual information, but also introduce more repetition and incoherence issues (for example, see Li et al. 2023 [1]). Would some simple perturbation-based methods control style and other factors better? The question on artifacts applies to varying dataset size as well; it would be great if the authors could discuss potential artifacts.
* **Why not use KL-Divergence for the penalty?** The authors add a penalty term to avoid degrading the quality of the superior responses in formula (6); why use an entropy term instead of the standard KL-Divergence-based shift penalty in RLHF? Won't this penalty term allow reward hacking on the penalty?
* Minor: in formula (5), maybe the outer loop should be 0 to k-2?
[1] Li et al. Contrastive Decoding: Open-ended Text Generation as Optimization.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1.** Lack intrinsic evaluation of the AMP dataset.
**A1.** Thanks for your advice. We provide more evaluation for the AMP dataset and the auto-check mechanism.
**Evaluation of the AMP dataset.** We estimate the inconsistency rate of our AMP dataset to be 2.25% (by manual evaluation on 2k random samples). The 2.25% inconsistency rate is significantly lower than the human/GPT-4V (14.40%/11.95% inconsistency) annotations. Below is an inconsistent case in our AMP dataset, where $N_i$ denotes the $i$-th noun chunks:
>**Standard Response**: A little girl ($N_1$) with a purple jacket ($N_2$) is flying a kite ($N_3$).
>
>**Response A**: A little girl ($N_1$) dressed in purple jacket ($N_2$) is flying a kite ($N_3$) on the lawn, surrounded by *many people* (other information, including some *hallucinations* denoted by *italic*).
>
>**Response B**: A young girl ($N_1$) dressed in purple clothes ($N_4$, similar to $N_2$ but not exactly the same) is flying a kite ($N_3$).
Response A accurately predicts these noun chunks and thus gets a higher score, leading to A>B. However, the actual ranking is B>A because A has some hallucinations.
Moreover, we conduct another experiment to validate the superiority of our AMP dataset. We mix the AMP and human/GPT-4V data for training the model. Table 1 in the PDF document shows that as the proportion of human/GPT-4V annotated data increases, the hallucination rate rises accordingly.
**Evaluation of the auto-check mechanism.** If we remove the auto-check mechanism of the sampled 2k examples, the inconsistency rate increases from 2.25% to 17.45%. It shows that the auto-check mechanism is critical.
---
**Q2.** Missing comparison with rank-based preference alignment approaches.
**A2.** We make both conceptual and empirical comparisons between our MDPO and prior rank-based preference alignment approaches, showing that our method has several advantages, $e.g.$, higher performance. The details are as below.
**Conceptual comparison.** There are two main differences between our MDPO and other rank-based preference alignment approaches. First, our MDPO mitigates the challenge of distinguishing micro hallucinations in responses. Taking 3-level preference as an example, the comparisons of other methods are 'A>BC, B>C', while comparisons made by our AMP are 'A>B, A>C, B>C'. More specifically, our AMP splits 'A>BC' into 'A>B, A>C', which enables MLLMs to perceive the subtle differences between different responses. Second, our penalty term explicitly increases the probability of MLLMs generating good answers, ensuring the stability of the training process.
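The pair enumeration described above can be sketched in a few lines. This is an illustrative sketch, not the full MDPO objective: each triple would feed a DPO-style loss term, and (per the penalty design) only pairs whose winner is the top-ranked response carry the penalty term.

```python
from itertools import combinations

def mdpo_comparisons(ranked):
    """Enumerate (winner, loser, has_penalty) triples from responses
    ranked best-first, e.g. ['A', 'B', 'C'] -> A>B, A>C, B>C.

    The penalty flag is set only for pairs whose winner is the
    top-ranked (superior) response, i.e. AB and AC but not BC.
    """
    return [(ranked[i], ranked[j], i == 0)
            for i, j in combinations(range(len(ranked)), 2)]
```

For a 3-level preference this yields the three comparisons 'A>B, A>C, B>C' rather than the coarser 'A>BC, B>C' grouping, which is exactly the splitting described in the conceptual comparison.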
**Empirical comparison.** As reported in Table 2, our MDPO surpasses these two learning objectives, $i.e.$, [1], [2] on all hallucination benchmarks, $e.g.$, 3.01 -> 3.17 on MMHal-Bench.
---
**Q3.** Missing results of FGAIF on MRHal-Bench.
**A3.** On MRHal-Bench, the mentioned FGAIF achieves a 3.77/3.79 score and a 0.30/0.31 hallucination rate. In comparison, our AMP is better, achieving a 4.07/4.06 score (higher is better) and a 0.20/0.15 hallucination rate (lower is better). Notably, the results for FGAIF are based on our re-implementation, because its data and code are still closed-source. We are not fully confident that our implementation is correct. Once FGAIF is publicly available, we will update its results and add the comparison to our GitHub repo.
---
**Q4.** Artifacts of using responses from different model sizes.
**A4.** Using responses from different model sizes does introduce some artifacts, $i.e.$, some inconsistency. However, the inconsistency rate (2.25%) is significantly lower than that of GPT-4V (11%) and human annotations (14%). Please kindly refer to the response to Q1 for estimation details and examples.
Moreover, we empirically validate that using same-family models is better than using models from different families. We build a dataset that mixes responses from GPT-4o, Qwen-2, and LLaVA. Training the model with this dataset (#3 in Table 2) achieves inferior results to our strategy, $e.g.$, 2.89 (different family) versus 3.17 (ours) on MMHal-Bench.
**Perturbation-based methods** also improve the baseline ($e.g.$, +0.14 on MMHal-Bench), but are still inferior to our MDPO ($e.g.$, -0.34 on MMHal-Bench), as shown in Table 2.
Our implementation details are as follows. We randomly change nouns, adjectives, prepositions, and numerals. We obtain answers of varying quality by controlling the proportion of perturbations (10%, 30%, 50%, based on the 4-level preference setting). We infer that the hallucination pattern generated by random perturbation differs from that of a real MLLM and is thus not informative enough for preference learning.
---
**Q5.** Why not use KL-Divergence for penalty?
**A5.** KL-Divergence is identical to our penalty term. The mathematical proof is as follows.
Assume $\mathrm{P}(y_w)$ is the probability distribution of superior responses and $\pi_\theta(y_w|x)$ denotes the probability distribution of the model generating superior responses; the KL-Divergence is:
$$\mathbf{KL}[\mathrm{P}(y_w)||\pi_\theta(y_w|x)]=\sum_{y_w\in\mathcal{Y}} \left[\mathrm{P}(y_w) \log \frac{\mathrm{P}(y_w)}{\pi_\theta(y_w|x)} \right],\tag{1}$$
where $\mathrm{P}(y_w)=[0,...,1,...,0]\in \mathbb{R}^{L}$ and $L$ is the size of the vocabulary. Since $\mathrm{P}(y_w)$ is 0 at all positions except for the superior word, Equation 1 can be rewritten as:
$$\mathbf{KL}[\mathrm{P}(y_w)||\pi_\theta(y_w|x)]=-\log\pi_\theta(y_w|x),$$
$$\min\mathbf{KL}[\mathrm{P}(y_w)||\pi_\theta(y_w|x)]\Leftrightarrow\max\log\pi_{\mathrm{\theta}}\left(y_{w} \mid x\right).$$
Therefore, our penalty term is actually KL-Divergence.
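As a quick numerical check of the reduction above, with a toy vocabulary size and made-up logits (both illustrative assumptions, not values from the paper):

```python
import numpy as np

# Verify that KL(P || pi) collapses to -log pi(y_w | x) when P is one-hot.
L = 5                                    # illustrative vocabulary size
w = 2                                    # index of the superior word
P = np.zeros(L)
P[w] = 1.0                               # one-hot distribution P(y_w)

logits = np.array([0.3, -1.2, 2.0, 0.1, -0.5])
pi = np.exp(logits) / np.exp(logits).sum()   # model distribution pi_theta(.|x)

# Terms with P(y) = 0 contribute nothing to the sum in Equation 1.
kl = sum(P[y] * np.log(P[y] / pi[y]) for y in range(L) if P[y] > 0)

assert np.isclose(kl, -np.log(pi[w]))    # KL equals the negative log-likelihood
```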
---
**Q6.** Minor
**A6.** The outer loop runs from 1 to K-2. We take the responses from the pre-trained MLLM as $R_0$.
---
[1] Zhu et al. Principled Reinforcement Learning with Human Feedback from Pairwise or K-wise Comparisons.
[2] Song et al. Preference Ranking Optimization for Human Alignment.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed responses and the additional experiment, which addressed my concerns. I have raised the score accordingly. | null | null | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude to the reviewers for their valuable comments. We are encouraged that the reviewers found our method "**novel**" (Reviewer 2sCy), that it "**provides a broader range of comparisons**" (Reviewer 2sCy, Reviewer iDRq), and that our automated pipeline is a "**significant contribution, potentially reducing bias and noise while saving time and resources**" (Reviewer xo7T). To address all the reviewers' concerns, we provide experiments, discussions, and point-by-point responses. We will add these experiments and discussions in the final version.
Tables mentioned in the rebuttal answers are all in the PDF document. We provide Table 1 for **Reviewer 2sCy** and **Reviewer iDRq**, Table 2 for **Reviewer 2sCy**, and Tables 3 and 4 for **Reviewer iDRq**. The details of the tables are as follows:
---
**Table 1.** Performance on three hallucination benchmarks across different proportions of GPT-4V/human annotations.
To assess how the inconsistency aligns with the degree of hallucination, we incorporate human/GPT-4V annotations into our automated dataset based on the 4-level setting (the optimal setting).
**Implementation Details.** On our training dataset, the ratio of contradictory patterns in human/GPT-4V annotations is 16.9%/14.4%. To obtain preferences from contradictory patterns, we ask humans/GPT-4V to annotate all levels directly instead of annotating them pairwise. Then, we integrate the human/GPT-4V annotated dataset into our automated dataset, with proportions of 70%, 50%, 30%, and 10%, respectively.
**Results.** As the proportion of human/GPT-4V annotated data increases, the hallucination rate rises accordingly, which demonstrates the advantage of our automated preference dataset.
---
**Table 2.**
Performance on three hallucination benchmarks across other perturbation-based methods (\#1, \#2), MLLMs from different families (\#3, DF), perturbation-based methods (\#4, PB), our baseline (\#5), and our MDPO.
---
**Table 3.**
Performance of LLaVA-V1.6, DeepSeek-VL, and MiniCPM-V on three hallucination benchmarks.
---
**Table 4.**
Effectiveness of loss for BC.
Pdf: /pdf/a8f93bc7dd9e33136afa7b14cb5b0ff5a2dfe65a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
KFNN: K-Free Nearest Neighbor For Crowdsourcing | Accept (poster) | Summary: This paper proposes a novel algorithm, KFNN (K-free Nearest Neighbor), which is specifically designed to enhance label integration for crowdsourcing. KFNN integrates two key components named label distribution enhancement and K-free optimization, which significantly contribute to improving the effectiveness and robustness of the label integration process. The idea of automatically determining the optimal neighborhood size for each instance is particularly innovative and well-executed. The experimental results further validate the effectiveness and robustness of the proposed algorithm.
Strengths: 1. The KFNN proposed in this paper is interesting and innovative. The authors reveal the limitations of fixed neighborhood sizes in existing label integration algorithms and propose an algorithm that automatically determines the optimal neighborhood size based on instance attributes and noisy labels. This algorithm significantly improves the robustness of label integration.
2. The paper provides a solid theoretical foundation for the proposed KFNN algorithm, followed by comprehensive experimental validation. The theoretical analysis is robust and convincingly demonstrates the expected performance improvements. The experiments are well-designed and cover a wide range of datasets, both simulated and real-world, to ensure the generalizability of the results. The experimental results, including comparisons with baseline algorithms, further validate the effectiveness and robustness of the proposed algorithm.
3. The paper is well-written and clearly presents the proposed methodology and findings. The structure of the paper is logical, making it easy to follow the complex concepts introduced. The use of figures and tables to illustrate key points is effective and aids in comprehension.
Weaknesses: 1. While the paper provides strong theoretical and experimental results, there is limited discussion on the computational efficiency and scalability of the proposed KFNN algorithm. I suggest moving the algorithmic flow and time complexity analysis from Appendix A to the main text.
2. There are some repetitive sentences and structures in this paper that should be further condensed. For example, Sections 5.1 and 5.2 should be merged and the repetitive statements in them should be deleted.
3. The experiments are already comprehensive, but analysis and discussion of the optimal neighborhood size determined by KFNN could still be added, which would help to understand how the neighborhood size should be set. Moreover, according to the results presented in Tables 1-4, KFNN is generally highly effective. However, on a few datasets, KFNN does not perform as well as MV. These anomalies are valuable for identifying deficiencies in KFNN and should be further investigated and discussed.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors openly discuss the limitations of their work, particularly the empirical parameters in the Kalman filter and the roughness of the distribution transformation process. Please refer to the Weaknesses for other limitations I have found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer 4Kyj:**
**Q1:** While the paper provides strong theoretical and experimental results, there is limited discussion on the computational efficiency and scalability of the proposed KFNN algorithm. I suggest moving the algorithmic flow and time complexity analysis from Appendix A to the main text.
**Author Response:** Thanks for your valuable comments. Although our KFNN needs to determine an optimal neighborhood size for each instance, it is not a wrapper algorithm. Label integration does not divide the crowdsourced dataset into training, validation, and test sets. As a result, KFNN determines $K_i$ immediately when inferring $\hat{y}_i$ without a validation phase. Therefore, the computational efficiency and scalability of KFNN are comparable to existing KNN-related label integration algorithms. In the final version of the paper, we will move the algorithmic flow and time complexity analysis from Appendix A to the main text. Thanks again for your valuable comments.
**Q2:** There are some repetitive sentences and structures in this paper that should be further condensed. For example, Sections 5.1 and 5.2 should be merged and the repetitive statements in them should be deleted.
**Author Response:** Thanks for your valuable comments. Indeed, there are some repetitive sentences and structures in our paper. In the final version of the paper, we will merge Sections 5.1 and 5.2 and delete these repetitive sentences. Thanks again.
**Q3:** The experiments are already comprehensive, but analysis and discussion of the optimal neighborhood size determined by KFNN could still be added, which would help to understand how the neighborhood size should be set. Moreover, according to the results presented in Tables 1-4, KFNN is generally highly effective. However, on a few datasets, KFNN does not perform as well as MV. These anomalies are valuable for identifying deficiencies in KFNN and should be further investigated and discussed.
**Author Response:** Thanks for your valuable comments. In our KFNN, the optimal neighborhood size for each instance depends on both its attributes and multiple noisy labels. Based on its attributes, we can calculate the distance between this instance to the corresponding subset of each class. If the distance of this instance to one class is much smaller than the distance to other classes, it tends to be close to the center of this class. At this time, its optimal neighborhood size tends to be larger. In addition, if the multiple noisy labels of the instance are highly consistent, it means that this instance is easily distinguishable. At this time, its optimal neighborhood size tends to be smaller. Indeed, KFNN does not perform as well as MV on a small number of datasets, such as autos, breast-cancer, and diabetes. Considering that other KNN-related algorithms (LAWMV and MNLDP) also typically perform poorly on these datasets, we believe that the reason for the poor performance of KFNN is that these datasets are not well suited for distance measures. In the final version of the paper, we will include these discussions on optimal neighborhood size and anomalous experimental results. Thanks again for your valuable comments.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for your reply. Please incorporate some of the discussion into the final paper. | Summary: The paper presents a novel label integration algorithm, KFNN (K-Free Nearest Neighbor), designed to enhance the performance of crowdsourcing platforms by intelligently determining the optimal neighborhood size for each instance based on its attributes and noisy labels. The authors propose a two-component solution involving label distribution enhancement and K-free optimization, which leverages the Mahalanobis distance and a Kalman filter to mitigate noise from neighbor instances. The paper's claims are well-aligned with the theoretical and experimental results, demonstrating the effectiveness and robustness of KFNN against existing state-of-the-art algorithms in various crowdsourcing scenarios.
Strengths: 1. Novel contribution to an important problem
The innovative approach of highlighting the limitations caused by fixed neighborhood sizes in existing label integration algorithms, and using attributes and noisy labels to determine the neighborhood size for each instance automatically, is a significant contribution to crowdsourcing.
2. Complete and rigorous theoretical proof
The theoretical underpinnings are sound, with clear assumptions and proofs provided for the proposed methods. The use of the Mahalanobis distance and the Kalman filter is well-justified.
3. Good writing quality and clarity
This paper is well-written and enjoyable to read. The challenges are clearly stated and the contributions are easy to capture.
4. Reproducibility
The paper's open data and code policy is highly appreciated, promoting research transparency. Enhancing reproducibility with clear versioning and setup instructions would be a valuable addition, showcasing a strong commitment to open scientific practices.
Weaknesses: 1. Simulation experiment results
The symbol • indicates that the algorithm in the row significantly outperforms the algorithm in the corresponding column. How is "significantly outperforms" defined for Macro-F1 score and integration accuracy?
2. Ablation experiment results
Since this study focuses on automatically adjusting neighborhood sizes, how does the performance of this method compare with baselines that use fixed neighborhood sizes?
Technical Quality: 4
Clarity: 3
Questions for Authors: see weaknesses
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer B2kv:**
**Q1:** Simulation experiment results The symbol • indicates that the algorithm in the row significantly outperforms the algorithm in the corresponding column. How is "significantly outperforms" defined for Macro-F1 score and integration accuracy?
**Author Response:** Thanks for your valuable comments. The Wilcoxon signed-ranks test evaluates the performance differences between two algorithms across multiple datasets by ranking the absolute values of their differences. Specifically, we first calculate the differences between the performance scores (Macro-F1 score or integration accuracy) of the two algorithms on the 34 datasets. Then, these differences are ranked according to their absolute values, with average ranks assigned in case of ties. Subsequently, Let $R^+$ and $R^-$ be the sum of ranks for positive and negative differences, respectively. They can be calculated as follows:
$$
R^+ = \sum_{d_i>0}rank(d_i) + \frac{1}{2}\sum_{d_i=0}rank(d_i),
$$
$$
R^- = \sum_{d_i<0}rank(d_i) + \frac{1}{2}\sum_{d_i=0}rank(d_i),
$$
where $d_i$ denotes the difference between the performance scores of the two algorithms on $i$-th out of 34 datasets, $rank(d_i)$ denotes the rank of $d_i$. Next, let $T$ be the smaller of $R^+$ and $R^-$. From the table of exact critical values of the Wilcoxon test, it can be found that the exact critical values of $T$ at significance levels $\alpha$ = 0.05 and $\alpha$ = 0.1 are 182 and 200 when the number of datasets is 34, respectively. This means that the two algorithms are significantly different with $\alpha$ = 0.05 when $T$ is less than or equal to 182. At this time, the second algorithm significantly outperforms the first if $T$ is equal to $R^-$. The first algorithm significantly outperforms the second if $T$ is equal to $R^+$. Thanks again for your valuable comments.
**Q2:** Ablation experiment results Since this study focuses on automatically adjusting neighborhood sizes, how does the performance of this method compare with baselines that use fixed neighborhood sizes?
**Author Response:** Thanks for your valuable comments. Among the existing label integration algorithms, both LAWMV and MNLDP are baselines using a fixed neighborhood size. Our simulated and real-world experimental results demonstrate that KFNN significantly outperforms both MNLDP and LAWMV. Additionally, in our ablation experiment, KFNN-KF is the version of KFNN employing a fixed neighborhood size. It can be found from Figure 3(a) that KFNN performs better than KFNN-KF. These findings highlight the superior performance of KFNN compared to baselines using a fixed neighborhood size. Thanks again for your valuable comments.
---
Rebuttal Comment 1.1:
Comment: Thanks for the author's reply, my concerns have been well explained. I will keep my score. | Summary: This paper proposes a novel label integration approach KFNN by adaptively determining the optimal neighborhood size. KFNN utilizes a Mahalanobis distance distribution to model the relationship between each instance and all classes. The authors also provide adequate theoretical analysis to illustrate the effectiveness of the proposed method. Experiments demonstrate that the proposed method can achieve the state-of-the-art performance on simulation and real-world dataset. The paper is well-written and easy to follow. This idea is very intuitive and effective for crowdsourcing task. The paper proves the effectiveness of introducing Mahalanobis distance distribution for crowdsourcing from the perspective of methodology, theory and experiments.
Strengths: 1. The paper is well-written and easy to follow. The logic of the whole paper is clear.
2. The paper’s idea is very intuitive and effective for crowdsourcing task. The authors introduce the Mahalanobis distance distribution to model the relationship between each instance and all classes. Experiments verify that the proposed method can achieve the best performance compared with SOTAs.
3. The authors provide adequate evidences to verify the effectiveness of the proposed method from the perspective of methodology, theory and experiments on simulation and real-world datasets.
Weaknesses: 1. In section 2, the authors introduce two categories of label integration algorithms. And the proposed KFNN belongs to the algorithms which leverage neighbor instance. I suggest adding some discussion about the pros and cons of these two categories of approaches.
2. In methodology part and theoretical analysis part, the authors discuss the superiority of Mahalanobis distance compared with Euclidean distance. Can the authors verify the difference between Mahalanobis distance and Euclidean distance on this task from an experimental perspective?
3. In Table 3 and Table 4, why some results are missing? Appropriate explanation facilitates reading of the paper.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to weakness. My biggest concern is the experiments for the comparison between Mahalanobis distance and Euclidean distance.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer qVhb:**
**Q1:** In section 2, the authors introduce two categories of label integration algorithms. And the proposed KFNN belongs to the algorithms which leverage neighbor instance. I suggest adding some discussion about the pros and cons of these two categories of approaches.
**Author Response:** Thanks for your valuable comments. The first category of algorithms does not leverage neighbor instances, considering only the information of the instance itself or the information of all instances globally in label integration. While simpler and more efficient, these algorithms are limited in effectiveness because each instance can only obtain few noisy labels. The second category of algorithms performs label integration by leveraging information from neighbor instances obtained by the KNN algorithm, which improves performance by using additional information from neighbor instances. However, these algorithms all assume a fixed neighborhood size for each instance, which is often unrealistic and thus limits their effectiveness. In the final version of the paper, we will add a paragraph to section 2 discussing the pros and cons of these two categories of algorithms. Thanks again for your valuable comments.
**Q2:** In methodology part and theoretical analysis part, the authors discuss the superiority of Mahalanobis distance compared with Euclidean distance. Can the authors verify the difference between Mahalanobis distance and Euclidean distance on this task from an experimental perspective?
**Author Response:** Thanks for your valuable comments. The main reason we chose the Mahalanobis distance in KFNN is that it can directly measure the distance from an instance to a dataset. Meanwhile, the Mahalanobis distance does not suffer from the correlation and magnitude of attributes. To respond to this comment, we defined the Euclidean distance of an instance to the centroid of a dataset as the distance of the instance to this dataset, replacing Equation 3 in the paper. Equation 6 was directly replaced by the Euclidean distance between the two instances. We denote this version of KFNN as KFNN-Euc. Subsequently, we compared KFNN and KFNN-Euc on the Income dataset. The Macro-F1 score and integration accuracy of KFNN-Euc on the Income dataset are 50.35% and 52.33%, respectively, significantly lower than those of KFNN (78.69% and 77.17%) shown in Figure 1. These experimental results validate the superiority of the Mahalanobis distance in KFNN. In the final version of the paper, we will include KFNN-Euc in the ablation experiment to demonstrate the superiority of the Mahalanobis distance in KFNN. Thanks again for your valuable comments.
**Q3:** In Table 3 and Table 4, why some results are missing? Appropriate explanation facilitates reading of the paper.
**Author Response:** Thanks for your valuable comments. Tables 3 and 4 show the results of the Wilcoxon signed-rank test. In Tables 3 and 4, the symbol • indicates that the algorithm in the row significantly outperforms the algorithm in the corresponding column, and the symbol ◦ indicates the exact opposite of that indicated by the symbol •. Missing items indicate no significant difference between the algorithm in the row and the algorithm in the column. The significance levels of the lower and upper diagonals in Tables 3 and 4 are 0.05 and 0.1, respectively. For example, the Wilcoxon test result between MV and LAWMV on the upper diagonal of Table 3 is missing, indicating no significant difference between MV and LAWMV in terms of the Macro-F1 score when the significance level is 0.1. In the final version of the paper, we will include a detailed explanation of the missing items. Thanks again for your valuable comments.
---
Rebuttal Comment 1.1:
Title: Response for Rebuttals.
Comment: Thank you for the helpful response that addressed my concern. | Summary: This paper introduces a new algorithm for label integration called KFNN. Existing methods related to KNN produce more noisy labels; however, they fix the neighborhood size, regardless of the fact that instances close to the center of classes should have more neighbors than instances close to the boundary of classes. To tackle this problem, KFNN estimates a Mahalanobis distance distribution between each instance and all classes to enhance the multiple noisy label distribution and utilizes a Kalman filter to mitigate the impact of noise. Finally, KFNN can automatically determine the optimal neighborhood size through max-margin learning.
Strengths: S1. The paper studies an important problem.
S2. A new solution is proposed to tackle the problem.
S3. Experiments are conducted on several datasets.
Weaknesses: W1. The motivations need more enhancements.
W2. Some technical details require more explanations.
W3. The application scope of the proposed method in crowdsourcing is limited.
W4. The performance improvement of the proposed method is unsatisfactory.
W5. Experiments are conducted in a simulation environment, which can be much simpler than a real-world crowdsourcing platform.
Technical Quality: 1
Clarity: 2
Questions for Authors: D1. The paper focuses on the KNN-related methods for label integration. However, the introduction didn’t justify the motivation of this concentration with convincing proofs. For instance, the motivation is basically explained with the sentence, “to alleviate this problem, recent works have begun to focus on leveraging neighbor instances [1, 11, 12] …”. However, there are also alternatives for label integration, so why considers KNN-related instead of the other types of solutions? Besides, there are much more studies (eg [R1]) that also target on this problem, which should be carefully discussed their pros and cons. Otherwise, the motivation looks weak.
D2. From the perspective of crowdsourcing, the studied problem is closely related to “truth inference”. However, in the references, there are only two papers on this topic: [25] (published in 2016) and [26] (published in 2023). More studies, which can be easily found in Google Scholar or DBLP, should be reviewed and compared (if possible).
D3. It is a little unclear how the principle of employing the same neighborhood size can impact performance. Please give more explanations.
D4. In addressing the question of fusing information from the attribute space and the multiple noisy label space, this paper tends to take an average between the multiple noisy label distribution and the potential label distribution. However, it might be worth exploring the possibility of introducing a tunable parameter to achieve a more optimal balance between these two distributions, rather than relying solely on an equal (50%) average.
D5. The application scope of the proposed method in crowdsourcing is limited. In my opinion, the proposed KFNN can be only used in simple and micro tasks in crowdsourcing, there are many other kinds of tasks in a real-world crowdsourcing platform, such as ranking [R2], which is not considered in the problem setting. Yet, the title, “KFNN: K-Free Nearest Neighbor For Crowdsourcing”, is a little over-claimed. At least, the paper should explicitly define the application scope. More types of crowdsourcing task can be found in existing surveys [R3, R4] on crowdsourcing.
D6. The performance improvement of the proposed method is unsatisfactory.
(1) Although the average Macro-F1 score of KFNN is better than the compared baselines, it can be notably worse than some of the baselines in certain datasets (eg MNLDP on the anneal dataset). This pattern weakens the motivation, since it’s unclear whether the limitation of existing solutions has been well addressed or not.
(2) In Table 2, the integration accuracy of KFNN is lower than that of MNLDP. Besides, it can be also notably worse than some of the baselines in terms of the integration accuracy (eg LAGNN and LAWMV on the breast-cancer dataset).
(3) Based on the current experimental results, the effectiveness of the proposed solution KFNN is questionable.
D7. Although several datasets are conducted in the experimental study, existing work on truth inference in crowdsourcing (eg [R2, R5]) usually deploys their solution in a real-world platform, such as AMT, to verify the performance. Therefore, the setup of the experimental study can be simplifier and less practical than the real-world scenario.
References:
[R1] Adaptive Integration of Partial Label Learning and Negative Learning for Enhanced Noisy Label Learning. AAAI 2024.
[R2] Xi Chen et al. Pairwise ranking aggregation in a crowdsourced setting. WSDM 2013.
[R3] Guoliang Li et al. Crowdsourced Data Management: A Survey. IEEE TKDE 2016.
[R4] Hector Garcia-Molina et al. Challenges in Data Crowdsourcing. IEEE TKDE 2016.
[R5] Yudian Zheng et al. Truth Inference in Crowdsourcing: Is the Problem Solved? VLDB 2017.
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: Please refer to the weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reviewer MQSy:**
Thanks a lot for your comments. Please find our detailed responses to your seven questions as follows.
**Q1:** First, our research focuses on label integration in crowdsourcing, which differs from other research domains such as noisy label learning (NLL). Crowdsourcing typically employs workers to assign multiple noisy labels to each instance (one instance corresponds to multiple noisy labels), and the label integration aims to infer the unknown true label of this instance from these multiple noisy labels. In contrast, NLL aims to train robust classifiers from datasets with a single noisy label per instance (one instance corresponds to one noisy label). Therefore, existing works outside of label integration, such as [R1], are not within the scope of this paper. Second, due to cost constraints, each instance in crowdsourcing typically obtains only few noisy labels, which restricts the performance of label integration. State-of-the-art KNN-related label integration algorithms address this problem directly and efficiently, forming the basis of our research. We have described this motivation in the introduction (lines 32-42) and will refine it further in the final version of this paper.
**Q2:** In crowdsourcing scenarios, the technical term "label integration" and "ground truth inference" are all common and widely used. Reference [1] describes that ''Ground truth inference is defined as a process of estimating the true label of each example from its multiple noisy label set. If we only focus on the label itself, it is also called label integration.". In the context of our paper, these terms are synonymous, all referring to the process of inferring the true label of each instance from multiple noisy labels. In our paper, we have chosen "label integration" to represent this process. Meanwhile, in our related work, we have comprehensively surveyed existing works, introducing a total of 18 representative works in label integration, with the latest published in 2024.
[1] Learning from crowdsourced labeled data: a survey. Artificial Intelligence Review, 2016, 46(4): 543-576.
**Q3:** The goal of finding nearest neighbors for an instance is to find potential instances of the same class around this instance. However, the number of instances of the same class around each instance is naturally different. Instances near the class center are surrounded by more instances of the same class, so they need larger neighborhood sizes to collect sufficient labels from similar instances. In contrast, instances near the class boundary need smaller neighborhood sizes to avoid including too many instances from other classes. These reasons are mentioned in lines 111-115 of our paper, and we will provide a more detailed explanation in the final version.
**Q4:** Indeed, introducing a tunable parameter can help achieve a more optimal balance between the multiple noisy label distribution and the potential label distribution. Originally, we devised a weighted version of Eq. (5) as follows:
$$
p_{iq} = \frac{\lambda * p(c_q|x_i,D_q) + (1-\lambda) * p(c_q|L_i)}{\sum_{q=1}^{Q}[\lambda * p(c_q|x_i,D_q) + (1-\lambda)* p(c_q|L_i)]},
$$
where $\lambda$ is a tunable parameter to balance these two distributions. However, we found experimentally that KFNN can still achieve good results with $\lambda$ set to 0.5 (equal average). To keep KFNN simple and with fewer parameters, we currently used an equal average to eliminate $\lambda$.
**Q5:** Indeed, crowdsourcing is a broad research domain that includes various tasks such as label integration, noise correction, ranking, and so on. Our paper specifically focuses on label integration and presents a novel algorithm for label integration. The current paper title was proposed by referring to some classical works in label integration such as [2], hoping to inspire broader tasks in crowdsourcing beyond just label integration. In the final version of the paper, we will explicitly define our application scope by renaming our title to "KFNN: K-Free Nearest Neighbor for Label Integration in Crowdsourcing".
[2] Community-based bayesian aggregation models for crowdsourcing. WWW `14, 2014: 155-164.
**Q6:** Our proposed KFNN is not restricted to specific crowdsourcing scenarios, so we conducted experiments on the whole 34 datasets published by the CEKA platform. However, no label integration algorithm can achieve the best performance on all datasets. Therefore, we performed the Wilcoxon signed-rank test to further compare each pair of algorithms. The results in Table 3 show that KFNN significantly outperforms existing state-of-the-art label integration algorithms in terms of the Macro-F1 score. Additionally, as described in lines 254-258, we used the Macro-F1 score instead of integration accuracy as the main experimental metric. This is because the Macro-F1 score better reflects the performance of algorithms across different classes, while integration accuracy may not accurately reflect the performance on class-imbalanced datasets. Nevertheless, the results in Table 4 still show that KFNN achieves better or comparable integration accuracy compared with existing state-of-the-art algorithms. These results and analyses demonstrate the effectiveness and robustness of KFNN.
**Q7:** Due to the double-blind policy, we did not deploy our solution in a real-world platform, such as AMT, and only submitted our code as a supplemental material. To verify the performance of KFNN in real-world crowdsourcing scenarios, as done in [R2, R5], we compared KFNN with existing state-of-the-art label integration algorithms using datasets Income and Leaves collected from the real-world crowdsourcing platform AMT. The results shown in Figure 1 are sufficient to demonstrate the effectiveness of the KFNN in real-world crowdsourcing scenarios. In the final version of the paper, we will provide a more detailed description of the collection of datasets Income and Leaves from the AMT platform.
---
Rebuttal Comment 1.1:
Title: Response to the author feedback
Comment: Dear authors,
I have read the rebuttal, and thank you for considering my suggestions. Some of my concerns are well addressed, and the others aren't:
(1) Q1 asks whether there are alternatives for label integration that can be used in crowdsourcing. If so, please discuss. Otherwise, please clarify that there are no such alternatives.
(2) Q2: thank the authors for acknowledging that these two terms are synonymous. That's why I have asked whether there are alternatives for the studied problem. As reviewed in the seminal survey published in [R3], there are other options for this problem. For this reason, I have asked the authors to enhance the motivation for using a KNN-based method instead of the others in Q1.
(3) Q3 is generally satisfactory.
(4) The claims for Q4 should be verified through experimental evaluations.
(5) Q5 asks to clarify the type of crowdsourced task. Notice that a ranking task in crowdsourcing still requires label integration (or ground truth inference). Thus, it is important to clearly define the application scope. In its current form, the submission suggests the proposed method might apply only to certain task types rather than general tasks in crowdsourcing. If this is the case, the title and some other contents may need to be refined to avoid over-claiming.
(6) Q6: as shown in the experimental results (Tables 1 and 2), the proposed method can perform worse than the selected baselines in either Macro-F1 score or integration accuracy on certain datasets, by a large margin. Based on the current results, it seems that (1) different methods have their own pros and cons, and (2) the proposed method is not always optimal among the compared ones.
(7) Q7: thank you for acknowledging that the proposed method has not been verified by a real-world platform like AMT. Existing studies on crowdsourcing often conduct evaluations on a real-world platform, such as CrowdFlower (see Section 5.2 in [R2]), since real-world scenarios are more complex than a simulated experiment. Besides, the evaluation can be conducted under the double blind policy. Based on the current evaluations, it is unclear whether the proposed method can be effectively integrated in a real-world platform or not.
Overall, I appreciate the efforts made in the rebuttal. I will take your responses into consideration when making my final decision.
Best regards,
---
Reply to Comment 1.1.1:
Comment: **Reviewer MQSy:**
**(1):** Q1 asks whether there are alternatives for label integration that can be used in crowdsourcing. If so, please discuss. Otherwise, please clarify that there are no such alternatives.
**(2):** Q2: thank authors for acknowledging that these two terms are synonymous. That's why ... method instead of the others in Q1.
**Author Response for (1) and (2):** Thanks for your valuable explanations. According to [R3] mentioned by the reviewer, there are alternatives to label integration (also known as answer aggregation or ground truth inference) in crowdsourcing for controlling the quality of crowdsourced datasets. These include worker modeling, worker elimination, and task assignment. Worker modeling characterizes the quality of workers, worker elimination removes low-quality workers and spammers, and task assignment assigns informative tasks to high-quality workers. However, these alternatives do not fully alleviate the impact of insufficient labels in label integration. Additionally, prompted by the reviewer's comments, we will clarify our motivation and refine the title to "KFNN: K-Free Nearest Neighbor for Label Integration in Crowdsourcing" in the final version of the paper. Thanks again for your valuable comments.
**(3):** Q3 is generally satisfactory.
**Author Response for (3):** Thanks for your appreciative comments.
**(4):** The claims for Q4 should be verified through experimental evaluations.
**Author Response for (4):** Thanks for your valuable explanations. To verify our claims for Q4, we conducted experiments on the real-world crowdsourced dataset Income. According to the weighted version of Eq. (5) provided in the rebuttal, we set $\lambda$ to 0.1, 0.3, 0.5, 0.7, and 0.9, respectively, and then observed the performance of KFNN on the Income dataset. The experimental results are as follows:
| |λ=0.1|λ=0.3|λ=0.5|λ=0.7|λ=0.9|
|--|--|--|--|--|--|
|F1|77.95|78.07|78.69|77.93|76.19|
|Accuracy|76.33|76.50|77.17|75.83|73.33|
From these results, it can be seen that KFNN achieves optimal performance with $\lambda$ set to 0.5 (equal average). Therefore, these experimental results support our claims for Q4. Thanks again for your valuable comments.
**(5):** Q5 asks to clarify the type of crowdsourced task. Notice that, a ranking task ... may need to be refined to avoid over-claiming.
**Author Response for (5):** Thanks for your valuable explanations. According to [R3] mentioned by the reviewer, crowdsourced task types include single choice, multiple choice, rating, clustering, and labeling. Our current version of KFNN can be used for both single choice tasks and labeling tasks. We will clarify the application scope of KFNN in the final version of the paper. Furthermore, we plan to expand KFNN to other task types in the future. Thanks again for your valuable comments.
**(6):** Q6: as shown in the experimental results (Tables 1 and 2), the proposed method ... the optimal among the compared ones.
**Author Response for (6):** Thanks for your valuable explanations. Indeed, the current results show that on certain datasets (1) different methods have their own pros and cons, and (2) the proposed method is not always optimal among the compared ones. These findings are expected and do not negate the effectiveness of KFNN. KFNN is not restricted to specific crowdsourcing scenarios, so we conducted experiments on all 34 datasets published by the CEKA platform. In fact, no label integration algorithm can always achieve the best performance on all of these datasets. Therefore, we performed the Wilcoxon signed-rank test to further compare each pair of algorithms. The statistical test strongly validates the effectiveness of KFNN. Thanks again for your valuable comments.
**(7):** Q7: thank you for acknowledging that the proposed method has not been verified ... integrated in a real-world platform or not.
**Author Response for (7):** Thanks for your valuable explanations. In the current version of the paper, both real-world and simulated experiments have been conducted to validate the effectiveness of KFNN. In our real-world experiments, we used the Income dataset and the Leaves dataset, which were collected from the real-world crowdsourcing platform AMT. The collection process of these two datasets is similar to that described in Section 5.2 in [R2]. The experimental results shown in lines 294-301 of our paper validate the effectiveness of KFNN in real-world crowdsourcing scenarios. Due to cost constraints, the number of real-world datasets is limited. Therefore, we also validated the effectiveness of KFNN through statistical tests on a large number of simulated datasets. The results in Tables 1-4 further validate the effectiveness of KFNN. In the final version of the paper, we will provide a more detailed description of the collection of datasets Income and Leaves from the AMT platform. Thanks again for your valuable comments.
---
Rebuttal 2:
Comment: As the discussion period deadline nears, we would be deeply appreciative if you could kindly review our rebuttal and let us know if we have addressed your concerns. We’re more than happy to continue the conversation if you have any further questions. Thank you very much for your time and consideration. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An Analysis of Tokenization: Transformers under Markov Data | Accept (spotlight) | Summary: This paper presents a study on tokenization by investigating the behavior of transformers on simple data-generating processes. It shows that, in the absence of any tokenization, transformers trained on $k$th-order Markov processes predict characters according to a unigram model, which is quite problematic given how poor unigram models are at modeling Markovian data. Paradoxically, they observe that even the simplest unigram model learnt by transformers *with the appropriate tokenization* is able to model the probability of sequences sampled from a $k$th-order Markov process.
Strengths: - The paper is well written, with empirical observations intermingled with theory, which I quite liked. The theory is also accompanied by a lot of intuition, insight and interpretation, which really helps drive the point home.
Weaknesses: - In section 3.2, the authors chose to focus on developing guarantees for a newly developed tokenizer, which, to my knowledge, is seldom used. It would perhaps have been of greater use to the community to also, or instead, establish these guarantees for more commonly used tokenizers, such as BPE.
- I appreciate that this is mostly a theoretical study of tokenizers, and while the observations put forward are valuable, I found myself wondering what practical takeaways this paper presents to improve current tokenizers. That is something I would love the authors to comment on.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses section above
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and questions. Below we address the key points mentioned.
### **[W1] Analysis for commonly-used tokenizers such as BPE**
Section 3.2 establishes guarantees for the LZW tokenizer, which is arguably not used much in practice. However, in Section B of the Appendix (Additional Theoretical Results I: A sequential variant of BPE) we study a variant of BPE which is analytically tractable, and present a result similar to Theorem 3.1 for this tokenizer (cf. Theorem B.2). The discussion of this result in the main paper was deferred to a short excerpt in Section 4.1 due to a lack of space, but in the subsequent version of the paper, we shall include a longer discussion. This was indeed our motivation for analyzing (a tractable variant of) BPE as well: it remains one of the most commonly employed tokenizers in practice across a number of domains.
### **[W2] Practical takeaways**
We point the reviewer to the response to [W3] of Reviewer veC3, where we have summarized the key empirical takeaways of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions as well as the questions and concerns raised by the other reviewers. I think this is a good paper, and will raise my score accordingly. | Summary: The authors show that tokenization is a fundamental property of transformer-based models, in the sense that without it, it is hard (if not impossible) to achieve low cross-entropy loss on next-word prediction. They show that tokenization helps breaking the unigram barrier (i.e., the best loss a unigram model can achieve) and give a theoretical characterization of the information tokenizers provide in terms of statistics on token distribution.
In particular:
Section 2.1 shows how models without tokenization cannot achieve the optimal cross-entropy loss, while when equipped with a tokenizer they break the unigram barrier.
Section 3 studies tokenizers that assign all possible substrings (up to length r) as tokens in the dictionary and shows their theoretical optimality in learning processes ruled by k-Markov chains. A consequence is that unigram models can also do that, in the limit.
Of course, this comes at the expense of the model's efficiency and invites potential attacks on an exponential number of tokens (i.e., the attack surface grows very large).
Finally, the authors show that tokenizers can trade off the vocabulary size while maintaining low cross-entropy (i.e., they can behave like an optimal model).
They also extend the theoretical framework to LZW tokenizers.
Experiments are conducted on tokenized vs. non-tokenized models on {k=1}-Markov models and then on some real datasets to show that tokenizers trade-off complexity and efficiency in learning an optimal representation of the characters (and their frequency) in the training distribution.
Strengths: The article studies an important problem, and I think there is value in the paper.
To the best of my knowledge, comparing BPE to non-tokenized models is new, and the figures give some interesting insights (e.g., Figure 2).
Your paper contains much theoretical work, contributing to its quality.
The results in the limit for unigram and BPE/LZW models are noticeable (Section 3 and Eq. 2).
In general, the results seem solid and are also interesting for linguists and NLP researchers. BPE and other tokenization methods find a trade-off between unigram models, as per Eq. 2, and the complexity of the resulting vocabulary (and model).
Weaknesses: One of the main weaknesses of this work is how it is presented.
Maybe it's me, but I found it quite hard to read. See questions.
Another concern is how theoretical results apply to real-world datasets. See questions, but Fig. 5 seems to mitigate the impact of your theoretical results.
In fact, as the vocabulary grows larger, all the models have a similar value of cross-entropy (at around ~50K tokens).
The article seems rushed, as there are many typos (I just listed some).
- Line 150 “the a”
- Line 173, “it make since” --> “sense”
- Line 175, eq. and many others --> Eq. (it’s not wrong per-se, but you capitalize Fig, Example, etc.)
- The Notation paragraph shouldn’t go with related works but should be in the next section.
- Notation in 2 is a bit sloppy (this is a personal suggestion): you can use D() and E() for the decoder/encoder (and enclose them with \mathcal).
Technical Quality: 3
Clarity: 2
Questions for Authors: You say that Transformers without tokenization fail at modelling simple k-order Markov models, while with BPE (and other techniques), they succeed. I would say that is simply because BPE injects statistical information in the data and splits it accordingly. BPE is "trained" to aggregate "bytes" according to their frequency in the training data, so it somehow informs a model with the stationary distribution of most-frequent contiguous characters.
Am I missing any non-trivial observation here?
There is a reference in the Appendix to the model used (GPT-2), but nothing in the main paper. For example, I asked myself multiple times what models you used for the tokenized and non-tokenized networks.
By unigram models, do you mean those where the probability of each token approximates the inverse frequency of a character/token in the training data?
If I understand correctly, in Fig. 2 (a), models without tokenization fail at breaking the unigram barrier (so the best they can do is model the inverse character frequency). How does that connect to the relative success of character-level tokenization? There are plenty of methods that use character-level tokenization, and they probably work much better than unigrams. Is there anything I am missing here?
In Figure 2 you mention that plot (2b) has 70x fewer parameters, but you do not specify why (is it to prove tokenization helps?).
Do you use GPT-2, as mentioned in the Appendix? If so, do you use a smaller model for the figure on the left and a larger one for the one on the right?
Fig. 3 is hard to understand. I read it many times, but I still do not fully understand what it conveys. The heatmap goes from 0. to ~0.6, though it is unclear what the measure is (I guess it is the probability?).
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see previous sections.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Below we address the weaknesses/questions pointed out:
### **[W1] Fig. 5 mitigates the impact of the theory**
On datasets like Wikitext-103, transformers trained with these tokenizers (BPE, Unigram, Wordpiece) indeed perform similarly in ablations. The purpose of this figure was to observe that the test-loss appears to decay as roughly $\propto 1/\log(|\text{Dict}|)$ agreeing with our theory (Theorem 3.1). When we generalize beyond these settings to more complex languages and more specific downstream tasks, the differences start becoming noticeable. There are many possible reasons and counterfactuals to control for in these kinds of experiments. For instance, Wordpiece appears to be worse compared to Unigram and BPE in languages which are not space-delimited like Japanese [3]. Unfortunately, these questions are at a granularity which can't be addressed by studying Markovian data models. We believe that the extensions of our work to more specific losses / different data-distributions can play a role, and help in revealing fine-grained differences between tokenizers.
___
### **[Q1] BPE succeeds because of aggregating data according to frequencies**
There are plenty of seemingly "reasonable" ways in which a tokenizer may aggregate data according to their frequencies while constructing the dictionary, but which ultimately end up performing poorly. Consider the following modification of the Wordpiece tokenizer where we merge the pair of tokens which maximizes $n_{t_1, t_2} / n_{t_1}^2$ (the original Wordpiece prescribes merging the maximizer of $n_{t_1,t_2} / n_{t_1} n_{t_2}$). While this tokenizer indeed seems reasonable per the above definition, it turns out that it performs quite poorly. Training on randomly generated letters from the English alphabet, the results are tabulated in Table 1 of the attached pdf.
Notice the clear pattern in how modified Wordpiece behaves: in each round, it merges the longest token in the dictionary thus far with a single character. While the tokenizer "objective" itself does not prescribe this behavior, the $n_{t_1}^2$ term in the denominator incentivizes each merge to pick a starting token ($t_1$) with very low frequency.
Thus the behavior of tokenization / design of appropriate merge rules is a nuanced phenomenon. While there appear to be natural choices under which the intuition “tokenizer is trained to aggregate bytes according to their frequencies $\implies$learning unigram models are sufficient” is true, there are also cases where this intuition fails altogether.
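The contrast between the two merge rules can be made concrete with a toy sketch (the counts below are hypothetical, chosen only to illustrate the effect):

```python
# Hypothetical corpus statistics: n_t for tokens and n_{t1,t2} for adjacent pairs.
token_counts = {"a": 100, "b": 40, "ab": 5}
pair_counts = {("a", "b"): 35, ("a", "ab"): 4, ("ab", "a"): 3}

def best_merge(score):
    """Pick the adjacent token pair maximizing the given merge score."""
    return max(pair_counts, key=score)

# Standard Wordpiece: maximize n_{t1,t2} / (n_{t1} * n_{t2}).
wordpiece = best_merge(
    lambda p: pair_counts[p] / (token_counts[p[0]] * token_counts[p[1]])
)

# Modified rule: maximize n_{t1,t2} / n_{t1}^2.  The squared denominator
# rewards a rare starting token, so the rarest (longest) token keeps growing.
modified = best_merge(lambda p: pair_counts[p] / token_counts[p[0]] ** 2)
```

On these counts, the standard rule merges `("a", "b")`, while the modified rule merges `("ab", "a")`, extending the longest token in the dictionary.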
On a separate note, it is not apparent how to design tokenizers which learn dictionaries with the *minimal number of tokens* required to achieve low loss (under unigram likelihood models). Our proofs show this is true for LZW and approximately so for a variant of BPE. Thus, these tokenizers are hard to improve on significantly (without breaking some sort of worst-case information theoretic barrier). Having a small dictionary is important for being able to learn the transformer in a data-efficient manner, and to avoid redundant tokens, which presents an attack surface.
### **[Q2] Model used in the experiments; Clarification of Fig. 2**
In the paper, we used variants of the GPT-2 architecture as implemented in [13]. We shall add this citation to the paper. Our experiments considered variations over (i) the number of heads, (ii) the number of layers, and (iii) the embedding dimension and (iv) the dictionary size (excluding optimization hyperparameters). We shall include these in the description of figures. They are summarized in the Appendix (Table 3) due to a lack of space in the main paper.
**Fig 2 clarification:** Fig. 2b uses $70 \times$ fewer parameters compared to 2a. This supports the argument that models with tokenization do not require as many parameters to quickly achieve the optimal test-loss. This is a less important point at the scale of models we considered, which is why we do not go into too much detail, but it also translates to an improvement in wall-clock time (cf. Fig. 4 in the paper).
### **[Q3] What are unigram models?**
Unigram models are defined on lines 114-116 in the paper. The specific model mentioned (probability of each token approximates the inverse frequency of character/token) lies in the class of unigram models we consider.
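As a concrete sketch of this model class (our illustration, not from the paper): a unigram model assigns each token a fixed probability, and the specific instance mentioned uses empirical token frequencies.

```python
import math
from collections import Counter

def unigram_nll(tokens):
    """Average negative log-likelihood (nats/token) under the
    empirical-frequency unigram model: P(t) = count(t) / len(tokens)."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c * math.log(c / n) for c in counts.values()) / n

# The more skewed the token frequencies, the lower the unigram loss.
skewed  = unigram_nll(["ab"] * 9 + ["b"])
uniform = unigram_nll(["a", "b"] * 5)
```

On a uniform two-token sequence the loss equals $\log 2$ nats/token; skewing the frequencies lowers it, which is one way tokenization can help a unigram likelihood model.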
### **[Q4] Success of transformers beyond the unigram barrier?**
Our reason for studying unigram models is not because this class of models is interesting in its own right (which one can argue for), but more so because we see that transformers empirically learn these kinds of models when trained on Markov data. This is the role of Fig. 3 (clarified in the common rebuttal). However, when the data processes grow to be more complicated than Markov models, we expect transformers to exhibit more complex behavior (beyond $n$-grams). Understanding the behavior of transformers with and without tokenization on more complex data processes is a rich direction for future work.
On a separate note, character-level tokenization does appear to work to varying degrees in practice with transformers, and this is likely because real world data is a mixture of many different data types. For certain languages (such as those containing non-concatenative morphology), removing tokenization seems to help [14]. For arithmetic, character level tokenization is known to help models generalize better [2]. But from these results, it is not clear whether these are pitfalls of existing tokenizers, or of tokenization as a whole. On arithmetic, for instance, recent models like GPT4 and LLama3 appear to chunk numbers into the range 0-999 and split longer numbers into these chunks, instead of operating at the single digit/byte level, which is in support of the argument, “Good tokenizer $\ge$ no tokenizer”.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for their clarifications. I am convinced by their arguments, and I appreciate the comment where they address multiple concerns raised by different reviewers. I will keep my score. | Summary: This paper offers theoretical insights into the importance of tokenization in language models. Tokenization is ostensibly the artifact that makes training LMs not an end-to-end procedure. This design choice introduces biases, as it is not optimized for exactly the same criterion as the full model. Yet training without a tokenization step almost always leads to worse language models. This paper attempts to provide reasons based in probability theory for this phenomenon. The authors first explore a toy setting, in which transformer models are tasked with predicting distributions from kth order Markov processes. They offer a theoretical explanation for why the error of models is capped at that of a unigram model and how tokenization alleviates this issue. They then show that tokenization schemes with certain properties can achieve the optimal cross-entropy loss. The work offers some basic experimental results confirming their insights.
Strengths: * Tokenization is a core part NLP pipelines yet it still needs to be better understood from a theoretical perspective. The questions that this paper tries to answer are very relevant for both model interpretability and further development
* The theory is presented in an understandable manner and results for specific popular tokenization schemes are provided.
Weaknesses: * The theory presented in this work is for a specific type of data-generating distribution (kth order Markov) and we can’t directly extrapolate these results to linguistic distributions, which do not necessarily follow such a distribution. There is minimal discussion about the relationship between kth order Markov and linguistic distributions, which leaves the reader questioning how relevant these results actually are.
* Ultimately, the results are limited; they essentially show an expected result (the existence of an optimal unigram language model as the dictionary size grows to infinity). While some intuition can be gained from these results, the theoretical implications are limited.
* There is minimal discussion of the empirical results and what conclusions can be drawn from them. Given how much of the theory is not directly applicable to real language modeling settings, it feels like such a discussion should be very important
Technical Quality: 4
Clarity: 3
Questions for Authors: * In the kth-order Markov processes studied, are nth state distributions dependent only on the n-kth state? I may be misunderstanding the caption in figure 1
* The results are applicable to all language models, not just large ones. If anything, they are arguably more relevant for smaller language models. Perhaps consider changing the title
* How does the work differ from/build on Edelman et al. (2024) and Makkuva et al. (2024)?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Limitations are not discussed in depth. The authors should address their limited experimental setting and the applicability of the results to linguistic distributions (which are not evidently k-order Markovian)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. Below we address the main questions and weaknesses pointed out. *(references cited in the common rebuttal)*
### **[W1] How well do $k$-th order Markov processes extrapolate to linguistic distributions?**
There are two points to mention here,
1. $k$-th order Markov processes, while not perfect, capture many elements of linguistic distributions: as $k$ grows larger, these models capture longer-range dependencies on the past. This class of models has had a rich history in language modeling, and nearly every book on language modeling studies these processes. These models have been the subject of many recent works, since they present a rich test-bed of interesting phenomena. A few recent works which have appeared in the literature since submission and study the interplay of transformers with these kinds of distributions are [8], [9].
2. This data-model, while simple, captures the ability of transformers to learn $n$-gram behavior. $n$-gram features appear prominently in many domains such as NMT [10], language modeling [11] and speech/music [12] to name a few.
### **[W2] Results show an expected result (asymptotically optimal unigram model)**
The existence of an optimal unigram model as the dictionary size grows to infinity is not the main contribution of the paper. It is easy to show that there exists “some” tokenizer which satisfies this property. In this regard, two main contributions of our paper are,
1. From a theoretical point of view, we show that the LZW tokenizer in fact lies on the pareto frontier of test-loss vs. dictionary size, when the likelihood model is a unigram model. To a lesser extent, this is approximately true for the BPE tokenizer as well. This means that the tokenizer not only asymptotically works, but non-asymptotically, the size of the dictionary cannot be reduced significantly without hurting the test-loss.
2. The tokenizer which assigns all $r$-length tokens also satisfies the property that asymptotically there exists an optimal unigram model on it. However, this is an unnatural tokenizer. We show that the above properties are true for BPE and LZW: letting the number of tokens grow to $\infty$, it is a priori not at all obvious that there exists a unigram model on the learnt dictionaries which achieves the optimal test loss.
In contrast, it is easy to come up with seemingly “natural” tokenizers but no unigram model trained on these tokenizers can asymptotically achieve the optimal test loss. In the response to [Q1] of Reviewer zrKY, we provide a surprising example of such a tokenizer.
### **[W3] Discussion of empirical results**
Here is a quick summary of the empirical results in the paper, and takeaways.
1. **Fig. 2 and Fig. 4:** Transformers need to be exposed to much more data to learn $k$-th order Markov models without tokenization, compared to with tokenization. Likewise, they learn more quickly wrt wall-clock time compared to in the absence of tokenization (This is true even if we select the fastest untokenized model).
**Takeaway:** Transformers generalize better with tokenization, both in terms of number of samples as well as number of optimization iterations.
2. **Fig. 3 conclusion:** Transformers approximately learn unigram models when trained on tokenized sequences generated from a Markov model.
**Takeaway:** Transformers can achieve low loss by learning conceptually simple models when tokenization is present.
3. **Fig. 5 conclusion:** BPE, Unigram and Wordpiece tokenizers perform similarly on Wikitext-103 with unigram models. The decay of the loss as a function of the dictionary size appears to be captured by $\propto 1/\log(|\textsf{Dict}|)$, as predicted theoretically (Theorem 3.1).
**Takeaway:** Scaling law for tokenization (as a function of the dictionary size).
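As a sketch of how such a scaling law can be checked (ours; the (dictionary size, loss) pairs below are hypothetical), one can fit a line in $x = 1/\log(|\textsf{Dict}|)$:

```python
import math

# Hypothetical (dictionary size, test loss) pairs; the theory predicts
# loss ≈ a + b / log(|Dict|), i.e., a straight line in x = 1 / log(|Dict|).
data = [(1_000, 1.52), (5_000, 1.40), (10_000, 1.36), (50_000, 1.30)]
xs = [1 / math.log(d) for d, _ in data]
ys = [loss for _, loss in data]

# Ordinary least squares for y = a + b * x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx  # b > 0 means the loss indeed shrinks as the dictionary grows
```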
In this paper, we sought to build up a theoretical understanding of how tokenizers and transformers interact by studying their behavior on $k$-th order Markov chains. While these models are not perfect, a number of interesting empirical phenomenon can be inferred which concur with practice, even through the lens of these simple data models. More importantly, the questions and workflow considered in this paper (analyzing end-to-end loss on models learned by transformers) can be employed with more complex data processes, where transformers exhibit richer behavior. Separations between tokenizers may become more apparent here.
___
### **[Q1] $k$-th order processes considered**
In the $k$-th order processes studied, the $n$-th state distribution only depends on the $(n-k)$-th state. However, our theory applies to all $k$-th order processes (as long as Assumption 3.2 is satisfied). In our experiments, we chose to compare against this restricted class of models to level the playing field between different values of $k$ (since the problem certainly becomes harder if we consider all Markov chains as $k$ increases). In the attached pdf, we plot the behavior of tokenizers on randomly chosen $k$-th order Markov processes for $k=1,2,3,4$. The gap between the tokenized and untokenized models grows even more significantly as $k$ increases.
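A minimal sketch of this restricted class (ours, for a binary alphabet; the transition probabilities are placeholders):

```python
import random

def sample_restricted_markov(P, k, length, seed=0):
    """Sample a binary sequence in which x_n depends only on x_{n-k}.
    P[s] is the probability that x_n = 1 given x_{n-k} = s."""
    rng = random.Random(seed)
    seq = [rng.randint(0, 1) for _ in range(k)]  # arbitrary initial k states
    for n in range(k, length):
        seq.append(1 if rng.random() < P[seq[n - k]] else 0)
    return seq

# A "switching" process: x_n flips x_{n-k} with probability 0.9.
seq = sample_restricted_markov(P={0: 0.9, 1: 0.1}, k=2, length=1000)
```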
### **[Q3] Comparison with Edelman et al (2024) and Makkuva et al (2024)**
These works show that transformers (in the absence of tokenization), when exposed to sufficiently much data, learn to represent $k$-th order Markov chains in-context for relatively small values of $k$. In practice, however, tokenization is almost always used. In our paper, we study the questions asked in these papers when transformers are trained with tokenization. Our key observation is that with tokenization, the model learns significantly more quickly, and is able to avoid going through multiple phases of learning when trained with tokenization. The general rhetoric surrounding tokenization has been negative, and our work shows that tokenization can make a big difference, even when learning simple models like $k$-th order Markov chains.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. The proposed changes in the main response would raise my score slightly; I adjust it to reflect that.
Strengths: This paper tackles an interesting and timely topic: how tokenisation enables language modeling.
This paper provides an interesting theoretical analysis of the effect of tokenisation on unigram language modeling.
This paper also provides a couple of empirical analyses of how unigram models perform on real data.
The paper is relatively easy to follow, even though some of the mathematical results could be spelled out a bit more clearly to make it easier for a reader to follow them.
Weaknesses: This paper’s framing, in my opinion, significantly over-claims its results:
* The title “Toward a Theory of Tokenization in LLMs” is very broad for the current contributions. A more appropriate title, in my opinion, would be “Analysing tokenisation’s effect on unigram distributions”, or something analogous to it. There is no “theory of tokenisation” being proposed here, but a theoretical analysis of how tokenisation affects a simple model’s cross-entropy.
* The abstract and introduction also significantly overclaim results, with statements such as “we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization” while focusing on unigram cross-entropies. Transformers may serve as motivation to this work (as they initially learn unigram statistics), but are not in fact analysed here.
I think the paper would also be significantly more straightforward to read if the framing was fixed and it was clear from the start that the paper's analyses would focus on unigram models.
Technical Quality: 3
Clarity: 2
Questions for Authors: My current score is mostly based on my current understanding that this paper overclaims its results. I'm open to increasing my score if the authors either tone down the paper contributions' rhetoric, or make a convincing argument of why the current framing is appropriate.
> we study the end-to-end cross-entropy loss achieved by transformers with and without tokenization
I’d argue this paper does not actually study a transformer’s cross-entropy with and without tokenization, but a unigram model’s instead. Even if transformers learn unigram distributions early on (and in some tasks are never able to learn more than that), this is still a strong over-statement in my opinion.
> the models initially predict tokens according to a unigram model (in context unigrams), which delays learning the optimal solution. This phenomenon was also observed in Makkuva et al. (2024).
This phenomenon was previously shown by Chang et al., 2022; 2023.
> Line 115. $Q(t_1, t_2, \cdots, t_j) = Q_{\#}(j) \prod_{i=1}^{j} Q_{tok}(t_i)$
What does Q_{#}(j) represent?
> Figure 2a
What happens if models are trained for more iterations?
> Figure 3
I found this figure confusing. I don’t fully understand what is being presented here.
#### `References`
* Chang et al., 2022. Word Acquisition in Neural Language Models
* Chang et al., 2023. Characterizing Learning Curves During Language Model Pre-Training: Learning, Forgetting, and Stability
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I think some important limitations are not sufficiently discussed in this paper. The most important of which is that the analysis focuses on unigram statistics, and transformers can clearly learn more than that. Expanding the limitations pointed out by Remark 3.3 in a dedicated limitations section could also be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive criticism. (*references cited in the common rebuttal*)
**TLDR;** We are happy to change the wording of the title and the general rhetoric of the paper in a way which best fits its contributions. *Our proposed changes incorporating the reviewers’ points are discussed in the common rebuttal*. In doing so, we would like to add a few points to take into consideration:
1. **The tokenization objective studied in the paper (end-to-end cross-entropy loss), while seemingly obvious in hindsight, has not been the subject of any theoretical work on tokenization**.
There are several current theoretical formulations for tokenization. One viewpoint is the ability to compress the input sequence as well as possible [5],[6],[7]. Related work such as [4] considers tokenization as approximately optimizing some natural objective of the training dataset. There are two conceptual issues with these formulations,
+ These approaches study how tokenizers behave on the sequences they were trained on, which does not capture generalization.
+ These analyses look at tokenizers in isolation, rather than studying them in the context of how the end-to-end loss of the model changes.
The current theoretical discussion around tokenization is problematic for this reason: it is straightforward to compare tokenizer A with tokenizer B by evaluating them in practice, but there is no existing theoretical formulation which allows comparing different tokenizers with each other (say, by studying their behavior against the same loss/objective). For instance, the notion of “minimum number of merges” in [4] only makes sense for merging based tokenizers, and not for tokenizers like Unigram which don’t operate by merging tokens. The end-to-end cross-entropy loss is arguably the most natural objective which can allow comparisons between the behavior of the end-to-end model.
2. **Why do we not write the results as “an analysis of unigram models with tokenization”? Do transformers even learn unigram models?**
Section 2.1 and 3 of the paper are dedicated to understanding the behavior learnt by transformers when exposed to data generated from Markov processes. Prior work argues that transformers (without tokenization), when trained for many iterations, are indeed able to learn $k$-th order Markov data. In the context of these results it’s worth taking into account the fact that these models need to be trained for $10-50 \times$ the number of iterations to break past the unigram phase and learn even bigram models. See Fig. 1 of the reference [1] or Fig. 1a in the rebuttal pdf. In the presence of tokenization, Fig. 3 in our paper discusses the distributions learned in the presence of tokenization, which are observed to be close to a unigram model (see clarification in the common rebuttal).
3. **What happens when transformers don’t learn unigram models?**
This is a good question, and something that our work addresses from a theoretical point of view, but in a way that can be made more explicit (see proposed changes above). With sufficiently large dictionaries, the transformer only has to learn a unigram model. What happens when the dictionaries are not this large? The main implication from the theorems in our paper is that *tokenization always reduces the value of $n$ for which the transformer needs to learn $n$-gram behavior, so as to achieve near-optimal test loss*. Our current theorems instantiate this result when the dictionaries are large enough that it suffices to learn unigram behavior to perform well. When the dictionary sizes are smaller, transformers with tokenization are forced to learn $n$-gram behavior for $n > 1$ to achieve low test loss. Our results argue that tokenization allows the model to get away with learning $n$-gram behavior for values of $n \ll k$, the true order of the underlying data distribution. With tokenization, transformers can avoid learning the higher-order $n$-grams that they would otherwise have to learn.
4. **What happens when transformers don’t learn $n$-gram models at all?**
$n$-gram models, while simple, do not capture the extent of the modeling power of transformers. Going beyond this requires looking at more complicated data generating processes. There are several ways to do so: the simplest extension compared to our work would be one where the transformer is trained on data generated from a mixture of many different Markov models. No single $n$-gram model captures the data well now, and it is interesting to see how transformers learn to identify what order the test data comes from and utilize the appropriate estimator. Even beyond Markov and mixture of Markov models, we may study these questions when data come from simple HMMs, where the role of tokenization may be very different.
We hope that our work initiates a study of tokenization when transformers are trained on even richer classes of data processes. We analyze the end-to-end loss by looking at the family of models $\mathcal{F}$ transformers empirically appear to represent and prove upper/lower bounds on the cross-entropy of models in $\mathcal{F}$. Ultimately, this framework can be applied to any data process, and provides a new lens for comparing tokenizers.
___
### **Additional questions/clarifications**
1. **What is $Q_{\\#} (j)$ in the unigram definition?** This is the distribution over the total number of tokens $j$. This needs to be present as otherwise the distribution over token sequences does not integrate to $1$ (i.e., is not a valid probability distribution).
2. **What happens when models are trained for longer in Fig. 2a:** When models are trained for more iterations, they eventually learn $k$-gram behavior. However, the number of iterations required is $10-50 \times$ more (see Fig. 1 in [1] or Fig. 1a in the rebuttal pdf)
3. **Figure 3 clarification:** Discussed in the common rebuttal.
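For concreteness, the unigram model over token sequences from point 1 can be sketched in a few lines of Python (an illustrative sketch, not code from the paper; the names `Q_len` and `Q_tok` are ours):

```python
# Unigram model over token sequences: Q(t_1, ..., t_j) = Q_len(j) * prod_i Q_tok(t_i),
# where Q_len is the distribution over the number of tokens j (the Q_# term)
# and Q_tok is the per-token distribution.
from math import prod

def unigram_sequence_prob(tokens, Q_len, Q_tok):
    """Probability of a token sequence under the unigram model."""
    return Q_len[len(tokens)] * prod(Q_tok[t] for t in tokens)

# Toy example: sequences of length 1 or 2 over two tokens.
Q_len = {1: 0.5, 2: 0.5}
Q_tok = {"a": 0.7, "b": 0.3}
print(unigram_sequence_prob(["a", "b"], Q_len, Q_tok))
```

Without the $Q_{\#}(j)$ factor, summing this expression over all sequences of all lengths would not give $1$.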
---
Rebuttal Comment 1.1:
Title: Discussion period ending soon: call for response
Comment: Dear Reviewer 88v6,
We sincerely appreciate the time you have taken to provide valuable feedback for our work. As we are getting closer to the end of the discussion period, could you let us know if our responses above have adequately addressed your concerns? We remain at your disposal for any further questions.
If you agree that our responses to your reviews have addressed the concerns you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments.
Sincerely,
The Authors
---
Rebuttal Comment 1.2:
Title: Answer to Authors
Comment: I thank the authors for their detailed response and clarifications. I think the suggested changes listed by the authors will significantly improve it, and I have increased my score from 4 to 7. | Rebuttal 1:
Rebuttal: ## **Common rebuttal**
We thank all the reviewers for taking the time to go through our paper and suggest constructive criticism. Please find attached a pdf containing additional plots to aid in answering reviewers' questions. We begin with the suggested changes to the paper, and then address some common points raised by the reviewers.
### **Suggested changes to the paper**
1. We propose to change the title of the paper to: “An Analysis of Tokenization: Transformers under Markov Data”
2. We will adjust the abstract + introduction of the paper to emphasize that we look at the class of models, $\mathcal{F}$, transformers are observed to express empirically, and then theoretically study the joint cross-entropy loss of natural tokenizers with models in $\mathcal{F}$. In the case of $k$-th order Markov data, the model class expressed by transformers, $\mathcal{F}$, appears to be the class of $n$-gram models. This is the case both with, and without tokenization (cf. Fig. 3 in the paper). We will also point out previous work studying this and related behavior such as Chang et al (2022, 2023).
3. We will adjust the description and discussion surrounding Fig. 3 to be more clear, and emphasize that the goal of this figure is to understand what is the behavior expressed by transformers trained on Markov data in the presence of tokenization. A clarification of this figure is provided below.
4. Since our theoretical discussion revolves around $n$-gram models trained with tokenization, we will make the discussion connecting $n$-gram models with transformers more clear. Currently this is spread across sections 2 and 3 of the paper.
5. We shall expand Remark 3.3 into a dedicated limitations section, adding other limitations as well (such as: transformers can learn more than just $n$-gram behavior).
6. A discussion extending the current results to the case where transformers with tokenization learn $n$-gram behavior for $n \ge 2$ instead of just unigram models. This occurs when tokenizers are trained with very small dictionaries.
We are open to including other changes to the paper the reviewers see fit, in a way which better matches the contributions in the paper and makes it easier to read.
___
Below we address some common points raised by multiple reviewers.
### **Clarification of Fig. 3**
**Fig. 3 plots the next-token distributions learnt by the transformer** (dictionary size $=20$, sequence length $=500$) **at convergence**.
In this experiment, we sample a random sequence of length $2000$ from a Markov chain and feed it into the transformer (after tokenization, this results in a sequence of length $\approx 500$). We plot the next-token distribution predicted by the transformer at every one of the $500$ positions, generated by masking the input sequence. This plot stitches together $500$ next-token distributions, each of which is a narrow column heatmap. Visually, it appears that the transformer predicts symbols according to the same distribution most of the time (i.e., the plot looks approximately homogeneous along the $x$-axis). This is evidence that even though the transformer could learn behavior depending on all the previous symbols, it sticks to outputting tokens independently most of the time, i.e., a unigram model.
From this point of view, the tables have turned - it is surprising that the transformer with tokenization performs so well even though it only learns unigram behavior, when this behavior is the root of problems in the absence of tokenization. Our work tries to formally address why this might be the case.
___
### **Summary**
We initiate the analytical study of the end-to-end loss of transformers with tokenization by looking at the class of models transformers appear to learn empirically, and then studying how the behavior changes with the addition of tokenization. We instantiate this framework for the case of Markov data generating processes, where transformers learn simple $n$-gram behavior, as observed in Chang et al. (2022,2023) in the context of real-world data and other more recent works (Makkuva et al). However, this framework itself can be instantiated with any kind of data process. Choosing more complex data processes will trade-off analytical tractability with the ability of transformers to express more complex behavior.
___
### **References**
[1] Edelman, Benjamin L., et al. "The evolution of statistical induction heads: In-context learning markov chains." arXiv:2402.11004.
[2] Golkar, Siavash, et al. "xVal: A continuous number encoding for large language models" arXiv:2310.02989
[3] Fujii, Takuro, et al. "How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese." arXiv:2306.09572
[4] Zouhar, Vilém, et al. "A formal perspective on byte-pair encoding." arXiv:2306.16837
[5] Zouhar, Vilém, et al. "Tokenization and the noiseless channel." arXiv:2306.16842
[6] Gallé, Matthias. "Investigating the effectiveness of BPE: The power of shorter sequences." EMNLP-IJCNLP (2019)
[7] Goldman, Omer, et al. "Unpacking Tokenization: Evaluating Text Compression and its Correlation with Model Performance." arXiv preprint arXiv:2403.06265
[8] Svete, A., & Cotterell, R. "Transformers Can Represent $n$-gram Language Models" arXiv:2404.14994
[9] Nguyen, T. "Understanding Transformers via N-gram Statistics" arXiv:2407.12034
[10] Diao, Shizhe, et al. "Taming pre-trained language models with n-gram representations for low-resource domain adaptation" ACL 2021
[11] Irie, Kazuki, et al. "Language modeling with deep transformers" arXiv:1905.04226
[12] Tian, Jinhao, et al. "N-gram Unsupervised Compoundation and Feature Injection for Better Symbolic Music Understanding" AAAI 2024
[13] Pagliardini, M. GPT-2 modular codebase implementation. https://github.com/epfml/llm-baselines
[14] Clark, Jonathan H., et al. "Canine: Pre-training an efficient tokenization-free encoder for language representation" TACL (2022)
Pdf: /pdf/612f05ef19d9e07dbd4d1bd4ef42b0dd8c3553e2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reinforcement Learning with Lookahead Information | Accept (poster) | Summary: This paper introduces reinforcement learning (RL) problems where agents observe one-step lookahead information (either rewards or transitions) before choosing actions in episodic tabular MDPs. Two relevant lines of work exist: the control literature, which studies a similar lookahead concept in the continuous state-space scenario, and the RL planning community, which commonly obtains lookahead information from learned transition models. However, this paper assumes the reward/transition information to be available before selecting an action. The core contributions are:
1) Formalising the look-ahead setting for the reward and transition in an episodic MDP setting.
2) Derivation of the Bellman equations in the original space by setting up an equivalence with an equivalent new MDP.
3) Development of two algorithms for reward (MVP-RL) and transition lookahead (MVP-TL).
4) First sub-linear regret bounds in the lookahead setting.
Strengths: This paper is the first to provide regret bounds in the lookahead learning setting. This encompasses a somewhat broad spectrum of problems that were independently studied, such as the Canadian traveler problem and the prophet inequalities.
The paper is well written and easy to follow for non-experts in learning theory. It presents the core ideas in an understandable way in the main paper and uses the appendix for technical proofs.
Weaknesses: The paper could be strengthened by adding experimental results studying the difference in behaviour and performance between standard RL algorithms, MVP, and the proposed solution MVP-RL. More specifically, I would be interested in understanding the difference in behaviour when changing the tails of the reward/transition distributions.
Technical Quality: 3
Clarity: 4
Questions for Authors: How applicable is the theoretical argument you used to a model-free version of MVP-RL/TL?
Confidence: 1
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations outlined in the paper provide a fair representation of how the theoretical results could be extended in various directions, such as multi-step and stochastic action sets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and refer to the general comment regarding experiments. In particular, MVP converges to the no-lookahead value, so there would be a linear difference between its performance and that of our algorithms; hence, the algorithms are not really comparable.
While the tail of the distribution will somewhat affect the sublinear regret rates, its dominant effect would be on the lookahead value. This is because any effect on the value is equivalent to linear regret terms - much larger than the $\sqrt{K}$ terms. In simple environments, the effect of the tail on the value can be analyzed in closed form, so we do not need to resort to numerical evaluation. One interesting property is that the behavior of the lookahead value is not a simple function of the tail, and similar tails might have many different behaviors.
-----
Consider the following examples:
**Reward lookahead in a prophet-like problem (Figure 1).** The value only depends on the reward distribution when moving from $s_i$ to $s_f$.
1) *Bernoulli distributions.* Assume that when transitioning from $s_i$ to $s_f$, we obtain rewards $Ber(p)$. In particular, the optimal no-lookahead value is always $V^{no-lookahead}=p$. On the other hand, the lookahead value will be the probability of observing at least one unit reward $V^{R,*} = 1 - (1-p)^{AH}$, and in particular,
- If $p<1/(AH)$, we get that $V^{R,*}\approx HAp$ - higher by a factor of $AH$ than the no-lookahead value.
- If $p\approx 1/2$, then the lookahead value will approach $V^{R,*}\approx 1$ - constant-factor improvement.
- Finally, for $p\approx 1$, both values are roughly $1$ - no improvement.
Interestingly, when thinking of standard ‘tail measures’ (e.g., Chernoff bound for averages), the *least concentrated* situation (with the heaviest tails) is when $p\approx 1/2$, even when only considering the upper tail of the distribution. On the other hand, the two lighter-tailed situations behave very differently: in one, we get a huge performance boost, while in the other, we gain nothing.
2) *Gaussian distributions* $\mathcal{N}(\mu,\sigma)$. Analytically calculating the value is quite hard, but using the properties of the maximum of Normal R.V.s, it is quite easy to (approximately) bound the reward-lookahead value in the interval $V^{R,*}\in [\mu+\sigma\log(A), \mu+\sigma\log(AH)]$. Without lookahead, the value would be $V^{no-lookahead}=\mu$, so the difference between the values is proportional to the variance.
3) More generally, if we limit ourselves to bounded distributions with expectation $\mu$ and effective support $[a,b]$, for large horizons, the lookahead value approaches $b$, while the no lookahead value will stay $\mu$. The specific tail might affect the rate at which it happens, but will not change the limiting value.
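The Bernoulli calculation in example 1 can be checked numerically; below is an illustrative back-of-the-envelope sketch (not code from the paper, all names ours):

```python
# Prophet-like problem with Ber(p) rewards, A actions, horizon H.
# No-lookahead value: the mean reward p. Reward-lookahead value: the
# probability of observing at least one unit reward among A*H draws.

def no_lookahead_value(p: float) -> float:
    """Optimal value without lookahead."""
    return p

def lookahead_value(p: float, A: int, H: int) -> float:
    """V^{R,*} = 1 - (1 - p)^{AH}."""
    return 1.0 - (1.0 - p) ** (A * H)

A, H = 4, 10
for p in [0.001, 0.5, 0.99]:  # the three regimes discussed above
    print(f"p={p}: no-lookahead={no_lookahead_value(p):.3f}, "
          f"lookahead={lookahead_value(p, A, H):.3f}")
```

For small $p$ the printed gap is roughly a factor of $AH$, while for $p$ near $1$ both values approach $1$, matching the three regimes above.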
-----
**Tail effects with transition lookahead.** We admit that we are not familiar with a notion of a tail for a transition kernel over a finite state space, but we will try to illustrate how stochasticity in the transitions affects lookahead values. Let us still consider transition lookahead on a chain (variant of the example in Section 4). To simplify things, assume that we have a chain of length $n<H$ with two actions. One action moves forward w.p. $p$ (and w.p. $1-p$, we ‘fail’ and cannot collect any rewards). A second action does not change the state. We aim to reach the end of the chain to collect a unit reward. For no-lookahead agents, the probability of successfully traversing the chain is $p^{n}$. For one-step lookahead agents, it is the probability of having at least $n$ successes for the binomial $Bin(H,p)$. In particular,
- When $p>> 1- 1/n$, both agents will succeed to collect the reward - have unit values.
- For $p<< n/H$, both are likely to fail, but the lookahead agent still has an exponentially larger probability to succeed traversing the chain.
- In the intermediate regime, the lookahead agent will succeed and the no-lookahead agent will fail.
Thus, in some senses, the lookahead is more valuable in the ‘heavy tail’ regime (intermediate probability), but we see two different behaviors in the ‘light tail’ regime - similar values for large $p$ and exponentially different values for small $p$.
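The chain-traversal probabilities above can also be computed directly; a minimal sketch (illustrative, not the paper's code):

```python
# Length-n chain, horizon H, forward action succeeds w.p. p.
# No lookahead: must succeed n times in a row (a failure is fatal).
# One-step transition lookahead: stay in place on observed failures,
# so success requires at least n successes among H draws: P(Bin(H, p) >= n).
from math import comb

def no_lookahead_success(p: float, n: int) -> float:
    return p ** n

def lookahead_success(p: float, n: int, H: int) -> float:
    return sum(comb(H, k) * p**k * (1 - p) ** (H - k) for k in range(n, H + 1))
```

Evaluating both for intermediate $p$ (e.g., $p=0.5$, $n=3$, $H=10$) shows the lookahead agent succeeding with high probability where the no-lookahead agent rarely does.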
-----
**To summarize:** while it seems that ‘heavier tail’ generally helps lookahead agents, there is no clear way to translate tail properties to lookahead values.
-----
**Model-Free Algorithms:** generally speaking, even in the classical RL setting, the analysis techniques of model-based on model-free algorithms are very different, and we are not aware of any study that unifies regret analysis for both settings. We still believe that some elements could be taken from our work to the model-free setting - for example, using the structure and properties of the optimal value and the optimal policy (and, in particular, the list structure for transition lookahead), or using uniform concentration and avoiding pessimism. Nonetheless, the regret analysis will be more involved than the classic analysis of Q-learning [19] and will probably require new notions of Q-values that allow incorporating the observations into the policy. For example, with reward lookahead, the Q-values should not include immediate rewards, so that the reward observations could be later added when interacting with the environment. | Summary: The authors proposed new forms of Bellman equations for environments where the agent knows the reward or transition outcomes one step ahead (without knowing the full model).
Strengths: While previous papers (e.g., Boutilier et al. 2018) discussed utilizing lookahead information (and proved convergence), the authors claim they are the first to present regret results.
Weaknesses: While the theoretical contribution is clear, the authors must also provide practical validation.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Perform experimental validation to illustrate the practical performance.
For instance, it would be necessary to check the learning curves and the resulting performance.
The authors should also discuss the practical implementation.
- H is an important parameter to be determined. Provide a practical guideline for choosing H.
- Can this method be extended to off-policy learning?
Is the method data-efficient?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Some aspects of the approach might be seen as incremental advancements rather than groundbreaking theoretical analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and refer to the general response on simulations.
* As we explained in the general response, we agree that evaluation is important and believe that the best way to do so is to adapt our work to a deep RL setting, but this is outside the scope of this paper. Nonetheless, we would like to emphasize that our theoretical bounds *provably guarantee* well-behaved learning curves in tabular settings - in fact, since the bounds are minimax optimal, there exist some environments in which our algorithms converge at the optimal rate (up to absolute constants). We present a detailed algorithm description that includes the exact implementation details of both algorithms in Appendices B.3 and C.3, including the precise statement of the bonus and the update rules; any implementation of the theoretical algorithm would follow these schemes to the letter.
* Horizon H: the horizon is indeed a very important parameter in episodic RL - after H steps, the interaction is over and a new episode starts. In most applications, the choice of the horizon is induced by the environment - the interaction *cannot* last more than a fixed number of steps (‘external termination of the agent/timeout’). In this sense, it is very different than a discount factor that induces a soft effective horizon - often tuned by the algorithm designer. Finite-horizon algorithms can also work with any upper bound on the number of steps - though this degrades the performance, the effect of $H$ on the regret is usually assumed to be much less significant than any dependence on the state space. In general, and to our knowledge, all previous works on finite-horizon regret minimization assume the horizon is given to the agent as an input [19, 9, 33, 12, 14, 30, 35, 36]. Nonetheless, a few papers study regret minimization in alternative interaction models, including average reward [18] or stochastic shortest paths [*]. We believe that extending our results to these settings is an interesting future work.
* Our approach achieves optimal data efficiency, in the sense that our regret bounds are minimax optimal. In other words, for any algorithm, there exists an environment such that the difference between the collected reward and the optimal one must be larger than ours (up to absolute/additive constant). More broadly, our approach is model-based, and such approaches tend to use data more efficiently than model-free methods.
* We admit that we do not fully understand the question about off-policy learning. In our algorithm, we use all the data from historical interactions to estimate an environment model and use this model to perform optimistic planning - calculate the optimal agent at an optimistic environment. This includes data from previous deployed policies which are different from the current one. In addition, while this planning module outputs a value that determines our policy, we never perform a policy evaluation of previously used policies and do not rely on policy iteration/computations that are usually considered as policy improvement. As such, we believe that our algorithm can be described as an off-policy algorithm. If the question was about using offline data - we believe that such data could be utilized to improve our model estimation. If it was about model-free algorithms - as we mention in the conclusions section, extending our paper to model-free algorithms (e.g., Q-learning) would be very natural - we refer to the response to reviewer Dj8h for more details. In case we did not answer the intended question, we would appreciate it if the reviewer could clarify the question and we will gladly answer in the reviewer-author discussion period.
------
[*] Rosenberg, Aviv, et al. "Near-optimal regret bounds for stochastic shortest path." International Conference on Machine Learning. PMLR, 2020.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I will maintain my score after reading your reply. | Summary: This manuscript proposes the RL method with lookahead information. The authors discuss two scenarios: reward lookahead and transition lookahead. Under such scenarios, the proposed method estimates the reward distribution and transition distribution, respectively. Then the monotonic value propagation skill is applied to calculate the value function. The authors show that the proposed method has strong theoretical properties and the reward regret is strongly bounded under two circumstances.
Strengths: The manuscript is well organized, and the structure is clear. The authors shows very promising bound for both reward lookahead and transition lookahead scenarios.
Weaknesses: This is a theoretical paper, however, the authors miss to deliver some numerical or empirical studies. It is suggested to add some empirical experiments, at least with simulated data.
Algorithms 1 & 2 show the procedure for training, but I am confused about the inference process. How is the action selected given a certain state at inference time? The authors are suggested to give some explanations in Algorithms 1 & 2.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 150, the sentence should be "in this way"?
The estimated reward/transition distribution $\hat{R}^k_h$ and $\hat{P}^k_h$ are key components for the proposed method, it is suggested to give more details on the distribution estimation part.
For both cases, the authors propose bonuses; I am confused about why we need the bonus. Is it only for calculating the variance value? However, even without the bonus value, we can also update the value function $\bar{V}^k_h(s)$, right?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the response and refer to the general comment on empirical simulations.
We apologize for any clarity issue and will make an effort to clarify the setting and the algorithm. Our paper studies an online setting where we repeatedly interact with an unknown environment in episodes, with the goal of collecting as much reward as possible throughout the interaction. Thus, training and inference are interleaved - in each episode, we aim to use all the historical data to maximize the reward, on the one hand, while continuing to explore to improve future performance, on the other hand. The regret is a measure of how well we perform this task - can we collect almost as much reward as the optimal policy that knows the environment in advance?
Every episode starts with a planning phase, where we calculate an optimistic value (to ensure exploration), and then continue with interaction for a full episode ($H$ steps) with the environment. When each episode ends, the environment resets to a new initial state and the process repeats itself for $K$ episodes. For example, in Algorithm 1, lines 4-6 represent the planning stage, while line 9 represents the deployed policy for this episode. In Appendix B.3, we provide a more detailed version of the algorithm, where the planning stage is on lines 4-10 and the deployed policy is given on line 11.
With reward lookahead, every time we reach a state $s$ at step $h$, we first observe the rewards $R(a)$ for all actions and then use our approximate planning to pick an action that maximizes the expected long-term gain: $a_h \in \arg\max_a [ R(a)+\sum_{s'}\hat{P}_h(s' | s, a)\bar{V}_{h+1}(s')+ \text{bonus}]$. The first term accounts for the immediate *reward observation*, while the second handles future values. The bonus ensures optimism/exploration. Importantly, we start without knowing anything about the environment, and without the bonuses, we would not be incentivized to visit new parts of the state space. As a result, we would not have data on these unvisited areas and would converge to a suboptimal policy. This is similar to the way UCB1 performs exploration in bandits, just adapted to RL. With transition lookahead, we apply similar principles, but the agent observes the next state for every action before acting, instead of the rewards.
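As a rough sketch of this action rule (illustrative only; the function and variable names below are ours, not the paper's, and the bonus computation is abstracted away):

```python
def lookahead_action(r_obs, p_hat, v_next, bonus):
    """Pick the action maximizing: observed immediate reward
    + estimated expected next-state value + exploration bonus.

    r_obs:  rewards revealed for each action before acting
    p_hat:  p_hat[a][s2] = estimated probability of next state s2 under action a
    v_next: optimistic value estimate for each next state
    bonus:  exploration bonus per action
    """
    q = [r_obs[a] + sum(p * v for p, v in zip(p_hat[a], v_next)) + bonus[a]
         for a in range(len(r_obs))]
    return max(range(len(q)), key=q.__getitem__)

# Two actions, two next states: action 0 gives reward 0.6 now;
# action 1 gives 0 now but leads to a state worth 0.7 later.
a = lookahead_action([0.6, 0.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.7], [0.0, 0.0])
print(a)  # 1
```

The example shows the rule trading off the observed immediate reward against the estimated future value, exactly the two terms in the arg-max above.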
If we understand correctly, the reviewer considers an alternative setting, in which either the data is an offline dataset and we need to find a good policy to deploy (‘offline RL’), or we interact with an environment to gather data that allows calculating a good policy (‘best-policy identification’). While lookahead could definitely be studied there, we focused on online regret minimization.
# Questions:
* Line 150 - noted, thanks!
* We will further elaborate on how the empirical distributions are calculated in the final version of the paper - thanks for the comment. The distributions are essentially uniformly sampling from a buffer: we keep all past reward/transition observations, and every time we want to sample an observation, we just take what we saw at a uniformly random past time step. More specifically, since we calculate an expectation over these distributions, this is just an empirical average using past observations - the explicit formula is provided in line 9 of Algorithms 3 and 4 (which are in the appendix). In practical implementations, it could be adapted using a finite buffer that tries to keep diverse information on different regions of the state space, or more elaborate distribution estimation mechanisms.
For transition kernels, we slightly abuse notation: we denote by $\hat{P}_h^k(s' | s,a)$ the empirical probability of transitioning from state $s$ to $s'$ when playing $a$ at step $h$, i.e., the fraction of times in the past that we took action $a$ at state $s$ and transitioned to $s'$. The distribution $\hat{P}_h^k(s)$ is the empirical joint distribution of the next states at $s$ across all actions (that is, an $A$-dimensional vector of next states). It is calculated using sampling from a buffer, as discussed in the previous paragraph.
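The buffer-based empirical estimation described above might be sketched as follows (the class and method names are illustrative; the exact formulas are the ones in line 9 of Algorithms 3 and 4 in the appendix):

```python
import random
from collections import defaultdict


class EmpiricalBuffer:
    """Keep every past observation per (step, state) key; sampling is
    uniform over the history, and expectations over the estimated
    distribution become empirical averages of past observations."""

    def __init__(self):
        self.history = defaultdict(list)  # (h, s) -> list of observations

    def add(self, h, s, observation):
        self.history[(h, s)].append(observation)

    def sample(self, h, s):
        # take what was seen at a uniformly random past time step
        return random.choice(self.history[(h, s)])

    def empirical_mean(self, h, s):
        # empirical average over all past observations
        data = self.history[(h, s)]
        return sum(data) / len(data)
```

A practical implementation could swap the unbounded history for a finite buffer that keeps diverse observations, as the response notes.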
The bonuses are needed for exploration; we hope that the previous part of the response clarified this, but if not, we will gladly provide additional clarifications. | Summary: The paper considers the setting where the agent can see the possible next rewards and next states without assuming prior knowledge of the environment dynamics. The predicted next rewards and next states are estimated by empirical distributions. The paper considers extending Monotonic Value Propagation to such a setting and proves that the proposed algorithms can achieve tight regret bounds.
Strengths: - A tight regret bound is proved for the proposed algorithm, establishing theoretical justification for lookahead information and advantages of planning in RL in general.
- The paper does not assume known environment dynamics as in most previous works, which makes the algorithm applicable to standard RL settings. The lack of known environment dynamics may bring various challenges to planning, such as agents not relying on the lookahead information when the estimated environment dynamics are still far from the true one in the early stages. The paper shows that the lookahead information can still be very beneficial despite such challenges.
- The paper is well-written, and the proof is easy to follow.
Weaknesses: Even though a tight regret bound has been proved, empirical experiments with examples showing how the agent uses the lookahead information will strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: There has been prior work in deep RL that makes use of lookahead information even when the environment dynamics are unknown. One example is the Thinker algorithm [1], which allows agents to select an imaginary action trajectory to collect n-step lookahead information before selecting an action in each step (the environment dynamics are also assumed to be unknown). The related work section should be updated to reflect this (e.g., lines 73-79). However, as these works are mostly empirical without proving regret bounds, I still recommend that the paper be accepted, given its theoretical significance.
[1] Chung, Stephen, Ivan Anokhin, and David Krueger. "Thinker: learning to plan and act." Advances in Neural Information Processing Systems 36 (2024).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments and refer to the general response for a detailed discussion on simulations in our paper. We also provide an additional example of the advantage of lookahead information in the response to reviewer MArZ, and discuss how the distribution affects the lookahead value in the response to reviewer Dj8h - we are willing to further discuss this in the final version of the paper.
We acknowledge that in our literature survey, we focused on previous theoretical work; we apologize for that and will try to extend our survey to also cover more practical works, including [1]. Yet, we would like to point out that most previous works consider lookahead exclusively as a planning mechanism - that is, use it to calculate a better policy in the standard RL feedback model. In our framework, we know that we will encounter the exact same trajectory both in planning and when interacting with the environment, and this information could sometimes be leveraged to obtain a much higher value (see also the example in the response to reviewer MArZ that illustrates it). The two lookahead notions intersect when the environment is deterministic. On the other hand, in stochastic environments, the agent can actively take advantage of the stochasticity: learn the observation distribution in future states and decide in which future state observing rewards/transitions before acting would be more beneficial. Another potential intersection could appear when analyzing multi-step lookahead information - then, lookahead planners might be applied to effectively utilize lookahead information even in stochastic environments. We will further discuss this in our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I maintain my score after reading the responses and reviews.
One thing to note - works like [1] can usually be used to perform what you described. For example, in [1], one can replace the learned world model (the state-reward network in the dual network that is proposed) with the known environment dynamics so the agent can learn to take advantage of the stochasticity as in your example. The main difference with your proposed algorithm is that the agent has to decide which action to try instead of seeing all possible actions' consequences. This, however, is essential for multi-step lookahead, as one cannot enumerate all action sequences with a decent depth.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer again.
Combining multi-step lookahead with the decision of which future information to query is indeed a great question, also from a theoretical point of view! We will read the work in further detail and discuss the relations in the final version of the paper. | Rebuttal 1:
Rebuttal: ## Experiments
Some of the reviewers expressed concern due to the lack of experiments in the paper.
While conducting experiments is always interesting, our paper theoretically studies a new setting for which there are no existing algorithms with theoretical regret guarantees. Thus, when comparing our approach to any existing algorithms that converge to the ‘standard’ no-lookahead value (which is much lower than the lookahead value), the difference between the values will dominate the performance. As such, it is very challenging to devise experiments that provide meaningful insights beyond the fact that the algorithm converges (which the proof already ensures). On the other hand, it is possible to discuss both theoretically and numerically the difference between the standard values and lookahead values, to demonstrate how much we can gain from utilizing lookahead information. We refer to the response to reviewer Dj8h for an additional discussion on lookahead vs. no-lookahead values in various scenarios. If the reviewers think it would be beneficial, we could extend this discussion and provide plots that illustrate the difference between the two values in the paper.
We agree with the reviewers that further pursuing the empirical study of this setting is of great importance due to its vast potential applications (as we detail in the response to reviewer MArZ). However, for this to be relevant for large-scale applications, this evaluation should be done while adapting our approach to deep RL, which is outside the scope of our paper. Our scope is more theoretical - we formalize a new setting with numerous potential applications and fully analyze planning and learning. In particular, the regret bounds of our algorithms are minimax-optimal (‘converge at the fastest possible rate for some problems’). Moreover, our algorithms and proofs have many novel elements: instead of estimating the expected reward/transition kernel, our algorithms work with the joint reward/next-state distribution and integrate distribution estimation into planning and learning; we show how to modify the classic regret analysis techniques to bypass the strong dependence that the setting creates between actions and rewards/next states; we demonstrate how to use the list representation of the policy to obtain tighter regret bounds; and more. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper studies an RL problem with a special setting, called one-step lookahead, where the agent can observe the reward or the state at the next step before the current action is taken. The paper focuses on the problem with an unknown environment (transition function). The authors proposed efficient algorithms leveraging the empirical distribution of the lookahead information and claimed that the algorithms achieve tight regret against a strong baseline.
Strengths: 1. The paper studies an interesting RL problem where one-step lookahead information is available to the agent while the environment is unknown.
2. The paper clearly presents the problem, the solution, and a comparison between the proposed algorithm and the baseline in terms of regret bound.
3. The paper offers explanations of the terms in the regret bounds and justifies them.
Weaknesses: 1. One concern is the applicability of such a lookahead setting. During training and deployment, the agent needs to know what will be realized in order to choose actions at the current state. It is not clear what real-world scenarios this setting is applicable to.
2. RL with lookahead information has been investigated before from a theoretical point of view; see [R1, p. 64], [R2], and [R3]. [R1] discusses lookahead in the approximation of the Bellman function. [R2-R3] consider controlled lookahead, where the agents decide the number of lookahead steps as a strategy. It is not straightforward to see how the lookahead studied in this paper differs from those references.
[R1] Bertsekas, Dimitri. Reinforcement learning and optimal control. Vol. 1. Athena Scientific, 2019.
[R2] Biedenkapp, André, et al. "TempoRL: Learning when to act." International Conference on Machine Learning. PMLR, 2021.
[R3] Huang, Yunhan, Veeraruna Kavitha, and Quanyan Zhu. "Continuous-time markov decision processes with controlled observations." 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2019.
3. The source of the baseline mentioned in the paper is not clear. For example: "compared to a stronger baseline that also has access to lookahead information". The paper should include a reference whenever the baseline is compared with the proposed solution.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. When the agents have access to both reward lookahead and transition lookahead, how would the regret bound be different?
2. Why doesn't the paper present a case study that illustrates how would the agent behave differently between a normal RL setting and a lookahead setting?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Discussed in the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
# Weaknesses
1. There are numerous applications where exact or approximate lookahead information is present:
* Transactions/Market interaction - whenever the agent performs transactions, the traded items and their prices are mostly observed before the trade takes place - reward lookahead. This is also relevant for problems such as inventory management.
* Communication networks - some networks continuously monitor the communication quality in different channels and use this information to choose which channel to use/avoid. This is similarly relevant in routing or situations where a protocol dictates when/where data could be sent.
* Packing - When trying to pack items optimally, the learner often observes the next item/few items and uses this observation to decide how to pack the current item. One famous example is Tetris, where the next block is observed. This can also be extended to systems with an observable queue.
* Navigation - depending on the problem, traffic information could be translated to either reward or transition lookahead. A related example is ride-sharing - travelers’ future location can be seen as future reward information.
* Weather-dependent applications - weather predictions are extremely accurate in the near future, and can determine either the transition or reward (depending on the application)
* Electricity grids - Electricity consumption can be accurately predicted in the near future, as well as electricity supply.
Some of these applications fit well to our model, while in others, the agent gets noisy predictions/multiple steps in the future - we believe that the numerous potential extensions/generalizations will further motivate the community to work on similar models.
3. When we measure regret, we calculate the difference between the value of the *optimal lookahead agent* and the value of the learning agent. The value of the optimal lookahead agent is much higher than the value of agents without lookahead (see the examples at the beginning of Sections 3 and 4, which explain why this value can be larger by a factor of roughly $AH$ and $A^H$, respectively, and an additional example below). When we talk about a stronger baseline, we talk about this optimal lookahead value - we want our agent to achieve sublinear regret compared to it and not compared to the standard no-lookahead value. Standard off-the-shelf RL agents cannot converge to the optimal lookahead value and will suffer linear regret vs. this baseline.
2. We thank the reviewer for the references and apologize that our discussion in the related work is not clear enough. There are two concepts of lookahead in the literature. In the learning community (and as far as we could see, also in [R1]), it is mainly used as a planning technique - that is, the model is not accurate enough/too complex for long-term planning, and lookahead is used as a computational tool to estimate a policy with high ‘standard’ (no-lookahead) value. For example, in [R1], it is motivated as an approach that mitigates the influence of errors in future value estimations. In particular, the aim is still to calculate the optimal Markovian policy, and lookahead is a means to calculate it. We, on the other hand, aim to achieve the higher lookahead value (as done in some existing formulations of MPC). This is not a semantic difference - the algorithm actively utilizes the fact that additional information will be revealed in the future to achieve higher values. To our understanding, this difference also holds in [R2-R3] - they do not rely on new future information but rather plan how to act multiple steps into the future. We will extend our discussion to cover this and include the suggested references.
## Example
To further illustrate the difference in values, consider the following 3-state environment.
* The agent starts at $s_1$ and has two options: move to state $s_2$ and gain 0 reward or transition to state $s_3$ and deterministically gain a reward of $R=0.6$.
* In $s_2$, two actions lead to state $s_3$, each giving a reward of $Ber(0.5)$.
* $s_3$ is a non-rewarding terminal state.
Without lookahead information, an optimal agent would move directly from state $s_1$ to $s_3$, earning a value $V=0.6$. This is also true for agents calculated via rollouts/lookahead planning: the possible trajectories are $s_1 \to s_3$ and $s_1 \to s_2 \to s_3$, and if we use rollouts of length 3, we get that the trajectory $s_1 \to s_3$ is better in expectation (value $0.6$ compared to $0.5$ when going through $s_2$).
In contrast, in our setting, we assume lookahead information, that is, once we reach a state, we know that rewards will be revealed before choosing an action. Specifically, when reaching $s_2$, we can pick the action with the maximal *realized* reward and collect a unit reward w.p. $0.75$. In state $s_1$ we can still only earn $R=0.6$, so the optimal policy changes: now it is optimal to take the path $s_1 \to s_2 \to s_3$ (with value $V=0.75$). In other words, even though the expected rewards in $s_2$ are lower, the agent decides to go there because the rewards are more stochastic, and it relies on the fact that, when observing the rewards, this stochasticity can be used to collect a higher value.
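The two values in this example can be checked with a few lines of enumeration (a sketch of the calculation above, not code from the paper):

```python
from itertools import product

R_DIRECT = 0.6  # deterministic reward for taking s1 -> s3

# Without lookahead: in s2 the agent commits to a single Ber(0.5) arm,
# so the path through s2 is worth 0.5 and going straight to s3 is better.
v_no_lookahead = max(R_DIRECT, 0.5)

# With reward lookahead: both Ber(0.5) rewards in s2 are observed before
# acting, so the agent collects max(r1, r2); enumerating the four equally
# likely outcomes gives E[max(r1, r2)] = 3/4.
v_s2_lookahead = sum(max(r1, r2) for r1, r2 in product([0, 1], repeat=2)) / 4
v_lookahead = max(R_DIRECT, v_s2_lookahead)

print(v_no_lookahead, v_lookahead)  # 0.6 0.75
```

This reproduces the gap described in the response: the optimal policy flips from $s_1 \to s_3$ to $s_1 \to s_2 \to s_3$ once lookahead is available.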
# Questions
1. When having both reward and transition lookahead, we believe that our algorithms could be naturally generalized by estimating both distributions simultaneously. Then, using similar techniques, it should be possible to prove a regret bound of $O(\sqrt{H^4SAK})$ or better. We also suspect that the lower bound for this setting is $\Omega(\sqrt{H^3SK})$, so the hypothesized upper bound would be tight up to a factor of $\sqrt{SA}$.
2. We intended that the examples at the beginning of Sections 3 and 4 would be the illustrative examples that show the difference between situations with and without lookahead. We will try to further clarify them - we would appreciate any further feedback on what is missing in the examples. | null | null | null | null | null | null |
Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation | Accept (poster) | Summary: Recent research indicates that rate-coding is crucial for information representation in deep Spiking Neural Networks (SNNs) trained via Backpropagation Through Time (BPTT). Building on this insight, a new training strategy called rate-based backpropagation has been developed to leverage rate-based representations, reducing the complexity of BPTT. This approach focuses on averaged dynamics to simplify the computational graph, thereby lowering memory and computational requirements. Theoretical and empirical analyses demonstrate that this method closely approximates BPTT's gradient optimization, maintaining comparable performance while surpassing other efficient training techniques. This advancement is poised to enable more scalable and resource-efficient SNN training, particularly in environments with limited resources.
Strengths: 1. The paper is very well written and documented.
2. The contributions have been discussed comprehensively.
3. The experiments have been conducted on multiple benchmarks.
Weaknesses: Some important details (such as the top-level algorithm of the proposed rate-based backpropagation method and details of the experimental setup) are reported in the appendix, while, due to their importance, they should be moved to the main manuscript.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the proposed rate-based backpropagation be implemented on existing neuromorphic chips with learning capabilities?
2. Looking at the results in Fig.4, the impact of the number of timesteps on the time and memory looks constant. How have specific numbers of timesteps been selected for each dataset?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations and societal impacts have been discussed in Appendix D.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **W1: Some important details (such as the top-level algorithm of the proposed rate-based backpropagation method and details of the experimental setup) are reported in the appendix, while, due to their importance, they should be moved to the main manuscript.**
Thank you for your suggestion. We appreciate your feedback and will consider moving more details to the main manuscript to facilitate a clearer understanding of the technical aspects for the readers.
#### **Q1: Can the proposed rate-based backpropagation be implemented on existing neuromorphic chips with learning capabilities?**
This is indeed an interesting point. The proposed rate-based backpropagation highlights the significant aspect of training efficiency, which is very pertinent. To our knowledge, some neuromorphic chips [1,2] are already incorporating online learning schemes that perform backpropagation at specific timesteps while maintaining eligibility traces at the neural level. We would say that architectures supporting online training could easily adapt to support our proposed method. Computationally, our approach offers an effective optimization of both computational and memory complexity compared to BPTT. Thus, on neuromorphic chips that allow for custom configurations, such as Tianjic [3] and Loihi [4], rate-based backpropagation would be more practical than BPTT in terms of computational cost, storage demands, and communication overhead.
#### **Q2: Looking at the results in Fig.4, the impact of the number of timesteps on the time and memory looks constant. How have specific numbers of timesteps been selected for each dataset?**
Thank you for your question. There are two aspects to consider in answering this. First and foremost, it is crucial to compare our method fairly with recent works to determine whether its performance is compelling; thus, we align the number of timesteps with those used in previous studies. Secondly, the number of timesteps used to train our SNN models indeed affects the inference latency, which is a key concern in the field of direct learning. Therefore, we consider the tradeoff between higher inference accuracy and larger timesteps (which result in higher latency). The timesteps chosen for this study appear to strike a good balance in benchmark scenarios.
## Reference
[1] Frenkel, Charlotte, and Giacomo Indiveri. "ReckOn: A 28nm sub-mm2 task-agnostic spiking recurrent neural network processor enabling on-chip learning over second-long timescales." 2022 IEEE International Solid-State Circuits Conference (ISSCC). Vol. 65. IEEE, 2022.
[2] Rostami, Amirhossein, et al. "E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware." Frontiers in Neuroscience 16 (2022): 1018006.
[3] Pei, Jing, et al. "Towards artificial general intelligence with hybrid Tianjic chip architecture." Nature 572.7767 (2019): 106-111.
[4] Davies, Mike, et al. "Loihi: A neuromorphic manycore processor with on-chip learning." Ieee Micro 38.1 (2018): 82-99.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: In light of the other reviews and the author's rebuttal, my score is confirmed. | Summary: This paper presents a novel rate-based backpropagation method for spiking neural network (SNNS) training, which effectively separates the time-dependent backpropagation (BPTT) process and thus reduces computational and memory costs. The method employs a rate-encoded approximation to capture the basic information and is validated by empirical experiments on various datasets, demonstrating that it is superior in terms of training efficiency and accuracy when compared to the traditional BPTT.
Strengths: 1. Empirical results on multiple datasets (CIFAR-10, CIFAR-100, ImageNet, CIFAR10-DVS) support the theoretical claims and ensure accuracy while reducing memory and time costs.
2. The paper is well-written, clearly explaining the proposed method, theoretical underpinnings, and experimental validation.
Weaknesses: 1. In lines 53-55, this paper mentions that the proposed method reduces training time, but there is no relevant experimental proof in the experiments section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In lines 223-234, regarding the statement that 'cosine similarity close to 1 is interpreted as a high degree of consistency in the direction of the variable': does this take into account the effects of data distribution and noise, which may also arise in the case of uneven data distribution? Can additional experiments or theory be added to rule out the effect of data distribution and noise on the hypothesis presented in lines 223-234?
2. The approach proposed in the paper seems to be very similar to the one described in reference [1]. Although the general direction of the two is different, the core idea seems to be the same. Could you please explain the difference between your approach and the one outlined in reference [1]?
3. In Section 5.3, in the experiments evaluating the effect of time step on accuracy and training, only one dataset, CIFAR-10, was used. Could the experiment be supplemented with experiments using other datasets to demonstrate the scalability of the proposed method for larger values of T?
4. In the caption of Fig. 3, is the placeholder '#' missing from T{timesteps}?
Reference:
[1] Bu, Tong, et al. "Rate gradient approximation attack threats deep spiking neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors fully explain the limitations and potential social implications of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **W1: In lines 53-55, this paper mentions that the proposed method reduces training time, but there is no relevant experimental proof in the experiments section.**
Thank you for your suggestion. We have added more experiments on training costs to strengthen the experimental proof of training efficiency, as shown in the global response. These results will be included in the paper. Thank you for pointing this out.
#### **Q1: In lines 223-234, the reference to 'when cosine similarity close to 1 is interpreted as a high degree of consistency in the direction of the variable', does it take into account the effects of data distribution and noise, which may also occur in the case of uneven data distribution. Can additional experiments or theories be added to rule out the effect of data distribution and noise on the hypothesis presented in lines 223-234?**
Thank you for your suggestions.
First, we have added empirical experiments on DVS-CIFAR10 as a validation in the case of uneven data distribution (please see the figure in the global response). The observations confirm that, even on data with a degree of temporal information, the empirical validation of the assumptions remains consistent with expectations.
Furthermore, we would like to clarify Theorem 2 mentioned in the paper. We derived a general approximation error in Theorem 2, which provides a proof concerning the error bounds. This can demonstrate that the approximation errors caused by non-ideality of assumptions are well bounded. We believe that both the effects of uneven data distribution and noise can be generally considered as factors causing approximation errors in our assumptions, and Theorem 2 has already accounted for these scenarios.
#### **Q2: The approach proposed in the paper seems to be very similar to the one described in reference [1]. Although the general direction of the two is different, the core idea seems to be the same. Could you please explain the difference between your approach and the one outlined in reference [1]?**
Here are the main differences between our work and reference [1]:
1. Our methods for gradient computation differ significantly. The distinction can be summarized as gradient computation via a stochastic dynamic process vs. a deterministic closed form. [1] derives gradient relationships based on the assumption of a deterministic relationship between inputs and outputs, in a closed-form manner, similar to the methodologies used for ANN-to-SNN conversion [2]. Conversely, our method adopts a stochastic dynamic process for gradient computation, which follows the ideas of decoupling BPTT-based direct-learning methods [3,4].
2. The application domains of the two methods differ, leading to distinct objectives. The method in [1] is used primarily in adversarial attack scenarios, focusing on the input data of the entire model. In such cases, the requirement for strict adherence to the exact gradient computation process is more relaxed, and deviations are somewhat permissible. In contrast, our method is designed for training deep SNNs, necessitating more precise gradient computations for all weights. Our approach supports stricter gradients within rate-based backpropagation, and experiments have demonstrated that the proposed method can achieve gradients akin to BPTT.
We will include a topic in the appendix to provide a more detailed description of how our work differs from related work. Thank you for your comment.
#### **Q3: In Section 5.3, in the experiments evaluating the effect of time step on accuracy and training, only one dataset,CIFAR-10, was used. Could the experiment be supplemented with experiments using other datasets to demonstrate the scalability of the proposed method for larger values of T?**
Thank you for your suggestion. Our additional experiments evaluated the effects of the timestep on accuracy, memory, and time costs across the CIFAR-100 and ImageNet datasets using various network architectures (see the global PDF). These results are consistent with the discussions in our manuscript and demonstrate the scalability of the proposed method. We will include the additional results in the updated version.
#### **Q4: In the caption of Fig. 3, is the placeholder '#' missing from T{timesteps}?**
Thank you for pointing that out; it will be corrected.
## Reference
[1] Bu, Tong, et al. "Rate gradient approximation attack threats deep spiking neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Bu, Tong, et al. "Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks." International Conference on Learning Representations.
[3] Bellec, Guillaume, et al. "A solution to the learning dilemma for recurrent networks of spiking neurons." Nature communications 11.1 (2020): 3625.
[4] Xiao, Mingqing, et al. "Online training through time for spiking neural networks." Advances in neural information processing systems 35 (2022): 20717-20730. | Summary: This work falls into the category of efficient SNN training methods. This paper proposes a reduced computational graph to reduce the memory and computational demands of SNNs training. This work has the potential to train SNNs on resource-limited devices. The paper evaluates the methods on CIFAR-10, CIFAR-100, ImageNet, and other datasets.
Strengths: This paper addresses the issue of high time and memory costs in training spiking neural networks.
This paper provides solid theoretical insights into the error bound and its relation to SNN BPTT training.
The results of this work are comparable to the performance of the BPTT counterpart.
Weaknesses: Not a clear comparison of the differences with existing e-prop methods in terms of methodology.
No generalization results on hyperparameters (e.g., $\lambda$) are presented in this work. I raise this question because most work on SNNs uses large values of $\lambda$, but this work used 0.2 as $\lambda$.
Technical Quality: 3
Clarity: 2
Questions for Authors: Why did the authors approximate the spiking rate directly with the mean over timesteps, instead of using a running mean with a decay parameter $\lambda$, which would more closely approximate the rate in the leaky integration mode?
In Line 151, page 5, what does $d$ represent in $\frac{\partial I}{\partial c} = Id$?
Please elaborate further on the differences between rateM and rateS. The authors state that 'rateM represents the multi-step training mode where T loops are embedded within layers, while rateS refers to the single-step training mode with T loops outside the layer.'
Regarding robustness to $\lambda$ In the paper, the neuronal parameter $\lambda$ is set to 0.2. Can you provide experiments with other values of $\lambda$, such as 0.4, 0.6, 0.8, and 1.0?
I believe that the training cost in Fig. 4 should encompass not only the backward process but also the forward iteration process (which also contributes to the cost).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **W1: Not a clear comparison of the differences with existing e-prop methods in terms of methodology.**
Thank you for your comments. In our paper, we compare our method with online-learning approaches akin to e-prop [1], such as OTTT [2], SLTT [3], and OS [4], with results shown in Table 1. Descriptions of online methods can be found in Lines 29-32, 81-87, 161-166, and 251-255.
The core differences in implementation with these methods, as highlighted in Fig. 1b and 1c, are that:
1. Unlike online methods that require spatial backpropagation at every timestep, our proposed method conducts this process only once at the final timestep, effectively compressing the multiple instances of spatial backpropagation into a single occurrence.
2. Methodologically, the proposed rate-based method differs from e-prop in that we do not focus on specific temporal dynamics among all time slots but rather aim to efficiently train networks by capturing dominant rate-based features.
Thank you for pointing this out. We will ensure these details are more clearly articulated in the paper.
#### **W2: No generalization results on hyperparameters (e.g. $\lambda$) are presented. Most work on SNNs uses large values of $\lambda$, but this work used 0.2.**
Thank you for your comment. We configured our training environment and hyperparameters based on setups from existing works on direct training-based deep SNNs [5,6,7,8,2,3,9,4,10,11]. In these works, the decay parameter $\lambda$ is set at 0.5 in [2,4,11], 0.25 in [8], 0.2 in [7,9,10], and 0.09 in [3]. It is observed that direct training methods often employ $\lambda$ of 0.5 or lower. We chose 0.2 to align with training settings used in [7,10].
#### **Q1: Why did the authors approximate the spiking rate directly with the mean over timesteps, instead of using a running mean with a decay parameter, which would more closely approximate rate in the leaky integration mode?**
Thank you for your comments.
Firstly, the choice of averaging over timesteps for measuring the spiking rate aligns better with our method:
1. Using a running mean with a decay parameter [12,3], which measures a scaled running average of the rate, typically retains temporal dimensions where weights for spikes closer in time are higher. This approach often requires constructing rate-based backpropagation separately for different time slots.
2. In contrast, our approach involves constructing a single rate-based chain rule for spatial-only backward propagation at the final timestep. Therefore, approximating the mean rate over the entire neural dynamics is more suitable for our goals.
Secondly, regarding how closely each estimator approximates the true rate: considering the rate estimation at time $T$, our method gives $r_1=\frac{\sum_t s_t}{T}$, while a running mean with a decay parameter would give $r_2=\frac{\sum_t \lambda^{(T-t)}s_t}{\sum_t \lambda^{(T-t)}}$. If the spike sequence follows frequency coding with a spike rate $r$ (assuming $s_t \sim \mathrm{Bernoulli}(r)$), then $\mathbb{E}[r_1]=\frac{\sum_t \mathbb{E}[s_t]}{T}=r$ and $\mathbb{E}[r_2]=\frac{\sum_t \lambda^{(T-t)}\mathbb{E}[s_t]}{\sum_t \lambda^{(T-t)}}=r$. This shows that both methods provide unbiased estimates of the rate.
In summary, our choice of averaging over timesteps for rate approximation is driven by its compatibility with our single-step spatial-only backpropagation approach, simplifying the implementation and aligning with our method's objectives effectively.
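For illustration, the unbiasedness of both estimators can be checked with a quick simulation (a sketch assuming the Bernoulli rate-coding model above; `numpy` and all constants here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
r, T, lam, n_trials = 0.3, 8, 0.2, 200_000

# Bernoulli(r) spike trains: s_t ~ Bernoulli(r), shape (n_trials, T).
s = (rng.random((n_trials, T)) < r).astype(float)

# r1: plain average over timesteps, r1 = sum_t s_t / T.
r1 = s.mean(axis=1)

# r2: decay-weighted running mean, r2 = sum_t lam^(T-t) s_t / sum_t lam^(T-t).
w = lam ** np.arange(T - 1, -1, -1)   # weights lam^(T-1), ..., lam^0
r2 = (s * w).sum(axis=1) / w.sum()

# Both sample means should be close to the true rate r = 0.3.
print(r1.mean(), r2.mean())
```

Note that while both are unbiased, the decay-weighted estimator has higher variance because it concentrates its weight on the most recent spikes.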
#### **Q2: In Line 151, page 5, what does $d$ represent?**
We apologize for the ambiguity; $Id$ represents the identity matrix. We will add clarification.
#### **Q3: Please elaborate further on the differences between rateM and rateS.**
$rate_M$ and $rate_S$ cater to multi-step and single-step training modes, respectively, each adapting batch normalization differently, as detailed in Appendix B.2 and B.3. We support both modes to demonstrate the method's flexibility and to facilitate fair comparisons with related methods.
#### **Q4: Can you provide experiments with other values of $\lambda$, such as 0.4, 0.6, 0.8, and 1.0?**
Additional experiments on CIFAR-10 with the $T=6$ setting using ResNet-18:
|$\lambda$|0.1|0.2|0.4|0.5|0.6|0.8|1.0|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|$rate_M$|95.45|95.8|95.65|95.34|95.11|93.82|92.79|
|BPTT|95.51|95.69|95.48|95.19|95.31|94.65|94.7|
The results indicate that when $\lambda$ is within the range $[0.1, 0.5]$, the performance of our rate-based method aligns closely with that of BPTT. However, as $\lambda$ increases into $[0.8, 1.0]$, the performance of both BPTT and the rate-based method declines, with a more pronounced decrease for the rate-based method. We believe the primary reason for this phenomenon is the non-ideality of the independence assumption used in Theorem 1, $\mathbb{E}[\delta^{(s^l)}_t \kappa^l_t ] = \mathbb{E}[\delta^{(s^l)}_t]\mathbb{E}[\kappa^l_t]$, which leads to larger approximation errors at higher $\lambda$ values; as discussed in Theorem 2, $\epsilon$ is amplified, thus magnifying the gradient difference with BPTT. We will address this issue in the appendix, discussing the implications and acknowledging it as a limitation of our study.
#### **Q5: I believe that the training cost in Fig. 4 should encompass not only the backward process but also the forward iteration process.**
Thank you for your feedback. We have supplemented our work with more detailed experiments on training costs, including the extra costs incurred during the forward iteration process, as shown in the global response. Firstly, the total time cost, combined with the computation of eligibility traces in the forward pass, still shows a clear advantage over BPTT when $T\geq2$. Additionally, it is observed that the computation of eligibility traces is less intensive than the backward process when $T\leq 8$.
We will include the supplementary experimental results in the paper and acknowledge that the computation of eligibility traces during the forward process requires further optimization.
---
Rebuttal 2:
Comment: ## Reference
[1] Bellec, Guillaume, et al. "A solution to the learning dilemma for recurrent networks of spiking neurons." Nature communications 11.1 (2020): 3625.
[2] Xiao, Mingqing, et al. "Online training through time for spiking neural networks." Advances in neural information processing systems 35 (2022): 20717-20730.
[3] Meng, Qingyan, et al. "Towards memory- and time-efficient backpropagation for training spiking neural networks." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[4] Zhu, Yaoyu, et al. "Online stabilization of spiking neural networks." The Twelfth International Conference on Learning Representations. 2024.
[5] Fang, Wei, et al. "Spikingjelly: An open-source machine learning infrastructure platform for spike-based intelligence." Science Advances 9.40 (2023): eadi1480.
[6] Guo, Yufei, Xuhui Huang, and Zhe Ma. "Direct learning-based deep spiking neural networks: a review." Frontiers in Neuroscience 17 (2023): 1209795.
[7] Wu, Yujie, et al. "Spatio-temporal backpropagation for training high-performance spiking neural networks." Frontiers in neuroscience 12 (2018): 331.
[8] Zheng, Hanle, et al. "Going deeper with directly-trained larger spiking neural networks." Proceedings of the AAAI conference on artificial intelligence. Vol. 35. No. 12. 2021.
[9] Guo, Yufei, et al. "Recdis-snn: Rectifying membrane potential distribution for directly training spiking neural networks." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
[10] Wang, Ziming, et al. "Adaptive smoothing gradient learning for spiking neural networks." International Conference on Machine Learning. PMLR, 2023.
[11] Deng, Shikuang, et al. "Surrogate module learning: Reduce the gradient error accumulation in training spiking neural networks." International Conference on Machine Learning. PMLR, 2023.
[12] Meng, Qingyan, et al. "Training high-performance low-latency spiking neural networks by differentiation on spike representation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. | Summary: This paper proposes a rate-based SNN training method, which can effectively reduce memory and time cost during training. They proved the efficiency of the rate-based back-propagation training and demonstrate that the rate-based training outperforms other back-propagation methods.
Strengths: The rate-based method achieves better performance and uses less computing resource compared with BPTT, which is impressive.
This paper is well-written and well-organized.
Weaknesses: The novelty is weak. There are two previous works that share a similar idea with this paper, since they all use rate-based backpropagation [1,2]. The authors need to briefly explain the differences between these papers.
The rate-based backpropagation is not suitable for sequential tasks.
[1] Li, Yuhang, et al. "Differentiable spike: Rethinking gradient-descent for training spiking neural networks." Advances in Neural Information Processing Systems (2021).
[2] Bu, Tong, et al. "Rate gradient approximation attack threats deep spiking neural networks." Computer Vision and Pattern Recognition (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors introduce the rate-coding approximation forward propagation in Section 4.1. Is this forward propagation method also used during inference?
What is the performance of $rate_S$ on the ImageNet dataset?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See in weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: #### **W1: The novelty is weak. There are two previous works that share a similar idea with this paper, since they all use rate-based backpropagation [1,2]. The authors need to briefly explain the differences between these papers.**
Thank you for your comments, which have prompted further clarification of our work's novelty and contributions.
Firstly, regarding the differentiation from references [1] and [2]:
1. The work in [1] focuses primarily on the calibration and modification of STE [3] (or "surrogate-gradients" commonly used in SNNs). In contrast, our approach significantly simplifies the chain rule application by compressing backpropagation within the spatial dimensions to enhance training efficiency.
2. Meanwhile, [2] derives gradient relationships based on a deterministic relationship assumption between inputs and outputs in a closed-form manner, similar to the methodologies used for ANN-to-SNN conversions [4]. Conversely, our method adopts a stochastic dynamic process for gradient computation, which follows the ideas of decoupling BPTT-based direct-learning methods. Additionally, while [2] targets the adversarial attack domain where precise gradient calculations are not stringent, our proposed method as a training method requires more accurate gradient computations, demonstrated experimentally to achieve gradients akin to those of BPTT.
Furthermore, other works bind ANN and SNN training together in what's known as tandem training [5]. However, these approaches cannot connect well to established BPTT, limiting their accuracy and competitiveness with BPTT benchmarks. There are also efforts considering rate-based gradient graphs [6], aimed at optimizing training costs, but these are similarly limited by closed-form derivations like [4], unable to bypass the coupling of batch normalization layers for purely spatial backpropagation.
We appreciate your suggestion for greater clarity, and we will include a detailed discussion in the appendix to better delineate how our method differs from other rate-based approaches. Thank you for prompting this improvement.
To further clarify our novelty and contribution, we note that we do not claim to be the first to propose rate-based backpropagation. Rather, our significant contribution lies in being the first to effectively implement rate-based backpropagation as a training-efficiency strategy, demonstrating its potential to perform comparably with BPTT. It is not intended as a perfect substitute for BPTT but as an effective complement and enhancement, forming a promising combined approach. We believe this technical solution, which optimizes the training process of deep SNNs, holds potential developmental benefits for the community.
#### **W2: The rate-based backpropagation is not suitable for sequential tasks.**
We acknowledge that rate-based approaches [2,4,5,6] including our proposed method, indeed face challenges in handling sequential tasks, which is a recognized limitation of such strategies. This work does not aim to address the difficulties of applying rate-based methods to sequential tasks. Instead, our focus is primarily on tasks that rely mainly on rate-coding, and the proposed method is specifically designed to capture the rate-based feature representation of deep SNNs to enhance training efficiency. Thank you for highlighting this point; we will include it in the limitations section.
#### __Q1: The authors introduce the rate-coding approximation forward propagation in Section 4.1. Is this forward propagation method also used during inference?__
No, the rate-coding approximation is purely used during the training phase and does not participate in the inference process. The inference strictly adheres to the conventional protocols of deep SNNs to ensure fair comparisons.
#### __Q2: What is the performance of $rate_S$ on ImageNet dataset?__
Additional experiments have been conducted to evaluate the performance of $rate_S$ on the ImageNet:
|Model|Timesteps|Top-1 Acc (%)|
|:-:|:-:|:-:|
|SEW-ResNet-34|4|65.656 |
|PreAct-ResNet-34|4|69.578 |
The results confirm that the $rate_S$ method is effective on ImageNet. We will integrate these results into Table 1.
## Reference:
[1] Li, Yuhang, et al. "Differentiable spike: Rethinking gradient-descent for training spiking neural networks." Advances in Neural Information Processing Systems 34 (2021): 23426-23439.
[2] Bu, Tong, et al. "Rate gradient approximation attack threats deep spiking neural networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[3] Bengio, Yoshua, Nicholas Léonard, and Aaron Courville. "Estimating or propagating gradients through stochastic neurons for conditional computation." arXiv preprint arXiv:1308.3432 (2013).
[4] Bu, Tong, et al. "Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks." International Conference on Learning Representations.
[5] Wu, Jibin, et al. "A tandem learning rule for effective training and rapid inference of deep spiking neural networks." IEEE Transactions on Neural Networks and Learning Systems 34.1 (2021): 446-460.
[6] Meng, Qingyan, et al. "Training high-performance low-latency spiking neural networks by differentiation on spike representation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.
---
Rebuttal Comment 1.1:
Title: Response for the Rebuttal
Comment: In the rebuttal, the authors distinguished between this rate-based BP and the previous works[1,2], which I found convincing. Since they addressed my major concerns, I will raise my score. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for your constructive comments and suggestions. We have addressed all the weaknesses and questions raised by the reviewers; detailed responses can be found in the corresponding sections of the rebuttal for each reviewer. In response to the reviewers' feedback, we have conducted additional experiments, including empirical validation on DVS-CIFAR10 and further experimental validation of performance and training costs; both sets of results are included in the global PDF.
Pdf: /pdf/c1bd6f5b10fa678424ef5fac9f329984717d2daf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient LLM Scheduling by Learning to Rank | Accept (poster) | Summary: This paper proposes a learning-based rank predictor for scheduling LLM inference to reduce Head-of-Line (HoL) blocking issues, which significantly outperforms state-of-the-art LLM serving systems.
Strengths: 1. This paper addresses an important question in LLM serving.
2. This paper is easy to follow with a good presentation.
3. The evaluation results are comprehensive and solid.
Weaknesses: 1. One potential issue with preemptive scheduling for LLM inference is the accumulated unused KV cache. How do you handle them when the GPU reaches the maximum memory limit?
2. How much does the ranking model (OPT) size affect the prediction and throughput performance? For example, what if I use a smaller auxiliary model (OPT-125M) for a larger LLM (LLaMA-70B)?
3. How much is the performance gap between the ranking-based method and Oracle? It would be better if the authors could add such results to provide a performance upper bound.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive and detailed suggestions. We appreciate your generous comments!
**Q1: One potential issue with preemptive scheduling for LLM inference is the accumulated unused KV cache. How do you handle them when the GPU reaches the maximum memory limit?**
To address the issue of an increasing size of KV Cache, we will swap the requests to host memory (as vLLM [1] does) if the requests are not in the current step’s running batch (i.e., B in Algorithm 1) and cannot fit in the GPU memory. However, our experiments handling a burst of 2000 requests with a single llama-3-8b model on a 40GB A100 GPU and with a llama-3-70b model on 8 A100 GPUs resulted in almost no swapping using both FCFS and our scheduling, with vLLM’s default setting of a max batch size of 256 requests running simultaneously. This indicates that our method will not incur more preemptions than FCFS. This is because both our method and FCFS have a fixed execution order for all arriving requests at the beginning. However, our baseline MLFQ will adjust the execution order and call thousands of swaps with the same workload.
**Q2: How much does the ranking model (OPT) size affect the prediction and throughput performance? For example, what if I use a smaller auxiliary model (OPT-125M) for a larger LLM (LLaMA-70B)?**
Our results show that the model size has a minor effect on the prediction ability, as indicated in the following table:
| Kendall’s Tau (↑) | 125m-OPT | 350m-OPT |
| ---- | ---- | ---- |
| ShareGPT | 0.55 | 0.54 |
| Lmsys | 0.64 | 0.62 |
Our choice of OPT-350m is driven mainly by deployment considerations. The OPT-350m model, with 16 attention heads, is easy to deploy with 8-way tensor parallelism (which the llama-70b model requires), while OPT-125m, with 12 attention heads, cannot be partitioned across 8 GPUs, as discussed in Section 5.1. In that case, we would deploy the OPT-125m predictor on only 1 GPU, requiring the other 7 GPUs to wait while the predictor executes, which wastes resources and may degrade performance.
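For readers less familiar with the metric, the Kendall’s Tau reported in these tables measures pairwise ranking agreement between predicted scores and true generation lengths. Below is a minimal tau-a sketch in pure Python (illustrative only; a production system would more likely use `scipy.stats.kendalltau`, which also handles ties):

```python
from itertools import combinations

def kendall_tau(pred_scores, true_lengths):
    """Tau-a: (concordant - discordant) pairs over all pairs (no tie correction)."""
    def sign(x):
        return (x > 0) - (x < 0)
    pairs = list(combinations(range(len(pred_scores)), 2))
    total = sum(sign(pred_scores[i] - pred_scores[j]) *
                sign(true_lengths[i] - true_lengths[j])
                for i, j in pairs)
    return total / len(pairs)

# A perfect ranking gives tau = 1, a fully reversed one gives -1.
print(kendall_tau([0.1, 0.4, 0.7, 0.9], [12, 55, 130, 400]))  # 1.0
```

A tau of 0.55 thus means a randomly chosen pair of requests is ordered correctly far more often than not, which is what makes approximate SJF scheduling effective.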
**Q3: How much is the performance gap between the ranking-based method and Oracle? It would be better if the authors could add such results to provide a performance upper bound.**
The performance gap between the ranking-based method and the Oracle depends on the evaluation dataset. On some datasets, our proposed method can be as good as the Oracle. Due to noise and randomness in the sampling process, we define the Oracle as using sampling results with one seed to guide the schedule of another sampling, which is the best we can achieve knowing one sampling result.
For example, tested on the Alpaca dataset and llama3-7b model, our proposed method can be very close to the Oracle regarding Kendall’s Tau and end-to-end latency of a burst of 2K requests. We tested the end-to-end performance on a single A100 80G GPU.
| Method | Tau | Latency (s/token) |
| ---- | ---- | ---- |
| Ours | 0.73 | 0.28 |
| Oracle | 0.72 | 0.24 |
| FCFS | / | 1.36 |
On datasets like lmsys-chat-1m and ShareGPT, there is still a small gap between the proposed ranking-based method and the Oracle. We present in Table 3 the comparison between the ranking-based method (line Ranking (Ours)) and the Oracle (line Optimal Prediction).
[1] Kwon, Woosuk, et al. "Efficient memory management for large language model serving with pagedattention." Proceedings of the 29th Symposium on Operating Systems Principles. 2023.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed responses.
My only suggestion would be to add Oracle as an additional baseline in the main evaluation if the paper is accepted.
---
Rebuttal 2:
Comment: Dear Reviewer JCjf,
We sincerely thank you for the prompt response. We greatly appreciate your constructive and insightful suggestions. We will keep Oracle in the main evaluation if the paper is accepted.
Best Regards,
Paper 5119 Authors | Summary: This paper proposes an approach for optimizing scheduling in LLM serving by learning a generated token length ranking model. The authors demonstrate that understanding the relative order of generation lengths can effectively guide the scheduling process, specifically through the use of SJF/ SRTF scheduling strategies.
Strengths: 1. The paper is well-written, making the methodology and results clear and easy to understand.
2. The experiments are well-designed and convincingly demonstrate the benefits of the proposed approach.
3. The proposed method has shown practical improvements when integrated with current serving techniques.
Weaknesses: 1. While the approach is effective, it builds upon existing work that has already identified the benefits of SJF/SRTF scheduling for LLMs[1][2]. The novelty is somewhat limited to the application of ranking loss instead of classification loss.
2. If we directly predict the token length, it could potentially offer advantages such as improved memory allocation and cache strategy adjustments, which are also crucial for optimizing LLM serving. In contrast, using relative order may not provide these benefits.
3. The paper lacks a thorough discussion of some related work, such as [1][2]
[1] Efficient Interactive LLM Serving with Proxy Model-based Sequence Length Prediction
[2] Power-aware Deep Learning Model Serving with µ-Serve
Technical Quality: 3
Clarity: 3
Questions for Authors: This paper only considers the generated length, which may affect the execution time for each query. However, prompt length also influences execution time. Wouldn't it be more reasonable to also take prompt length into consideration?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the very insightful and helpful comments! We would like to address your questions in the below response.
**Q1: While the approach is effective, it builds upon existing work that has already identified the benefits of SJF/SRTF scheduling for LLMs[1][2]. The novelty is somewhat limited to the application of ranking loss instead of classification loss.**
We argue that our work presents the following novelties:
First, recognizing the importance of relative length ranking instead of accurate token length after the introduction of paged memory allocation (proposed by [3]) is novel.
Second, employing learning-to-rank to optimize the prediction is novel and outperforms previous methods.
Third, the consideration of the fairness metric in LLM serving and starvation prevention is novel.
**Q2: If we directly predict the token length, it could potentially offer advantages such as improved memory allocation and cache strategy adjustments, which are also crucial for optimizing LLM serving. In contrast, using relative order may not provide these benefits.**
Predicting the number of output tokens may aid in more precise memory allocation, but we observe that in the presence of paged memory allocation (proposed by vLLM [3] and now adopted by virtually all current LLM serving systems), the benefit diminishes, as paged attention reduces memory waste to as low as 5%.
Beyond memory allocation, knowing the exact number of output tokens in advance may reduce the likelihood of preemptions caused by KV-cache overrun. However, we found that this preemption cost can largely be mitigated (e.g., see FastServe [4]). Therefore, we pursue relative ranking prediction, which is a simpler task but suffices for approximating optimal scheduling.
**Q3: The paper lacks a thorough discussion of some related work, such as [1][2]**
We will include discussions with [1][2] in the related work section in a revised version. [1][2] are both concurrent works that use predictors to approximate SJF/SRTF. Both [1][2] propose a regression-based method for length prediction, fine-tuning a Bert model on the Lmsys-Chat-1M dataset with a regression L1 loss to predict the exact generation length. They tested models ranging from 300M to 3b and applied various batching policies (i.e., no batching, dynamic batching, continuous batching). The proposed method significantly improves latency and throughput under these settings. Additionally, it supports multi-round LLM conversations. Unlike this method, our proposed method is built on vLLM with paged attention and uses ranking loss to optimize the predictor model. We designed a preemptive scheduling method with starvation prevention to optimize the end-to-end performance of real-world LLM serving systems.
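To make the contrast with regression-based training concrete, the following is a minimal sketch of a ListMLE-style listwise loss (the Plackett-Luce negative log-likelihood commonly used in learning-to-rank, and the loss family referenced in the reviews). The target ordering (higher scores for shorter, higher-priority requests) and all names are illustrative assumptions, not the paper's exact implementation:

```python
import math

def listmle_loss(scores, true_lengths):
    """Plackett-Luce NLL of the ground-truth ordering under predicted scores.

    Items are sorted by ascending true generation length, assuming the
    predictor should assign higher scores to shorter (higher-priority) jobs.
    """
    order = sorted(range(len(scores)), key=lambda i: true_lengths[i])
    s = [scores[i] for i in order]
    loss = 0.0
    for i in range(len(s)):
        m = max(s[i:])                                  # log-sum-exp trick
        lse = m + math.log(sum(math.exp(x - m) for x in s[i:]))
        loss += lse - s[i]
    return loss

# Scores consistent with the true ordering yield a lower loss.
good = listmle_loss([3.0, 2.0, 1.0], [10, 50, 200])
bad = listmle_loss([1.0, 2.0, 3.0], [10, 50, 200])
assert good < bad
```

Unlike an L1 regression target, this objective is invariant to the absolute scale of lengths and only penalizes ordering mistakes, which matches the scheduling use case.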
**Q4: This paper only considers the generated length, which may affect the execution time for each query. However, prompt length also influences execution time. Wouldn't it be more reasonable to also take prompt length into consideration?**
Considering prompt length is indeed a nice suggestion, which we have previously considered, but we found that in practice, focusing only on generated length is simple yet sufficiently effective.
First, we observe from Lmsys-Chat-1M and ShareGPT – two traces that represent real-world scenarios – that prompt length is not crucial to generation time. Specifically, the prefill time accounts for only 5% of the entire generation time on Lmsys-Chat-1M and 8% on ShareGPT, indicating that it has a minor impact on overall latency. Note that the workloads we tested already contain long prompts; for example, 1% of all prompts in the ShareGPT dataset exceed 900 tokens.
Second, although this paper does not particularly focus on long context (e.g., prompt length > 32k tokens), we argue it is relatively straightforward to handle long prompts. Since prompt lengths are always known a priori – it is easy to accurately approximate the latency of the prefill phase via profiling. We can also map the relative ranking of generation length into a length estimation according to the dataset distribution. Then, simply adding prefill time estimation to the current framework together provides an end-to-end generation time approximation for scheduling.
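The combination described above could be sketched as follows; the profiling constants, quantile mapping, and function names are purely hypothetical illustrations, not part of the paper:

```python
def estimate_total_time(prompt_len, rank_score, length_quantiles,
                        prefill_ms_per_token=0.05, decode_ms_per_token=30.0):
    """Hypothetical end-to-end time estimate for scheduling.

    Combines a profiled prefill cost (prompt length is known a priori) with a
    decode-length estimate obtained by mapping the predictor's relative rank
    onto the workload's empirical distribution of generation lengths.
    """
    # Prefill latency scales with prompt length and is easy to profile.
    prefill_ms = prompt_len * prefill_ms_per_token
    # Map rank_score in [0, 1] onto the sorted empirical lengths.
    idx = min(int(rank_score * len(length_quantiles)), len(length_quantiles) - 1)
    decode_ms = length_quantiles[idx] * decode_ms_per_token
    return prefill_ms + decode_ms
```

Here `length_quantiles` would be a sorted list of observed generation lengths from the target trace; a higher rank score then maps to a longer estimated decode phase.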
[1] Qiu, Haoran, et al. "Efficient interactive LLM serving with proxy model-based sequence length prediction." arXiv preprint arXiv:2404.08509 (2024).
[2] Qiu, Haoran, et al. "Power-aware Deep Learning Model Serving with μ-Serve." 2024 USENIX Annual Technical Conference (USENIX ATC 24). 2024.
[3] Kwon, Woosuk, et al. "Efficient memory management for large language model serving with pagedattention." Proceedings of the 29th Symposium on Operating Systems Principles. 2023.
[4] Wu, Bingyang, et al. "Fast distributed inference serving for large language models." arXiv preprint arXiv:2305.05920 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for your response; my concern has been resolved, and I will raise my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer GSuM,
We sincerely thank you for raising your score. We highly appreciate your insightful and helpful comments. We will incorporate the new discussions into our final revised manuscript.
Best Regards,
Paper 5119 Authors | Summary: This paper reveals the Head-of-Line (HOL) blocking problems caused by the first-come-first-serve (FCFS) scheduling strategy in LLM services. To alleviate these problems, the authors train an OPT model to generate scores for evaluating the relative text length of given prompts. Based on these scores, the authors develop a novel scheduler for LLM inference and serving. Experimental results demonstrate the effectiveness of the proposed method, significantly outperforming the baseline method.
Strengths: 1. The proposed method is efficient and effective. Training a small language model (i.e., a 125M OPT model) is cheap, and the resulting latency gains are substantial.
2. This paper is novel. Unlike traditional methods that predict the real generation length, predicting the relative ordering between request lengths is sufficient for ranking.
Weaknesses: 1. Since the request queue Q is re-ranked after each batch of data is scored, the ranking scheduler may be sensitive to the batch size.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you give a more detailed analysis of the relationship between ListMLE loss and Kendall’s Tau coefficient?
2. Are all the last hidden states of the OPT model used to map to a score, or are only specific hidden states of a token used? Using a decoder-only model to extract features of a text seems unusual.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and helpful feedback! We address all your questions and concerns below.
**Q1: Since the request queue Q is re-ranked after each batch of data is scored, the ranking scheduler may be sensitive to the batch size.**
**A1:** The ranking scheduler is not sensitive to the batch size. We evaluated its sensitivity as follows: we use the predictor to compute Kendall's Tau with different batch sizes and obtain the mean and variance across the entire test set.
| Batch size | Kendall's Tau mean | Kendall's Tau variance |
| ---- | ---- | ---- |
| 8 | 0.619 | 0.04 |
| 16 | 0.625 | 0.02 |
| 32 | 0.624 | 0.008 |
| 64 | 0.625 | 0.0007 |
| 128 | 0.619 | 0.001 |
Across different batch sizes, Kendall’s Tau varies only within a small range. Additionally, our method addresses severe head-of-line (HOL) problems when there are numerous requests, in which case the batch size is often sufficiently large for the predictor to be effective and robust.
**Q2: Could you give a more detailed analysis of the relationship between ListMLE loss and Kendall’s Tau coefficient?**
**A2:** ListMLE loss and Kendall’s Tau coefficient are highly negatively correlated. We demonstrate this relationship by recording Tau and loss during the training process. The Pearson correlation coefficient is -0.9:
| Step | Tau | Loss |
| ---- | ---- | ---- |
| 20 | 0.44 | 77.79 |
| 40 | 0.51 | 75.73 |
| 60 | 0.53 | 72.61 |
| 80 | 0.54 | 70.14 |
| 100 | 0.55 | 70.59 |
| 120 | 0.53 | 70.09 |
| 140 | 0.56 | 67.01 |
| 160 | 0.59 | 69.94 |
| 180 | 0.59 | 70.88 |
| 200 | 0.57 | 68.84 |
| 220 | 0.59 | 68.67 |
| 240 | 0.61 | 66.90 |
| 260 | 0.58 | 67.23 |
| 280 | 0.56 | 68.71 |
**Q3: Are all the last hidden states of the OPT model used to map to a score, or are only specific hidden states of a token used? Using a decoder-only model to extract features of a text seems unusual.**
**A3:** We use the last token’s hidden states of the OPT model for two reasons.
First, the OPT model provides performance similar to the encoder model DistilBert, as shown in the following evaluation:
| Kendall’s Tau (↑) | OPT | DistilBert |
| ---- | ---- | ---- |
| ShareGPT | 0.54 | 0.52 |
| Lmsys | 0.62 | 0.62 |
Second, the hidden states of the last token of a decoder model have been previously shown to be predictive of text features. For example, Representation Engineering [1] demonstrates that we can extract attributes such as honesty, emotions, fairness, and more from the hidden states of the last token of the decoder model.
[1] Zou, Andy, et al. "Representation engineering: A top-down approach to ai transparency." arXiv preprint arXiv:2310.01405 (2023).
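To make the pooling choice concrete, here is a minimal illustrative sketch (plain Python, hypothetical names and toy numbers, not the paper's implementation): in a causal decoder only the final token attends to the entire prompt, so its hidden state can serve as the sequence representation fed to a linear scoring head.

```python
def last_token_score(hidden_states, w, b=0.0):
    """Map a decoder's hidden states to a scalar ranking score.

    hidden_states: per-token vectors, shape [seq_len][hidden_dim].
    w: weights of a hypothetical linear head, shape [hidden_dim].
    In a causal decoder, only the last token attends to the full
    prompt, so its hidden state is used as the sequence summary.
    """
    h_last = hidden_states[-1]
    return sum(h * wi for h, wi in zip(h_last, w)) + b

# Toy usage: 3 tokens, hidden_dim = 2
states = [[0.1, 0.2], [0.3, -0.1], [0.5, 0.4]]
score = last_token_score(states, w=[1.0, 2.0], b=0.5)  # 0.5*1 + 0.4*2 + 0.5
```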
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed explanation and additional experiments, which address the majority of my concerns.
Furthermore, I believe it would be great if you provided a theoretical analysis of the relationship between the ListMLE loss and Kendall’s Tau.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer GoTD,
Thank you for your constructive and insightful suggestions. The ListMLE loss defines a parameterized exponential probability distribution over all scores (as given by the model) and formulates the loss function as the negative log likelihood of the ground truth ranking $y$. Meanwhile, Kendall's Tau measures the ordinal association between the scores (as given by the model) and the ground truth ranking $y$. It is challenging to accurately describe the relationship between the likelihood and ordinal association. However, we provide an analysis demonstrating that minimizing the ListMLE loss can help improve Kendall’s Tau.
To simplify the problem, we assume there are no ties between any two items, meaning each pair should be either concordant or discordant. In this case, Kendall's Tau is defined as $\tau=\frac{N_c-N_d}{n(n-1)/2}$, where $N_c$ and $N_d$ are the number of concordant and discordant pairs in two rankings, and $n$ is the total number of items. As $N_d$ increases, $N_c$ decreases because the sum of $N_c$ and $N_d$ is fixed. Consequently, we have $\Delta \tau=\frac{4\Delta N_c}{n(n-1)}$, where $\tau$ increases when $N_c$ increases.
ListMLE loss is defined as $\mathcal{\phi}(g(x),y)=-\log P\left(y \mid x ; g\right)$, where $P(y \mid x ; g)$ represents the likelihood of the ground truth ranking $y$. As the likelihood of the ground truth ranking $y$ increases, the loss decreases. Although the increase of $P(y \mid x ; g)$ does not guarantee that $N_c$ increases, the increase in the likelihood of the ground truth ranking should generally lead to a greater agreement between the ground truth ranking and the scores given by the model, which implies an increase in the number of concordant pairs (or $N_c$) and a decrease in the number of discordant pairs (or $N_d$) between the scores and the ground truth. Thus, minimizing the loss can help improve Kendall’s Tau.
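The two quantities in the argument above can be sketched in a few lines of plain Python (an illustrative toy example; the data and scores are hypothetical, not from the paper). Scores that agree with the ground-truth ranking yield both a lower ListMLE loss and a higher Kendall's Tau:

```python
import math

def listmle_loss(scores, ranking):
    """Negative log-likelihood of the ground-truth ranking under the
    Plackett-Luce model, -log P(y | x; g). `ranking` lists item indices
    from best (ranked first) to worst; `scores` are model outputs."""
    loss = 0.0
    for i in range(len(ranking)):
        suffix = [scores[j] for j in ranking[i:]]
        m = max(suffix)  # log-sum-exp trick for numerical stability
        loss += m + math.log(sum(math.exp(s - m) for s in suffix))
        loss -= scores[ranking[i]]
    return loss

def kendall_tau(a, b):
    """tau = (N_c - N_d) / (n(n-1)/2), assuming no ties."""
    n, nc, nd = len(a), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if (a[i] - a[j]) * (b[i] - b[j]) > 0:
                nc += 1
            else:
                nd += 1
    return (nc - nd) / (n * (n - 1) / 2)

# Toy example: shorter predicted generation length = better rank.
lengths = [30, 10, 20, 40]
ranking = sorted(range(4), key=lambda i: lengths[i])  # shortest first
good = [2.0, 4.0, 3.0, 1.0]   # scores agreeing with the ranking
bad  = [3.0, 1.0, 2.0, 4.0]   # reversed scores
```

Here `listmle_loss(good, ranking)` is smaller than `listmle_loss(bad, ranking)`, while the tau of `good` against the (negated) lengths is 1 and that of `bad` is -1, consistent with the negative correlation in the training table above.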
We hope our response addresses your concerns.
Best Regards,
Paper 5119 Authors | Summary: The paper addresses the inefficiencies in scheduling LLM inference requests, which often use a first-come-first-serve (FCFS) strategy, leading to Head-Of-Line (HOL) blocking and reduced throughput. The authors propose a novel scheduling method based on predicting the relative ranks of output lengths in a batch of requests, rather than attempting to predict exact generation lengths. This prediction helps in approximating the shortest-job-first (SJF) schedule, which is known to minimize average latency.
Strengths: The paper employs a straightforward but effective scheduling algorithm that approximates the shortest job first (SJF) strategies. This approach effectively reduces response latency and improves throughput. The authors have tackled the challenge of accurately approximating SJF. The empirical results demonstrate significant improvements in both latency and throughput, highlighting the effectiveness of their approach. The paper introduces interesting metrics to determine the relative range of output lengths.
The paper addresses a crucial issue in LLM workload scheduling. By focusing on reducing response latency and enhancing throughput, it tackles a significant problem that is highly relevant to the efficiency and performance of LLM serving systems.
Weaknesses: - The current scheduling approach only considers output length. Would you also consider other dimensions, such as prompt length? Longer prompt lengths can consume more memory and increase token latency, impacting overall response latency and throughput. Additionally, would you consider implementing preemptive scheduling to correct any mispredictions dynamically?
- Your predictor is trained using 10k traces from ShareGPT and LM-SYS. However, these traces are primarily from GPT-4 and other models. Have you considered that different models, such as Llama 3, might behave differently, with varying verbosity and output lengths even for the same prompts? If the predictor cannot be reused across different models, you might need to account for the overhead of retraining the model to maintain accuracy.
- You should discuss Andes [1], which also proposes a request scheduling strategy to improve quality of experience.
[1] Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services
- SJF scheduling inherently risks starving requests with longer response length, as these jobs can be indefinitely delayed. How do you address this issue to ensure that longer requests are also processed in a timely manner?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why is the improvement on the ShareGPT and lmsys datasets different, as shown in Table 3?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the insightful and helpful feedback! We would like to address your questions in the below response.
**Q1.1: The current scheduling approach only considers output length. Would you also consider other dimensions, such as prompt length? Longer prompt lengths can consume more memory and increase token latency, impacting overall response latency and throughput.**
**A1.1:** Considering prompt length is indeed a good suggestion, and we explored it previously; however, we found that in practice, focusing only on generation length is simple yet sufficiently effective.
First, we observe from lmsys-chat-1m and ShareGPT – two traces that represent real-world scenarios – that prompt length is not crucial to generation time. Specifically, the prefill time accounts for only 5% of the entire generation time on lmsys-chat-1m and 8% on ShareGPT, indicating that prompt length has a minor impact on overall latency. Note that there are already long prompts in the workloads we tested; for example, 1% of all prompts in the ShareGPT dataset exceed 900 tokens.
Second, although this paper does not particularly focus on long context (e.g., prompt length > 32k tokens), we argue it is relatively straightforward to handle long prompts. Since prompt lengths are always known a priori – it is easy to accurately approximate the latency of the prefill phase via profiling. We can also map the relative ranking of generation length into a length estimation according to the dataset distribution. Then, simply adding prefill time estimation to the current framework together provides an end-to-end generation time approximation for scheduling.
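The extension described above can be sketched as follows (a hypothetical illustration: the function names, quantile table, and timing constants are invented for the example, not measured values from the paper):

```python
def estimate_total_time(prompt_len, pred_rank, len_quantiles,
                        prefill_s_per_token=1e-4, decode_s_per_token=2e-2):
    """Add a profiled prefill-time estimate (prompt length is known a
    priori) to a generation-length estimate obtained by mapping the
    predicted rank into the dataset's length distribution."""
    prefill = prompt_len * prefill_s_per_token       # from profiling
    gen_len = len_quantiles[pred_rank]               # rank -> expected length
    return prefill + gen_len * decode_s_per_token

# e.g. generation-length quartiles from a hypothetical trace
quantiles = {0: 50, 1: 150, 2: 400, 3: 900}
t = estimate_total_time(prompt_len=1000, pred_rank=1, len_quantiles=quantiles)
```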
**Q1.2: Additionally, would you consider implementing preemptive scheduling to correct any mispredictions dynamically?**
**A1.2:** We’ve already employed preemptive scheduling. In each decoding step, we compare the generation rankings of newly arrived requests and the running requests, and preempt the requests with low rankings (Algorithm 1). However, we do not re-predict the score for a request once it has started executing. We found that re-prediction yields little improvement, as shown in the following table. We conducted the experiments using a llama-3-7b model with a single 80GB A100 GPU.
| Latency(s/token) | ours | re-predict |
| ---- | ---- | ---- |
| lmsys | 0.43 | 0.44 |
| ShareGPT | 0.64 | 0.64 |
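The rank-based preemption described above can be sketched in a few lines (an illustrative simplification of Algorithm 1, with hypothetical request tuples; the real scheduler runs inside a serving engine):

```python
def schedule_step(running, arrived, capacity):
    """Before a decoding step, pool the running and newly arrived
    requests, keep the `capacity` requests with the best (lowest)
    predicted rank scores, and preempt the rest. Requests are
    (score, request_id) pairs; scores are predicted once and not
    recomputed during generation."""
    pool = sorted(running + arrived)
    return pool[:capacity], pool[capacity:]   # (to run, preempted)

running = [(0.7, "a"), (0.2, "b")]
arrived = [(0.1, "c"), (0.9, "d")]
run, preempted = schedule_step(running, arrived, capacity=2)
# short-ranked "c" preempts the long-ranked running request "a"
```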
**Q2: Your predictor is trained using 10k traces from ShareGPT and LM-SYS. However, these traces are primarily from GPT-4 and other models. Have you considered that different models, such as Llama 3, might behave differently, with varying verbosity and output lengths even for the same prompts? If the predictor cannot be reused across different models, you might need to account for the overhead of retraining the model to maintain accuracy.**
**A2:** We would like to clarify that we do not use the outputs from GPT-4 or other models. The prompts are collected from human users, and answers are generated by the target model (e.g., llama3), as stated in section 4.2. We only need to train predictors for each dataset-model pair. The cost of training is negligible: It takes ~10 minutes on a single A100 GPU to train a predictor per long-standing serving job.
**Q3: You should discuss Andes [1], which also proposes a request scheduling strategy to improve quality of experience. [1] Andes: Defining and Enhancing Quality-of-Experience in LLM-Based Text Streaming Services**
**A3:** Andes and our proposed method focus on different aspects of LLM serving. Andes introduces a novel quality of experience (QoE) metric for text streaming services, which measures human satisfaction during the entire end-to-end token delivery process. Andes employs an online preemptive scheduling method. At the beginning of each time quantum, it decides which requests to run based on the scheduling objectives (e.g., average QoE) for the upcoming time frame. However, our proposed method primarily aims to optimize latency by executing the requests with the lowest estimated generation length (as predicted by the predictor) to approximate SJF/SRTF at the start of each decoding step. We will discuss Andes in the related work section in the revised version.
**Q4: SJF scheduling inherently risks starving requests with longer response length, as these jobs can be indefinitely delayed. How do you address this issue to ensure that longer requests are also processed in a timely manner?**
**A4:** We have discussed starvation prevention in section 4.3 and presented the results in section 5.5. Specifically, we defined max_waiting_time fairness to measure whether a request suffers from starvation. Our starvation counter (in Algorithm 1) prevents users from waiting too long for responses. The results in section 5.5 (Figures 4 and 5) demonstrate that our method can significantly alleviate starvation.
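A starvation counter of the kind described above can be sketched as follows (an illustrative toy, with hypothetical field names and threshold; see Algorithm 1 in the paper for the actual mechanism):

```python
def rank_with_starvation_guard(requests, max_starve=5):
    """Order requests by predicted rank score, but promote any request
    whose deferral counter has reached the threshold to the front,
    regardless of its predicted rank. Each request is a dict with
    'id', 'score', and 'starve' keys; 'starve' is incremented every
    time the request is deferred."""
    order = sorted(requests,
                   key=lambda r: (r["starve"] < max_starve, r["score"]))
    return [r["id"] for r in order]

reqs = [
    {"id": "long_but_starved", "score": 0.9, "starve": 7},
    {"id": "short", "score": 0.1, "starve": 0},
    {"id": "medium", "score": 0.5, "starve": 2},
]
```

Here the long request that has been deferred past the threshold is scheduled first, while the remaining requests still follow shortest-predicted-first order.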
**Q5: Why is the improvement on the ShareGPT and lmsys datasets different, as shown in Table 3?**
**A5:** Different datasets exhibit varying prompt/generation length distributions (Appendix B) and data distributions. For instance, some prompts have essentially fixed answers, so LLMs produce similar generation lengths across different trials; other prompts can be answered in many ways, leading LLMs to generate very different lengths in different trials. The latter pattern is unpredictable, and ShareGPT contains more of these unpredictable prompt types. The *Optimal Prediction* line in Table 3 uses the generation length obtained with one random seed to predict the length obtained with another seed. It shows that predicting the generation length on the ShareGPT dataset is more challenging; consequently, our predictor performs less effectively on ShareGPT.
Learning Infinitesimal Generators of Continuous Symmetries from Data | Accept (poster) | Summary: This paper proposes using neural ODEs to parameterize symmetries by viewing the ODE's flow as an element of a one-parameter group. They show that by learning the parameters of the neural ODEs, they are able to recover ground-truth symmetries in image classification and PDE tasks.
Strengths: The paper is easy to read. The proposed ideas are clear, and appear to be mostly novel.
Weaknesses: See questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Line 174: Do you also need "and for all $s\in[-\sigma,\sigma]$" in addition to the "for all $f\in\mathcal{D}$"?
2. Section 4.1: According to your definition of a symmetry $\vartheta$ (there exists $\sigma>0$ such that $\vartheta_s(f)$ is "valid" for all $f\in\mathcal{D}$ and all $s\in[-\sigma,\sigma]$), it seems like solving for $\vartheta^*$ from (4) does not guarantee that $\vartheta^*$ is actually a symmetry, even if the optimal value of (4) is less than $C$. That is, if the optimal value of (4) is less than $C$, then you only know that the validity score is less than the threshold on average, not for all data $f$ and all transformation scales $s$. Can you please clarify this aspect?
3. Line 217: Something is strange with the definition of $\vartheta_{\mathcal{X}}(f)$. In particular, what is $T_{\mathcal{X}}$, and why does $f$ not appear in the right-hand expression? It seems to me like perhaps you meant to define $\vartheta_{\mathcal{X}}(f)(x) = f(\vartheta_{\mathcal{X}}^{-1}(x))$, and hence that you need to assume the transformation $\vartheta_{\mathcal{X}}$ on $\mathcal{X}$ to be invertible. Can you please clarify whether this is a typo, or if I am missing something? Also, it is unclear what $\vartheta_{\mathcal{Y}}(f)$ is doing or why it is needed. Indeed, as written, you are defining two different transformations of $f$ in (8) (resulting in two different functions of $x$).
4. Line 226: How reasonable is it to assume that you know the actual number of possible symmetries for a given task? Is it possible for you to select $N_{\textup{sym}}$ to be too small, and then to miss out on learning some of the most important symmetries of a task?
5. Line 232: Typo ("orthonomral").
6. Line 230: In practice, are symmetries actually mutually orthogonal (as vector fields) to one another? It seems like this is not always the case per your Table 1, which shows that uniform scaling is not orthogonal to $x_1$-axis scaling. A comment on the validity of the orthogonality assumption underlying the use of this regularizer would be appreciated.
7. Line 243: Typo ("undersirable").
8. Line 246: Typo ("Lipshictz").
9. Line 280: "...the two translations are the most invariant one-parameter group..." Can you please explain in further detail how you come to this conclusion?
10. Table 2 and Figure 5: I suggest moving these to be centered with the rest of the text; it looks poorly placed in the right-hand margin as is, and causes Figure 5 values to be too small and hard to read.
11. Line 322: "We found the ground truth symmetries in all the experiments as in Figure 5..." It looks like you did not recover the true symmetries in the cKdV setting, as there are non-negligible components coming from each of the ground truth symmetries that appear in the learned symmetries.
12. Did you verify in your experiments that the learned "symmetries" are actually symmetries according to your definition using the validity score threshold? Overall, your experimental results in Figures 4, 5, and 10 seem to me like you are indeed learning orthogonal vector fields, and that sometimes the vector fields end up being equal to one of the ground truth vector fields, but other times not (turns out to be linear combination of multiple ground truths). This does not seem super convincing that you are reliably learning/recovering the ground truth symmetries.
13. Why did you introduce (8)? I don't see this transformation being used anywhere else in the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Q1. Necessity of phrase in line 174.**
Yes. For all $f \in \mathcal{D}$ and for all $s \in [-\sigma, \sigma]$.
**Q2. Clarification in Section 4.1.**
To be more rigorous, our formulation leaves us with a set of constraints $S(\vartheta^*_s,f) < C$ for all $f \in \mathcal{D}, s \in [-\sigma,\sigma]$, which is intractable to solve. Instead, under the mild assumption that non-symmetries have considerably higher validity scores than the true symmetries, we turn it into the proposed optimization problem. We will make this clearer in the revised version of our paper.
**Q3. Typo in line 217.**
Yes, this is a typo; it should be $(\vartheta_\mathcal{X}(f))(x) = f(\vartheta_\mathcal{X}^{-1}(x))$. We apologize for the critical typo and thank you for pointing it out.
**Q4. The number of symmetries in line 226.**
In our framework, we assume knowledge of a rough upper bound $N_{\text{sym}}$ on the true number $N_{\text{sym}}^*$. The stop-gradient operation in the orthonormality loss turns the optimization problem into a sequence of optimization problems with hierarchical constraints: the first slot is optimized without being affected by other slots, the second slot is only affected by the first slot, and so on. Consequently, the true symmetries are learned in the first $N_{\text{sym}}^*$ slots of the MLP. We believe having a rough estimate of an upper bound on the number of symmetries is reasonable.
**Q6. Orthogonality assumption in line 230.**
The symmetry generators form a vector space. For example, the sum of the unit $x_1$-axis translation generator and the unit $x_2$-axis translation generator gives a translation in the $45^\circ$ direction. **Symmetry generators listed in Table 1 and Table 4 are in fact a basis of the space of symmetry generators.** Hence, once we fix an inner product on the space of vector fields, there is always an orthonormal basis of symmetry generators.
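This inner-product structure can be checked numerically on discretized vector fields (an illustrative sketch; the grid and discretization below are hypothetical, not the paper's setup). For instance, the two unit translation generators are orthogonal, whereas uniform scaling and $x_1$-axis scaling are not, matching the observation about Table 1:

```python
def field_inner(u, v):
    """L2 inner product of two discretized planar vector fields,
    given as lists of (vx, vy) samples on the same grid points."""
    return sum(ux * wx + uy * wy for (ux, uy), (wx, wy) in zip(u, v))

# Sample the generators on a small symmetric grid.
grid = [(x * 0.5, y * 0.5) for x in range(-2, 3) for y in range(-2, 3)]
x_trans = [(1.0, 0.0) for _ in grid]      # x1-axis translation
y_trans = [(0.0, 1.0) for _ in grid]      # x2-axis translation
uniform = [(x, y) for x, y in grid]       # uniform scaling
x_scale = [(x, 0.0) for x, y in grid]     # x1-axis scaling
```

Here `field_inner(x_trans, y_trans)` vanishes, while `field_inner(uniform, x_scale)` is strictly positive.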
**Q9. Further details about the most invariant symmetries in line 280.**
As discussed in the answer to Q4, the stop-gradient operation gives hierarchical constraints in the optimization problem, causing it to learn the earlier slots first and then the latter slots. Since the translations are found in the first and second slots, we conclude that they are “the most” invariant ones. This means that the outputs of the neural network are altered by the smallest magnitude along those symmetries.
**Q11. True symmetries in cKdV in line 322.**
As we elaborated in the answer to Q6, symmetry generators form a vector space. In the cKdV case, the exact and approximate symmetries form a vector space of dimension 3. We identified three orthonormal basis components of that vector space. Although these components do not match the closed-form expressions exactly, they span the same vector space as the closed-form symmetries. Therefore, we conclude that the three symmetries have been correctly learned.
**Q12. Evidence for learning actual symmetries.**
Again, symmetry generators form a vector space, so learning linear combinations can equally well be regarded as learning the true symmetries. Moreover, we compare the distributions of validity scores for images (top table) and PDEs (bottom table) below, where the non-symmetries are modeled by neural networks with random initialization. The 0.95-quantile values show that the true symmetries indeed have lower validity scores than the non-symmetries.
| | mean | median | quantile 0.9 | quantile 0.95 |
|---|---|---|---|---|
| symmetry (x-translation) | 0.036 | 0.024 | 0.085 | 0.114 |
| non-symmetry | 0.092 | 0.049 | 0.224 | 0.289 |
| | mean | median | quantile 0.9 | quantile 0.95 |
|---|---|---|---|---|
| symmetry (galilean boost) | 0.133 | 0.035 | 0.435 | 0.570 |
| non-symmetry | 95.9 | 0.859 | 29.0 | 132.1 |
**Q13. Necessity of equation (8).**
The equation (8) is not explicitly used in implementation, but it states how transformations on the spaces $\mathcal{X}$ and $\mathcal{Y}$ induce a transformation of the function $f:\mathcal{X} \rightarrow \mathcal{Y}$. For example, an image can be viewed as a function $f:\mathbb{R}^2 \rightarrow \mathbb{R}^3$, which is then discretized as $ \\{ f(x\_i) \\} $ on pixels $\mathcal{X}\_{\text{grid}} = \\{ x_i \\}$. Transformation of pixels $x\_i \mapsto \vartheta_\mathcal{X}(x\_i)$ results in a transformed function $\tilde{f}:\vartheta\_{\mathcal{X}}(x\_i) \mapsto f(x\_i)$, which corresponds to the function described in Equation (8). | Summary: The paper pertains to the topic of data-driven symmetry discovery. The authors propose a method allowing symmetry discovery beyond pre-defined Lie groups, by learning to transform datapoints, potentially in a non-affine manner, via a learned ODE (referred to as the *one-parameter group*, where the single parameter is the time variable in the ODE), the velocity field of which is typically parametrised by an MLP.
Crucially, to optimise this, the authors choose an objective - *validity score*, which is a predefined measure of the extent to which a transformed datapoint is symmetric to the input one (in their examples: for images, they use the cosine similarity between features extracted by a pretrained NN, while for PDEs, they measure the value – error – of the PDE for the transformed datapoint). Additional regularisers are used to ensure the diversity of the symmetries learned (orthogonality between different learned velocity fields) and smoothness (minimisation of an estimate of their local Lipschitz constants). Experimentally, the method is tested on image classification (CIFAR10) and PDE solving (KdV, KS, Burger’s equations) showing that known symmetries are retrieved along with additional approximate symmetries, while the learned symmetries are subsequently used for data augmentation showing competitive results to methods using pre-defined augmentations.
Strengths: **Significance** . The paper studies an important problem (*data-driven symmetry discovery*) in machine learning, but also physical sciences where symmetries are abundant but potentially unknown. Identifying unknown symmetries and incorporating them in downstream ML models (e.g. via augmentation) can improve generalisation, especially in low-data regimes, while additionally, it can potentially provide novel insights about the task at hand.
**Novelty/Generality**
- The presented methodology has the capacity to recover symmetries arising from *non-affine* data transformations. This is contrary to prior work, where mostly linear/affine transformations are dealt with.
- Additionally, this method does not require making assumptions about the structure of the target group. This is common in prior art, where typically a subgroup of a predefined group is learnt.
- The authors take advantage of well-established concepts that are underexplored by the ML community (e.g. modelling transformations via the one-parameter group) - this helps to broaden the available toolbox in the field of ML & symmetries/ equivariant ML.
**Execution/Implementation**
- Although the proposed method has multiple complicated components (NeuralODEs, difficult objective to optimise for), it is nonetheless well-executed yielding competitive results and recovering known symmetries in popular testbeds.
Weaknesses: **Applicability and scope**. Perhaps the biggest limitation of the proposed method is the *reliance on the validity score*. Although the authors claim to be able to learn symmetries by making as few assumptions as possible (see strengths), this seems to be contradicted by the need to manually design a validity score. Moreover, I have the impression that the validity score is not merely a hyperparameter, but it is decisive for the symmetries that will be learnt (basically it is the objective function of the optimisation problem).
- For example, in the case of images, the choice seems ad hoc (closeness in the representation space of a pre-trained encoder NN). What leads the authors to believe that the features of equivalent (symmetric) images extracted from the pre-trained NN should be close? Have the authors tried to verify this assumption? I think the empirical validation is insufficient here (section 5.1.), so I am not entirely convinced.
- In general, I do not see a generic way to define validity scores and perhaps the authors have slightly overclaimed in that respect. I would like to read the authors' viewpoints on that. For PDEs, the validity scores are indeed reasonable and generic, so perhaps, they would like to put more emphasis on this perspective.
Furthermore, the authors introduce the concept of learning symmetries via the one-parameter group, claiming that it is more general than prior parametrisations that can only learn linear/affine groups. However, it is unclear what the present parameterisation can express, e.g. does it allow learning any continuous group or implicit assumptions are made here as well?
- Additionally, could the authors discuss if it would be possible to learn finite groups with this method as well and if not, how could those be incorporated?
**Related Work/Comparisons**. The work of Forestano et al., MLST’2023 is quite relevant to the present manuscript, with the main difference being that in that work, the transformations are parameterised by an MLP instead of a NeuralODE (the oracle used in this work seems similar to the validity score used here). Since the two works have many similarities, I think that the authors should discuss in more detail their differences and the advantages of their work (e.g. as far as I understand the MLP cannot guarantee that the transformations form a group). Note that modelling transformation via an MLP (or any NN in general) instead of a NeuralODE seems more straightforward and probably easier to train and more computationally friendly.
**Experiments**. I believe some additional empirical evidence would strengthen the authors' claims.
- Most importantly, an experimental comparison against the type of parameterisation used in Forestano et al. (MLP) should be provided, to verify if NeuralODEs are indeed a more appropriate parameterisation.
- Moreover, baselines are mostly missing, e.g. comparing against other methods for data-driven symmetry discovery (I am not super familiar with these works, but if I am not mistaken LieGAN by Yang et al., ICML’23 is a recent example).
- The reported results after augmenting with the learned symmetries do not seem to improve significantly compared to known/default augmentations. Can the authors discuss why this might be the case? This is important since it might undermine the necessity of the proposed approach. To be more convincing, perhaps the authors should perform experiments on problems where the symmetries are not known a priori.
- Additionally, ablation studies seem to provide important insights but are only discussed in the appendix. I would recommend being more upfront in the main paper and discussing in the rebuttal the following: sensitivity to hyperparameters (multiple are needed: loss coefficients, $\sigma$ and $\tau$), the method for choosing them, the difficulty of optimisation (3 terms are used in the objective) and if all losses are optimised in a balanced manner. Similarly, for the parameter $N_{\text{sym}}$, which is now chosen based on prior knowledge of the number of existing symmetries.
**Presentation/Exposition**. (disclaimer - this is a minor weakness) Given that the notions discussed here are probably not widely known across the ML community, I believe that the authors should aim to provide more in-depth explanations to make their work more accessible. For example,
- Multiple group theory/symmetry concepts are discussed without definitions (group generators, Lie group, Lie algebra, Lie bracket etc.). Additional examples that are not well-contextualised include in section 2 the one-parameter group discussion, the Lie algebra of the affine group and the discussion on the PDE symmetries (Lie point symmetries etc.). Adding some references here and providing some examples for the mentioned PDE symmetries would help.
- In section 5.2., some concepts regarding the experimental details are mentioned without appropriate explanations, while others are only mentioned in the appendix, although it appears that they are crucial for the method. Perhaps the authors should be more upfront and explanatory regarding the aforementioned.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How is the weight function in L234 defined, and how important is this for optimisation? Are different weight functions ablated?
- It’s unclear why the stop-gradient is needed in L237. I did not find the justification fully convincing. Could the authors elaborate? What happens experimentally if one does not use stop-gradient?
- Although intuitive, it’s unclear why the inline Eq. in L212 holds for negative $\alpha$.
**Suggestion**.
In case the authors do want to present their method as generic, I think a deeper experimental evaluation in data beyond PDEs would help a lot (e.g. testing on other image datasets, ablating different validity scores etc).
**Minor**:
- What if the chosen validity score is not differentiable?
- *Notation*.
- The notation $\theta^V_s(x)$ is a bit dense (why use $V$ as a superscript?). Eq. (1) is a bit confusing; the inline Eq. in line 83 seems clearer to me.
- Notation in sec. 4 could be also simplified a bit or made more accessible with examples (e.g. 4.1: give an example for $\mathcal{A}$, i.e. the set of all natural images).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Some of the limitations are adequately discussed. An important missing point is, in my opinion, the need to manually design the validity score in domains beyond PDEs.
No foreseeable negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Weakness 1, 2. Reliance on the validity score.**
We see the requirement of a validity score as a trade-off for not requiring a predefined set of symmetry generators: we search for symmetries across the entire class of continuous transformations, which makes the search space infinite-dimensional. In general, a validity score quantifies some criterion that transformed data should meet. If one aims to find symmetries of a given dataset, one should first ask what symmetries of the dataset truly mean, and then come up with a criterion that the symmetries should meet. Our validity score is a quantification of that criterion.
For PDEs, the proposed validity score comes out quite naturally. For image data, while it might seem ad-hoc at first glance, we argue that the proposed validity score is still based on the same principle as in the PDE case – the invariance measure of the targets we wish to learn (PDE solutions for PDE data and features extracted from the encoder for image data). That is, the generic principle behind the validity score is to measure invariance of a target function with respect to transformations. For PDEs, we use the L1 error to measure this invariance. For images, we adopt the cosine similarity, which is a popular metric for measuring feature distances. We also note that other methods learning symmetries from images have similar counterparts, such as the discriminator in LieGAN [2] and the classifier in Augerino [1] and LieGG [3].
**Weakness 3. Range of the parametrization.**
Our parametrization can model any continuous (or more rigorously, differentiable) groups acting on Euclidean space, given that neural networks, according to the universal approximation theorem, can approximate any continuous vector fields. The only additional inductive bias imposed on our model is Lipschitz continuity of the vector field to be learned. Below is a proof sketch of this argument. Note that we consider connected Lie groups, so discrete groups such as reflections are not considered.
Let’s say a connected Lie group $G$ acts on a Euclidean domain $\mathbb{R}^n$ as $g:x \in \mathbb{R}^n \mapsto g\cdot x$, with corresponding Lie algebra $\mathfrak{g}$ and the exponential map $\exp:\mathfrak{g} \rightarrow G$. By the properties of connected Lie groups, any group element $g \in G$ can be expressed as $g = \exp(\epsilon_1\alpha_1) \cdots \exp(\epsilon_k\alpha_k)$ for $\alpha_1,\cdots,\alpha_k \in \mathfrak{g}$ and $\epsilon_1,\cdots,\epsilon_k \in \mathbb{R}$. Each component $\exp(\epsilon_i\alpha_i)$ for $i=1,\cdots,k$ can be seen as an action of a one-parameter group defined by $(t,x) \mapsto \exp(t\alpha_i)\cdot x$, with the infinitesimal generator given by $V_i(x) = \frac{d}{dt}|_{t=0} (\exp(t\alpha_i)\cdot x)$.
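As an illustrative sketch (our own toy example, not the paper's implementation), the one-parameter action $\exp(t\alpha_i)\cdot x$ can be realized numerically by integrating the generator's vector field; here we use the closed-form 2D rotation generator $V(x,y) = (-y,x)$ and an RK4 integrator:

```python
import numpy as np

def rotation_generator(p):
    # Infinitesimal generator of 2D rotation: V(x, y) = (-y, x).
    x, y = p
    return np.array([-y, x])

def flow(V, p0, t, n_steps=1000):
    # Approximate the one-parameter group action exp(t*V) . p0
    # by RK4 integration of dp/ds = V(p).
    h = t / n_steps
    p = np.asarray(p0, dtype=float)
    for _ in range(n_steps):
        k1 = V(p)
        k2 = V(p + 0.5 * h * k1)
        k3 = V(p + 0.5 * h * k2)
        k4 = V(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

p = flow(rotation_generator, [1.0, 0.0], np.pi / 2)
# Rotating (1, 0) by pi/2 should give approximately (0, 1).
```

With the generator parametrized by an MLP instead of a closed form, the same integration would produce the learned transformations.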
**Weakness 4. Discrete groups.**
Unfortunately, our method cannot learn finite groups. As in the image task, if there exists a known finite group symmetry, it can be incorporated as augmentation.
**Related work/Comparisons and Experiments 1, 2. Baselines.**
Please refer to paragraph “Comparison with baselines” in the general comment.
**Experiment 3. Augmentation performances.**
Since we parametrize the symmetry generators by MLPs, they may exhibit numerical instability. So the closed-form symmetries are expected to perform slightly better.
**Question 1. importance of the weight function.**
As described in Appendix A.1., the choice of weight function is crucial in the image task since each pixel has a different level of importance when fed into a learned neural network. For example, if we use a constant weight $\omega \equiv 1$, then we may end up learning transformations that only move the pixels near the boundaries by a large amount. We compute a pixel sensitivity function using Jacobian-vector products and use it as a weight function.
We conducted additional experiments ablating different weight functions. We tested two handcrafted weight functions: the “plain weight” defined by $\omega(x) = 1$ for all $x\in [-1,1]^2$, and the “step weight” defined by $\omega(x) = 5$ if $||x||<0.25$, $\omega(x) = 4$ if $0.25<||x||<0.5$, and so on, decreasing in steps until $\omega(x) = 1$ if $||x||>1$. The weight functions and the results are depicted in Figure 3 and Figure 4 in the general comment. Indeed, the correct affine symmetries are not found with these weight functions.
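For concreteness, here is our reading of the two handcrafted weight functions in plain Python (the radii and values follow the description above; treat this as a sketch, not the authors' exact code):

```python
import numpy as np

def plain_weight(x):
    # "Plain weight": constant 1 everywhere on [-1, 1]^2.
    return 1.0

def step_weight(x):
    # "Step weight": 5 inside radius 0.25, decreasing by 1 per
    # 0.25-wide annulus, down to 1 outside radius 1.
    r = np.linalg.norm(x)
    if r < 0.25:
        return 5.0
    if r < 0.5:
        return 4.0
    if r < 0.75:
        return 3.0
    if r < 1.0:
        return 2.0
    return 1.0
```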
**Question 2. Role of stop-gradient operation.**
In our framework, we assume knowledge of a rough upper bound $N_{\text{sym}}$ on the true number $N_{\text{sym}}^*$. The stop-gradient operation in the orthonormality loss turns the optimization problem into a sequence of optimization problems with hierarchical constraints; the first slot is optimized without being affected by other slots, the second slot is only affected by the first slot, and so on. Consequently, the true symmetries are learned in the first $N_{\text{sym}}^*$ slots in the MLP. The stop-gradient operation reduces the dependence of our algorithm on the hyperparameter $N_{\text{sym}}$; without it, we would learn mixtures of true symmetries and non-symmetries when $N_{\text{sym}}$ is larger than $N_{\text{sym}}^*$.
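A minimal sketch of how we understand the described mechanism (names and shapes are our own; in an autodiff framework, `sg` would be e.g. PyTorch's `detach`): writing the orthogonality coupling as $\sum_{i<j}\langle \mathrm{sg}(V_i), V_j\rangle^2$, gradients flow only to the later slot of each pair, producing the hierarchy of constraints:

```python
import numpy as np

def hierarchical_orthogonality_grads(V):
    # V: (n_slots, dim) flattened generator slots.
    # Loss = sum_{i<j} <sg(V_i), V_j>^2, where sg(.) is stop-gradient:
    # V_i is treated as a constant in each term, so gradients flow
    # only to the later slot V_j.
    n, d = V.shape
    grads = np.zeros_like(V)
    for j in range(n):
        for i in range(j):
            inner = V[i] @ V[j]
            grads[j] += 2.0 * inner * V[i]  # d/dV_j of <V_i, V_j>^2
            # No contribution to grads[i]: stop-gradient on V_i.
    return grads

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 8))
g = hierarchical_orthogonality_grads(V)
# The first slot receives no gradient from this loss, so it is
# optimized purely for symmetry; slot j is only pushed away from
# slots 1..j-1.
```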
**Question 3. One-param group in the negative direction.**
Solutions of the ODEs $\frac{d}{ds}\vartheta_s^V(x) = V(\vartheta_s^V(x))$ and $\frac{d}{ds}\vartheta_s^{-V}(x) = -V(\vartheta_s^{-V}(x))$ with initial conditions $\vartheta_0^V(x) = \vartheta_0^{-V}(x) = x$ are related by $\vartheta_s^{V} \equiv \vartheta_{-s}^{-V}$ due to uniqueness of ODE solutions.
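This identity is easy to check numerically; the sketch below integrates a toy 1D field $V = \sin$ with RK4 (our own example, not from the paper):

```python
import numpy as np

def rk4_flow(V, x0, s, n_steps=2000):
    # Integrate dx/ds = V(x) from x0 for "time" s (s may be negative).
    h = s / n_steps
    x = float(x0)
    for _ in range(n_steps):
        k1 = V(x)
        k2 = V(x + 0.5 * h * k1)
        k3 = V(x + 0.5 * h * k2)
        k4 = V(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

V = np.sin
a = rk4_flow(V, 0.7, 1.3)                  # theta_s^V(x)
b = rk4_flow(lambda x: -V(x), 0.7, -1.3)   # theta_{-s}^{-V}(x)
# Uniqueness of ODE solutions gives theta_s^V == theta_{-s}^{-V}.
```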
**Presentation/Exposition.** Thank you for your feedback. We will incorporate your suggestions in the revised version of our paper.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I thank the authors for their response.
- Regarding the validity score, unfortunately, I did not find the response satisfactory. Arguably, choosing the validity score is a central element of this work, therefore, I strongly encourage the authors to be upfront about it and clarify that this is a limitation of the approach and that (perhaps strong) assumptions are needed in domains beyond PDEs, where the choice is natural.
- Regarding the comparison with baselines, I was not able to locate an exact response to my concern. Simply put, my point was to replace the NeuralODE with an MLP (no ODE integrator, just direct prediction of the transformation). This is a natural baseline and would empirically support the necessity of employing a NeuralODE. Have the authors performed this experiment?
- I also encourage the authors to include in an updated version of their manuscript the discussion regarding the groups that can be learned (continuous and differentiable, but not finite).
I keep my recommendation for acceptance, but as per my review, I have certain important reservations.
---
Rebuttal 2:
Comment: Thank you for your comment.
- Q1. We indeed acknowledge that choosing the validity score is critical and not direct. However, we clarify our stance: "symmetry" is not a fixed mathematical concept but a task-dependent jargon. For example, isometries of a metric space are transformations that preserve the metric, while symmetries of a physical system preserve the Hamiltonian or governing equations. Image symmetries are typically not rigorously defined but refer to transformations that maintain invariance to human perception; we replace "human eyes" with "neural network" in this context.
- Q2. **The two experiments we have done are exactly that.** We replaced the neural ODE with an MLP that does not involve ODE integration, which now requires taking Lie derivatives (i.e. derivative along the vector field modeled by an MLP) to the validity score. We compared these setups using both image data and a toy experiment searching for isometries in Minkowski space. While using Lie derivatives (i.e., a plain MLP) fails with the image task, our method succeeds in both scenarios.
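As a minimal sketch of the Lie-derivative baseline described above (our own toy example): the Lie derivative of a scalar score $f$ along a field $V$ is the directional derivative $\nabla f(x)\cdot V(x)$, approximated here by a single infinitesimal step of the flow; for a rotation-invariant score it vanishes.

```python
import numpy as np

def lie_derivative(f, V, x, eps=1e-6):
    # First-order Lie derivative of a scalar validity score f along
    # the vector field V, approximated by a finite difference along
    # an infinitesimal step of the flow: (f(x + eps*V(x)) - f(x)) / eps.
    x = np.asarray(x, dtype=float)
    return (f(x + eps * V(x)) - f(x)) / eps

f = lambda p: p[0] ** 2 + p[1] ** 2        # toy score: squared radius
V = lambda p: np.array([-p[1], p[0]])      # rotation generator
# Rotations preserve the radius, so the Lie derivative vanishes.
```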
---
Rebuttal Comment 2.1:
Title: Recommendation
Comment: Thank you for your follow-up comments. Since many of my concerns have been addressed, I have increased my rating by 1 point. | Summary: This paper proposes a symmetry learning algorithm based on transformations defined via infinitesimal generators. Using Neural ODE, an infinitesimal generator is learned that is capable of producing a sequence of transformed data through ODE integration. Validity score has been defined to check if the transformed data is valid wrt to a given task. For images, the validity score is picked to be cosine similarity while for PDEs the validity score is defined as the numerical errors in the original equations after the transformation. In addition to symmetry loss, two regularizations, orthonormality loss, and Lipschitz loss have been added, to remove trivial solutions. The authors present experiments on CIFAR10 and KdV equation and Burgers' equation in 1D for PDE.
Strengths: 1. The paper motivates the need for learning continuous symmetries well.
2. The idea presented in the paper is very neat and shows potential beyond the presented experiments.
3. This approach can learn both affine and non-affine symmetries in image classification and PDE tasks as shown in the experiments section.
Weaknesses: The discussion on compute and model-parameter comparisons with baselines is missing. No other method was shown as a baseline in either of the experiments. There is some ambiguity in how exactly the validity score is used and, in some cases, how it can be defined if the given task is equivariant.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How does the validity score take into consideration invariant tasks vs. equivariant tasks?
2. Using cosine similarity between the extracted features for the validity score can be a problem for extreme transformations (as cosine values do not change linearly with the parameter). How does this affect the learning of symmetries built on validity scores?
3. In Table 2, it is unclear how the validity score plays out in learning the symmetries, especially for the default augmentation case. Could you please elaborate on this?
4. Total loss and loss-scale section: How is the true scale learned, if the inner product is normalized? Additionally if all the terms are affected by a data-dependent scale, shouldn't $w_{Lips}$ affect the overall weights?
### Clarifications
1. The figures can be made a little bigger.
2. In line 16, 'whether the transformed data are still valid for the given task'. This phrasing is confusing.
3. In line 172, what does valid mean? Simply put, does it comprise invariant transformations and approximately equivariant ones up to the threshold C?
4. Intuition on how to locate transformed data on the grid is missing.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. The authors do not compare with other methods of learning symmetry, as those methods use datasets that have explicit symmetry (like Rot-MNIST); similarly for the PDE experiments. It would have been useful to have them in the paper.
2. The number of parameters and compute comparison across the proposed method and baseline approaches like [9,14] is missing. Also, a discussion of how the numerical computation scales with increasing dimension is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback.
**Question 1. Validity score in equivariant task.**
We take a two-step approach: first, we learn symmetries based on validity scores, and second, we use these learned symmetries as augmentation and solve machine learning tasks. The validity score is designed to measure invariance of a certain function – the neural network in the image task, and the PDE residual values in the PDE task. We do not take equivariant tasks into account in this paper. However, if one were to design a validity score using equivariant neural networks, it could be achieved by modeling the transformation in both the input and output spaces and establishing an invariance criterion that corresponds to that equivariance.
**Question 2. Cosine similarity and instability.**
Cosine similarity values lie in $[0,1]$, and if arccos is applied, the angle values lie in $[0,\pi/2]$. The arccos function has an infinite gradient at 1, so the cosine similarity values must be clipped to a maximum $1-\epsilon$ for some small $\epsilon$. Except for this, we did not experience any instabilities coming from cosine similarity.
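A hedged sketch of the described clipping (the exact $\epsilon$ and clipping range are our assumptions):

```python
import numpy as np

def angular_distance(u, v, eps=1e-6):
    # Angle between feature vectors via arccos of cosine similarity.
    # arccos has an infinite derivative at 1, so the similarity is
    # clipped to 1 - eps before applying it.
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, 0.0, 1.0 - eps))

u = np.array([1.0, 0.0])
# angular_distance(u, u) stays finite instead of hitting arccos(1)
# with an unbounded gradient.
```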
**Question 3. Validity score and augmentation.**
The validity score is only used when learning symmetries. Once the symmetries are found, we use the learned symmetries to augment the dataset for the target task – image classification task in this case. Table 2 shows comparison of augmentation performance with other commonly used augmentation methods.
**Question 4. Scale of losses.**
Sorry, but we didn’t fully understand your question. We will try to answer based on our understanding, but if you find it insufficient, please let us know. The phrase “normalized inner product” in the paper refers to the following process: two vectors $v_1,v_2$ are normalized as $\hat{v_i} = \frac{v_i}{||v_i||}$ for $i=1,2$, and then the inner product $\langle \hat{v_1},\hat{v_2}\rangle$ is computed. We then apply arccos to make it lie in $[0,\pi/2]$. A similar process is applied to all terms, to make sure all losses are on a similar scale.
**Clarifications 2, 3. Meaning of validity.**
We say transformed data are valid for the given task when they are beneficial to the learning task. For example, in an image classification task, images rotated by small angles may help the task, but transformed images that are distorted too much may harm training. To approximate this validity, we design a function known as the validity score within our framework.
**Clarification 4. Interpolation method.**
We use bilinear interpolation for images, and Whittaker-Shannon interpolation for PDEs. Due to page limits, the discussion is in Appendix A and Appendix E.2, including a discussion of why bilinear interpolation does not work for PDE data while Whittaker-Shannon interpolation does.
**Limitations 1. Baselines.**
Please refer to paragraph “Comparison with baselines” in the general comment.
**Limitations 2. Computational costs.**
We use a fairly small MLP, with around 150k parameters. Direct comparison of computational complexity with other methods is not feasible since all symmetry learning approaches have varying assumptions and goals. For example, L-conv [9] learns rotation by comparing rotated 7x7 random pixel images with the original ones.
We provide some analysis on computational costs of our algorithm. The major computational burden of our algorithm arises from the ODE integration. Since we use the adjoint method for backpropagating [6], although the time complexity is $O(N_{\text{step}})$ where $N_{\text{step}}$ is the number of ODE steps, the memory requirement is $O(1)$. The choice of validity score also affects the complexity. However, the effect of the dimension $n$ of the space is negligible, since it would only require changing the input and output dimensions of the MLP, which maps $\mathbb{R}^n \to \mathbb{R}^n$.
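To illustrate the O(1)-memory claim on a toy case (this is not the implementation of [6], which handles nonlinear dynamics by recomputing states backward): for linear dynamics $\dot z = Az$ discretized by forward Euler, the adjoint can be swept backward without storing the trajectory, and it matches a finite-difference gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) * 0.1
c = rng.normal(size=3)           # loss L(z_T) = c . z_T
z0 = rng.normal(size=3)
N, h = 200, 0.01

def forward(z):
    z = z.copy()
    for _ in range(N):           # O(N) time, O(1) memory: z overwritten
        z = z + h * (A @ z)
    return z

# Discrete adjoint: sweep a = dL/dz_k backward without storing the
# trajectory (possible here because the dynamics are linear in z).
a = c.copy()
for _ in range(N):
    a = a + h * (A.T @ a)
grad_adjoint = a                 # dL/dz0

# Finite-difference check of one coordinate.
eps = 1e-6
e0 = np.zeros(3); e0[0] = eps
grad_fd = (c @ forward(z0 + e0) - c @ forward(z0 - e0)) / (2 * eps)
```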
[9] Dehmamy et al., “Automatic symmetry discovery with lie algebra convolutional network.” NeurIPS 2021.
[6] Chen et al., “Neural ordinary differential equations.” NeurIPS 2018. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful comments and suggestions. We are pleased that all the reviewers agree on the importance of symmetry discovery and find our research novel. Below, we address some commonly raised questions and present additional experimental results.
# Comparison with baselines
**(refer to performance comparison in the attached PDF file)**
There have been several attempts at symmetry discovery, each with different experimental settings and goals. The differences mainly arise from (i) what they aim to learn (e.g., transformation scales or subgroups from a larger group) and (ii) their assumptions about prior knowledge (e.g., complete, partial, or no knowledge of symmetry generators). Another important distinction is the viewpoint on symmetry: some methods learn symmetries that raw datasets inherently possess (implicit), while others learn symmetries from datasets explicitly designed to carry such symmetries (explicit).
Some recent symmetry discovery works are listed below. We emphasize that our method excels in two key aspects: **(i) our learning method reduces infinitely many degrees of freedom, (ii) our method works with high-dimensional real-world datasets.** For example, while LieGAN [2] and LieGG [3] reduce a 6-dim space (affine) to a 3-dim space (translation and rotation), ours reduces an $\infty$-dim space to a finite one. L-conv [4] also does not assume any prior knowledge, but it is limited to finding rotation in a toy task, where it learns rotation angles by comparing rotated 7x7 random-pixel images with the original ones.
| |Augerino [1]|lieGAN [2]|lieGG [3]|L-conv [4]|Forestano et al. [5]|Ours|
|-|-|-|-|-|-|-|
|symmetry generators|completely known|partially known (affine)|partially known (affine)|completely unknown|completely unknown|completely unknown|
|learn what?|transformation scale (e.g. rotation angle)|symmetry generator (rotation)|symmetry generator (rotation)|symmetry generator (rotation)|symmetry generator (in various low-dim settings)|symmetry generator (affine)|
|verified with what?|raw CIFAR-10|rotation MNIST|rotation MNIST|random 7x7 pixel images|toy data (dim <=10)|raw CIFAR-10|
|implicit or explicit|implicit|explicit|explicit|explicit|explicit|implicit|
|how?|optimize while training downstream task|compare fake/true data in GAN framework|extracts from learned NN using Lie derivative|compare rotated and original images|extracts from invariant oracle using Lie derivative|extracts from validity score using ODE integration|
Comparing those works with ours (i.e., without requiring any candidate symmetries, finding implicit symmetries, and using high-dimensional real-world datasets) requires highly nontrivial modification of those methods. For example, LieGAN learns six linear coefficients corresponding to six affine generators, which cannot be extended to our setting. Thus, we compare our method with Forestano et al. [5] by learning symmetries in both a high-dimensional image dataset and a low-dimensional dataset using the Lie derivative and ODE integration. The experiment with the image dataset can also be seen as a comparison with LieGG.
**Ablation studies: learning symmetries of images using Lie derivative (comparison with LieGG and Forestano et al.)**
LieGG method extracts symmetries of images from a learned neural network using the Lie derivative and retrieves rotation from the rotation MNIST. The original LieGG assumes affine symmetry; however, for comparison purposes, we have adapted LieGG to our setting by using the Lie derivative instead of ODE integration in our method. As shown in Figure 1, this approach fails to learn the correct symmetries. Note that, to evaluate whether the learned symmetries $\\{V_1,\cdots,V_{10} \\}$ recover the space of true symmetries with an orthonormal basis $\\{L_1,\cdots,L_6\\}$, we follow this process: we first check whether $V_1,\cdots,V_6$ are mutually orthogonal. We then compute the values $\sqrt{\sum_{j=1}^6 \langle V_i, L_j \rangle^2}$ and check whether the first six such values are close to 1.
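The evaluation just described can be sketched in a few lines (toy dimensions and hypothetical vectors, for illustration only):

```python
import numpy as np

def recovery_scores(V, L):
    # V: (k, d) learned generators (flattened); L: (m, d) orthonormal
    # basis of the true symmetry space. Returns, per learned V_i,
    # sqrt(sum_j <V_i, L_j>^2): close to 1 iff the unit vector V_i
    # lies in span(L).
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return np.sqrt(((V @ L.T) ** 2).sum(axis=1))

# Toy check: true symmetry space = first 2 coordinates of R^4.
L = np.eye(4)[:2]
V = np.array([[3.0, 4.0, 0.0, 0.0],    # in the span      -> score 1
              [0.0, 0.0, 1.0, 1.0]])   # orthogonal to it -> score 0
```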
We describe our intuition on why the Lie derivative fails. The Lie derivatives are taken in the space of images, which are regarded as flattened RGB vectors. We claim that due to the high-dimensional nature of the image data, derivatives are less informative and much noisier than ODE integration. For example, rotating images moves the pixels, requiring the RGB values on the regular grid to be resampled by interpolating neighboring RGB vectors, such as with bilinear interpolation. Hence rotation of images in the space of flattened RGB vectors is a highly complicated and fluctuating non-smooth operation with noisy derivatives.
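For reference, the bilinear resampling mentioned here blends the four neighboring grid values; a minimal sketch (our own, not the paper's code):

```python
import numpy as np

def bilinear(img, x, y):
    # Sample a 2D grid `img` at continuous coordinates (x, y) by
    # blending the four surrounding pixels; this resampling is what
    # makes image rotation a non-smooth operation in pixel space.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 1.0], [2.0, 3.0]])
# At the cell center (0.5, 0.5) the value is the mean of the corners.
```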
**Ablation studies: learning isometries of (3+1)-dimension Minkowski space using Lie derivative and ODE integration (comparison with Forestano et al.)**
We further demonstrate that ODE integration is indeed capable of learning symmetries in the setting of Forestano et al. We learn isometries of (3+1)-dimensional flat Minkowski space by setting the Lorentz metric $t^2 - x^2 - y^2 - z^2$ as a validity score and successfully find the six symmetries (three Lorentz boosts and three rotations), both using the Lie derivative and ODE integration, as shown in Figure 2. This experiment shows that our approach works in both low-dimensional and high-dimensional settings, **while using Lie derivatives fails with the high-dimensional image dataset.**
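The validity score in this toy task is the Lorentz metric itself; a quick numerical check (our own sketch) that an x-boost is indeed an isometry:

```python
import numpy as np

def minkowski(p):
    # Toy validity score: the Lorentz metric t^2 - x^2 - y^2 - z^2.
    t, x, y, z = p
    return t ** 2 - x ** 2 - y ** 2 - z ** 2

def boost_x(p, phi):
    # Lorentz boost along x with rapidity phi: an isometry of the metric.
    t, x, y, z = p
    ch, sh = np.cosh(phi), np.sinh(phi)
    return np.array([ch * t + sh * x, sh * t + ch * x, y, z])

p = np.array([2.0, 0.5, -1.0, 0.3])
# minkowski(boost_x(p, phi)) equals minkowski(p) for any rapidity phi.
```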
[1] Benton et al., “Learning invariances in neural networks from training data.” NeurIPS 2020.
[2] Yang et al., “Generative adversarial symmetry discovery.” ICML 2023.
[3] Moskalev et al., “Liegg: Studying learned lie group generators.” NeurIPS 2022.
[4] Dehmamy et al., “Automatic symmetry discovery with lie algebra convolutional network.” NeurIPS 2021.
[5] Forestano et al., “Deep learning symmetries and their lie groups, algebras, and subalgebras from first principles.” MLST 2023.
Pdf: /pdf/2090215b1164817961e17ebd07c2a5526cf15862.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Scaling White-Box Transformers for Vision | Accept (poster) | Summary: This paper introduces CRATE-α, an enhanced variant of the CRATE (Coding RATE Transformer) architecture, designed to scale efficiently while maintaining mathematical interpretability. The authors address the open question of CRATE's scalability by proposing strategic modifications to the sparse coding block and a refined training recipe. Extensive experiments demonstrate CRATE-α's effectiveness, showcasing improved performance on ImageNet classification tasks compared to the original CRATE model. Notably, the CRATE-α-B model achieved an 83.2% accuracy rate, a significant improvement over the previous best CRATE-B model.
Strengths: The paper presents a novel architecture, CRATE-α, that builds upon the existing CRATE model with minimal yet strategic modifications, enhancing scalability without compromising interpretability.
The authors provide a wealth of empirical evidence supporting the effectiveness of CRATE-α, including comparative results on ImageNet classification tasks and a detailed analysis of training behaviors across different model scales.
A key strength is the paper's focus on maintaining the interpretability of the model, which is often a trade-off in scaling deep learning models. The authors demonstrate that CRATE-α models retain high-quality unsupervised object segmentation capabilities.
The paper includes a thorough exploration of scaling behaviors, from Base to Large to Huge model sizes, using both supervised learning on ImageNet and vision-language pre-training with contrastive learning on DataComp1B.
Weaknesses: Could the proposed architecture work well on other tasks like NLP?
While the paper provides a detailed analysis of the model's performance on ImageNet, there might be a need for more discussion on how these results generalize to other datasets and real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed.
>**Q1**: *Could the proposed architecture work well on other tasks like NLP?*
**A1**: Thank you for your suggestions on new experiments on NLP. Please refer to our response to '**Q3: Performance of CRATE-α on NLP task**' in our common response.
>**Q2**: *While the paper provides a detailed analysis of the model's performance on ImageNet, there might be a need for more discussion on how these results generalize to other datasets and real-world applications.*
**A2**: Thank you for your suggestions on new experiments on other datasets and real-world applications. Please refer to our response to '**Q2: Additional experimental results on real-world downstream tasks**' in our common response.
We again thank you for your review, and hope we have provided satisfactory responses to your questions. Please let us know if you have further questions or comments. | Summary: This paper explores how to train white-box Transformers at scale for visual tasks. The authors propose a new model architecture called CRATE-$\alpha$, which extends the sparse coding block of the original CRATE model. A series of CRATE-$\alpha$ models were trained with varying model sizes, data sizes, and patch sizes using optimized training recipes. The main experiments focus on supervised classification and contrastive CLIP learning, with additional demonstrations of unsupervised semantic segmentation capability.
Strengths: **Originality:** The paper continues the white-box design philosophy of the original CRATE model while integrating advanced techniques such as overparameterized sparse coding, decoupled dictionary, and residual connections. Although some of these techniques have been previously validated, successfully combining them with a white-box Transformer is a noteworthy achievement. The integration not only works effectively but also yields commendable results.
**Quality:** The paper is technically sound overall, employing rigorous notation and formula definitions to elucidate the design principles. The proposed models demonstrate significant improvements compared to the previous generation of CRATE models. Additionally, the authors are careful and honest in evaluating the weaknesses and limitations of their work.
Weaknesses: **Clarity:**
- The paper is heavily symbolized, relying extensively on intricate mathematical formulations rather than clear diagrams and straightforward language. Although this maintains academic rigor and professionalism, it severely hampers understanding of the paper's details and the broader dissemination of the model. Incorporating corresponding illustrations to explain the three modifications and comparing them with the standard Transformer structure would be beneficial.
- The organization of Section 4 is not concise, making it easy for readers to lose track.
- The distinction between the paragraphs "Dataset and Evaluation" and "Training & Fine-tuning" is not well-defined, especially with the scattered descriptions of the data used.
- The frequent interleaving of experimental setup descriptions with the presentation of experimental results disrupts the flow and coherence of the narrative.
**Significance:**
- Although CRATE-$\alpha$ shows significant improvements over the original CRATE model, it still lags behind the state-of-the-art. For example, in the right side of Figure 1, CRATE-$\alpha$ typically requires nearly double the training FLOPs to achieve the same accuracy as ViT.
- If the scalability and interpretability of a white-box Transformer architecture does not offer substantial insights and improvements, practitioners might prefer models with stronger performance but lower interpretability.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. As previously mentioned, as shown on the right side of Figure 1, CRATE-$\alpha$ usually requires approximately twice the FLOPs to reach the performance level of ViT, putting it at a noticeable disadvantage.
2. How does the performance improvement of CRATE-$\alpha$ compare to the original CRATE? Neither the CRATE models in Table 1 nor Figure 1 were pretrained on ImageNet-21K. Why was this not included for a fair comparison?
3. Lines 232-233 and Figure 3 describe the model’s **training loss** as predictable. Why not the **validation loss**, which is the primary concern in scaling laws and practical applications?
4. Table 2 only shows the compute requirements for the **pre-training stage**. Why does it not include the **fine-tuning** stage? Considering the total computational effort, I would like to see a comparison of the two scaling strategies: *CRATE-$\alpha$-L/32 + CRATE-$\alpha$-L/8* versus *CRATE-$\alpha$-L/14 + CRATE-$\alpha$-L/14*.
5. How was the amount of training data determined? Was there a specific standard or a FLOPs constraint? For example:
- In Section 4.1, for training models from Base to Large, both pre-training and fine-tuning were conducted for a total of **91** epochs.
- In Section 4.1, for training models from Large to Huge, there were **2.56** billion and **512** million training samples, respectively.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed.
>**Q1**: *The paper is heavily symbolized, ... it severely hampers understanding of the paper's details ...*
**A1**: Thank you for the paper presentation suggestion. We've added a new diagram to our rebuttal pdf (Figure 1 in the rebuttal pdf) illustrating the three modifications. In the revised version, we'll include additional figures/diagrams to clarify these modifications and enhance explanations in Section 3, improving our presentation.
>**Q2**: *The organization of Section 4 is not concise, making it easy for readers to lose track.*
**A2**: Thank you for your suggestion on the paper presentation. In our revision, we will reorganize Section 4 to further improve the clarity, including providing an overview of the datasets used in the paper in a paragraph, and adding pointers to different figures/tables when introducing the datasets.
>**Q3**: *Although CRATE-α shows significant improvements over the original CRATE model, it still lags behind the state-of-the-art.*
**A3**: Please see our response to '**Q1: Comparison with ViT**' in the common response, indicating that under similar FLOPs, large CRATE-α-L/16 models nearly match ViT-L/16 performance.
>**Q4**: *If the scalability and interpretability of a white-box Transformer architecture does not offer substantial insights and improvements, practitioners might prefer models with stronger performance but lower interpretability.*
**A4**: We agree that performance/accuracy is an important metric for evaluating different models. However, we believe that interpretability is also an important property in machine learning models built for real-world applications, where reliability and trustworthiness are paramount, such as self-driving and robotics (among a long list of others). As we have shown in this work (including new experimental results described in **A3**), it is very possible to scale up white-box models that are designed from principles to achieve the same performance as black-box ones. Since, as we have shown, we can train white-box models with significantly improved mathematical and semantic interpretability properties compared to the usual transformer at negligible performance drop, we believe that our model may be well-suited for real-world applications in which such properties are necessary.
We will add a paragraph on discussing the trade-off between interpretability and performance to Section 5 Discussion in our revised version.
> **Q5**: *As previously mentioned, as shown on the right side of Figure 1, CRATE-α usually requires approximately twice the FLOPs to reach the performance level of ViT, putting it at a noticeable disadvantage.*
**A5**: see our response in **A3**.
>**Q6**: *How does the performance improvement of CRATE-α compare to the original CRATE? Neither the CRATE models in Table 1 nor Figure 1 were pretrained on ImageNet-21K. Why was this not included for a fair comparison?*
**A6**: In Figure 1 (left), **all four model variants are pretrained on IN21K and then fine-tuned on IN1K**. We can see that CRATE-α-B/32 (76.5% on IN1K) outperforms CRATE-B/32 (68.5% on IN1K) by 8.0%. We also tried training the original CRATE-L model on IN21K, but the optimization was unstable, so the CRATE-L model cannot be effectively scaled to larger datasets. This is also the motivation for proposing the CRATE-α model.
>**Q7**: *Lines 232-233 and Figure 3 describe the model’s training loss as predictable. Why not the validation loss, which is the primary concern in scaling laws and practical applications?*
**A7**: Thank you for your suggestion. We have plotted the validation loss for Figure 3; the validation loss curves follow a similar trend to the training loss curves in Figure 3. We will add these new results to our revised version.
>**Q8**: *Table 2 only shows the compute requirements for the pre-training stage. Why does it not include the fine-tuning stage? Considering the total computational effort, I would like to see a comparison of the two scaling strategies: CRATE-α-L/32 + CRATE-α-L/8 versus CRATE-α-L/14 + CRATE-α-L/14.*
**A8**: Thank you for your suggestions regarding the comparison. Since pre-training processes roughly 10x more training samples than fine-tuning, pre-training dominates the compute cost. Following your advice, we have summarized the total cost of pre-training and fine-tuning in Table 4 of our rebuttal pdf file. While the total computational cost of CRATE-α-L/32 + CRATE-α-L/8 is less than that of CRATE-α-L/14 + CRATE-α-L/14, the performance of the former is slightly better. This reinforces the benefit of reducing image token sequence lengths during pre-training for compute-efficient scaling.
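The claim that pre-training dominates the total compute can be seen with a back-of-the-envelope sketch (the 10:1 sample ratio is taken from the response above; the unit per-sample cost and the assumption that both stages have similar per-sample cost are illustrative simplifications, not the paper's exact accounting):

```python
# Back-of-the-envelope sketch of why pre-training dominates total compute.
# The 10:1 sample ratio is from the response above; treating pre-training and
# fine-tuning as having a similar per-sample cost is a simplifying assumption.

def training_flops(flops_per_sample: float, num_samples: float) -> float:
    """Total FLOPs of one training stage."""
    return flops_per_sample * num_samples

pretrain = training_flops(1.0, 10.0)   # 10x more samples seen in pre-training
finetune = training_flops(1.0, 1.0)

share = pretrain / (pretrain + finetune)
print(f"pre-training share of total compute: {share:.0%}")  # 91%
```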
> **Q9**: *How was the amount of training data determined? Was there a specific standard or a FLOPs constraint?*
**A9**: Regarding the number of training epochs, we mainly follow the setup in previous work: section C.1 of [YCT+2024], Table 13 in section C.5 of [YBP+2023], and section 4.1 of [TCJ2022]. For the number of training samples from large to huge models, we referred to Table 1 of [LWX2023].
We again thank you for your review, and hope we have provided satisfactory responses to your questions. Please let us know if you have further questions or comments.
[YCT+2024] Emergence of Segmentation with Minimalistic White-Box Transformers. Yaodong Yu, Tianzhe Chu, Shengbang Tong, et al. CPAL 2024.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is? Yaodong Yu, Sam Buchanan, Druv Pai, et al. arXiv:2311.13110. 2023.
[TCJ2022] DeiT III: Revenge of the ViT. Hugo Touvron, Matthieu Cord, Hervé Jégou. ECCV 2022.
[LWX2023] An Inverse Scaling Law for CLIP Training. Xianhang Li, Zeyu Wang, Cihang Xie. NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Regarding A3 and A5, the authors claim that *"that under similar FLOPs, large CRATE-α-L/16 models nearly match ViT-L/16 performance."* However, this assertion is not convincing when considered alongside Figure 1 (Right) and the response to 'Q1: Comparison with ViT' in the common response.
- At approximately 200G FLOPs, CRATE-α-L/16 achieves 84.6%, which means a further 7% increase is needed to match ViT's 85.2%. This gap should not be overlooked. The authors suggest that *"this narrowing performance gap from Base to Large model size suggests that CRATE-α can nearly match ViT’s performance in large-scale settings"*; however, it is observed in Figure 1 (Right) that as the model size increases, achieving such gains becomes increasingly challenging. For instance, CRATE-α-L/8 requires nearly **double** the FLOPs to achieve performance comparable to ViT-L/16.
- At smaller scales, CRATE-α appears to keep pace with ViT more easily, reaching comparable performance with a lower **absolute increase** in FLOPs. However, proportionally, CRATE-α consistently requires nearly **double** the increase in FLOPs to match ViT’s performance, as seen in the comparisons between CRATE-α-B/16 and ViT-B/32, as well as CRATE-α-L/14 and ViT-B/16.
Regarding A4, the authors reiterate that *"it is very possible to scale up white-box models that are designed from principles to achieve the same performance as black-box ones."* However, I would like to ask what practical **benefits** this white-box design (the so-called interpretability) has, aside from scaling up. For example, if this classification model performs poorly on a certain category, can this interpretability guide me in adjusting the model structure to improve performance in those categories? If the only advantage is scaling up, then such interpretability holds no real benefit, as black-box transformers can scale up more effectively (Figure 1 Right).
Regarding A6, the authors' explanation **conflicts** with the paper. Please refer to the last two lines of the Figure 1 caption: *"CRATE is trained only on ImageNet-1K, while CRATE-α and ViT are pre-trained on ImageNet-21K."* Why do the authors now explain that *"all four model variants are pretrained on IN21K and then fine-tuned on IN1K"*? I kindly request the authors to consider my question closely: since CRATE has not been pre-trained on ImageNet-21K while CRATE-α has been *"pre-trained on ImageNet-21K and then fine-tuned on ImageNet-1K"* (as indicated in the caption of Table 1), how can these two models be compared fairly?
---
Rebuttal 2:
Comment: We are grateful for your continued engagement with our rebuttal, and for your critical perspective on the work, which will no doubt improve it. Below we attempt to resolve the questions you posed.
> **Q10**: *At approximately 200G FLOPs, CRATE-α-L/16 achieves 84.6%, which means a further 7% increase is needed to match ViT's 85.2%. This gap should not be overlooked. The authors suggest that "this narrowing performance gap from Base to Large model size suggests that CRATE-α can nearly match ViT’s performance in large-scale settings"; however, it is observed in Figure 1 (Right) that as the model size increases, achieving such gains becomes increasingly challenging. For instance, CRATE-α-L/8 requires nearly double the FLOPs to achieve performance comparable to ViT-L/16. At smaller scales, CRATE-α appears to keep pace with ViT more easily, reaching comparable performance with a lower absolute increase in FLOPs. However, proportionally, CRATE-α consistently requires nearly double the increase in FLOPs to match ViT’s performance, as seen in the comparisons between CRATE-α-B/16 and ViT-B/32, as well as CRATE-α-L/14 and ViT-B/16.*
**A10**: We would like to clarify that **the performance gap between CRATE-α-L/16 (84.6% on IN1K) and ViT-L/16 (85.2% on IN1K) is 0.6%, instead of 7%**. We do agree that there is still a gap between CRATE-α-L/16 and ViT-L/16 (under similar FLOPs), but we would also like to highlight that this gap is much smaller than the previous gap between vanilla CRATE and ViT.
Meanwhile, to the best of our knowledge, there are no public results for ViT-L/8 in a similar setup. It is not clear whether decreasing the patch size from ViT-L/16 to ViT-L/8 could significantly improve the performance of ViT. In particular, the FLOPs of the transformer architecture are closely related to the computation in the self-attention module, which grows quadratically with token length. When the patch size is halved, the token length increases by a factor of 4, resulting in the computational cost of the self-attention module increasing by a factor of 16. We believe there is a trade-off between FLOPs consumption and accuracy gains in this context, and the transition from a patch size of 16 to 8 may not be optimal for CRATE-α and ViT.
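The quadratic dependence above can be checked numerically; the following minimal sketch assumes 224x224 inputs and a standard ViT-style attention FLOPs count (both illustrative assumptions, not the paper's exact accounting):

```python
# Illustrative sketch: halving the patch size quadruples the token length,
# so the n^2 terms in self-attention grow by a factor of 16.
# Assumptions: 224x224 inputs, standard ViT patching, class token ignored.

def num_tokens(image_size: int, patch_size: int) -> int:
    """Number of image patches along both axes, squared."""
    return (image_size // patch_size) ** 2

def attention_flops(n_tokens: int, dim: int) -> int:
    """Rough FLOPs for the attention maps: Q K^T plus the attention-weighted
    values, each about n^2 * d multiply-accumulates."""
    return 2 * n_tokens ** 2 * dim

d = 1024                     # embedding dimension (assumed, Large-scale model)
n16 = num_tokens(224, 16)    # patch size 16 -> 196 tokens
n8 = num_tokens(224, 8)      # patch size 8  -> 784 tokens

print(n8 // n16)                                          # 4x token length
print(attention_flops(n8, d) // attention_flops(n16, d))  # 16x attention cost
```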
If we first exclude models with a patch size of 8 (as there are no publicly available ViT-L/8 results), we can see that the gap on IN1K **narrows from 4.8%** between *CRATE-α-B/32 (76.5%) and ViT-B/32 (81.3%)* **to 0.6%** between *CRATE-α-L/16 (84.6%) and ViT-L/16 (85.2%)*. The latter pair also has similar FLOPs and throughput. This is why we claimed that "this narrowing performance gap from Base to Large model size suggests that CRATE-α can nearly match ViT’s performance in large-scale settings."
We would like to highlight that the main focus of this paper is to compare CRATE-α with CRATE, and to investigate whether it is possible to scale up white-box models and achieve competitive performance. We acknowledge that we did not conduct a comprehensive and thorough comparison between CRATE-α and ViT across various dimensions; Figure 1 (right) did not control for all of these variables, such as patch size. Per your suggestion, we will add more experimental results (e.g., including different patch sizes for both ViTs and CRATE-α) in our revised version. While we think this is an interesting direction to explore in the future, it is beyond the main focus of this paper.
Title: Discussion with Reviewer cPqJ (Part 1)
---
Rebuttal 3:
Comment: >**Q11**: *Regarding A4, the authors reiterate that "it is very possible to scale up white-box models that are designed from principles to achieve the same performance as black-box ones." However, I would like to ask what practical benefits this white-box design (the so-called interpretability) has, aside from scaling up. For example, if this classification model performs poorly on a certain category, can this interpretability guide me in adjusting the model structure to improve performance in those categories? If the only advantage is scaling up, then such interpretability holds no real benefit, as black-box transformers can scale up more effectively (Figure 1 Right).*
**A11**: Thank you for your question regarding the practical benefits of this interpretable white-box design. Firstly, we would like to emphasize that model performance (such as accuracy) is not the only metric by which to evaluate a model. As described in this work and shown in prior work on white-box models [GL2010, CLW+2018, ZTA+2019, YBP+2023], we believe that understanding how deep learning models work is also important for building trustworthy models. For example, 'advancing ongoing research in AI safety, including on the interpretability of AI systems' decision-making processes and on increasing the robustness of AI systems against misuse' from [Commitments2023], with similar messages in other sources [RSG2016, LSC+2023].
Meanwhile, regarding your question '*For example, if this classification model performs poorly on a certain category, can this interpretability guide me in adjusting the model structure to improve performance in those categories?*', there is a growing literature --- including our work --- demonstrating that interpretability leads to practical benefits across different settings. For example, [YCT+2024] and our work demonstrated that better internal interpretability leads to improved zero-shot segmentation, [MRM+2024] showed that one can leverage interpretability (based on sparse auto-encoders) to improve the accuracy of classifiers in language models, and [GES2024] demonstrated that one can use interpretability as a tool to remove spurious features.
Thank you again for your question on the motivation for building white-box models. Given that white-box/interpretable models are useful for building trustworthy and safe AI systems, our work aims to study whether it is possible to scale up such interpretable models and close their performance gap with black-box ones. Per your suggestion, we will add the above discussion on the practical benefits of interpretability to our revised version.
[RSG2016] "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. KDD 2016.
[LSC+2023] On the importance of interpretable machine learning predictions to inform clinical decision making in oncology. Sheng-Chieh Lu, Christine L. Swisher, Caroline Chung, et al. Frontiers in Oncology, 2023.
[Commitments2023] Voluntary AI Commitments. https://www.whitehouse.gov/wp-content/uploads/2023/07/Ensuring-Safe-Secure-and-Trustworthy-AI.pdf
[GL2010] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. ICML 2010.
[CLW+2018] Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds. Xiaohan Chen, Jialin Liu, Zhangyang Wang, Wotao Yin. NeurIPS 2018.
[ZTA+2019] Deep Network Classification by Scattering and Homotopy Dictionary Learning. John Zarka, Louis Thiry, Tomás Angles, Stéphane Mallat. arXiv:1910.03561. 2019.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction. Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Benjamin David Haeffele, Yi Ma. NeurIPS 2023.
[YCT+2024] Emergence of Segmentation with Minimalistic White-Box Transformers. Yaodong Yu, Tianzhe Chu, Shengbang Tong, Ziyang Wu, Druv Pai, Sam Buchanan, Yi Ma. CPAL 2024.
[GES2024] Interpreting CLIP's image representation via text-based decomposition. Yossi Gandelsman, Alexei A. Efros, Jacob Steinhardt. International Conference on Learning Representations (ICLR), 2024.
[MRM+2024] Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models. Samuel Marks, Can Rager, Eric J. Michaud, et al. https://arxiv.org/abs/2403.19647. 2024.
Title: Discussion with Reviewer cPqJ (Part 2)
---
Rebuttal 4:
Comment: > **Q12**: Regarding A6, the authors' explanation **conflicts** with the paper. Please refer to the last two lines of the Figure 1 caption: *"CRATE is trained only on ImageNet-1K, while CRATE-α and ViT are pre-trained on ImageNet-21K."* Why do the authors now explain that *"all four model variants are pretrained on IN21K and then fine-tuned on IN1K"*? I kindly request the authors to consider my question closely: since CRATE has not been pre-trained on ImageNet-21K while CRATE-α has been *"pre-trained on ImageNet-21K and then fine-tuned on ImageNet-1K"* (as indicated in the caption of Table 1), how can these two models be compared fairly?
**A12:** Thank you for your question, we apologize for the confusion by the unclear description in our submission. We would like to clarify that the comparison between CRATE and CRATE-α is fair. Specifically, the caption "CRATE is trained only on ImageNet-1K, while CRATE-α and ViT are pre-trained on ImageNet-21K" **applies only to Figure 1 (right), instead of both left and right figures**.
In particular, we would like to clarify that the training setups for the CRATE model in Figure 1 (left) and Figure 1 (right) are different:
- The results for CRATE-B/16 and CRATE-L/16 in Figure 1 (right) and Table 1 are directly obtained from the original CRATE paper [YBP+2023], where the models were trained only on IN1K. This serves as an intuitive comparison of different published results.
- The motivation for Figure 1 (left) is to ablate the effectiveness of our proposed three modifications. **It is an ablation study.** Therefore, we kept all training setups the same except for the model architecture. This is the reason — in **A6** of the rebuttal response — we mentioned that "all four model variants are pretrained on IN21K and then fine-tuned on IN1K."
Additionally, the results of **Fine-tuning classification datasets** under **Q2: Additional experimental results on real-world downstream tasks** in the common response (where we applied the same training setup) further validate the improvement of CRATE-α over CRATE under the same pre-training dataset (IN21K). In the revision, we will clarify the different training/dataset setups and make sure to deliver clear messages to the readers.
We again thank you for your review and valuable feedback, and hope we have provided satisfactory responses to your questions. Please let us know if you have further questions or comments.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction. Yaodong Yu, Sam Buchanan, Druv Pai, et al. NeurIPS 2023.
Title: Discussion with Reviewer cPqJ (Part 3)
---
Rebuttal Comment 4.1:
Comment: Thank you for the author's prompt clarification; that makes things a bit clearer. However, my initial question and previous comments were specifically concerning Figure 1 (Right) and Table 1, and did not involve Figure 1 (Left).
The same questions remain: if they have different training processes and training data, this *"intuitive comparison of different published results"* weakens the support for the paper's conclusions. A fair comparison is highly valued in the field of computer vision.
I strongly recommend that the authors include more equitable experimental results in the paper, as this would greatly enhance its quality. If such changes are not feasible at this stage, I believe my rating is fair and reasonable.
---
Rebuttal 5:
Comment: Thank you for the detailed response. I appreciate the author's candid acknowledgment of the limitations of the current work. Given that these limitations objectively exist, this discussion will not change my evaluation.
I would like to clarify two points:
- The 7% I mentioned is intended to highlight the relative improvement potential, calculated as $(85.2 - 84.6) / 84.6 \times 100 = 7 $%.
- I hope the author is aware that, as shown in Figure 1 (Right), the additional FLOPs required for CRATE-α to match the comparable performance of ViT increases significantly with scale.
---
Rebuttal 6:
Title: Response to Reviewer cPqJ
Comment: Thank you for your valuable and in-depth review, as well as the series of insightful discussions. Your feedback and suggestions will undoubtedly enhance the quality of the paper. We greatly appreciate your input.
Regarding the 'more equitable experimental results', according to the NeurIPS discussion policy, "*The discussion phase is meant to clarify these questions, rather than to provide further comments regarding the reviews.*" Due to limited time and resource constraints, we are not able to provide new experimental results under more equitable settings before the end of the discussion phase. We will conduct new experiments as per your suggestions and incorporate them into our revised version.
---
Summary: This paper studies the scalability problem of white-box transformer CRATE and proposes CRATE-$\alpha$ to enhance the scaling ability of CRATE. To be specific, the authors propose three strategic but minimal modifications for the CRATE model architecture: Overparameterized sparse coding block, Decoupled dictionary, and Residual connection. Extensive experiments across different datasets and settings demonstrate the effectiveness of the proposed approach.
Strengths: 1. It is quite meaningful to study white-box transformers and try to increase their scalability which promises its application in potential usage.
2. Comprehensive evaluation. The proposed method is validated on multiple datasets and tasks which demonstrate the scalability of CRATE-$\alpha$.
3. The presentation is clear. Overall, the paper is well-organized and the method is easy to follow.
Weaknesses: 1. Performance gaps with vanilla ViT. As shown in Figure 1, CRATE-$\alpha$ still lags behind vanilla ViT across different scales remarkably which may limit its application in real scenarios. Besides, it is suggested to compare with vanilla ViT in computational costs, number of parameters, and inference speed as well.
2. According to the model configuration, the number of parameters of CRATE-$\alpha$ is almost four times that of CRATE, and it is strange to consider these as models of the same scale. Moreover, how do the proposed new modules contribute to the performance gain of CRATE-$\alpha$? Is it simply because of larger models?
3. Although the authors made lots of efforts in scaling CRATE to CRATE-$\alpha$, they only spent limited space in the paper to discuss the interpretability of the proposed method. This short paragraph may not be enough to justify why the authors are motivated to study the white-box transformers.
Technical Quality: 3
Clarity: 3
Questions for Authors: Apart from the questions in weakness above, another question is:
why is the performance on dense prediction tasks so bad?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed.
>**Q1**: *Performance gaps with vanilla ViT. As shown in Figure 1, CRATE-α still lags behind vanilla ViT across different scales remarkably which may limit its application in real scenarios. Besides, it is suggested to compare with vanilla ViT in computational costs, number of parameters, and inference speed as well.*
**A1**: Thank you for your suggestions on comparison between ViT and CRATE-α. Please refer to our response to '**Q1: Comparison with ViT**' in the common response.
>**Q2**: *According to the model configuration, the number of parameters of CRATE-α is almost four times as CRATE and it is strange to consider those as the same scale models. Moreover, how do the proposed new modules contribute to the performance gain of CRATE-α? Is it simply because of larger models?*
**A2**: Considering the number of parameters, CRATE-α-B/16 (72.3M) is comparable to CRATE-L/16 (77.6M), as shown in Table 4 of the Appendix. CRATE-α-B/16 still surpasses CRATE-L/16 by 9.9% (81.2% vs 71.3%) on ImageNet-1K (see Table 1 of the manuscript). Meanwhile, from Table 1 of the manuscript, we also observe that increasing the size of CRATE from base (22.8M) to large (77.6M) results in only a 0.5% improvement, demonstrating diminishing returns at the cost of 3.4 times more parameters. In contrast, we found that changing from CRATE-B/16 to CRATE-α-B/16 leads to a much more significant improvement. Moreover, we also tried directly scaling CRATE-L to a larger dataset (ImageNet-21K), but the experiment showed it is challenging to train CRATE-L directly due to unstable optimization, and its performance is even worse than just using ImageNet-1K.
Therefore, combining these observations, we can see that the design of CRATE-α indeed leads to improved performance over CRATE, which is not simply due to a larger number of parameters. Without a carefully designed architecture, naively increasing parameters might not lead to improved performance. Thank you for your question, and we will add the above discussion to our revised version.
>**Q3**: *Although the authors made lots of efforts in scaling CRATE to CRATE-α, they only spent limited space in the paper to discuss the interpretability of the proposed method. This short paragraph may not be enough to justify why the authors are motivated to study the white-box transformers.*
**A3**: Thank you for your suggestion on the interpretability part of the paper presentation. *The main focus and motivation* of this paper is to investigate whether it is possible to scale up white-box deep learning models to achieve performance competitive with black-box ones such as ViTs, which is a *long-standing problem for white-box deep learning models* [GL2010, ZTA+2019, DLZ+2022, YBP+2023]. Our work *provides an affirmative answer* to this question.
Meanwhile, previous work [YBP+2023, PBW+2024] has already explored the mathematical interpretability of white-box transformers. Our newly designed white-box model CRATE-α follows the same white-box design principles, and its operators and architecture are derived by optimizing the sparse coding rate reduction in Eq. (1) of the manuscript. Therefore, the mathematical interpretability [YBP+2023, PBW+2024] developed in previous work is inherited by this work. While it would be highly interesting to provide a more comprehensive study of the interpretability of CRATE-α, such exploration is out of scope for the present work and left for future work.
Per your suggestions, we have conducted more experiments on the zero-shot segmentation (following the same setup in Section 4.4), including segmentation visualization on more images as shown in Figure 5. We will add these new experimental results to the appendix of our revision.
>**Q4**: *Why the performance in dense prediction tasks is so bad?*
**A4**: The reported results in Table 3 of the manuscript are evaluated **under the zero-shot segmentation setting**. For the results in Table 3, we used MaskCut [WGY+2023], a self-supervised segmentation method, to study the interpretability of different models (ViT, CRATE, CRATE-α). In particular, we directly extracted features using the learned models and applied MaskCut to perform segmentation; we did not perform supervised training on segmentation tasks. Therefore, the segmentation results are not as competitive as those of standard methods, which are typically trained with segmentation labels. Thank you for your suggestion. We will add a paragraph highlighting this difference in our revised version.
We again thank you for your review, and hope we have provided satisfactory responses to your questions. Please let us know if you have further questions or comments.
[WGY+2023] Cut and learn for unsupervised object detection and instance segmentation. Xudong Wang, Rohit Girdhar, Stella X Yu, et al. CVPR 2023.
[GL2010] Learning fast approximations of sparse coding. Karol Gregor and Yann LeCun. ICML 2010.
[ZTA+2019] Deep Network Classification by Scattering and Homotopy Dictionary Learning. John Zarka, Louis Thiry, Tomás Angles, et al. arXiv:1910.03561. 2019.
[DLZ+2022] Revisiting sparse convolutional model for visual recognition. Xili Dai, Mingyang Li, Pengyuan Zhai, et al. NeurIPS 2022.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction. Yaodong Yu, Sam Buchanan, Druv Pai, et al. NeurIPS 2023.
[PBW+2024] Masked Completion via Structured Diffusion with White-Box Transformers. Druv Pai, Sam Buchanan, Ziyang Wu, et al. ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: The reviewer appreciates the detailed response provided by the authors. Most of the concerns are addressed and the rating is upgraded accordingly. One suggestion the authors could consider is adding experiments with supervised training on segmentation tasks, which is important for examining the dense prediction ability of white-box transformers.
---
Rebuttal 2:
Comment: Thank you again for thoroughly reviewing our manuscript and response. We are grateful for your valuable feedback on our work, which will no doubt improve it. We will add the results of supervised segmentation tasks to the main body of the revised version.
Title: Response to Reviewer yemz
---
Summary: This paper aims to train CRATE at a large scale for vision tasks. The contribution includes an architecture modification to the sparse coding block and a light training recipe. The new model, called CRATE-alpha, shows large improvements compared with the previous CRATE model. The experiments also show promising results on unsupervised object segmentation.
Strengths: - The paper presents a careful study to enhance the performance of CRATE. The paper introduces key modifications to the existing CRATE, including the sparse coding block, decoupled dictionary, and residual connection.
- The paper investigates its scaling behavior and shows promising improvements of the newly introduced CRATE-alpha.
- The paper presents in-depth experiments, such as the scaling analysis on ImageNet. The paper also shows improvements for semantic interpretability.
- The figures and model architecture are well-illustrated.
Weaknesses: Overall I find the paper is well-presented and solid. Below are my minor concerns for this paper:
- The paper is highly centered on improving CRATE. Most of the findings might not be transferable to other models. This may limit its impact on the general audience in the NeurIPS community.
- It would be interesting to further understand its potential downstream applications (not only vision but also language data)
Technical Quality: 2
Clarity: 3
Questions for Authors: see weakness
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your review. Below we attempt to resolve the questions you posed.
>**Q1**: *The paper is highly centered on improving CRATE. Most of the findings might not be transferable to other models. This may limit its impact on the general audience in the NeurIPS community.*
**A1**: We agree that some of our findings mainly apply to white-box models (e.g., white-box transformers like CRATE [YBP+2023]). However, there is growing interest in developing white-box transformers for better interpretability and transparency across a wide range of tasks and domains, including image segmentation [YBP+2023], self-supervised masked autoencoders [PBW+2024], and integrated sensing and communications [ZL2024], etc. We believe that our findings and insights could be helpful for developing white-box transformers for a wide range of applications and tasks. Moreover, our results on the scalability of white-box transformers could also shed light on scaling up a broader class of white-box deep neural networks, such as white-box ISTA networks and their variants [GL2010, SNT2018, CLW+2018, ZTA+2019, DLZ+2022] designed via unrolled optimization. In summary, we believe that the findings and insights of this work could benefit a broad audience in the NeurIPS community interested in building more interpretable and performant deep learning models.
Thank you for your suggestion on discussing the impact of this work; we will add the above discussion to our revision.
>**Q2**: *It would be interesting to further understand its potential downstream applications (not only vision but also language data)*
**A2**: Thank you for your suggestion. Please refer to our response to '**Q2: Additional experimental results on real-world downstream tasks**' and '**Q3: Performance of CRATE-α on NLP task**' in the common response.
We again thank you for your review, and hope we have provided satisfactory responses to your questions. Please let us know if you have further questions or comments.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction. Yaodong Yu, Sam Buchanan, Druv Pai, et al. NeurIPS 2023.
[PBW+2024] Masked Completion via Structured Diffusion with White-Box Transformers. Druv Pai, Sam Buchanan, Ziyang Wu, et al. ICLR 2024.
[ZL2024] White-Box 3D-OMP-Transformer for ISAC. Bowen Zhang, Geoffrey Ye Li. arXiv:2407.02251.
[GL2010] Learning fast approximations of sparse coding. Karol Gregor and Yann LeCun. ICML 2010.
[SNT2018] Supervised deep sparse coding networks. Xiaoxia Sun, Nasser M Nasrabadi, Trac D Tran. ICIP 2018.
[CLW+2018] Theoretical Linear Convergence of Unfolded ISTA and its Practical Weights and Thresholds. Xiaohan Chen, Jialin Liu, Zhangyang Wang, et al. NeurIPS 2018.
[ZTA+2019] Deep Network Classification by Scattering and Homotopy Dictionary Learning. John Zarka, Louis Thiry, Tomás Angles, et al. arXiv:1910.03561. 2019.
[DLZ+2022] Revisiting sparse convolutional model for visual recognition. Xili Dai, Mingyang Li, Pengyuan Zhai, et al. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: comments
Comment: Thank you. I have no further comment at this point.
---
Rebuttal 1:
Rebuttal: ### **Common response to all reviewers**:
We thank all reviewers for their insightful feedback. We are especially encouraged by their recognition of:
- The novelty and impact of our central ideas (` Reviewer YH3U `: “The paper presents a novel architecture, CRATE-α, …, enhancing scalability without compromising interpretability.” ` Reviewer cPqJ `: “successfully combining them with a white-box Transformer is a noteworthy achievement. The integration not only works effectively but also yields commendable results.”)
- The benefits of the new white-box architecture we have proposed (` Reviewer 8F4z `: “shows promising improvements of the newly introduced CRATE-alpha.”; ` Reviewer yemz `: “Comprehensive evaluation.”; ` Reviewer cPqJ `: “The proposed models demonstrate significant improvements compared to the previous generation of CRATE models.”; ` Reviewer YH3U `: “The authors provide a wealth of empirical evidence supporting the effectiveness of CRATE-α”)
- The quality of the exposition (` Reviewer 8F4z `: “The figures and model architecture are well-illustrated.”; ` Reviewer yemz `: “the paper is well-organized and the method is easy to follow.”)
- The motivation of our work (` Reviewer yemz `: “It is quite meaningful to study white-box transformers and try to increase their scalability …”)
In the remainder of this message, we address certain questions raised by the reviewers.
> ### **Q1: Comparison with ViT**
**A1**: We agree that the proposed white-box CRATE-α is not significantly outperforming ViT on classification tasks. However, based on the newly designed dictionary learning block and training recipe, we were able to significantly improve over the vanilla white-box CRATE and nearly closed the gap between white-box transformers and ViTs, especially when the model size becomes larger.
**Performance comparison.** In Figure 1 (right), ViT was fine-tuned at 384x384 resolution, while CRATE was at 224x224. To more accurately compare CRATE-α and ViT at larger model sizes, we conducted experiments on CRATE-α-L/16 with an image resolution of 336, nearly matching the setup of ViT-L/16. Both models used a similar amount of FLOPs: 210G for CRATE-α-L/16 compared to 191G for ViT-L/16. The throughput, or images processed per second, was also comparable: 35.53 for our model versus 35.56 for ViT-L/16. The accuracy of CRATE-α-L/16 reached 84.6%, closely approaching ViT’s 85.2% under similar conditions. Meanwhile, combined with the trend in Figure 1 (right), this narrowing performance gap from *Base* to *Large* model size suggests that CRATE-α can nearly match ViT’s performance in large-scale settings. Besides, CRATE-α inherits the mathematical interpretability of white-box models and also achieves much better semantic interpretability, as evaluated by zero-shot segmentation.
**Efficiency comparison.** We would like to thank `Reviewer yemz` for the suggestion to compare with ViT in terms of computational cost, number of parameters, and inference speed. These comparisons are summarized in Table 1 of our rebuttal pdf, where CRATE-α matches the efficiency of ViT while achieving similar accuracy. With the same number of layers and embedding dimension, CRATE-α has fewer parameters than ViT, and the FLOPs/throughput of CRATE-α are slightly higher than ViT’s.
We will add the above discussion and new experimental results to our revised version.
> ### **Q2: Additional experimental results on real-world downstream tasks**
**A2:** We have conducted new experiments on supervised segmentation tasks and fine-tuning on additional downstream datasets.
**Segmentation.** We compare the performance of CRATE and CRATE-α for the segmentation task on ADE20K dataset to study the benefits of the newly designed architecture CRATE-α compared to CRATE. Following the setup of [RWZ+2023], we compared them in segmentation tasks, focusing on direct comparisons due to time and resource constraints, without extensive parameter tuning. Results demonstrate that CRATE-α consistently outperforms CRATE across all key metrics, with both models pre-trained on IN21K. These findings indicate significant performance gains in vision tasks beyond classification. We will include these results in our revised version.
| Model | Scope | mIoU | mAcc | aAcc |
| ------------ | ------ | ----- | ----- | ----- |
| CRATE-α-B/32 | global | 35.35 | 45.28 | 77.63 |
| CRATE-B/32 | global | 30.28 | 39.29 | 75.21 |
**Fine-tuning classification datasets.** We include additional experimental results evaluating CRATE-α on downstream datasets. From Table 2 of our rebuttal pdf, we find that CRATE-α consistently outperforms CRATE (both models are pre-trained on IN21K), and CRATE-α achieves improved performance as model size increases.
We will add the above new results to our revised version.
> ### **Q3: Performance of CRATE-α on NLP task**
**A3**: Thanks to `Reviewer 8F4z` and `Reviewer YH3U` for suggesting we explore CRATE-α in language tasks. We conducted new experiments for CRATE-α with autoregressive training on OpenWebText, following the setup in [Karpathy2022]. We compare CRATE-α models (57M and 120M) with CRATE and GPT-2, where the result of CRATE is from [YBP+2023]. Table 3 in our rebuttal pdf shows that CRATE-α significantly improves over CRATE in language modeling. Due to limited time and resources, compared to the total number of iterations (600K) used in CRATE, we only finished 80% of the total iterations for CRATE-α-small and 55% for CRATE-α-base; CRATE-α still showed notable improvements. We will add these new results to our revised version.
[Karpathy2022] NanoGPT. Andrej Karpathy. https://github.com/karpathy/nanoGPT. 2022.
[YBP+2023] White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is? Yaodong Yu, Sam Buchanan, Druv Pai, et al. arXiv:2311.13110. 2023.
[RWZ+2023] TinyMIM: An Empirical Study of Distilling MIM Pre-Trained Models. Sucheng Ren, Fangyun Wei, Zheng Zhang, et al. CVPR 2023.
Pdf: /pdf/fd3708ab1ee39ade182fcb9d67146d68471d2d44.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
TSDS: Data Selection for Task-Specific Model Finetuning | Accept (poster) | Summary: This paper proposes a method for data selection in foundation model fine-tuning. The proposal contains a distribution alignment loss based on optimal transport to capture the discrepancy between the selected data and the target distribution, a regularizer to encourage the diversity of the selected data, and kernel density estimation to reduce the negative effects of near-duplicates among the candidate data. Experiments on fine-tuning a language model are reported.
Strengths: 1. The problem studied in this paper is interesting, since how to select data and how to improve data quality are important for the training and fine-tuning of foundation models.
2. The proposed method which considers distribution discrepancy minimization, diversity, and near-duplicates, is technically sound.
Weaknesses: 1. The novelty and contribution of the proposed method are limited. Data selection is important and well studied in the machine learning community. For example, in active learning, we need to select examples to label according to some metrics; in domain adaptation, we need to select data to facilitate model reuse. Some widely adopted methods can be applied to the problem of data selection for foundation models, and the authors didn't provide a comprehensive study and comparison. Moreover, the techniques adopted in the proposal are also widely used.
2. For the experiments, the authors only conduct experiments on the language model, can the proposal be applied to other foundation models, such as the vision-language model?
3. It seems that the random selection method can also achieve good performance, so I am wondering about the difficulty of the problem; perhaps the performance can be improved with some trivial techniques.
4. The time cost between fine-tuning with the full dataset and the selected dataset should be reported.
Technical Quality: 2
Clarity: 3
Questions for Authors: As discussed in the Weakness part.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We believe that you have missed key points of our work and we would like to correct several factually incorrect points in the provided review.
> W1 - novelty
To the best of our knowledge, our work is the first one that presents a unified framework for task-specific data selection which considers distribution matching, diversification, and deduplication. In addition, our formulation is general, supporting any metric spaces and distance functions where efficient nearest-neighbor search algorithms exist.
> W2 - extension to VLMs
We disagree that the focus on language models poses a weakness of our work. For task-specific data selection, it is common practice to use large language models for the evaluation [1][2]. In addition, we already show the effectiveness of our framework in diverse experimental settings. Evaluation using vision-language models is an interesting extension that can be studied in future works.
>W3 - performance of random selection
This statement is factually incorrect. Our method is significantly better than random selection with large gaps. In Table 2, we can see that our method consistently outperforms random selection, with a gap of 3.2 F1 points on average for task-specific finetuning. In Table 4, we show that our method consistently outperforms random selection, with a gap of 2.9 F1 points on average for domain-specific continued pretraining.
>W4 - finetuning cost
Finetuning on the full dataset takes 80 hours on an A100 GPU with 40G memory. The cost of finetuning on the selected set is based on the selection ratio. For example, if the selection ratio is 1%, the finetuning time is 80 * 1% = 0.8 hours. We will add the details to the Appendix.
[1] Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." International Conference on Machine Learning. PMLR, 2024.
[2] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." Advances in Neural Information Processing Systems 36 (2023): 34201-34227. | Summary: This paper formulates data selection for task-specific fine-tuning as an optimization problem based on optimal transport for distribution alignment. It proposes two KNN-based implementation methods and evaluates them on datasets for task-specific instruction fine-tuning and domain-specific continued pretraining. The experimental results demonstrate that their methods are more effective than the baseline systems (LESS and DSIR).
Strengths: 1. The paper formulates data selection as an optimal transport problem, providing a detailed problem definition and a closed-form solution. Additionally, it proposes using Kernel Density Estimation to address the issue of near-duplicates.
2. The authors introduce KNN-Uniform and KNN-KDE algorithms for data selection, showing that their performance is superior to the baseline systems in both task-specific instruction fine-tuning and domain-specific continued pretraining experimental setups.
Weaknesses: **Regarding the methodology:**
1. The connection between data selection and the optimal transport problem is not clearly established. Despite mentioning it in lines 114-115 of Section 3, it remains unclear why data selection can be considered an optimal transport problem.
2. Much of the paper is based on LESS, including the representation of samples and the task definition. However, there is minimal mention of LESS, making it challenging to understand without prior knowledge of LESS.
3. The method still relies on M query samples for a specific task, which poses certain limitations.
**Regarding the experimental section:**
4. The experimental section contains too many specific settings. For instance, special settings mentioned in lines 248 and 286 make it difficult to determine how these parameters were chosen, even after reviewing the appendix.
5. The two experimental setups are inconsistent in task-specific instruction fine-tuning and domain-specific continued pretraining. In Table 2, the Ratio of 0.5%-5% can be understood as the number of data samples selected. However, in Table 4, the 1K, 3K, and 10K seem to refer to the number of query samples, but there is no comparison of the number of selected samples.
6. There is a lack of comparison with various baseline systems. Only one baseline system is used for comparison, and its performance differs from that reported in the original paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: They acknowledge one limitation in conclusion: the method still relies on M query samples for a specific task. It is also raised in my comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We address the concerns and answer the questions below:
>W1 - connection to optimal transport
As mentioned in lines 38-44, we want the selected data to match the distribution of the representative data from the target distribution, and optimal transport is a powerful tool to measure distribution alignment. We will highlight this connection in Section 3.1 when we introduce the optimization problem.
>W2 - discussion of LESS
We will add more discussion on LESS to related works. Specifically, LESS computes the gradient similarity between each candidate and all the query examples, and the maximum similarity is the score for ranking. Then the top-ranked candidates are selected. A major difference between our method and LESS is that our method matches the distributions, while LESS takes the top ones based on aggregated statistics which can make it focus on a particular set of query examples.
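For illustration, the ranking behavior we contrast with above can be sketched in a small toy example (this is purely our own illustrative NumPy reconstruction of the scoring rule we describe, not the actual LESS implementation):

```python
import numpy as np

def max_similarity_topk(cand, queries, k):
    """Score each candidate by its maximum cosine similarity to any query
    example, then take the top-k. This mirrors the aggregated-statistic
    ranking described above (illustrative sketch only)."""
    cand_n = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    q_n = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    scores = (cand_n @ q_n.T).max(axis=1)  # best match over all queries
    return np.argsort(-scores)[:k]

# Toy data: two query directions, but all candidates cluster near the first.
rng = np.random.default_rng(0)
queries = np.array([[1.0, 0.0], [0.0, 1.0]])
cand = rng.normal([1.0, 0.0], 0.1, size=(20, 2))
top = max_similarity_topk(cand, queries, 5)
# Every selected candidate is closest to the first query example: the
# max-score ranking can concentrate on a subset of the queries, which is
# the behavior that distribution matching is designed to avoid.
```

The point of the sketch is only to show how a top-k rule over aggregated similarity scores can ignore part of the query distribution.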
>W3 - need M query examples
In practice, M can be a budget limit on how much human effort a user is willing to put into the selection process. We will expand our discussion in the conclusion section to include this point.
>W4 - how to set parameters
Note that we only have two effective parameters ($C$ is a constant to make sure that the transport cost and $G(\gamma)$ are on the same scale). The way we set the hyperparameters is as follows:
- We set $C$ to 5 when the embeddings are normalized.
- We set $h$ to the maximum distance between 10 hand-crafted near-duplicates. The intuition is that the points within the distance of h will be considered as near-duplicates and the probability assigned to them will be reduced.
- $\alpha$ can be any value between 0.05 - 0.95, and the performance is not sensitive to it as long as it is not too small or too large (see Appendix E.3.1). In practice, we can also use a validation set and a small surrogate model to guide the parameter selection.
We will add the details to the appendix.
>W5 - evaluation using different sample sizes for continued pretraining
We did experiments that selected 100K and 300K examples for domain-specific continued pretraining (see Table 2 in the PDF attached to the global response). The observation is similar to the 1M sample size: our method is either better than or comparable to the baseline. We will add this result to the appendix.
>W6 - baseline systems
We compare with the current state-of-the-art for each setting. The discrepancy between our reported numbers and the numbers in the LESS paper is due to the updates to open-instruct. We followed LESS to use open-instruct for the evaluation, and we used the updated version. In fact, we discussed this finding with the authors of LESS and confirmed the discrepancy reasons using their old open-instruct scripts. We will make the evaluation script public so that people can reproduce our numbers.
---
Rebuttal Comment 1.1:
Title: Reply by the Reviewer
Comment: Thank you for your reply. I have read your response and other reviewers' comments. I will keep my score. | Summary: This paper presents a method for data selection for task-specific model finetuning. The method relies on a small, representative sample of data from the target task to select matching, relevant data from a corresponding corpus. The method relies on framing this task as an optimization problem, utilizing an optimal-transport-based distribution alignment loss and a KDE-based regularizer to avoid oversampling (near-)duplicates.
The authors show this method to be highly scalable and data efficient, being competitive with, and often outperforming state-of-the-art methods for domain-specific continued pretraining and task-specific instruction tuning.
Strengths: - The paper rigorously presents and tests the proposed method, with a detailed theoretical motivation.
- Sections 2-4 are well structured and introduce the method in a clear, progressive way.
- Performance results, especially for very small sample sizes, strongly support the utility of this method.
Weaknesses: - Section 5.1: Given how different some of the performances are between llama and mistral, including other LLMs may give a more complete picture of the efficacy of this method.
- Section 5: Efficiency claims would benefit from context. How does the 28 hour initialization time compare to other SOTA methods on this dataset? How does it scale after initialization is done when repeatedly drawing task-specific samples compared to other methods?
Technical Quality: 4
Clarity: 4
Questions for Authors: Again, comparing efficiency with other SOTA methods on the datasets used in this paper would be helpful in better contextualizing the performance presented.
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the feedback. Here is our response to the questions and concerns:
>W1 - other LLMs
Thank you for your suggestions. We chose llama-7b and mistral-7b since they achieved state-of-the-art performance in various tasks at the time of submission among the 7b-size models and are shown to be effective with LoRA finetuning. Our experiments in Section 5.1 demonstrate that the benefits of our data selection solution hold across both models. A detailed study of other models could be explored in a future, extended version of this work.
> W2 & Q1 - runtime
On the same 150M examples (used in the experiment in Section 5.2), DSIR takes 18 hours to initialize. Note that this is a one-time cost for the data repository and the index we build can be reused across tasks. Although our method requires an additional 10 hours to initialize compared to DSIR, it achieves an average gain of 1.92 F1 points when provided with 1K representative examples from the target task.
For the task-specific selection after initialization, our method takes 0.11 hours and DSIR takes 0.13 hours when taking 1M examples guided by 10K query examples. The task-specific selection after initialization scales linearly with respect to the number of query examples, and is not affected by the number of examples we want to sample (except for the IO cost).
For the experiment in Section 5.1, the initialization time for both LESS and our method is 54 hours, dominated by the gradient computation time for the 270K candidates. For each task-specific selection, both take less than 1 minute.
We will add the discussion above to the appendix. | Summary: This paper proposes task-specific training data selection for language model fine-tuning. Given a (small) set of representative examples for a task and a large set $D$ of possible training examples, the proposed method uses (regularized) optimal transport to assign a probability distribution over $D$ that matches the distribution of representative examples while also encouraging diversity among the elements of $D$ assigned a nonzero probability.
The authors prove that with a certain choice of regularization function, this is equivalent to (an adaptive version of) $k$-nearest neighbor selection of candidate data similar to the representative examples. Since $k$NN treats near-duplicates as distinct examples (which would decrease diversity of the selected data), the paper additionally introduces another regularization term based on kernel density estimation; the optimal transport with this regularization is a weighted $k$NN that intuitively accounts for the frequency of near-duplicates for each example.
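To check my understanding of the reweighting intuition, a minimal self-contained sketch (my own illustrative code, not the paper's exact estimator): inverse kernel-density weights shrink the influence of clustered near-duplicates relative to isolated points.

```python
import numpy as np

def inverse_kde_weights(points, h):
    """Gaussian-kernel density estimate at each point; near-duplicates sit
    in high-density regions, so their inverse-density weight is small
    (illustrative of the intuition, not the paper's formulation)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * h ** 2)).sum(axis=1)
    return 1.0 / density

pts = np.array([[0.0, 0.0], [0.01, 0.0], [0.0, 0.01],  # three near-duplicates
                [5.0, 5.0]])                            # one isolated point
w = inverse_kde_weights(pts, h=0.1)
# The isolated point gets a larger weight than each of the near-duplicates,
# so sampling proportionally to the weights does not overcount the cluster.
```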
Strengths: - Good data selection is an important problem given that today's models are both expensive to fine-tune and very sample-efficient *if* they are given the "correct" high-quality fine-tuning data [1]. Most high-performing efforts still tweak the composition of these small task-specific datasets by hand. This paper has an interesting new take on framing task-specific data selection as an optimal transport problem between representative task examples and a large pool of candidate training data.
- Theorems 3.1 and 3.2 shows that with certain regularization terms, the optimal transport selection procedure is equivalent to certain variations of $k$-nearest-neighbor. This allows for efficient computation of the optimal data selection under this objective.
- The proposed approach can naturally be combined with approximate nearest neighbor search methods for efficiency.
- Strong empirical results showing that the proposed selection procedure can even outperform tuning with the full candidate dataset.
- The experiments include standard deviations across three runs, giving a sense of how big the gains are compared to noise.
[1] Zhou, Chunting, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma et al. "LIMA: less is more for alignment." In Proceedings of the 37th International Conference on Neural Information Processing Systems, pp. 55006-55021. 2023.
Weaknesses: - Missing an ablation using embeddings instead of gradients, or any other distance function for the examples.
- Missing several relevant data selection baselines that also encourage diversity, e.g. Deita [1], QDIT [2], and methods based on DPPs [3].
- Changing the data mix changes the optimal learning rate (e.g., since it changes the scale of the loss function at initialization). The paper compares models trained on different data mixes with the same learning rate, but the fair comparison is optimal-vs-optimal. It's not clear from the experiments whether the reported gains are due to the learning rate being more optimal for the selected data mix, especially since the metric used to select the data is based on the gradients of a model.
[1] Liu, W., Zeng, W., He, K., Jiang, Y., & He, J. What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning. In The Twelfth International Conference on Learning Representations.
[2] Bukharin, Alexander, and Tuo Zhao. "Data diversity matters for robust instruction tuning." arXiv preprint arXiv:2311.14736 (2023).
[3] Wang, P., Shen, Y., Guo, Z., Stallone, M., Kim, Y., Golland, P., & Panda, R. (2024). Diversity Measurement and Subset Selection for Instruction Tuning Datasets. arXiv preprint arXiv:2402.02318.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It might be helpful to have some (simple) intuition after L140 explaining why regularizing distance to the uniform transport encourages diversity.
- If the optimal transport formulation is equivalent to a certain type of $k$NN, why not just present the method as a type of $k$NN? $k$NN has a long history in data selection going back to at least Wilson (1972). It's not clear what the optimal transport discussion buys other than added complexity.
- Given the optimal transport framing, I think there should be some discussion of other (efficient) regularized optimal transport algorithms, such as Sinkhorn? [2]
- If I understand L252--L254 correctly, the effective dataset size for the proposed method is actually up to 4x the reported size, because the data are resampled from the computed distribution each epoch. Does LESS (the baseline) get the same advantage? I.e., does LESS get to use 4x the data or do some kind of resampling?
[1] Wilson, Dennis L. "Asymptotic properties of nearest neighbor rules using edited data." IEEE Transactions on Systems, Man, and Cybernetics 3 (1972): 408-421.
[2] Cuturi, Marco. "Sinkhorn distances: Lightspeed computation of optimal transport." Advances in neural information processing systems 26 (2013).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper includes a reasonable discussion of its limitations, but it could be improved by discussing some of the additional limitations with the empirical results mentioned above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback. We address the concerns and clarify the questions as follows:
>W1 - ablation study using embeddings
We proposed a framework where the data can be embedded in any metric space with any distance function that supports efficient nearest neighbor search. Finding the best choice of embedding and distance is orthogonal to our study. Our choice of using gradients for task-specific finetuning is motivated by LESS [1], a state-of-the-art method for task-specific finetuning at the time of submission, which shows that similarities of gradients are better approximations of influence on task-specific performance than similarities of embeddings. Based on their observations, we further show that applying our framework using exactly the same encoding and distance function yields even better results due to our distribution matching and diversification. For the experiments on domain-specific continued pretraining, we use sentence embeddings and also show strong results.
>W2 - additional methods
Thanks for the pointers. These methods target different settings in the broader data selection space and we find them not directly comparable to the setting we consider, i.e., task-specific data selection where not all candidates are relevant. We will add a discussion on these techniques in related works, as their diversification methods can be of interest to the reader.
>W3 - hyperparameter setting
We follow the convention of the data selection literature [1, 2, 3] to use the same hyperparameters in the evaluation. We use the hyperparameters recommended by previous works (see the experimental settings in Section 5), and they are not tailored to our selected data. Given that our methods achieve consistently good performance, it is unlikely that our method outperforms the others due to a more favorable set of hyperparameters.
>Q1 - intuition behind uniform transport
Thank you for the suggestion. We will add a discussion after L140 to highlight that uniform transport represents the most diverse case where every example receives the same amount of probability mass. Therefore, we use it as a reference point to encourage the distribution to be close to it. We can gain more intuition from Theorem 3.1, where we will add a discussion stating that as we increase the weight of the regularizer, the optimal K also increases.
>Q2 - why use optimal transport instead of presenting the method as knn
The key innovation of our framework goes beyond the kNN algorithms themselves. Our framework considers two essential objectives in data selection: distribution matching and diversification [4]. By formulating an optimization problem, we integrate these goals while also taking near-duplicates into account. The formulation also allows us to study the optimality of the problem and have optimal solutions (Theorem 3.1 and 3.2). In our framework, kNN serves as a tool for finding the optimal solutions.
>Q3 - other efficient optimal transport algorithms
Thanks for the suggestion. Incorporating algorithms like Sinkhorn for even further efficient computation is an interesting direction. We will add a discussion to related works.
>Q4 - effective dataset size
- We sample from the computed distribution with replacement every step. Therefore, the total number of unique examples can be up to 4x the number of examples used per epoch, though the actual number of unique examples is much lower since examples with high probability mass tend to be repeatedly sampled.
LESS is a deterministic selection method, and we follow the setting in their paper to select a fixed set and use the selected set for 4 epochs. Then for fair comparison, when evaluating our method, we train for the same number of steps with the same batch size.
- It's important to note that having more unique examples does not necessarily lead to better results, since many candidate examples may not be relevant to the task. For instance, the "Full" method, which has seen all candidate examples, performs worse than both LESS and our method in most cases.
- To demonstrate that the number of unique examples during training is not the primary factor behind our performance gain, we provide additional results that compare our method with LESS when the number of epochs is set to 1 (see Table 1 in the PDF attached to the global response). Specifically, each method selects a set whose size is 4% of the candidates. Then we train the model on the selected set for 1 epoch (the amount of computation is the same as using 1% for 4 epochs). From the results, we can see that our method still outperforms LESS in 5 out of the 6 settings when LESS has a larger “effective dataset size”.
We will add a discussion with the additional experiment to the appendix.
[1] Xia, Mengzhou, et al. "Less: Selecting influential data for targeted instruction tuning." International Conference on Machine Learning. PMLR, 2024.
[2] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." Advances in Neural Information Processing Systems 36 (2023): 34201-34227.
[3] Coleman, Cody, et al. "Selection via proxy: Efficient data selection for deep learning." International Conference on Learning Representations (2019).
[4] Albalak, Alon, et al. "A survey on data selection for language models." arXiv preprint arXiv:2402.16827 (2024). | Rebuttal 1:
Rebuttal: Following the reviews, we would like to expand on our choice of using optimal transport as a means to solve the problem of task-specific selection. We use optimal transport to capture the discrepancy between the distribution we will sample from and the target distribution. We include probability transport cost in the optimization objective to encourage alignment between the two distributions.
We formulate our framework as an optimization problem to provide guidance for task-specific data selection. By framing the problem this way, we are able to integrate and balance two crucial objectives of task-specific data selection: distribution matching and diversification.
We find this new formulation also allows us to obtain practical solutions that achieve state-of-the-art results for a diverse array of settings. We will update our introduction to clarify the above regarding the novelty of our work.
Pdf: /pdf/81a7b19323f93c0f261b9e5c15c87fb89752db16.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Implicit Bias of Mirror Flow on Separable Data | Accept (poster) | Summary: In this paper, authors study the implicit bias of the mirror descent algorithm, from the perspective of the optimization trajectory of the continuous flow version. They propose the conceptions of horizon shape and horizon function $\phi_\infty$ to help characterize the properties of mirror flow at infinity. Since $\phi_\infty$ defines a norm, they prove that the mirror flow will eventually converge in direction to a max $\phi_\infty$-margin solution under the linear exponential-tail classification setting. Their findings contribute to a deeper understanding of the inherent characteristics of mirror descent across a wide range of potential functions.
Strengths: 1. The results of this work are solid, covering a general class of potential functions, and the authors derive a calculation rule for $\phi_\infty$ for this general class.
2. The paper is informative and well-structured, particularly in Section 3. By using the example of gradient descent, a special case of mirror descent, within the framework established by the preceding lemmas, the authors clearly outline why mirror flow eventually converges in direction, without resorting to complicated formulas.
Weaknesses: 1. Since mirror descent is not widely used in machine learning practice, there could be more discussion about the implications of the results. For example, Figure 1 is really interesting as it reveals that mirror descent shares the same structure of implicit bias as steepest descent [1]; what is the essence of such similarities?
2. The setting of an infinitesimally small learning rate, i.e., the optimization flow, might be a somewhat strong assumption for a simple linear classification problem compared to previous works. I suggest the authors state the technical challenges of the discrete optimization process of mirror descent.
3. I might be wrong, but it does not seem rigorous to apply the Bolzano–Weierstrass theorem to an uncountably infinite set on page 5, lines 160 and 161.
[1] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. Characterizing implicit bias in terms of optimization geometry. In International Conference on Machine Learning, pages 1832–1841. PMLR, 2018.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I'm really curious whether this result could be extended to $q$-homogeneous neural networks as in [1], since it seems this result was also derived from the KKT conditions of the Lagrangian function.
2. I guess there should be no minus sign in line 461 for the first equality.
3. Could the authors explain why they need to prove the uniqueness of the flow solution in Lemma 2, since this was not covered in previous work such as [1][2]? Moreover, why do the authors need to prove that $\int_0^t a(\beta_s)ds \to \infty$ for Lemma 1?
4. What are the definitions of $h_\infty$ in line 535 and of $h(\cdot)$? Moreover, could the authors explain in more detail how to derive formula (8)?
[1] Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. In International Conference on Learning Representations, 2020.
[2] B. Wang, Q. Meng, W. Chen, and T.-Y. Liu. The implicit bias for adaptive optimization algorithms on homogeneous neural networks. In International Conference on Machine Learning, pages 10849–10858. PMLR, 2021.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough report and relevant comments. We will take them into account and make the appropriate changes in the revised version. Please find below our response to the comments.
**Weaknesses**
**W1:** Indeed, our result has connections with Theorem 5 from Gunasekar et al. (2018). Note first that the updates of steepest descent and mirror descent are different and one cannot be seen as a special case of the other (unless the geometry is Euclidean). For a norm $\Vert \cdot \Vert$, the steepest descent update is given by $w\_{t+1} = \text{arg min}\_{w} \ L(w\_t) + \langle \nabla L(w\_t), w - w_t \rangle + \Vert w - w_t \Vert^2$ whereas the mirror descent update is given by $w\_{t+1} = \text{arg min}\_{w} \ L(w\_t) + \langle \nabla L(w\_t), w - w_t \rangle + D\_\phi(w, w_t)$ for a potential $\phi$.
We can however observe the following connection: if the horizon function $\phi_\infty$ is a norm (this is the case when $\phi$ is symmetric), then **the implicit bias induced by mirror descent is the same as that of steepest descent with norm $\phi_\infty$**. In other words, $\phi$-mirror descent is asymptotically equivalent to $\phi_\infty$-steepest descent.
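As a minimal self-contained sketch (our own illustration, not code from the paper), one can check numerically that the mirror descent update $\nabla\phi(w_{t+1}) = \nabla\phi(w_t) - \eta \nabla L(w_t)$ reduces to plain gradient descent for the quadratic potential $\phi(w) = \tfrac{1}{2}\Vert w \Vert_2^2$, the one case where the steepest descent and mirror descent geometries coincide:

```python
import numpy as np

# Sketch (our illustration): the mirror descent update
#   nabla_phi(w_{t+1}) = nabla_phi(w_t) - eta * grad_L(w_t).
# For the quadratic potential phi(w) = 0.5 * ||w||^2, nabla_phi is the
# identity map, so mirror descent reduces to plain gradient descent.

def grad_L(w):
    # gradient of the toy loss L(w) = ||w||^2
    return 2.0 * w

def mirror_step(w, eta, nabla_phi, nabla_phi_inv):
    return nabla_phi_inv(nabla_phi(w) - eta * grad_L(w))

identity = lambda w: w  # mirror map of the quadratic potential
w0 = np.array([1.0, -2.0])
w_md = mirror_step(w0, 0.1, identity, identity)
w_gd = w0 - 0.1 * grad_L(w0)
assert np.allclose(w_md, w_gd)  # quadratic potential recovers GD
```

Swapping `identity` for another invertible mirror map (e.g. that of the hyperbolic entropy) gives genuinely different trajectories, which is exactly where the horizon function becomes relevant.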
Regarding the possible implications: our result could be used to study mirror flows with new potentials that arise from gradient flow on more complex neural network architectures, some of which are still under investigation (see our answer to Q1 below).
**W2:** The same results could be derived for mirror descent with finite step size, without major additional difficulties. We chose to study mirror flow as it simplifies the proofs and the presentation.
**W3:** Indeed, we were (intentionally) a bit sloppy with the math, but the reasoning is correct. To make the derivation rigorous, one has to first consider any countable sequence $(t\_n)\_{n \in \mathbb{N}}$ with $t\_n \to \infty$ as $n \to \infty$ and then consider the sequence $(\bar{\beta}\_{t\_n}, q(\beta\_{t_n}))\_{n \in \mathbb{N}}$. We can then rightfully apply the Bolzano–Weierstrass theorem. Since the final limit $(\bar{\beta}\_\infty, q\_\infty)$ does not depend on the sequence $t_n,$ we can indeed conclude that $(\bar{\beta}\_{t}, q(\beta\_{t}))_{t \in \mathbb{R}}$ must converge towards it. We will clarify this argument in the revised version.
**Questions**
**Q1:** The link between gradient flow on $q$-homogeneous networks (as in [1]) and mirror flow (as in our setting) is still opaque. As discussed in our conclusion section, gradient flow on a depth $q$ **diagonal** linear network can be seen through the lens of [1], as well as through the lens of mirror flow (as in our paper). However, whether a link exists in more general settings is still unclear and remains a very interesting question for future work.
**Q2:** It is indeed a typo, thank you for pointing it out.
**Q3:** The main purpose of Lemma 2 is to ensure that the mirror flow solution exists and that it is well-defined for all $t \geq 0$. This fact is not trivial since, in a more general setting, the solution could "blow up in finite time". Previous work does not always prove global existence and implicitly assumes it to be true. Uniqueness is obtained as a by-product.
Regarding the fact that $\int_0^t a(\beta_s)ds \rightarrow \infty$, it is an essential result for the main proof. We need it to apply the time change followed by the Cesaro argument (see lines 580 to 587 in the appendix).
**Q4:** We made a notation error throughout the proof of Corollary 2: we used $h$ instead of $\phi$. We will fix this.
Formula (8) stems from the fact that the gradient of $\phi$ is orthogonal to its level sets (we can invoke a result from multivariate calculus such as [1]). This means that $\nabla \phi(\beta_s)$ belongs to the normal cone of $S_{\phi(\beta_s)}$ at $\beta_s$. The subdifferential of the indicator of a convex set is given by its normal cone [2, Chap. 23], which allows us to conclude. We will add these details to the revision.
[1] Richard Courant; Fritz John (1999). Introduction to Calculus and Analysis Volume II/2.
[2] R. Tyrrell Rockafellar (1970). Convex Analysis.
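For completeness, the Q4 argument can be condensed into one display (our own restatement; $\iota_S$ denotes the indicator function of the convex set $S$ and $N_S(\beta)$ its normal cone at $\beta$):

$$\nabla \phi(\beta_s) \text{ is normal to } S_{\phi(\beta_s)} \text{ at } \beta_s \;\Longrightarrow\; \nabla \phi(\beta_s) \in N_{S_{\phi(\beta_s)}}(\beta_s) = \partial \iota_{S_{\phi(\beta_s)}}(\beta_s).$$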
---
Rebuttal Comment 1.1:
Comment: Thanks for the answers from the authors. I'm happy to maintain my score, voting for acceptance. | Summary: This paper examines the implicit bias of mirror flow (the continuous-time counterpart of mirror descent) in the context of binary classification for linearly separable data. Given that the problem has infinitely many solutions, obtained at infinity, the authors aim to identify which solution is achieved through mirror flow. Assuming an exponential tail on the loss function, the authors demonstrate that mirror flow converges directionally to a maximum margin classifier, where the margin is characterized by a horizon function of the mirror potential. This result extends many existing works, and numerical experiments are provided to verify the findings.
Strengths: 1. The paper is well-written and well-organized. Despite the technical nature of the analysis and results, the paper is relatively easy to follow.
2. The paper is also well-motivated. Although mirror descent is not commonly used as an algorithm for training neural networks, analyzing its convergence is valuable for understanding the implicit bias of gradient descent in various neural network architectures.
3. I did not verify the details of the proof. However, the paper provides several motivating examples, including the quadratic potential corresponding to gradient flow, which makes the results quite convincing.
4. The main results extend several prior works.
Weaknesses: Although the authors have stated that the convergence rate is left for future study, it would be beneficial to provide at least empirical evidence of the convergence rate. The authors mentioned in line 294 that the convergence rate varies across different potentials.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the main result, Theorem 2, the conclusion that the normalized mirror flow converges to a vector $\bar{\beta}_{\infty}$ is drawn even when the $\phi_{\infty}$-max-margin problem does not necessarily have a unique solution. Can the authors provide more insight on this? If I understand correctly, in the motivating example, $\bar{\beta}_{\infty}$ is a subsequence limit. If the $\phi_{\infty}$-max-margin problem does not have a unique solution, this result cannot be extended to the whole limit, unlike in the gradient flow case with quadratic potential.
2. More explanation could be provided for Figure 1. In particular, it would be interesting to see the trajectories of the mirror flows on the plane, rather than only showing the limit.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their report and relevant comments.
**Weaknesses**
The comment we make on line 294 concerns the **training loss** convergence rate. We provide empirical evidence in Figure 1 (left), where we observe that the exact rate indeed depends on the potential. Also note that standard results from convex optimisation enable us to show an $\tilde{O}(1 / t)$ upper bound on the rate (see the proof of Proposition 1).
On lines 303-304, we refer to the convergence rate of **the normalised iterates** towards the maximum-margin solution, which we leave for future work. It would depend on the speed at which the normalised sublevel sets $\bar{S}\_c$ converge towards the limiting shape $S_\infty$.
**Questions**
**Q1:** We indeed need to assume that the $\phi_\infty$-max-margin problem has a unique solution, otherwise there could exist multiple subsequential limits. This is stated in the assumptions of Theorem 2. We also discuss sufficient conditions for this assumption to hold on lines 239-243.
**Q2:** We will add more details concerning Figure 1 in Section 5. We agree with the reviewer that plotting the full trajectory is interesting and we will add it in the revised version.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. My rating remains. | Summary: This manuscript examines the implicit bias of mirror descent on a classification problem when the dataset is linearly separable. Assuming a coercive gradient, it demonstrates that the implicit bias is characterized by the shape of the level set of the mirror potential near infinity. Their analysis successfully recovers existing results for p-norm potentials and identifies the implicit bias of the potentials emerging in the analysis of linear neural networks. Additionally, it leaves the characterization of the implicit bias when the gradient is not coercive as an interesting open problem.
Strengths: I think the paper is very well-written and has a solid contribution.
It addresses an important problem, aiming to understand the implicit bias of neural networks. Prior work has shown that the dynamics of linear networks can be characterized by mirror descent, highlighting the relevance of this study.
Weaknesses: NA
Technical Quality: 4
Clarity: 4
Questions for Authors: In line 123, the authors suggested that the logistic loss satisfies the conditions in Assumption 1. However, it is clear that the logistic loss does not have an exponential tail. Could they clarify whether this is a mistake or if there is an underlying argument supporting their claim?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive report and relevant comments.
It can be verified that the logistic loss indeed satisfies the exponential-tail condition of Assumption 1, since $\ln(1+\exp(-z))$ is equivalent to $\exp(-z)$ as $z \rightarrow + \infty$.
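This equivalence is easy to verify numerically (our own sanity check, not from the paper): the ratio $\ln(1+e^{-z})/e^{-z}$ tends to $1$ as $z \to +\infty$.

```python
import numpy as np

# Numerical check that ln(1 + exp(-z)) ~ exp(-z) as z -> +infinity,
# i.e. the logistic loss has an exponential tail. np.log1p evaluates
# log(1 + x) accurately even for tiny x.
z = np.array([1.0, 5.0, 10.0, 20.0])
ratio = np.log1p(np.exp(-z)) / np.exp(-z)
print(ratio)  # increases towards 1.0 as z grows
assert abs(ratio[-1] - 1.0) < 1e-8
```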
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. My rating remains. | Summary: This paper considers the asymptotic behaviour of the mirror descent (MD) algorithm for a linear classification task. It is shown that the classifier (the hyperplane orthogonal to $\beta$) will be a max-margin classifier, where the margin is determined by an a priori unknown horizon function $\phi_\infty$. This work extends prior work, which considered $\ell_p$ and homogeneous potential functions for MD, and shows this result for very general $\phi$.
Strengths: The paper makes an interesting statement about the behaviour of mirror descent on classification tasks, with minimal assumptions. In doing so, it takes a big step and extends previous work to cover general potential functions.
The paper is well written and the figures help with understanding the concepts of convergence.
---
*While I could understand the paper, this is not my area of research. I do not find myself fit to evaluate the paper on soundness, relevance to the sub-field and importance of contributions.*
Weaknesses: - The paper does not characterize $\phi_\infty$ in terms of the Bregman potential $\phi$ (and other relevant entities).
The main result expresses that there exists some function, that is minimized by $\bar \beta_\infty$, the direction of the classifier as $t\rightarrow \infty$.
I think this limits the relevance and strength of the result. For instance, this does not help with interpretability compared to the case where we can prove that the optimization algorithm converges to a max-margin classifier w.r.t. the $\ell_2$ norm.
- I am not sure about the relevance and use-cases of the mirror descent algorithm with very general potentials. As far as I know, typically only a small set of norm-based or entropic potentials (negative entropy, Tsallis, etc.) is used in applications of ML. So while the theorem makes an interesting statement from an optimization standpoint, I'm not sure how relevant it is for the ML community. The theorem is also not entirely relevant to the pure optimization community, since it concerns the specific case of linear classification with finite data.
---
*While I could understand the paper, this is not my area of research. I do not find myself fit to evaluate the paper on soundness, relevance to the sub-field and importance of contributions.*
Technical Quality: 4
Clarity: 4
Questions for Authors: - Are there any classes of potential functions (other than the norms and $L$-homogeneous ones), for which $\phi_\infty$ may be calculated or approximated?
- Beyond gradient descent, is there any work that quantifies the rates of convergence to max-margin classifiers? Is this even possible?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their report and relevant comments.
**Q1:** In Theorem 3, we provide a formula for computing the horizon function when the potential $\phi$ is **separable**. In that case, $\phi_\infty$ can be obtained by computing the limit at infinity of a simple one-dimensional function. We apply this to two examples of non-homogeneous potentials in Section 5: the hyperbolic entropy and the cosine entropy.
For non-separable potentials, there is a priori no straightforward formula, and one has to find the limiting shape $S_\infty$ towards which the normalised sublevel sets $\bar{S}_c$ converge as $c \to \infty$. However, we emphasize that the separable case covers almost all known examples.
**Q2:** By building on the arguments of Proposition 1 (line 467), we could show that $L(\beta_t)$ decreases with rate $\tilde{O}(1/t)$. However, this convergence rate does not inform us about the rate of convergence of the normalized iterates $\bar \beta_t$ to the limiting direction. This question is indeed a challenging open problem and is left for future work.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you for the answers.
Based on this, and after reading the other reviews, I have updated my review (the confidence score).
MonoMAE: Enhancing Monocular 3D Detection through Depth-Aware Masked Autoencoders | Accept (poster) | Summary: This paper proposes a monocular 3D detection framework inspired by Masked Autoencoders (MAE), designed to address the challenge of object occlusions in 3D object detection. It utilizes a unique depth-aware masking module that simulates occlusions by adaptively masking non-occluded object features based on depth information, coupled with a lightweight completion network that reconstructs these masked features to learn occlusion-tolerant representations. It generates training pairs of non-occluded and occluded object representations directly, enhancing its capability to handle occlusions effectively. The framework is optimized for low computational overhead during inference, as it does not require object masking at this stage.
Strengths: 1. The proposed method outperforms the conventional methods across various datasets such as KITTI and Nuscenes. It demonstrates the effectiveness of the proposed method. Moreover, the proposed method achieves real-time inference time.
2. An extensive ablation study is provided to demonstrate the impact of the proposed modules.
3. The idea is simple yet effective.
Weaknesses: 1. The performance improvement is marginal, especially in the cross-dataset validation in Table 6.
2. Missing evaluation on the Waymo dataset
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Many recent 3D object detection studies have utilized the Waymo dataset for evaluation. Could you explain why your experiments were limited to KITTI and nuScenes?
2. There appears to be a performance drop in the nuScenes dataset at distances beyond 40 meters. Could you provide insights into what causes this decline?
3. There is a slight difference in inference time between 'Ours*' (36ms) and 'Ours' (38ms), with significant performance differences noted in Table 2. Could you elaborate on the role of the completion network (CN) given these differences?
4. The mask ratio r varies with scale parameters and maximum depth. How sensitive is your method to changes in the mask ratio?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See the weakness and question parts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns.
**Weakness 1: Performance Improvement**
We would clarify that the proposed MonoMAE achieves clear performance improvement over the state-of-the-art. As shown in Table 1 of the submitted manuscript, MonoMAE outperforms the SOTA method MonoCD [1] by clear margins, especially for objects of Moderate and Hard categories. The scale of the improvement is also competitive as compared with several recently published works as listed in Table 1.
The experiments in Table 6 aim to validate the generalization ability of the proposed MonoMAE. We can observe that MonoMAE achieves the best or second-best performance consistently across all metrics. Besides, significant improvements are hard to obtain on the MAE metric used in Table 6. For example, MonoUNI [2] obtains only slight performance gains and even performance drops on the nuScenes dataset.
[1] Yan, Longfei, et al. MonoCD: Monocular 3D Object Detection with Complementary Depths. CVPR, 2024.
[2] Jinrang, Jia, et al. MonoUNI: A unified vehicle and infrastructure-side monocular 3d object detection network with sufficient depth clues. NeurIPS, 2023.
**Weakness 2 & Question 1: Experiments on Waymo**
Thanks for your suggestion. We extend the experiments by examining the generalization over a new task KITTI3D->Waymo as shown in the table below, using $AP(IoU=0.5)$ as the metric. We can observe that our method has generalization ability on the Waymo dataset, and even outperforms some methods trained on Waymo.
| Method | PatchNet* [1] | M3D-RPN* [2] | Ours |
|--------|------|----------|------|
| Level_1 | 2.92 | 3.79 | **4.53** |
| Level_2 | 2.42 | 3.61 | **4.17** |
\* denotes this method is trained on Waymo.
[1] Ma, Xinzhu, et al. Rethinking pseudo-lidar representation. ECCV, 2020.
[2] Brazil, Garrick, et al. M3d-rpn: Monocular 3d region proposal network for object detection. ICCV, 2019.
**Question 2: Performance Drop at Far Distances on the nuScenes dataset**
The performance drop beyond 40 meters could be due to:
1) Limited visual information (in image resolution, etc.) is captured for objects beyond 40 meters;
2) Adverse conditions in lighting and bad weather could have a larger impact on the quality of distant objects;
3) The annotation accuracy could degrade for distant objects due to small object sizes, leading to degraded network training.
As shown in Table 6 of the submitted manuscript, existing SOTA methods all have performance drop in the nuScenes dataset at distances beyond 40 meters. The proposed MonoMAE achieves the second-best performance at a far distance.
**Question 3: The Role of the Completion Network**
The Completion Network aims to learn to complete occluded query objects, improving detection performance for various occluded objects. It is a lightweight network that introduces only 2ms of additional inference time but achieves effective completion in the feature space. During training, it learns to reconstruct and complete the masked object queries (simulating real object occlusions). During inference, it reconstructs the occluded queries as completed ones, which clearly improves the performance of monocular 3D detection as shown in Table 2.
We managed to show the impact of the completion network by plotting the losses with and without using the completion network in Figure 7 of the submitted Appendix. The graph shows that the training loss with the Completion Network (the orange line) drops while the training loss without the Completion Network (the blue line) remains at a high level, showing that the Completion Network helps acquire occlusion-tolerant representations by learning to reconstruct the masked queries.
**Question 4: Sensitivity of Our Method to the Mask Ratio**
As defined in Equation (3) of the submission, the mask ratio $r$ of each query is determined by
$r = 1.0 - \frac{d_{i}}{D_{max}}$, where $r$ is the applied mask ratio for each query, $d_{i}$ is the predicted depth of the $i$-th query, and $D_{max}$ is the maximum depth in the dataset, which can be manually adjusted to affect the mask ratio $r$.
The table below shows the performance of our method when changing the maximum depth $D_{max}$. In each cell of the table, the performance is listed as $AP_{3D}(IoU=0.7)$ / $AP_{BEV}(IoU=0.7)$.
| Maximum Depth | Easy | Moderate | Hard |
|-----------------------------------|---------------|---------------|---------------|
| $0.5*D_{max}$ | 28.17 / 38.21 | 19.83 / 26.62 | 17.06 / 21.24 |
| $0.75*D_{max}$ | 29.63 / 39.78 | 20.31 / 26.85 | 17.42 / 22.70 |
| $D_{max}$ | **30.29** / **40.26** | **20.90** / **27.08** | **17.61** / **23.14** |
| $1.5*D_{max}$ | 29.04 / 39.23 | 19.92 / 26.43 | 17.15 / 22.39 |
| $2.0*D_{max}$ | 27.70 / 37.18 | 19.45 / 26.12 | 16.84 / 20.52 |
We can observe that the best performance is achieved when the maximum depth is set at the original $D_{max}$. When the maximum depth is smaller than $D_{max}$, the performance drops. This is because if the depth of an object is too large, the query of this object will not be masked, leading to masking failures and hindering the training of the completion network.
When we set the maximum depth larger than $D_{max}$, the performance also drops. This is because the mask ratios are relatively high with the large maximum depth for all objects according to Equation (3). A high mask ratio will pose challenges for the Completion Network by reducing the available information since objects at far normally have limited pixels, hindering the reconstruction of occluded object regions.
Nevertheless, we can observe that setting the maximum depth between $0.75 D_{max}$ and $2.0 D_{max}$ achieves similar 3D detection performance, indicating the robustness of our method with respect to this parameter.
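The depth-aware mask ratio of Equation (3) can be sketched in a few lines (our own illustration; the clipping to $[0, 1]$ is our assumption for handling predicted depths that exceed the configured maximum, e.g. when a smaller-than-true $D_{max}$ is used):

```python
import numpy as np

# Sketch of Eq. (3): r_i = 1 - d_i / D_max. The clip to [0, 1] is our
# assumption for depths beyond the configured maximum; it is not stated
# in the paper.
def mask_ratio(depth, d_max):
    return np.clip(1.0 - depth / d_max, 0.0, 1.0)

d_max = 60.0  # hypothetical dataset maximum depth
depths = np.array([15.0, 30.0, 60.0, 75.0])
ratios = mask_ratio(depths, d_max)
print(ratios)  # near objects receive larger mask ratios
```

This makes the trade-off in the table above concrete: scaling $D_{max}$ up raises every query's mask ratio, while scaling it down drives distant queries' ratios to zero so they are never masked.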
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the additional experiments and detailed explanations in your rebuttal. The new information is convincing and addresses my initial concerns effectively. Based on this, I have increased my initial rating of your submission.
---
Reply to Comment 1.1.1:
Comment: Thanks for your positive feedback! We are glad that we have addressed your initial concerns. | Summary: This paper introduces a novel framework for improving monocular 3D object detection, particularly in handling object occlusions. The proposed MonoMAE leverages depth-aware masking to simulate occlusions in the feature space and employs a lightweight completion network to reconstruct occluded object regions, thereby learning occlusion-tolerant representations. Experiments show that this learning strategy helps to improve the performance of monocular 3D object detection.
Strengths: 1. This paper is well-structured, with a clear problem statement, methodology, experiments, and ablation studies that substantiate the contributions and effectiveness of MonoMAE.
2. This paper addresses a significant challenge in monocular 3D object detection, object occlusion, with a novel approach using depth-aware masked autoencoders.
Weaknesses: 1. Depth-aware masking may not perfectly replicate natural occlusion patterns, potentially affecting the model's reconstruction accuracy. The gap between synthetically masked and naturally occluded object queries could limit the model's robustness in real-world scenarios.
2. While this paper claims generalizability, the lack of extensive cross-dataset validation leaves the true scope of its generalization capability somewhat unproven.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. All the experimental results presented in this paper are about vehicle detection. Does MonoMAE also work for more difficult cases like pedestrian and cyclist detection?
2. The paper suggests investigating generative approaches for simulating natural occlusion patterns. Can you elaborate on what this might entail and how it could further improve monocular 3D detection?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1. The paper could provide a more detailed analysis of the computational efficiency, including speed and resource usage, to fully assess the practicality of MonoMAE for real-time applications.
2. This paper only presents results for vehicle detection. The performance of detecting objects with small sizes is unknown.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the valuable comments and insightful suggestions, and we hope our detailed responses below can address your concerns.
**Weakness 1: Depth-Aware Masking for Occlusion Simulation**
We acknowledge that the proposed depth-aware masking may not perfectly replicate natural occlusion patterns, which could affect the model's reconstruction accuracy potentially.
To fill the gap between synthetic and natural occlusions, we have considered alternative methods, such as directly overlaying images of objects (e.g., using a car image to occlude another car). However, this approach poses significant challenges, particularly in obtaining accurate training labels (3D boxes, orientations) for the overlaying objects, making it impractical for our current implementation.
As shown in Section D.1 of the submitted Appendix, one possible solution is introducing Generative Adversarial Networks (GANs) that learn distributions from extensive real-world data for generating occlusion patterns that are very similar to natural occlusions. Specifically, a Generator could be employed to generate occluded queries based on non-occluded queries, while a Discriminator could be used to discriminate between these generated occluded queries and natural occluded queries. Through adversarial training, the Generator could produce occlusions that are very similar to natural occlusions.
**Weakness 2: Lacking Extensive Cross-Dataset Validation**
We would clarify that most existing monocular 3D detection methods [1, 2, 3] validate their generalization ability only on the KITTI3D->nuScenes task. We followed these prior studies to facilitate benchmarking. As suggested, we extend the experiments by examining the generalization over a new task KITTI3D->Waymo as shown in the table below, using $AP(IoU=0.5)$ as the metric. We can observe that our method generalizes to the Waymo dataset, and even outperforms some methods trained on Waymo.
| Method | PatchNet* [4] | M3D-RPN* [5] | Ours |
|--------|------|----------|------|
| Level_1 | 2.92 | 3.79 | **4.53** |
| Level_2 | 2.42 | 3.61 | **4.17** |
\* denotes this method is trained on Waymo.
[1] Shi, Xuepeng, et al. Geometry-based distance decomposition for monocular 3d object detection. ICCV, 2021.
[2] Kumar, Abhinav, et al. Deviant: Depth equivariant network for monocular 3d object detection. ECCV, 2022.
[3] Jinrang, Jia, et al. MonoUNI: A unified vehicle and infrastructure-side monocular 3d object detection network with sufficient depth clues. NeurIPS, 2023.
[4] Ma, Xinzhu, et al. Rethinking pseudo-lidar representation. ECCV, 2020.
[5] Brazil, Garrick, et al. M3d-rpn: Monocular 3d region proposal network for object detection. ICCV, 2019.
**Question 1 & Limitation 2: Performance on Small Size Objects (Pedestrian and Cyclist)**
The proposed MonoMAE can handle challenging cases with competitive detection performance. We conducted new experiments on the suggested pedestrians and cyclists that often have smaller scales and are more challenging to detect. The two tables below show the experimental results with metric $AP_{3D}(IoU=0.7)$.
Performance for pedestrian detection.
| Method | Easy | Moderate | Hard |
|--------|------|----------|------|
| DID-M3D | 11.78 | 7.44 | 6.08 |
| MonoNeRD | 13.20 | 8.26 | 7.02 |
| Ours | **13.37** | **8.41** | **7.10** |
Performance for cyclist detection.
| Method | Easy | Moderate | Hard |
|--------|------|----------|------|
| DID-M3D | 7.82 | 3.95 | 3.37 |
| MonoNeRD | 4.79 | 2.48 | 2.16 |
| Ours | **8.05** | **4.16** | **3.54** |
We can observe from the above two tables that the proposed MonoMAE performs well for the pedestrian and cyclist categories for monocular 3D object detection.
**Question 2: Generative Approaches for Simulating Natural Occlusion Patterns**
As briefly shared in Section D.1 of the submitted Appendix, the proposed MonoMAE could be improved by employing generative networks such as GANs to learn distributions of real-world data. The trained model will then generate occlusion patterns that are more similar to natural occlusions than our proposed feature masking, leading to better monocular 3D object detection. Take GAN as an example. The GAN generator will learn to generate occluded queries (with many non-occluded queries as reference), while the GAN discriminator will learn to discriminate between the generated occluded queries and naturally occluded queries. Through adversarial learning, the trained generator could generate more realistic occlusions, which further leads to better occlusion completion and monocular 3D object detection.
**Limitation 1: Computational Efficiency**
We would clarify that we have provided the inference time of the proposed MonoMAE and several state-of-the-art monocular 3D detection methods in Table 5 of the submitted manuscript. According to the inference time, MonoMAE can achieve above 26.3 FPS (frame per second) which demonstrates its great potential for real-time tasks. For resource usage, all our experiments were conducted on a computer with Intel Xeon Gold 6134 CPUs, CentOS operating system with 128GB RAM, and a single NVIDIA V100 GPU with 32 GB memory. We will highlight the above information in the updated manuscript.
---
Rebuttal Comment 1.1:
Title: Provide some qualitative results
Comment: Thanks for providing additional information in your rebuttal. Your reply has addressed part of my concerns. Regarding the part "Question 1 & Limitation 2: Performance on Small Size Objects (Pedestrian and Cyclist)", providing more qualitative or visualization results would be more convincing.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable suggestion! Unfortunately, the OpenReview system does not allow uploading a PDF for visualization. Additionally, the NeurIPS discussion guidelines prohibit including external links. We will include the suggested visualization in the revised manuscript/appendix later. | Summary: This paper applies Masked Autoencoder to 3D object detection. It distinguishes object queries into occluded and non-occluded categories, and during training, it applies depth-aware masking to the non-occluded queries and learns by completing them. At test time, the completion is applied to the occluded queries.
Strengths: - It achieved state-of-the-art performance on the KITTI 3D dataset.
- The idea of interpreting occluded queries as masked queries to solve the problem is interesting.
- The training and test times are illustrated clearly in figures.
Weaknesses: - As stated in the limitations section, occlusion at the image level and masking at the feature level of object queries are not the same. Further analysis is needed to understand the actual implications of masking in object queries.
- If masking serves the role of occlusion at the image level, there should be no reason for the mask ratio to vary with depth, yet depth-aware masking is highly beneficial. An analysis is needed to understand why depth-aware masking works well compared to random masking.
- In my opinion, the performance of the Non-Occluded Query Grouping classification is crucial for the framework to function properly. Although classification accuracy is provided in the supplementary material, it would be helpful to include various metrics such as precision, recall, and F1-score. If the results of the Non-Occluded Query Grouping classification are biased, it might be interesting to apply completion not only to the occluded queries but also to the non-occluded queries at test time.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are included in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable comments and insightful suggestions, and we hope our detailed responses below address your concerns.
**Weakness 1: Feature-Level Masking vs. Image-Level Masking**
We would clarify that the strategy of masking and completion aims to generate pairs of non-occluded and occluded (via masking) data for learning occlusion-tolerant object representations. The strategy could be implemented in either image space or feature space. We chose to mask and reconstruct query features, as masking and reconstructing images is complicated and computationally intensive due to noisy image data and super-rich occlusion patterns. As shown in Table 3 of the submitted manuscript, masking and completing query features performs clearly better than masking and completing images. Nevertheless, we understand that the simulated occlusion features differ from those of natural occlusions (as briefly discussed in the Limitations section), and we noted that this issue could be mitigated by introducing generative networks to learn the distribution of natural occlusions.
**Weakness 2: Depth-Aware Masking vs. Random Masking**
We would clarify that the proposed depth-aware masking is implemented in the feature space. We examined how different masking strategies affect the monocular 3D detection by testing three masking strategies: 1) random masking in the image space; 2) random masking in the feature space; and 3) depth-aware masking in the feature space, as shown in Rows 1-3 in Table 3 of the submitted manuscript. We can observe that the depth-aware masking performs clearly the best. As discussed in Section A.1 of the submitted Appendix, the depth-aware masking modulates the mask ratio by lowering it for distant objects. This effectively retains more visual information for distant objects which are usually small and have limited pixels and visual information. In addition, it also facilitates the ensuing completion task as completing heavily masked small objects is challenging and liable to failures.
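The depth-aware modulation described above can be sketched as follows. This is a minimal illustration only, assuming a linear decay of the mask ratio with depth; the function name, formula, and default values are our assumptions, not the exact implementation in the manuscript:

```python
import numpy as np

def depth_aware_mask(query_feats, depths, base_ratio=0.5, max_depth=60.0, rng=None):
    """Mask a depth-dependent fraction of each query's feature dimensions.

    Distant objects (large depth) get a LOWER mask ratio, so more of their
    already-limited visual information is retained.
    """
    rng = np.random.default_rng() if rng is None else rng
    masked = query_feats.copy()
    # Mask ratio shrinks linearly from base_ratio (depth 0) to 0 (max_depth).
    ratios = base_ratio * (1.0 - np.clip(depths / max_depth, 0.0, 1.0))
    for i, r in enumerate(ratios):
        n_mask = int(r * query_feats.shape[1])
        idx = rng.choice(query_feats.shape[1], size=n_mask, replace=False)
        masked[i, idx] = 0.0  # zero out the selected feature dimensions
    return masked, ratios
```

Under this sketch, an object at the maximum depth is left fully unmasked, while a nearby object is masked at the base ratio.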
**Weakness 3: Non-Occluded Query Grouping**
Thanks for your valuable suggestion! Below is the detailed performance of the Non-Occluded Query Grouping classification module with your suggested metrics.
| Metrics | Precision | Recall | F1-Score | Accuracy |
|-------------|-----------|---------|----------|----------|
| Performance | 98.35 % | 94.47 % | 96.38 % | 96.46 % |
To evaluate the performance when applying completion to all queries, both occluded and non-occluded, at test time, we conduct experiments and show the results in the table below. The metrics $AP_{3D} (IoU=0.7)$ and $AP_{BEV}(IoU=0.7)$ with Easy, Moderate and Hard categories are used. In each cell of the table, the performance is listed as $AP_{3D}(IoU=0.7)$ / $AP_{BEV}(IoU=0.7)$.
| Setting | Easy | Moderate | Hard |
|-----------------------------------|---------------|---------------|---------------|
| Completing All Queries | 28.46 / 37.17 | 18.40 / 25.95 | 15.23 / 21.82 |
| Original | **30.29 / 40.26** | **20.90 / 27.08** | **17.61 / 23.14** |
The results indicate a performance drop when applying completion to all queries, compared to completing the occluded queries classified by the occlusion classification network. This is because the Completion Network is designed to reconstruct the occluded queries to learn occlusion-tolerant visual representations. Consequently, applying completion to non-occluded queries introduces confusion, degrading overall 3D detection performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering the questions. Most of the explanations are based on good final performance, and although there is still a lack of theoretical explanation of why query masking should be better than natural image masking, I understand that this is a part that is difficult to explain perfectly in theory. Considering the opinions of other reviewers and the performance improvement of the proposed method, I will raise my initial rating. However, if additional theoretical justification is provided, the paper could become even better.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and valuable suggestions! We agree that providing a theoretical explanation would be challenging but can strengthen the paper. We will further study relevant literature (e.g., pros-and-cons of image augmentation vs visual feature augmentation in prior studies) and discuss this issue in the revised manuscript. | Summary: This paper introduces MonoMAE, a novel monocular 3D object detection framework designed to improve detection performance in the presence of object occlusions. MonoMAE leverages the concept of Masked Autoencoders, treating object occlusions as natural masking and training the network to complete occluded regions. This innovative approach addresses the pervasive issue of object occlusions in monocular 3D detection, leading to superior detection performance. Extensive experiments on datasets like KITTI 3D and nuScenes show that MonoMAE outperforms state-of-the-art methods in both qualitative and quantitative measures.
Strengths: 1. The introduction of depth-aware masking to simulate occlusions and the use of a lightweight query completion network are innovative and address a significant challenge in monocular 3D detection.
2. MonoMAE improves detection performance without the need for additional training data or annotations, making it a practical solution for real-world applications like autonomous driving and robotics.
3. The framework demonstrates superior performance on benchmark datasets (KITTI 3D and nuScenes), outperforming existing state-of-the-art methods in both occluded and non-occluded scenarios.
4. MonoMAE shows strong generalization capabilities to new domains, which is critical for deploying models in diverse environments.
Weaknesses: 1. In many datasets and methods, objects are not merely labeled as "occluded" or "non-occluded." Instead, they may be assigned occlusion levels or degrees that quantify the extent to which an object is occluded. These levels provide more granularity and can influence how models are trained and evaluated. It would be beneficial to specify how occlusion levels are defined and used. Clarifying whether discrete or continuous levels are employed and how these influence the labeling, training, and evaluation processes will provide a clearer understanding of the methodology and its robustness in handling occlusions.
2. The paper does not provide explicit details about the accuracy of the occlusion classification network or how this accuracy influences the overall 3D object detection network. This information appears to be missing.
3. The paper does not explicitly report the performance or accuracy of the query completion network. Including a report on the performance of this network, such as quantitative results or visualization of the reconstructed queries, would be valuable. It would demonstrate whether the query completion network is learning meaningful features and contributing effectively to the overall 3D object detection performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the accuracy of the occlusion classification network? How does the accuracy influence the whole 3D object detection network?
2. What is the accuracy of the query completion network?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed some failure cases in their paper, as well as the gap between the generated occlusion pattern and natural occlusion patterns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable comments and insightful suggestions, and we hope our detailed responses below address your concerns.
**Weakness 1: Occlusion Levels: Definition and Usage**
We appreciate the reviewers' insightful comments regarding the use of occlusion levels. Our method only uses the binary "occluded" and "non-occluded" labels. During training (Figure 2 of the submitted manuscript), the binary occlusion labels are used to train the Non-Occluded Query Grouping module for classifying the non-occluded queries that are masked and completed. During inference (Figure 5 of the submitted manuscript), the Non-Occluded Query Grouping module is used to classify the queries.
Thus, we only need to obtain binary occlusion labels indicating whether the objects are occluded or not, without using the fine-grained occlusion labels, which could make the task complicated. In this paper, we transform the ground truths of occlusion degrees $o$ provided in the KITTI 3D dataset, where $o \in$ {0, 1, 2, 3}, into binary ground truths. The transformation treats $o=0$ as non-occlusion and $o \in$ {1, 2, 3} as occlusion, leading to the binary ground truths of occlusion conditions $o^{gt} \in$ {0, 1}, with 0 indicating non-occlusion and 1 indicating occlusion.
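The label transformation above can be sketched directly (the helper name is hypothetical; the mapping itself follows the KITTI convention described in the text):

```python
def to_binary_occlusion(o):
    """Map a KITTI occlusion degree o in {0, 1, 2, 3} to a binary label:
    0 (fully visible) -> 0 (non-occluded); 1, 2, 3 -> 1 (occluded)."""
    if o not in (0, 1, 2, 3):
        raise ValueError(f"unexpected occlusion degree: {o}")
    return 0 if o == 0 else 1
```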
**Weakness 2 & Question 1: Accuracy and Impact of the Occlusion Classification Network**
Thank you for your suggestion. The accuracy of the occlusion classification network has been provided in Section F.3 of the submitted Appendix: it is 96.46%, indicating that most object queries can be correctly classified as occluded or non-occluded.
Moreover, to validate the influence of the classification accuracy on the overall 3D object detection network, we add experiments using a trained network with fixed weights. During each inference, the occlusion classification accuracy is manually adjusted to 50% and 70% with the help of ground-truth occlusion labels. The experimental results are shown in the table below, where the original accuracy (96.46%) of the occlusion classification network is also exhibited. The metrics $AP_{3D} (IoU=0.7)$ and $AP_{BEV}(IoU=0.7)$ with Easy, Moderate and Hard categories are used. In each cell of the table, the performance is listed as $AP_{3D}(IoU=0.7)$ / $AP_{BEV}(IoU=0.7)$.
| Occlusion Classification Accuracy | Easy | Moderate | Hard |
|-----------------------------------|---------------|---------------|---------------|
| 50% | 27.12 / 36.41 | 18.05 / 24.67 | 15.21 / 19.74 |
| 70% | 28.47 / 37.82 | 19.51 / 26.90 | 16.02 / 21.85 |
| 96.46% | **30.29 / 40.26** | **20.90 / 27.08** | **17.61 / 23.14** |
From this table, we can observe that higher occlusion classification accuracy contributes to better 3D detection performance, validating the importance of classification accuracy for the overall 3D object detection network.
We will add these results and analyses in the revised version.
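The manual adjustment of classification accuracy described above can be sketched as flipping a fixed fraction of the ground-truth labels; the helper name and details are illustrative assumptions, not the exact procedure used in the experiments:

```python
import numpy as np

def predictions_at_accuracy(gt_labels, target_acc, rng=None):
    """Simulate an occlusion classifier at a given accuracy by starting from
    the ground-truth binary labels and flipping a (1 - target_acc) fraction."""
    rng = np.random.default_rng() if rng is None else rng
    preds = np.asarray(gt_labels).copy()
    n_flip = round((1.0 - target_acc) * len(preds))
    idx = rng.choice(len(preds), size=n_flip, replace=False)
    preds[idx] = 1 - preds[idx]  # flip the selected binary labels
    return preds
```

Because each flipped index is distinct, the resulting predictions match the ground truth on exactly the target fraction of queries.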
**Weakness 3 & Question 2: Performance of the Query Completion Network**
Thank you for your suggestion! The performance of the Completion Network is measured by the similarity between the non-occluded queries before masking and the queries completed by the Completion Network, where the similarity is quantified with the Smooth L1 loss given in Equation (6) of the submitted manuscript.
With the help of the Smooth L1 loss, we visualize the training losses with and without using the Completion Network in Figure 7 of the submitted Appendix. As Figure 7 shows, the training loss using the Completion Network (the orange line) drops while the training loss without using the Completion Network (the blue line) remains at a high level, demonstrating the effectiveness of the Completion Network in acquiring occlusion-tolerant representations by learning to reconstruct the masked queries.
We do not provide a visualization of the masked and reconstructed queries, since the queries are one-dimensional vectors and cannot be visualized in a meaningful way.
Moreover, Table 2 in the submitted manuscript can further validate the effectiveness of the Completion Network quantitatively. Comparing Rows 2 and 6, and Rows 4 and 7, using Completion Network can improve the 3D detection performance effectively, showing that the masked queries are properly reconstructed.
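For reference, the Smooth L1 similarity mentioned above can be sketched in its standard form; the exact parameterization (e.g. the `beta` threshold) may differ from Equation (6) of the manuscript:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 (Huber-style) distance, averaged over all elements:
    0.5 * d^2 / beta for |d| < beta, and |d| - 0.5 * beta otherwise."""
    d = np.abs(np.asarray(pred, float) - np.asarray(target, float))
    loss = np.where(d < beta, 0.5 * d * d / beta, d - 0.5 * beta)
    return float(loss.mean())
```

The quadratic region near zero keeps gradients smooth for small reconstruction errors, while the linear region limits the influence of large outliers.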
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. Most of my concerns have been addressed. And I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback! We are glad that we have addressed your concerns. | Rebuttal 1:
Rebuttal: # General Response
We thank all the reviewers for your time, insightful suggestions, and valuable comments. We are highly encouraged by the reviewers' acknowledgment of our method in its innovative idea and novel design (xT7k, 8dEd, g4DX), superior performance (xT7k, 8dEd), exhaustive experiments (g4DX, bA8P), strong generalization capabilities (xT7k) and good presentation (8dEd, g4DX). Most reviewers concurred that the studied occlusion problem is critical and a significant challenge for monocular 3D detection.
The reviewers also shared several concerns, mainly focusing on:
- More detailed information on the usage of occlusion levels information.
- Further insights regarding the gap between synthetic and natural object occlusion.
- Additional experiments regarding the network generalization.
We respond to each reviewer's comments in detail below. We again thank the reviewers for your valuable suggestions, which we believe will greatly strengthen our paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Parsimony or Capability? Decomposition Delivers Both in Long-term Time Series Forecasting | Accept (spotlight) | Summary: This paper proposes a Selective Structured Components-based Neural Network for Long-term Time Series Forecasting.
Strengths: 1. This paper demonstrates originality by addressing a crucial limitation in existing SOTA methods, maintains high quality through thorough experimentation and clear presentation, offers significant advancements to the field of time series forecasting, and ensures clarity that aids in understanding and reproduction of the work.
2. The motivation of this paper is intuitive and compelling. Given the large model sizes of current SOTA methods like PatchTST, the idea of using a smaller model to achieve comparable or better performance is highly attractive.
3. The experiments are thorough, and the proposed method achieves state-of-the-art performance. The effectiveness of each sub-module is demonstrated through detailed ablation studies.
4. The code is open source and reproducible, with a straightforward and clear usage process.
Weaknesses: 1. In the experiment section, it is noted that most papers use the ETT dataset for ablation studies, likely due to its smaller size, which allows for quicker results. However, you chose the ECL and Traffic datasets instead of ETT, which is a more comprehensive and reliable approach. While this choice is commendable, there is no explanation provided for not using the ETT dataset.
2. It would be more informative to report the model size directly in Table 1. Including the model size would provide a clearer comparison with other SOTA methods and highlight the efficiency of your proposed model.
3. Baselines: Some MTSF models based on LLM have been widely applied [1]. If the authors can demonstrate that SSCNN has advantages in both performance and efficiency, this paper will be more convincing.
4. Some extremely lightweight models have also been proven to have satisfactory performance [2]. Compared to these methods, what are the main advantages of SSCNN?
[1]One Fits All:Power General Time Series Analysis by Pretrained LM
[2]FITS: Modeling Time Series with 10k Parameters
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No problem
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** We chose the ECL and Traffic datasets for our experiments primarily because they present more challenging tasks due to their larger data scale. Specifically, these datasets include a large number of variables and cover an extensive data collection period. This increased data scale results in a less biased estimate of the impact of each module, providing a more comprehensive evaluation of our model's performance. To address the concern regarding the exclusion of the ETT dataset, we have conducted an additional ablation study using the ETTh1 dataset. The results from this study are provided below for your reference. This inclusion further validates the robustness and generalizability of our approach across different dataset scales and complexities. We will improve this study in the revision.
|SSCNN|w/o lta | w/o lt | w/o sta | w/o st | w/o sea | w/o se | w si |
|---|----|---|---|---|---|---|---|
|0.363| 0.364 | 0.379 | 0.364 | 0.368 | 0.364 | 0.415| 0.378 |
---
**W2:** We will include the model size in Table 1 in the final version of the paper.
---
**W3:** We have conducted a comparative analysis between SSCNN and MTSF models based on LLM. Due to the time constraint, we directly copy the results obtained by baselines from an ICML 2024 publication [1]. The results, which are reported at the end of the reply box, demonstrate SSCNN's advantages in both performance and efficiency. This analysis strengthens our claim that SSCNN is a highly competitive model in the field of time series forecasting. We will include this comparative analysis in the final version of the paper to provide a more comprehensive evaluation of SSCNN's capabilities.
---
**W4:** We appreciate your concern, which prompts us to emphasize the key contributions of SSCNN that might otherwise be underestimated. As discussed in the last paragraph of Section 2, many lightweight models achieve a reduction in parameters, albeit at the cost of prediction accuracy. In contrast, SSCNN manages to maintain strong competitive performance with comparably reduced parameters. To substantiate this claim, we offer an additional comparison between SSCNN and some representative lightweight models, including FITS, highlighting the balance SSCNN strikes between model complexity and prediction accuracy. Due to the time constraint, we directly borrow the results obtained by baselines from an ICML 2024 publication [2]. This comparison will be included in the supplementary materials of the final version, showcasing SSCNN's advantages in terms of both efficiency and effectiveness in time series forecasting.
[1] Bian, Yuxuan, et al. "Multi-patch prediction: Adapting llms for time series representation learning." arXiv preprint arXiv:2402.04852 (2024).
[2] Lin, Shengsheng, et al. "SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters." arXiv preprint arXiv:2405.00946 (2024).
| Electricity | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.129 | 0.147 | 0.167 | 0.202 |
| FITS (lightweight, ICLR 2024) | 0.138| 0.152 | 0.166 | 0.205 |
| SparseTSF (lightweight, ICML 2024)| 0.138 | 0.146 | 0.164 | 0.203 |
| ALLM4TS (LLM-based, ICML 2024) | 0.127 | 0.145 | 0.163 | 0.206 |
|Time-LLM (LLM-based, ICLR 2024) | 0.140 | 0.151 | 0.171 | 0.210 |
|SSCNN (Ours) | **0.126** | **0.145** | **0.161** | **0.191** |
| Traffic | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.367 | 0.385 | 0.399 | 0.434 |
| ALLM4TS (LLM-based, ICML 2024) | 0.372 | 0.383 | 0.396 | 0.433 |
|Time-LLM (LLM-based, ICLR 2024) | 0.383 | 0.398 | 0.407 | 0.434 |
|FITS (lightweight, ICLR 2024) | 0.401| 0.407 | 0.420 | 0.456 |
|SparseTSF (lightweight, ICML 2024) | 0.382 | 0.388 | 0.402 | 0.445 |
|SSCNN (Ours) | **0.352** | **0.380** | **0.390** | **0.423** |
| ETTh1 | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.379 | 0.413 | 0.422 | 0.447 |
| ALLM4TS (LLM-based, ICML 2024) | 0.380 | **0.396** | 0.413 | 0.461 |
|Time-LLM (LLM-based, ICLR 2024) | 0.399 | 0.433 | 0.469 | 0.473 |
| FITS (lightweight, ICLR 2024) | 0.375 | 0.408 | 0.429 | 0.427 |
|SparseTSF (lightweight, ICML 2024) | **0.359** | 0.397 | **0.404** | **0.417** |
|SSCNN (Ours) | **0.361** | **0.401** | **0.410** | **0.424** |
---
Rebuttal Comment 1.1:
Comment: I am very grateful for the detailed response from the author. Currently, SSCNN has achieved excellent results in both performance and efficiency. I have a small question: Were the compared LLMs (Time-LLM and ALLM4TS) evaluated using conventional training and testing methods, or were they using Few-shot and Zero-shot settings?
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your feedback and the constructive suggestions you've provided for improving our paper. To answer your question, the compared LLMs were evaluated using conventional training and testing methods with the full training data. We agree that exploring SSCNN's performance under few-shot and zero-shot settings would be valuable, and we plan to investigate this in future work. | Summary: This paper identifies data decomposition as a core bottleneck in time series forecasting and proposes a novel model named SSCNN, a decomposition-based model innovatively enhanced with a selection mechanism. SSCNN is specifically designed to adeptly capture complex regularities in data while maintaining a minimal parameter scale. This paper also provides an in-depth comparison between decomposition and patching, examining both capability and parsimony. Comprehensive experiments show the superior performance of SSCNN.
Strengths: Strong Points:
1. The insight of this paper is attractive and compelling. One of the most crucial characteristics of time series is that they can be viewed as composed of components with different natures, e.g., season, trend, and residual. However, this characteristic has been rarely utilized in related works, or it has been implemented in trivial ways. This paper identifies data decomposition as a core bottleneck in time series forecasting and proves its effectiveness. By decomposing complex data into more learnable components, SSCNN achieves state-of-the-art performance with a minimal number of parameters.
2. The writing of this paper is very clear. I can easily follow the author's logic and understand their points.
3. The experimental results are extensive, including overall performance results, ablation studies of each component, hyperparameter experiments, etc., which validate the effectiveness of SSCNN.
4. The code is reproducible and well documented. I have successfully replicated the authors' results.
5. The authors also provide an in-depth comparative analysis and experimental results between patching and decomposition, which help readers understand the advantages of SSCNN’s insights.
This paper emphasizes the importance of decomposition in long-term time series forecasting, addressing the analytical gap in feature decomposition for the first time and providing insights into its rationale for capability and parsimony compared to patching.
Weaknesses: I have some minor questions and suggestions. If the author addresses the following points, I will increase my score.
Weak Points:
Experimental Setting: Most works using the Time-Series-Library repository predict up to 720 steps, yet your results do not include this prediction horizon. It would be beneficial to explain why 720-step predictions were not included.
Figures: I suggest the authors add more explanatory information to Figure 1 to help readers grasp the main architecture of SSCNN from the figure and its caption alone. Moreover, some font styles (italic) in Figure 1 seem different from the character styles in the main text. I recommend unifying the styles.
Minor Issues: The operator $\lfloor \cdot \rfloor$ is used in the paper but not explained. In Figure 3(a), if I understand correctly, “HDformer” should be replaced by “SSCNN.”
Figures: The text size of the legends in the figures is too small, making them difficult to read. Adjusting the text size to be consistent with the main text would enhance the readability of the figures and improve the overall presentation quality of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** We acknowledge the importance of including 720-step predictions. In our experiments, we observed that the ranking of models for 720 output steps differed only slightly from the ranking for 336 output steps. This suggests that the models maintain consistent performance over long-term horizons. To support this observation, we conducted additional experiments with SSCNN and selected baselines on the ECL, Traffic, and ETTh1 datasets for output steps of 96, 192, 336, and 720.
| Electricity | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.129 | 0.147 | 0.167 | 0.202 |
| FITS (lightweight, ICLR 2024) | 0.138| 0.152 | 0.166 | 0.205 |
| SparseTSF (lightweight, ICML 2024)| 0.138 | 0.146 | 0.164 | 0.203 |
| ALLM4TS (LLM-based, ICML 2024) | 0.127 | 0.145 | 0.163 | 0.206 |
|Time-LLM (LLM-based, ICLR 2024) | 0.140 | 0.151 | 0.171 | 0.210 |
|SSCNN (Ours) | **0.126** | **0.145** | **0.161** | **0.191** |
| Traffic | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.367 | 0.385 | 0.399 | 0.434 |
| ALLM4TS (LLM-based, ICML 2024) | 0.372 | 0.383 | 0.396 | 0.433 |
|Time-LLM (LLM-based, ICLR 2024) | 0.383 | 0.398 | 0.407 | 0.434 |
|FITS (lightweight, ICLR 2024) | 0.401| 0.407 | 0.420 | 0.456 |
|SparseTSF (lightweight, ICML 2024) | 0.382 | 0.388 | 0.402 | 0.445 |
|SSCNN (Ours) | **0.352** | **0.380** | **0.390** | **0.423** |
| ETTh1 | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based ICLR 2023) | 0.379 | 0.413 | 0.422 | 0.447 |
| ALLM4TS (LLM-based, ICML 2024) | 0.380 | **0.396** | 0.413 | 0.461 |
|Time-LLM (LLM-based, ICLR 2024) | 0.399 | 0.433 | 0.469 | 0.473 |
| FITS (lightweight, ICLR 2024) | 0.375 | 0.408 | 0.429 | 0.427 |
|SparseTSF (lightweight, ICML 2024) | **0.359** | 0.397 | **0.404** | **0.417** |
|SSCNN (Ours) | **0.361** | **0.401** | **0.410** | **0.424** |
---
**W2:** To provide a clearer understanding of the SSCNN architecture, we will enhance Figure 1 with more explanatory information. Here's an overview: (a) Embedding: Raw data points are independently mapped to a high-dimensional representation space using a shared 1x1 convolution operator across all variables and horizons. (b) In-sample Inference: The historical time series representations are processed by three temporal attention-based normalization (T-AttnNorm) layers with different instantiations of the attention map ($\mathcal{I}$), followed by a spatial attention-based normalization (S-AttnNorm) layer. This process yields estimations of four structured components ($\mu$) along with the residual ($R$). (c) Out-of-sample extrapolation: The decomposed components corresponding to each variable are individually extrapolated to future horizons using the matrix multiplication (MatMul) operator, with distinct instantiations of the attention map ($\mathcal{E}$). (d) Component fusion: Finally, all four components, combined with the residuals, are input into a polynomial regression layer to capture their complex interrelations.
Additionally, we will ensure that the font styles in Figure 1 are consistent with the main text in the final version.
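The component-fusion step (d) described above can be illustrated with a small degree-2 polynomial-regression sketch. The actual degree and parameterization in SSCNN may differ, and all names below are hypothetical; this only shows how components and their pairwise interactions can be combined:

```python
import numpy as np

def polynomial_fusion(components, weights_lin, weights_quad):
    """Fuse decomposed components with a degree-2 polynomial regression:
    a weighted sum of the k components plus all k*(k+1)/2 pairwise products."""
    comps = np.stack(components)   # shape (k, horizon)
    out = weights_lin @ comps      # linear terms
    k = comps.shape[0]
    q = 0
    for i in range(k):
        for j in range(i, k):      # include squares and cross terms
            out = out + weights_quad[q] * comps[i] * comps[j]
            q += 1
    return out
```

The cross terms are what let the fusion capture interrelations between components (e.g. a seasonal component modulated by the trend) rather than treating them as purely additive.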
---
**W3:** We appreciate your attention to these issues. To address them:
(a) We will include a clear definition and explanation of the operator $\lfloor \cdot \rfloor$, which denotes the floor function, in the paper.
(b) Regarding Figure 3(a), we will correct the labeling error by replacing “HDformer” with “SSCNN”.
(c) We acknowledge the issue with the text size of the legends in the figures. We will adjust the text size to ensure consistency with the main text, thereby enhancing the readability and overall presentation quality of the paper.
Thank you for your helpful feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. My concerns have been well addressed.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer aS9R,
We sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper. If you have any further questions or concerns, please feel free to let us know.
Authors | Summary: This paper addresses long-term time series forecasting and critiques the reliance on complex models with extensive parameters. It proposes a decomposition method specifically designed for time series dynamics, achieving better forecasting performance across various datasets. Remarkably, the new model uses over 99% fewer parameters than other methods, highlighting the efficiency of domain-specific approaches. This research calls for a move away from complexity in LTSF, showcasing the effectiveness of focused decomposition techniques rather than relying on large-scale models.
Strengths: 1. The paper is praiseworthy for its intuitive approach. It tackles a significant problem by proposing a method that matches or surpasses current state-of-the-art models like PatchTST while using a smaller model footprint. The experimental results strongly validate this approach.
2. The model consistently performs well under various experimental conditions, including different input window sizes and hyperparameter settings. Statistical tests demonstrate the reliability of the results across multiple initializations, strengthening the study's credibility.
3. The authors provide a thorough comparison between decomposition and patching in terms of effectiveness and simplicity, demonstrating the superior benefits of decomposition over patching.
Weaknesses: 1. The clarity of the methodology could be improved with further elaboration.
2. The evaluation could be strengthened by including comparisons with LLM-based models, such as:
[1] Jin, Ming, et al. "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models." The Twelfth International Conference on Learning Representations.
[2] Bian, Yuxuan, et al. "Multi-patch prediction: Adapting llms for time series representation learning." arXiv preprint arXiv:2402.04852 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The methodology would benefit from additional explanation regarding the structure and rationale of the Polynomial Regression layer depicted in Figure 1. If this layer represents a standard approach, including references would enhance clarity.
2. Clarifying the decision to exclude an attention mechanism from the long-term component, despite its presence in other components like seasonal, short-term, and spatial, would strengthen the methodological coherence and aid reader comprehension.
3. Figure 1 requires clarification on several elements. Specifically, the purpose of the 4x4 blocks and addressing inconsistent text formatting (e.g., $\mathcal{E}$) compared to the main text would improve comprehensibility.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Whether this model can be applied to other types of time series data, e.g. trajectory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** We have addressed the concerns raised in your questions. Please kindly refer to the response to Q1 and Q2 below.
---
**W2:** We have conducted a comparative analysis between SSCNN and MTSF models based on LLM [1] [2]. Due to the time constraint, we directly copy the results obtained by baselines from an ICML 2024 publication [1]. The results, which are reported at the end of the reply box, demonstrate SSCNN's advantages in both performance and efficiency. This analysis strengthens our claim that SSCNN is a highly competitive model in the field of time series forecasting. We will include this comparative analysis in the final version of the paper to provide a more comprehensive evaluation of SSCNN's capabilities.
[1] Bian, Yuxuan, et al. "Multi-patch prediction: Adapting llms for time series representation learning." arXiv preprint arXiv:2402.04852 (2024).
[2] Jin, Ming, et al. "Time-LLM: Time Series Forecasting by Reprogramming Large Language Models." The Twelfth International Conference on Learning Representations.
| Electricity | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based, ICLR 2023) | 0.129 | 0.147 | 0.167 | 0.202 |
| ALLM4TS (LLM-based, ICML 2024) | 0.127 | 0.145 | 0.163 | 0.206 |
| Time-LLM (LLM-based, ICLR 2024) | 0.140 | 0.151 | 0.171 | 0.210 |
| SSCNN (Ours) | **0.126** | **0.145** | **0.161** | **0.191** |
| Traffic | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based, ICLR 2023) | 0.367 | 0.385 | 0.399 | 0.434 |
| ALLM4TS (LLM-based, ICML 2024) | 0.372 | 0.383 | 0.396 | 0.433 |
| Time-LLM (LLM-based, ICLR 2024) | 0.383 | 0.398 | 0.407 | 0.434 |
| SSCNN (Ours) | **0.352** | **0.380** | **0.390** | **0.423** |
| ETTh1 | 96 | 192 | 336 | 720 |
|-----------|-----------|-----------|-------------|-------------|
| PatchTST (Transformer-based, ICLR 2023) | 0.379 | 0.413 | 0.422 | 0.447 |
| ALLM4TS (LLM-based, ICML 2024) | 0.380 | **0.396** | 0.413 | 0.461 |
| Time-LLM (LLM-based, ICLR 2024) | 0.399 | 0.433 | 0.469 | 0.473 |
| SSCNN (Ours) | **0.361** | 0.401 | **0.410** | **0.424** |
---
**Q1:** The Polynomial Regression layer depicted in Figure 1 is inspired by the work of [3], where we extend the original module to include both additive and multiplicative relations. This extension allows us to model more complex interactions between the decomposed components. We will provide a more detailed explanation of the Polynomial Regression layer in the methodology section, including its structure and rationale. Additionally, we will cite relevant references to enhance the clarity and contextual understanding of this approach.
[3] Deng, Jinliang, et al. "Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting." IEEE Transactions on Knowledge and Data Engineering (2024).
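To make the additive-and-multiplicative idea concrete, here is a hedged sketch; the function name, the weights, and the exact second-order form are our own illustrative assumptions, not the paper's implementation:

```python
# Hypothetical fusion layer combining decomposed components through both
# additive (first-order) and multiplicative (second-order) terms.
# Weights here are fixed for illustration; in a model they would be learned.
from itertools import combinations

def polynomial_fusion(components, w_add, w_mul, bias=0.0):
    """y = bias + sum_i w_add[i]*c_i + sum_{i<j} w_mul[(i,j)]*c_i*c_j"""
    y = bias
    for i, c in enumerate(components):
        y += w_add[i] * c                       # additive relations
    for i, j in combinations(range(len(components)), 2):
        y += w_mul[(i, j)] * components[i] * components[j]  # multiplicative
    return y

# Long-term, seasonal, short-term, spatial component values at one step:
comps = [2.0, -1.0, 0.5, 0.1]
w_add = [1.0, 1.0, 1.0, 1.0]
w_mul = {pair: 0.0 for pair in combinations(range(4), 2)}
w_mul[(0, 1)] = 0.5   # one nonzero multiplicative interaction
y = polynomial_fusion(comps, w_add, w_mul)
# y = (2.0 - 1.0 + 0.5 + 0.1) + 0.5 * (2.0 * -1.0) = 0.6
```

The second-order terms are what distinguish this from a plain weighted sum of components: they let one component modulate another's contribution.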
---
**Q2:** Thank you for highlighting this point. The decision to exclude an attention mechanism from the long-term component was made after careful consideration. Although it can be incorporated, as we have done with the seasonal, short-term, and spatial components, we found that it brought little to no gain in forecasting accuracy across the datasets we analyzed. To clarify, the attention mechanism is beneficial when a component exhibits a significantly evolving distribution over time. It reduces estimation bias by assigning higher weights to data points with higher correlation. However, for the long-term component in our case, the distribution tends to be stable throughout the input period. In such scenarios, the attention mechanism does not contribute additional value, rendering it redundant and unnecessary. We will include this explanation in the main text of the final version to enhance the coherence of our methodology and improve reader comprehension.
---
**Q3:** The 4x4 blocks in Figure 1 are used to exemplify the selection maps $\mathcal{I}^*$ and $\mathcal{E}^*$ as defined in the main text, with $T_{in}$ and $T_{out}$ both instantiated as 4. To enhance clarity, we will explicitly label these blocks in the figure to indicate that they are examples of the associated selection maps. Additionally, we will address the inconsistent text formatting, such as the use of E, to ensure consistency between the figure and the main text.
---
Rebuttal Comment 1.1:
Comment: I appreciate that you have addressed most of my concerns. I have one additional suggestion: the answer to Q2 could be further strengthened by including some empirical results to support your decision.
---
Reply to Comment 1.1.1:
Comment: Thank you for your suggestion. We have conducted additional experiments to provide empirical results supporting our decision. For the Electricity dataset, the accuracy with and without long-term attention was 0.128 and 0.129, respectively. For the Traffic dataset, the results were 0.356 and 0.360, respectively. These findings indicate that the inclusion of long-term attention has a minimal and sometimes negative impact on performance. | Summary: This study unveils a groundbreaking approach to time series forecasting, notable for its minimal parameter count. It stands as the first model to consistently outperform state-of-the-art (SOTA) techniques while remaining compact. Unlike prevalent methods such as PatchTST and iTransformer, which are powerful but cumbersome, and emerging methods like TimeMixer and SCNN, which are lightweight yet inadequate for complex tasks, this model achieves superior performance without the associated heft.
Strengths: 1. The model consistently delivers superior accuracy compared to state-of-the-art (SOTA) methods while maintaining a minimal model size. This accomplishment distinguishes it from other methods.
2. The framework unifies the ability to capture various patterns in time series data, offering a streamlined and enhanced alternative to existing models built with MLPs or Transformers.
3. The authors conduct extensive experiments, showcasing the model's strong performance compared to selected SOTA models, which are sufficiently representative of the latest advancements in the field.
Weaknesses: 1. There is a gap between the introduction and Section 3 regarding the decomposition of the time series into four components. The authors do not explain why these four components are sufficient. For longer sequences, is there a need for more components? Are there references that support this approach? This discussion should be included at the beginning of Section 3.
2. Manually disabling the spatial component for certain datasets appears suboptimal. It would be more effective if the algorithm could automatically determine whether including the spatial component is beneficial for each dataset.
3. The paper's formatting needs improvements. It seems the authors may have additional content to include. Although the figures in the methodology section are clear and informative, resizing and rearranging them could provide more space for adding valuable content to the main text.
Technical Quality: 3
Clarity: 4
Questions for Authors: If the authors can address the weak points, I would reconsider the score.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have raised the limitation of the model concerning computational efficiency, along with potential solutions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** Thank you for highlighting this important point. We apologize for not providing sufficient context on the four components used in our model. Disentangling these components has been shown to be effective for time series forecasting in numerous studies [1] [2] [3] [4]. Our approach builds on these established methods with a novel selection mechanism, which provably reduces estimation bias for the considered components. Our method advances the performance of decomposition-based approaches to the SOTA level, which before our work could be achieved only by large-scale models. The fact that our model achieves this with only four components suggests that, regardless of model complexity, the representation power of these large-scale models does not exceed the scope covered by these four components. As you pointed out, this raises a critical question for future research: are there additional components that could be modeled to further improve forecasting accuracy, especially for longer sequences? Currently, this question remains open and unanswered. We will discuss this issue and provide relevant references at the beginning of Section 3 in the revised version, ensuring a more comprehensive and informative explanation.
[1] Cleveland, Robert B., et al. "STL: A seasonal-trend decomposition." J. off. Stat 6.1 (1990): 3-73.
[2] Wen, Qingsong, et al. "RobustSTL: A robust seasonal-trend decomposition algorithm for long time series." Proceedings of the AAAI conference on artificial intelligence. Vol. 33. No. 01. 2019.
[3] Deng, Jinliang, et al. "Disentangling Structured Components: Towards Adaptive, Interpretable and Scalable Time Series Forecasting." IEEE Transactions on Knowledge and Data Engineering (2024).
[4] Wang, Shiyu, et al. "TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting." The Twelfth International Conference on Learning Representations (2024).
---
**W2:** Thank you for your insightful suggestion. Our primary aim is to demonstrate the pioneering predictive capability of a purely decomposition-based model with an exceptionally low parameter count—less than 1% of what large-scale models require. The current architecture leverages the statistical characteristics of the dataset of interest, ensuring that the design is not arbitrary. In particular, our analysis of correlation, autocorrelation, conditional correlation, and conditional autocorrelation—visualized in Figure 7—guided our decisions regarding the architecture's configuration and critical hyper-parameters, such as cycle length and the connection between the variables. While these decisions are based on our judgment, they can be automated through a Python script, a straightforward implementation. Despite its simplicity, this method has already resulted in a sensible architecture that outperforms baselines, thereby validating our contributions. We acknowledge that the current method may not achieve the absolute upper limit of performance possible with all potential decomposition-based model configurations. This limitation is discussed in the "Conclusions, Limitations, and Social Impacts" section. We agree that learning-based methods, such as neural architecture search (NAS), offer a promising avenue for further improvement and optimization. This remains an exciting area for future work.
---
**W3:** Thank you for your valuable suggestion. We will enhance the layout and formatting in the final version of the paper. This will involve resizing and rearranging the figures in the methodology section to ensure they are clear and informative while providing additional space for valuable content in the main text. Our goal is to make the main body of the paper more self-contained and comprehensive.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply which has addressed all my questions. I have improved my rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer oTnm,
we sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper.
Authors | Rebuttal 1:
Rebuttal: We are grateful for the detailed and constructive feedback provided by the reviewers. The positive reception of our work, particularly the recognition of its innovative approach and significant contributions to the field of time series forecasting, is highly encouraging. Below, we summarize the key strengths of our paper as highlighted by the reviewers:
1. **Importance and Relevance** (**Reviewer a5mv**): Acknowledged the significance of tackling efficient and accurate long-term time series forecasting.
2. **Innovative Approach and Original Contributions** (**Reviewers a5mv, UAfk, p9WT, oTnm, aS9R**): Highlighted the innovative use of decomposition with a selection mechanism. The reviewers appreciated the originality of the approach in feature decomposition, effectively capturing complex data patterns with a minimal model size. The work addresses a critical gap in SOTA methods, demonstrating the potential for smaller models to match or surpass existing methods.
3. **Lightweight and Strong Performance** (**Reviewers UAfk, oTnm, aFLy, p9WT**): Noted the substantial reduction in parameter count while maintaining or surpassing the performance of SOTA models. The reviewers appreciated the model's capability to handle diverse datasets and achieve state-of-the-art accuracy.
4. **Comprehensive Evaluation** (**Reviewers UAfk, aS9R, aFLy, p9WT**): Praised the extensive and thorough experiments, including overall performance results, ablation studies, and comparisons with other SOTA methods. The reviewers also noted the consistent performance across different experimental conditions and the clear documentation of results.
5. **Clarity and Quality** (**Reviewers aS9R, UAfk, p9WT**): Commended the clarity of the writing, logical flow, and the comprehensibility of the paper. Reviewers appreciated the supplementary materials and clear usage instructions for the open-source code, which facilitated reproducibility.
We have carefully addressed each of the reviewers' concerns and have made corresponding revisions and clarifications throughout the manuscript. We believe these changes enhance the clarity, coherence, and overall quality of the paper. We invite the reviewers to refer to the specific reply box associated with their review for detailed responses and updates. Thank you for your thoughtful consideration, and we look forward to your feedback on the responses. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Title: Parsimony or Capability? Decomposition delivers both in long term time series forecasting.
Long-term time series forecasting has been an important research problem that applies to many different domains. This paper proposes a decomposition method that achieves significant performance on the benchmarks with fewer parameters. The method has been evaluated extensively on various datasets and is competitive with existing models. With such an approach, models can be enhanced to adapt to domain characteristics more effectively in various time series applications.
Strengths: 1. SSCNN reduces the parameter count substantially compared to traditional models, holding onto less than 1% of the parameters while still performing well across different datasets.
2. The model captures complex data patterns effectively using fewer parameters, utilizing a structured component-based approach with a selection mechanism to improve prediction accuracy.
3. SSCNN excels in time series forecasting, managing diverse data types and confirming its effectiveness through thorough experimentation.
4. SSCNN improves plain feature decomposition by incorporating a selection mechanism. This allows the model to identify fine-grained dependencies at each time step, which is essential for enhancing the accuracy of the decomposed structured components and, consequently, the overall prediction accuracy.
5. Extensive analysis has been performed to validate the method on existing benchmarks and compared with state-of-the-art methods.
6. Supplementary materials are satisfactory and provide explanation about the dataset and the implementation.
Weaknesses: 1. Figures lack captions.
2. Include some limitations of the model as well.
3. The second and third contributions look quite similar.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Please rewrite the contributions to make them clearer (2nd and 3rd).
2. Add descriptions to Figures 2 and 3.
3. In section D) Implications of Decomposition on Spatial-Temporal Correlations, please correct the captions of figures in temporal and spatial recovery.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Authors have adequately addressed the limitations related to computational efficiency and capability of model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** We apologize for the lack of captions for Figure 2 and Figure 3. We also include the caption for Figure 1, as requested by Reviewer aS9R. The captions for these figures are shown below:
Figure 2: Examination of parameter scale and computation scale against the forward window size and the backward window size on the ECL dataset. (a) (c) For a short forward or backward window size, SSCNN consumes more parameters than only DLinear. As the window size grows, SSCNN scales up at a much slower inflation rate than DLinear. (b) (d) SSCNN's FLOPS consumption ranks in the middle among SOTAs, less efficient than iTransformer and Autoformer yet more efficient than PatchTST and Crossformer.
Figure 3: Impacts of backward window size. While the performance of other SOTA forecasters does not necessarily benefit from an increased lookback length, SSCNN shows improved performance with an enlarged lookback window.
Figure 1: (a) Embedding: Raw data points are independently mapped to a high-dimensional representation space using a shared 1x1 convolution operator across all variables and horizons. (b) In-sample Inference: The historical time series representations are processed by three temporal attention-based normalization (T-AttnNorm) layers with different instantiations of the attention map ($\mathcal{I}$), followed by a spatial attention-based normalization (S-AttnNorm) layer. This process yields estimations of four structured components ($\mu$) along with the residual ($R$). (c) Out-of-sample extrapolation: The decomposed components corresponding to each variable are individually extrapolated to future horizons using the matrix multiplication (MatMul) operator, with distinct instantiations of the attention map ($\mathcal{E}$). (d) Component fusion: Finally, all four components, combined with the residuals, are input into a polynomial regression layer.
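As an aside, the shared 1x1 convolution in step (a) is equivalent to applying one small linear map independently at every variable and time step. A minimal sketch (names and numbers are ours, purely illustrative, not the paper's code):

```python
# Hypothetical sketch: a shared 1x1 convolution lifts each scalar
# observation to a d-dimensional representation using the SAME weights
# at every time step (and, by extension, every variable).

def embed(series, weight, bias):
    """series: scalars for one variable over T steps;
    weight, bias: length-d lists defining the shared map."""
    return [[w * x + b for w, b in zip(weight, bias)] for x in series]

x = [1.1, 3.1, 1.0]
emb = embed(x, weight=[1.0, -1.0], bias=[0.0, 0.5])
# Each time step now carries a 2-dim representation,
# e.g. emb[0] is approximately [1.1, -0.6].
```

Because the map is shared, the parameter count is independent of the sequence length, which is consistent with the parsimony argument in the paper.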
---
**W2:** We apologize for not highlighting the limitations of the model prominently enough. Due to space constraints, we discussed the limitations in the last section together with broader impacts and conclusions. We will discuss the limitations in detail and move them into a new separate section in the revision.
Briefly, the identified limitations include: (a) While our approach effectively reduces model size, its computational complexity remains comparable to Transformer-based models, which is more demanding than linear models. This suggests potential for further optimization in terms of computational efficiency. (b) Currently, the combination of the proposed decomposition modules is determined with statistical insight into the data, which may not be optimal. Future work could explore methods for efficiently searching for the optimal combination of these modules. These points outline practical areas for improvement and future exploration. We will ensure these limitations are clearly presented in the final version of the paper.
---
**W3:** We apologize for the confusion between the second and third contributions. The second contribution relates to Section 4, where we theoretically explore the equivalence in capability between the patching operator and the decomposition operator. We also highlight the advantage of decomposition in terms of parameter efficiency, suggesting that decomposition can serve as a parameter-efficient alternative to patching, which is often considered essential in many state-of-the-art (SOTA) models. The third contribution pertains to the empirical evaluation of SSCNN against SOTA baselines, detailed in Section 5.
To clarify, we will revise the second contribution as follows:
"We conduct an in-depth theoretical comparison between decomposition and patching, analyzing both capability and parsimony. This analysis challenges the necessity of patching as an essential component in modern time series forecasting models, proposing decomposition as a more parameter-efficient alternative."
---
**Q1:** Please check the response to W3.
---
**Q2:** Please check the response to W1.
---
**Q3:** We will rectify this issue in the final version.
---
Rebuttal Comment 1.1:
Comment: Thank you so much for addressing my comments.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer UAfk,
we sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper.
Authors | Summary: The paper approaches the problem of long term time series forecasting (LTSF) using a compositional technique to reduce the model size without compromising the quality of solution. The proposed technique is a transformer based architecture with a lower number of parameters, and delivers similar performance as state of the art models for LTSF.
The limitation of existing approaches, such as data patching, is that they fail to take into account the spatio-temporal dependencies and end up with a blow-up in the number of latent variables. This results in only a very small improvement even if the model size is increased substantially. The proposed technique in the paper is based on an inference step and an extrapolation step without any information loss.
The paper evaluates the proposed approach, called SSCNN, on seven datasets, which contain a combination of regular and volatile patterns. The baseline and state-of-the-art approaches compared against include iTransformer, TimeMixer, and PatchTST. SSCNN consistently achieves the best scores with respect to MSE and MAE. The paper also conducts ablation studies to show that each new component in the architecture is vital to the performance.
Strengths: The work studies an important and hard problem in time series forecasting which is the problem of efficient and accurate long term forecasting. Compositional techniques have been successful in other areas of AI including reinforcement learning, planning, and finite state controller synthesis. So, it makes sense to apply similar ideas in the space of long term time series forecasting.
Weaknesses: While the high level message is presented well, I found the details of the proposed method and experiments are hard to follow. A running example with the explanation of the new layers will be useful.
The main contribution with respect to results is somewhat hard to grasp and align with the theoretical claims of the paper. Overall, I think there is room for improvement in the presentation of experimental results. I found some missing details in the experimental section that include:
1. Why is SSCNN missing from Figure 3(a)?
2. What is the value of T_{out} in Figure 2?
3. What is the forward window size in Figure 3?
In Figure 2, it would be useful to move some of the methods to the appendix and keep only the critical ones in the main body of the paper. The same is true for Figure 3. It is hard to go back and forth between Figure 1 and Figures 2 & 3.
Minor:
1. I would suggest providing some more details about the experimental results in point 3 of the contributions (lines 80-82)
2. Figure 3 is hard to read in print.
3. Having only one legend for all the subplots (Figure 2(a)-(d) and Figure 3(a)-(d)) would be better than repeating the legends in all subplots.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Are there any non neural network based time series forecasting models which make use of compositionality?
2. Do any of the introduced layers (temporal, seasonal, short-term, etc) have similarities with any existing literature? What I mean is that, is the novelty in getting the layers to work together, or, also in defining the individual layers?
3. In order to better study the computational cost, can you share the total/average running time for each method for each dataset?
4. Why is SSCNN missing from Figure 3(a)?
5. What is the value of T_{out} in Figure 2?
6. What is the forward window size in Figure 3?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: Yes, the paper has outlined the limitation of computational efficiency and provided some insight into how it can be improved in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review and constructive comments.
---
**W1:** We apologize if the current symbolic formulas present challenges to comprehension. To enhance understanding, we have tried to make the selection mechanisms formulated by Equations (3)-(7) more accessible with the example attention maps provided in Figure 1. To further illustrate the computational flow, we present a toy example focusing on the inference of long-term, seasonal, and short-term components, with the scaling operation and the selection mechanism being disabled for simplicity:
Given a historical sequence of observations: 1.1, 3.1, 1.0, 3.0, 0.9, 2.9:
Long-term Component: Derived as 2, 2, 2, 2, 2, 2, resulting in the sequence of residuals: -0.9, 1.1, -1, 1, -1.1, 0.9.
Seasonal Component: Computed as -1, 1, -1, 1, -1, 1, leading to the residual series: 0.1, 0.1, 0, 0, -0.1, -0.1.
Short-term Component: Derived as 0, 0.1, 0.05, 0, -0.05, -0.1 with a window size of 2, obtaining the residual series: 0.1, 0, -0.05, 0, -0.05, 0.
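For concreteness, the walk-through above can be reproduced with a short script. This is our own illustrative sketch, not the authors' code; the exact averaging rules (global mean, per-phase means, causal moving average) are inferred from the numbers in the example:

```python
# Each component is estimated from the residual left by the previous one,
# with scaling and the selection mechanism disabled, as in the toy example.

def long_term(x):
    # Long-term component: the global mean, repeated at every step.
    mu = sum(x) / len(x)
    return [mu] * len(x)

def seasonal(x, period):
    # Seasonal component: the mean of each phase (index modulo period).
    means = [sum(x[p::period]) / len(x[p::period]) for p in range(period)]
    return [means[i % period] for i in range(len(x))]

def short_term(x, window):
    # Short-term component: causal moving average; zero until a full
    # window of history is available.
    return [sum(x[i - window + 1:i + 1]) / window if i >= window - 1 else 0.0
            for i in range(len(x))]

x = [1.1, 3.1, 1.0, 3.0, 0.9, 2.9]
lt = long_term(x)                     # 2, 2, 2, 2, 2, 2
r1 = [a - b for a, b in zip(x, lt)]   # -0.9, 1.1, -1, 1, -1.1, 0.9
se = seasonal(r1, period=2)           # -1, 1, -1, 1, -1, 1
r2 = [a - b for a, b in zip(r1, se)]  # 0.1, 0.1, 0, 0, -0.1, -0.1
st = short_term(r2, window=2)         # 0, 0.1, 0.05, 0, -0.05, -0.1
r3 = [a - b for a, b in zip(r2, st)]  # 0.1, 0, -0.05, 0, -0.05, 0
```

Up to floating-point rounding, the computed components match the sequences listed in the walk-through.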
---
**W2:** We apologize for any confusion caused by the perceived disconnect between the experimental results and the theoretical claims of the paper. The experimental results presented in Table 1 demonstrate SSCNN's improved efficacy compared to state-of-the-art methods, including the patching-based methods PatchTST and iTransformer. These results align with the theoretical claims made in Section 4.1, suggesting that SSCNN's decomposition-based approach either matches or exceeds the capabilities of patching-based methods. Additionally, Figure 2 illustrates the optimal parsimony of SSCNN, supporting the analysis in Section 4.2. Furthermore, Figure 3 highlights SSCNN's superior scalability with an increasing lookback window size. Unlike other baselines, SSCNN consistently benefits from more historical data points, regardless of the dataset's dynamic nature. This scalability is crucial for time series forecasting, as more data generally leads to more accurate predictions, in line with our expectations.
---
**W3:** (1). Thank you for catching this error. The label "HDformer" in Figure 3(a) is indeed a typo and should be replaced with "SSCNN." We will correct this in the revised version.
(2). In Figure 2, for subfigures (a) and (b), $T_{in}$ is set to 96, while $T_{out}$ is represented by the x-axis. For subfigures (c) and (d), $T_{out}$ is set to 96, while $T_{in}$ is represented by the x-axis. We will clarify this information in the revised figure caption to improve understanding.
(3). The forward window size in Figure 3 is indicated by $T_{out}$, which is noted in each subfigure. We will make sure this information is more clearly presented in the revised version for better clarity.
---
**W4:** We will revise the third contribution as follows in the final version:
We conducted comprehensive experiments to evaluate SSCNN across various dimensions. SSCNN demonstrates improved effectiveness, with increases ranging from 4% to 20%, and a reduction in model size ranging from 70% to 99% on average across all datasets, compared to state-of-the-art baselines.
---
**W5:** We will also address the other concerns you raised about the layout and formatting in the final version.
---
**Q1:** The concept of compositionality in time series forecasting has roots in classical statistical methods, most notably STL [1]. Our method builds on this foundational idea by integrating it with deep learning techniques, enabling nonlinear decomposition and re-composition of the time series data. Additionally, we also extend STL to consider the short-term and spatial components. In this way, our method achieves a more nuanced and effective modeling of complex time series patterns.
[1] Cleveland, Robert B., et al. "STL: A seasonal-trend decomposition." J. off. Stat 6.1 (1990): 3-73.
---
**Q2:** The novelty of our approach lies in the reinvention of these layers, which are inspired by the existing literature. As mentioned in the introduction and related work sections, some existing works have also explored the idea of modeling different heterogeneous components separately. These works typically employed techniques such as plain moving averages or MLPs. However, they often struggled to outperform advanced baselines like PatchTST across various benchmark datasets, as shown in Tables 1 and 2. Moreover, these approaches lacked scalability, requiring exponentially more parameters than SSCNN for large forward and backward window sizes, as demonstrated in Figures 2(a) and 2(c). Our method significantly advances these prior approaches by integrating customized selection mechanisms for the components of different behaviors within both backward and forward estimation processes. This extension enhances our model's capability and parsimony, resulting in superior performance.
---
**Q3:** We have preliminarily reported the measurements of training and inference times for SSCNN compared to two representative baselines, PatchTST and iTransformer, on the Electricity and ETTh1 datasets. We will improve this evaluation study in the revision. Our preliminary findings indicate that iTransformer is the most efficient model among the three, requiring the least time for both training and inference. The relatively higher running time of SSCNN can be in part attributed to its sequential handling of the considered components. In contrast, iTransformer handles these components in parallel using a unified MLP, contributing to its efficiency advantage. We have mentioned this limitation of SSCNN in the section of "Conclusions, Limitations and Broader Impacts".
| Models| ECL | ETTh1 |
|-|-|-|
|iTransformer | 11min (training), 0.05s (inference) | 2min (training), 0.01s (inference) |
| PatchTST | 95min (training), 0.3s (inference) | 10min (training), 0.09s (inference) |
| SSCNN | 72min (training), 0.2s (inference) | 8min (training), 0.07s (inference) |
---
**Q4, Q5, Q6:** Please check the above response to W3.
---
Rebuttal Comment 1.1:
Title: Thanks for responding to my queries
Comment: I would like to thank the authors for addressing my questions thoroughly.
I have more clarity on the novelty aspect now.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer a5mv,
We sincerely value your feedback and the constructive suggestions you've provided for enhancing our paper. If you have any further questions or concerns, please feel free to let us know.
Authors | null | null | null | null |
Sparse-view Pose Estimation and Reconstruction via Analysis by Generative Synthesis | Accept (poster) | Summary: The paper introduces a novel optimization-based method for sparse-view 3D reconstruction from unposed images. The method uses off-the-shelf pose estimator to get pose initialization, then it uses rendering loss and generative priors to optimize the pose and 3D reconstruction. In detail, the generative priors involve a multi-view SDS loss on generated novel views using Zero123. The method demonstrates satisfying results on the evaluation data, and the ablation study shows the effectiveness of each proposed technique.
Strengths: - Good performance. The reconstruction quality and pose estimation accuracy are satisfying.
- The paper is well-written and is easy to follow.
- The idea of rejecting images with large pose error is interesting.
- The technical part of the paper is solid.
Weaknesses: - Missing baseline. For the reconstruction methods, the only baseline is LEAP, which is a feedforward method. In contrast, the proposed method is an optimization-based method, which introduces post-processing of the estimated poses. I would suggest adding SPARF [1] as a baseline and using the same pose initialization. Moreover, why not compare with UpFusion?
- Unknown inference speed. Will the joint optimization of pose and shape be slow? Could you provide an analysis of inference time?
- Related work. One related work is iFusion [2], which uses generative priors for pose estimation and is very relevant to the philosophy of the proposed method. Another related work is FORGE [3], which introduces pose optimization for sparse-view reconstruction. Moreover, the authors should discuss prior work on sparse-view reconstruction from unposed images in more detail, providing more comparison and contrast with it. The current discussion is too short (Line 90-92).
- Ablation study. The ablation study is performed with the Ray Diffusion pose initialization. How would the results look with Dust3r initialization? This is important as the ablation should be performed with the best base model.
[1] Truong, Prune, et al. "Sparf: Neural radiance fields from sparse and noisy poses." CVPR 2023.
[2] Wu, Chin-Hsuan et al. “iFusion: Inverting Diffusion for Pose-Free Reconstruction from Sparse Views.” ArXiv 2023.
[3] Jiang, Hanwen et al. “Few-View Object Reconstruction with Unknown Categories and Camera Poses.” 3DV 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The introduction spends a lot of space discussing the chicken-and-egg problem of pose estimation and reconstruction. However, I don't think it is quite related to the technical part, as the proposed method still needs pose initialization using off-the-shelf methods. The method doesn't provide a novel perspective regarding how to solve the chicken-and-egg problem, and using pose initialization is quite common in prior works, e.g., SPARF, FORGE, and FvOR, or even traditional SfM methods. Why do the authors want to emphasize this?
- Is it possible to evaluate the outlier removal method? For example, the authors can evaluate the correlation between the removal and the pose error. If the proposed method works well, they should have strong correlations. Moreover, it would be good to provide statistics on the outlier removal method, e.g., how many images are removed on average.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Missing baseline.
We would like to thank the reviewer for the suggestion on baselines. We include the comparison between the proposed approach and SPARF in the General Response. In addition, we also compare our method with UpFusion on novel view synthesis:
| Method | Init Cam Pose | (N=6) PSNR | LPIPS | (N=8) PSNR | LPIPS |
| :------- | :------------ | :--------- | :----- | :--------- | :----- |
| LEAP | / | 12.84 | 0.2918 | 12.93 | 0.2902 |
| UpFusion | / | 13.30 | 0.2747 | 13.27 | 0.2744 |
| Ours | Ray Diffusion | 13.63 | 0.2697 | 15.30 | 0.2304 |
| Ours | DUSt3R | 15.56 | 0.2173 | 17.03 | 0.1870 |
UpFusion performs slightly better than LEAP, while our approach consistently outperforms both pose-free baselines across two different pose initializations and view counts. Our method improves consistently by leveraging more precise initial camera poses and more observed input images. However, LEAP and UpFusion show little performance increase when more views are available due to their lack of geometry awareness. We also include a qualitative comparison between our approach and UpFusion in **Fig. 4** of the rebuttal PDF file.
> Unknown inference speed.
We apologize for the ambiguity in our earlier version. Our shape-pose optimization is fast, benefiting from efficient Gaussian Splatting compared to NeRF baselines: on average our inference time is about 9 minutes, whereas SPARF may take more than 10 hours to train a full model. We discuss inference time more thoroughly in our General Response and will revise our paper according to the reviewer's feedback.
> Related work.
Thanks for mentioning related work! We will incorporate our discussion into our updated version.
+ iFusion leverages diffusion priors from the diffusion model (i.e., Zero-123) for pose estimation in a similar manner to ID-Pose, where relative camera poses are parameterized as conditioning inputs to the diffusion model and refined via gradient descent on the diffusion noise prediction. Compared to iFusion, we also leverage diffusion priors from Zero-123, but we optimize camera poses directly via photometric error from differentiable rendering. In this process, we explicitly form a 3D representation (3DGS) for optimization.
+ FORGE (which is an earlier work by the authors of LEAP) uses the synergy of pose estimation and 3D reconstruction by inferring 3D features that are shared by both tasks. Our work shares a similar high-level idea in that we make use of the estimated camera poses for 3D reconstruction, and the reconstructed 3D, in turn, is exploited to help refine camera poses and identify outliers. By doing this, we also hope to make both sub-tasks benefit from each other.
In our final version, we will add detailed discussions about pose-free approaches (e.g., LEAP, UpFusion) in our Related Work section.
> Ablation study.
We thank the reviewer for suggesting an ablation study on DUSt3R, and present the results below:
| | **Rot@5°** | **Rot@15°** | **CC@10%** | PSNR | LPIPS | **F1** |
| ------------------------------- | ---------- | ----------- | ---------- | ----- | ------ | ----------- |
| DUSt3R | 52.3 | 93.8 | 82.2 | / | / | / |
| \+ Pose-3D Co-opt. (w/o SDS) | 80.9 | 95.8 | 91.6 | 16.79 | 0.2072 | 64.70 |
| \+ SDS (vanilla Zero123) | 78.7 | 95.3 | 90.2 | 16.38 | 0.2097 | 69.45 |
| \+ SDS (Our 6-DoF Zero123) | 81.1 | 95.9 | 91.0 | 16.75 | 0.1941 | 75.04 |
| \+ Outlier Removal & Correction | 83.7 | 96.2 | 93.5 | 17.03 | 0.1870 | 75.80 |
> The introduction spends a lot space discussing the chicken-and-egg problem of pose estimation and reconstruction... Why the authors want to emphasize this?
We agree with the reviewer that classical SfM methods did indeed tackle this chicken-and-egg problem. Our introduction was instead motivated by the fact that more recent learning-based sparse-view reconstruction methods ignore this chicken-and-egg nature. For example, approaches like LEAP/UpFusion seek to side-step the pose estimation task altogether. On the other hand, methods that explore generative-prior-based reconstruction (e.g. ReconFusion/SparseFusion) assume perfect poses. Even methods that jointly optimize 3D and pose (e.g. BARF/SPARF/FvOR) are designed to use noisy ground-truth poses or near-perfect initial poses (as opposed to ones output by real systems with possibly large errors).
We agree this emphasis may be obvious to any reader familiar with the rich history of the field, but believe it is useful for readers only familiar with its recent history. That said, we would be happy to revise the introduction if the reviewer suggests so.
> Statistics of Outlier Removal.
We again thank the reviewer for this advice. With the same setting as Table 4 in our main paper, we investigated the relationship between the number of identified outliers per sequence, the number of such sequences in our dataset (174 in total), and their average initial rotation error.
| # of Outliers | 0 | 1 | 2 | 3 | 4 |
|--------------------------|-------|-------|-------|-------|-------|
| Init Rot Error | 14.19 | 13.09 | 20.52 | 17.16 | 33.17 |
| # of Sequences | 56 | 82 | 28 | 7 | 1 |
In general, sequences with higher rotation errors tend to have more identified outliers. In this setup, our method identified ~0.94 outliers per sequence. We also computed the rotation error at the image level for outliers and inliers, with the average over all samples as a reference:
All: 14.92°
Outlier: 19.75°
Inlier: 13.53°
The results indicate that the outliers we identified indeed have higher errors than the inliers, verifying the effectiveness of our outlier identification approach.
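As a rough illustration of the correlation the reviewer asked about, one could expand the per-bucket table above into per-sequence pairs and compute a Pearson coefficient. The sketch below approximates each sequence's error by its bucket mean (the per-sequence errors are not listed individually), so it is indicative only:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Buckets from the table above: {# of outliers: # of sequences} and the
# mean initial rotation error per bucket (approximation: every sequence in
# a bucket is assigned the bucket's mean error).
counts = {0: 56, 1: 82, 2: 28, 3: 7, 4: 1}
errors = {0: 14.19, 1: 13.09, 2: 20.52, 3: 17.16, 4: 33.17}
xs = [k for k, c in counts.items() for _ in range(c)]
ys = [errors[k] for k in xs]
r = pearson(xs, ys)  # positive r indicates more outliers at higher error
```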
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply! The rebuttal resolves most of my concerns. I would like to raise the score of this paper.
Before that, I have one last question. I noticed that in the reply to Reviewer mujy, when using SPARF, the pose error becomes larger, which is counter-intuitive. Could the authors provide more insight into this? Is there any problem with the estimated correspondence, e.g., do the object-centric images make the correspondence fail? If so, is it reasonable to use images with the original background as the input of SPARF and, after rendering the novel views, use foreground segmentation to evaluate the NVS metrics?
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the prompt response and the willingness to raise our score!
Yes, we indeed found that correspondence is crucial for SPARF's pose optimization, and we observed that SPARF often fails to obtain reliable correspondence in our experiments. In **Fig. 3** of our rebuttal PDF file, we include an example where falsely estimated correspondence makes SPARF's pose estimation fail and therefore produces degraded novel views.
To demonstrate the influence of foreground masks on correspondence (which is reflected by pose accuracy), we present a comparison of SPARF with and without using masks below.
| N=8 | Avg Rot Error | Rot@5° | Rot@15° | CC@10% | Improvement Rate |
| :--------------- | :------------ | :----- | :------ | :----- | :--------------- |
| DUSt3R | 8.71 | 51.6 | 92.9 | 79.5 | / |
| SPARF (w/ mask) | 18.17 | 57.3 | 75.7 | 68.6 | 0.54 |
| SPARF (w/o mask) | 10.10 | 58.1 | 85.9 | 81.2 | 0.59 |
| Ours | 5.99 | 81.7 | 95.3 | 92.5 | 0.93 |
While including the background did improve SPARF's pose accuracy on some metrics, it still failed to improve the average rotation error and Rot@15° over the baseline, just as the w/ mask setting did. Even in this setup, we still observed false correspondences being used by SPARF, which may cause large pose errors. Given these results, we attribute SPARF's counter-intuitive performance more to the distribution of the NAVI data than to the use of masks. In NAVI, images have limited overlap, making it inherently hard to find reliable correspondence. This differs significantly from SPARF's testing data in the original paper (e.g., the DTU dataset), which shows notable overlap and strongly forward-facing views.
In our General Response, we also tried to compare novel view synthesis with SPARF despite differences in pose accuracy by introducing `*PSNR` and `*LPIPS`. We used these metrics to factor out the negative influence of failed pose optimization on SPARF due to unavailable/false correspondence. We refer the reviewer to our General Response for more details! We hope this could address the reviewer's concern regarding the influence of masks (and thus the influence of pose accuracy) on NVS.
Finally, we would like to clarify an implementation detail relevant to the reviewer’s question: in the original submission, when reporting pose error, we did give SPARF access to the background as it gave stronger results. Unfortunately, for NVS experiments, this was not trivial as it would require post-processing to remove background from the renderings before comparing NVS metrics, so we gave it images w/o background. Although this may be slightly suboptimal, we hope that our `*PSNR` and `*LPIPS` (which explicitly focus on sequences where SPARF improved pose, thus minimizing the effect of sequences where SPARF was not robust) make it clear that our system does outperform SPARF. | Summary: This paper proposes a framework for joint 3D reconstruction and pose refinement. Specifically, given estimated camera poses from off-the-shelf models, the proposed method first leverages diffusion priors and rendering loss for 3D reconstruction. The 3D reconstruction is further used to refine the current pose parameters. The 3D reconstruction and pose refinement are conducted in an alternating fashion. An outlier identification and correction strategy is also introduced to make full use of the given images while mitigating the adverse effect of noisy camera estimations at the same time. Experimental comparison with several pose estimation baselines shows that the proposed method can refine inaccurate pose estimation effectively.
Strengths: 1. The paper tackles a practical problem in real-world scenarios, where ground truth camera poses are not always available.
2. The proposed method is shown to be effective when applied to different pose estimation baselines.
3. The proposed outlier removal and correction is shown to be effective by the ablation study results in Table 4.
Weaknesses: 1. The proposed method is compared with SPARF only in the setting of using pose from different pose estimation baselines. However, it would be more convincing to also present the results using the same setting of SPARF, which adds noise into the GT camera pose. This will be a direct comparison with SPARF’s original results reported in their paper.
2. The proposed method is compared with LEAP for 3D reconstruction results. However, the comparison is a bit unfair since LEAP does not require any initial camera poses.
3. The description of how to effectively detect the outliers (line 212 - line 214) is not very clear. Similarly, the procedure of how to correct the outlier poses (line 223 - line 225) is not very clear either. How are the MSE and LPIPS computed and compared, since there is no correspondence?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The proposed method is evaluated on the NAVI dataset. It seems that the dataset is quite simple as shown in Fig. 3 and Fig. 4. The reviewer is wondering about the performance of the proposed method on more complex scenes?
2. The reviewer is wondering about the separate ablation results on the outlier removal and correction.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The proposed method is compared with SPARF only in the setting of using pose from different pose estimation baselines. However, it would be more convincing to also present the results using the same setting of SPARF, which adds noise into the GT camera pose. This will be a direct comparison with SPARF’s original results reported in their paper.
Thanks for the advice! We experimented with the suggested setting using eight images, where Gaussian noise is added to ground truth camera poses (so the initial poses have a rotation error of about 9.71°). A comparison between the proposed method and SPARF on pose accuracy is presented below:
| Method | Avg Rot Error | Rot@5° | Rot@15° | CC@10% | Improvement Rate |
| :------------- | :------------ | :----- | :------ | :----- | :------------ |
| Initialization | 9.71 | 11.9 | 89.1 | 56.3 | / |
| w/ SPARF | 14.47 | 43.6 | 73.5 | 66.5 | 0.53 |
| w/ Ours | 3.63 | 82.4 | 96.9 | 94.4 | 0.92 |
While our method yields consistent improvements over all metrics, we observed that SPARF can sometimes degrade performance compared to the initialization. As SPARF leverages correspondence, its pose optimization is ineffective (or even diverges) when no reliable correspondence (or false correspondence) is found. The results are consistent with our experiments on DUSt3R poses, which we include in our General Response. We refer the reviewer to the top-level response for more details. Please note that the experiments in the original SPARF paper were done using image sets with significant overlap (e.g. DTU dataset where all images observe the same aspect of an object).
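The noise-injection protocol can be sketched as follows; the perturbation model (a random axis-angle rotation with a Gaussian-distributed angle, left-multiplied onto the ground-truth rotation) and the noise scale shown here are illustrative assumptions, not our exact implementation:

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: rotation matrix from a (unit) axis and angle in radians."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def perturb_rotation(R, sigma_deg, rng):
    """Left-multiply R by a random rotation whose angle ~ N(0, sigma_deg)."""
    angle = np.deg2rad(rng.normal(0.0, sigma_deg))
    axis = rng.normal(size=3)  # random direction, normalized above
    return axis_angle_to_matrix(axis, angle) @ R

rng = np.random.default_rng(0)
R_gt = np.eye(3)                                        # placeholder GT rotation
R_noisy = perturb_rotation(R_gt, sigma_deg=10.0, rng=rng)  # sigma is illustrative
```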
> The proposed method is compared with LEAP for 3D reconstruction results. However, the comparison is a bit unfair since LEAP does not require any initial camera poses.
For an additional baseline, please see our general response where we additionally compare to SPARF on novel-view synthesis. We would also be happy to report any additional comparisons the reviewer suggests.
> The description of how to effectively detect the outliers (line 212 - line 214) is not very clear. Similarly, the procedure of how to correct the outlier poses (line 223 - line 225) is not very clear either. How the MSE and LPIPS are computed and compared since there is no correspondence?
We apologize for being unclear in our submission and will update the text based on the reviewer's feedback. In outlier correction, we perform render-and-compare for each identified outlier, where we reuse the 3D reconstruction obtained from only the inliers (during *Iterative Outlier Identification*). By sampling pose candidates, we can render images that are compared with the target image (of the outlier) using metrics such as MSE and LPIPS; therefore, correspondence is not required in the process. We included detailed explanations of outlier removal and correction in our General Response, and we hope this resolves the reviewer's concern.
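A minimal sketch of this render-and-compare search follows; the `render` function is a toy stand-in for the 3DGS renderer, the candidate poses are placeholders, and MSE stands in for the photometric score (LPIPS would be scored analogously):

```python
def mse(a, b):
    """Mean squared error between two images given as flat lists of pixels."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def correct_outlier_pose(target_image, pose_candidates, render):
    """Score each candidate pose by rendering the inlier-only reconstruction
    and comparing against the outlier's image; keep the best-matching pose."""
    best_pose, best_err = None, float("inf")
    for pose in pose_candidates:
        err = mse(render(pose), target_image)
        if err < best_err:
            best_pose, best_err = pose, err
    return best_pose, best_err

# Toy stand-in: "rendering" a pose theta produces a constant image of value
# theta, so the candidate closest to the target brightness should win.
render = lambda theta: [theta] * 4
target = [0.52] * 4
pose, err = correct_outlier_pose(target, [0.1, 0.5, 0.9], render)
```

Note that the comparison is purely photometric (full rendered image vs. full target image), which is why no pixel correspondences are needed.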
> The proposed method is evaluated on the NAVI dataset. It seems that the dataset is quite simple as shown in Fig. 3 and Fig. 4. The reviewer is wondering about the performance of the proposed method on more complex scenes?
We primarily tested on NAVI because it provides ground truth 3D meshes that are unavailable in other datasets. This enables the evaluation of 3D metrics such as F1 for our 3D reconstruction. In addition, NAVI offers highly accurate object masks and high-quality sparse-view image sequences. However, we include qualitative results of our approach on more challenging scenarios in **Fig. 1** of our rebuttal PDF, where we show two self-captures and four challenging instances from the held-out set of CO3D. These instances have more complex textures and shapes than the NAVI samples, which verifies the proposed approach's generalization ability.
> The reviewer is wondering about the separate ablation results on the outlier removal and correction.
We would like to thank the reviewer for mentioning this ablation. We followed our setup in Table 4 of the main paper, but we only do *Iterative Outlier Identification* without the following correction stage.
| Method | Rot@5° | Rot@15° | CC@10% | PSNR | LPIPS | F1 |
|---------------------------------|-----------|------------|---------|-------|--------|-------|
| Ray Diffusion | 13.5 | 73.5 | 38.3 | / | / | / |
| Ours (w/o Outlier Correction) | 47.8 | 86.6 | 75.2 | 14.45 | 0.2494 | 65.41 |
| Ours (Full System) | 60.3 | 88.2 | 80.4 | 15.30 | 0.2304 | 68.19 |
Compared to the baseline Ray Diffusion numbers, this experiment setup benefits from iterative pose-3D co-optimization, improving the base pose accuracy by a notable margin. Moreover, the results verify that an additional outlier correction is necessary for higher pose accuracy and high-fidelity novel views.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed responses. Most of my concerns are resolved in the rebuttal. For the comparison on 3D reconstruction, it would be more convincing to compare under the same condition, namely based on an initial pose. However, LEAP does not use pose information, hence putting LEAP at a disadvantage in the comparison.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thanks for the feedback. We believe there are two schools of thought for developing approaches for sparse-view-3D in-the-wild: a) methods that side-step explicit pose estimation (e.g. LEAP, UpFusion), and b) methods that rely on and improve initial pose estimation (e.g. SPARF and ours). We do already include a comparison to SPARF as a baseline that also uses the same initial poses to show that for methods following ideology (b), our work improves over the current SOTA.
Regarding the comparison to LEAP, we certainly agree that LEAP not using poses is perhaps a reason why it is not competitive -- but this is exactly the point we wish to make! In particular, we hope that one key take-away for a reader is that methods following ideology (a) are limited in their performance. Based on their comments, this may be something the reviewer feels is obvious, but this is not universally agreed! For example, LEAP positions itself as ‘liberating’ sparse-view 3D reconstruction from a dependence on pose estimation and even begins its abstract with the question “Are camera poses necessary for multi-view 3D modeling?”, arguing that they are not needed (UpFusion also follows a similar philosophy). Our comparison to LEAP (and the added comparison to UpFusion in response to Reviewer XaBT) seeks to make the counterpoint that “Actually, such poses are indeed helpful!”.
We would be happy to present the results with this context more clearly outlined in the text if the reviewer feels that may be helpful. | Summary: This paper proposes a method for the joint reconstruction of camera poses and 3D objects given sparse input views. The core idea is to use a pose-conditioned diffusion model (Zero-123) as a prior, impose the SDS loss, and jointly optimize the poses and objects, similar to the approach in ID-pose. To improve the robustness and quality of the optimization, the authors made several modifications: (1) Using a 6 DoF pose-conditioned diffusion model instead of a 3 DoF model. (2) Adding strategies for outlier detection and correction. (Although somewhat empirical, it proves effective.)
This approach requires initial camera poses (from methods such as RelPose++, RayDiffusion, etc.) and is not capable of reconstructing poses from scratch (e.g., purely random camera poses). Experimental results demonstrate that, compared to SPARF and ID-pose, the proposed method achieves better pose estimation quality. Additionally, it provides better object reconstruction in terms of novel view synthesis quality compared to LEAP.
Strengths: (1) The approach is technically sound, and I believe the reported results are reproducible.
(2) The reconstructed results look good and represent the state-of-the-art in object-level pose-free reconstruction.
(3) The paper is well-written, making it easy to read and understand.
Weaknesses: (1) This optimization-based method requires more time compared to a feed-forward model, taking about 5-10 minutes. Additionally, the writing discussing this aspect is somewhat unclear: the paper states, “with increased inference time depending on the number of outliers.” Could this statement be more specific? How much does the time increase with the number of outliers? The correction of outliers may be time-consuming as it requires dense searches of initial camera poses.
(2) (Minor) The method focuses only on object-level reconstruction, which makes the scope seem narrow.
(3) The authors do not sufficiently discuss experiments in a more “standard” sparse-view setting, such as using 3 or 4 views. The reported experiments use at least 6 views, which is not a particularly small number.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) A related work is lacking in discussion: Sun, Yujing, et al. "Extreme Two-View Geometry From Object Poses with Diffusion Models." arXiv preprint arXiv:2402.02800 (2024).
(2) Is the testing data included in the training set for fine-tuning the 6-DoF diffusion model?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As discussed in the weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > This optimization-based method requires more time compared to a feed-forward model, taking about 5-10 minutes. Additionally, the writing discussing this aspect is somewhat unclear: the paper states, “with increased inference time depending on the number of outliers.” Could this statement be more specific? How much does the time increase with the number of outliers? The correction of outliers may be time-consuming as it requires dense searches of initial camera poses.
We apologize for the lack of clarity; we address the inference time in our general response above and will update the final version to include these details. We agree that our system is slower than feed-forward methods (e.g., LEAP) but allows significantly more accurate reconstructions. Compared to prior NeRF-based optimization methods, however, our Gaussian-Splatting-based system is far more efficient: e.g., SPARF takes 10 hours per instance, whereas our system takes ~9 minutes.
> The authors do not sufficiently discuss experiments in a more “standard” sparse-view setting, such as using 3 or 4 views. The reported experiments use at least 6 views, which is not a particularly small number.
We chose 6-8 images as a setting representative of online marketplaces, but agree with the reviewer that it is important to also study even fewer views and report an experiment analyzing N=4 below.
| Method | Rot@5° | Rot@15° | CC@10% |
| ------- | :----: | :-----: | :----: |
| DUSt3R | 52.0 | 90.3 | 79.0 |
| w/ Ours | 57.8 | 90.5 | 84.5 |
While our system does lead to consistently improved poses, the gains are less prominent compared to settings with more views. We believe this is because our approach relies on multiple images 'co-observing' common 3D regions (to guide pose correction), but such co-observations are rare when we randomly sample a small set of views around an object, leading to diminishing benefits with very few images.
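For reference, the Rot@N° metrics reported above follow the common convention of thresholding the geodesic angle between predicted and ground-truth rotations; a sketch of that computation with toy rotations (not our actual predictions):

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance between two rotation matrices, in degrees."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.rad2deg(np.arccos(np.clip(cos, -1.0, 1.0)))

def rot_at(errors_deg, threshold_deg):
    """Fraction of errors below the threshold, e.g. Rot@15°."""
    return float((np.asarray(errors_deg) < threshold_deg).mean())

def rot_z(deg):
    """Rotation about the z-axis by the given angle in degrees."""
    t = np.deg2rad(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t), np.cos(t), 0.0],
                     [0.0, 0.0, 1.0]])

# Toy "predictions" off by 2°, 10°, and 30° from the identity.
errors = [rotation_error_deg(rot_z(d), np.eye(3)) for d in (2.0, 10.0, 30.0)]
acc_at_15 = rot_at(errors, 15.0)  # two of three errors fall below 15°
```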
> A related work is lacking in discussion: Sun, Yujing, et al. "Extreme Two-View Geometry From Object Poses with Diffusion Models." arXiv preprint arXiv:2402.02800 (2024).
We thank the reviewer for sharing this relevant work, which we will incorporate in our final version. Its process of matching generated images with the target image is similar to our outlier correction procedure at a high level, but we leverage a reconstructed 3D representation to render novel views instead of purely using diffusion models to generate them. From this perspective, our novel views may be more detailed and faithful as we leverage multiple input views.
> Is the testing data included in the training set for fine-tuning the 6-DoF diffusion model?
No, we did **not** include NAVI in our data to fine-tune Zero-123 with 6-DoF camera conditions.
> The core idea is to use a pose-conditioned diffusion model (Zero-123) as a prior, impose the SDS loss, and jointly optimize the poses and objects, similar to the approach in ID-pose.
We would like to clarify that our method is not similar to ID-Pose. ID-Pose estimates camera poses by optimizing the relative pose conditions in Zero-123 for image pairs, while it does not form any 3D representation. In fact, we adopted ID-Pose as initialization for our experiments on three synthetic datasets in the supplementary material and show that our system leads to significant gains over it. | Summary: This paper presents a method named MV-DreamGaussian for tackling the problem of 3D reconstruction from sparse multi-view inputs. In particular, the paper extends the DreamGaussian work to use multi-view images as the inputs and proposes a scheme to optimize the inaccurate camera poses of the multi-view images.
Strengths: - This paper is well written and I can follow smoothly.
- The authors proposed a finetuned version of Zero-1-to-3 with 6 DoF camera parametrization which shows an advantage over 3 DoF camera parameterization in the original paper.
- The proposed pose refinement scheme is novel and very effective according to the authors' experiments compared with SPARF as well as the ablation study which shows that adding the proposed pose refinement improves the pose accuracy and reconstruction quality significantly. The design of the outlier removal based on photometric error ranking and discrete search is empirical but works quite well.
Weaknesses: - This paper presents very limited novelty in the reconstruction part with a trivial extension to DreamGaussian to use multi-view images, which is already implemented in a public repository [stable-dreamfusion](https://github.com/ashawkey/stable-dreamfusion).
- The major weakness of the paper is the lack of fair comparisons in terms of the 3D reconstruction. The authors only compared with LEAP for the 3D reconstruction. However, LEAP is a work that **does not require any pose inputs**, whereas the proposed work needs relatively good pose initialization (e.g., Dust3r) and conduct refinement on it. In addition, the underlying 3D representation is different, too: LEAP uses NeRF while the proposed work uses 3D Gaussian. I'm confused as to why the authors did not compare with SPARF for the reconstruction quality too since SPARF shares the same input setup as the proposed work. Besides, the very recent work DMV3D would also be a good method to compare with.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I'm quite curious what the reconstruction quality the method can achieve without 3D generative prior but with the proposed refinement. Namely the combination of (1) and (4) in Table 4.
- How are the poses used for generative prior sampled in addition to the input views?
- How are the thresholds for pose outlier removal tuned?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations of the paper and I generally agree with them.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > This paper presents very limited novelty in the reconstruction part with a trivial extension to DreamGaussian to use multi-view images, which is already implemented in a public repository stable-dreamfusion.
We respectfully disagree with the reviewer's assessment that our extension to DreamGaussian is trivial. While using multiple input images for DreamGaussian has been implemented in the stable-dreamfusion GitHub repository (as mentioned in L173-174 of our main paper), our work goes further by enabling the handling of 6-DoF camera poses and gradient-based pose optimization. To achieve this, we fine-tuned Zero-123 with a novel 6-DoF camera parameterization and implemented customized CUDA kernels, as the official Gaussian Splatting codebase did not support this feature.
We appreciate that the reviewer recognizes the novelty of our pose refinement strategy. However, we kindly ask the reviewer to also consider the broader contributions of our proposed system and the specific problem we aim to address. Specifically, we integrate gradient-based pose optimization with generative priors to tackle sparse-view pose refinement, a scenario prone to overfitting. Additionally, we introduced an outlier identification and correction system to manage significant initial pose errors so that our method can work in real-world scenarios where only estimated camera poses from off-the-shelf methods are available. To the best of our knowledge, no existing work effectively addresses this challenging yet practical setup.
> The major weakness of the paper is the lack of fair comparisons in terms of the 3D reconstruction.
Please refer to our general response, where we include an evaluation of SPARF on novel view synthesis. We apologize for not including these in our earlier version -- SPARF training is slow (the overall training takes more than 10 hours **per instance** on a single GPU such as a V100), and due to limited resources, we previously only compared to SPARF's pose optimization stage (which takes 30% of the iterations). We also note that the suggested baseline DMV3D is not applicable in our setup -- it generates multiple views as output but does not accept multiple unposed views as input. Moreover, its implementation is not open-sourced.
> I'm quite curious what reconstruction quality the method can achieve without the 3D generative prior but with the proposed refinement. Namely, the combination of (1) and (4) in Table 4.
We thank the reviewer for suggesting this experiment. We conducted the ablation study under the same settings as Table 4 and compared the results with our full system:
| Full Method | Rot@5° | Rot@15° | CC@10% | F1 | PSNR | LPIPS | PSNR* | LPIPS* |
| ----------- | :------: | :------: | :------: | :-------: | :--------: | :-------: | :--------: | :--------: |
| w/o SDS | **66.2** | 86.8 | 77.8 | 63.57 | **16.05** | **0.2277** | 17.90 | 0.1990 |
| w/ SDS | 60.3 | **88.2** | **80.4** | **68.19** | 15.30 | 0.2304 | **18.18** | **0.1778** |
In this setup, the quality of the 3D reconstructions is worse, as there are more artifacts (significant roughness and holes) in the geometry. However, we found that this setup does achieve high pose accuracy (even more precise on Rot@5° than our full system). Analyzing further, we found the no-SDS setup makes it easier to remove outliers (removing one input image reduces the reprojection errors more significantly). Quantitatively, **97.1%** of sequences in this setup involve at least one identified outlier, compared to **67.2%** in the full system.
Perhaps more surprisingly, we found that this setup also yielded slightly better view synthesis results (PSNR and LPIPS), but this did not correspond to the qualitative results (see **Fig. 2** in our rebuttal PDF file). We hypothesize that this discrepancy arises because we compute a single global alignment between our optimized camera poses and GT poses before evaluating NVS, and this alignment may be slightly more precise for the baseline, leading to improved NVS metrics even though the 3D/view-synthesis quality is worse. To mitigate the effects of alignment in NVS evaluation, we also report PSNR* and LPIPS* -- where we locally optimize the camera pose for each novel view for a few steps using photometric error while keeping our 3D representation fixed. We see that our full system does yield better predictions, highlighting that the use of SDS does indeed improve the novel-view generations.
> How are the poses used for generative prior sampled in addition to the input views?
We preprocess the initial camera poses provided by off-the-shelf methods to align the world origin with the approximate intersection of their optical axes. Next, we create a sphere centered at the origin, with a radius equal to the average distance of all cameras from the origin. When rendering novel views for SDS, we sample points on the sphere for camera translation using random azimuth and elevation angles. For rotation, we orient the sampled camera towards the origin.
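As a rough illustration, the sphere sampling described above might look like the sketch below. This is our own reconstruction, not the authors' code: the helper name `sample_sds_camera`, the elevation range, and the up-vector/look-at convention are all assumptions.

```python
import numpy as np

def sample_sds_camera(radius, rng):
    """Sample a camera on a sphere of the given radius, oriented toward the origin.

    Hypothetical sketch of the sampling described in the rebuttal; the actual
    implementation may differ (e.g. elevation range, up-vector handling).
    """
    az = rng.uniform(0.0, 2.0 * np.pi)            # random azimuth
    el = rng.uniform(-0.5 * np.pi, 0.5 * np.pi)   # random elevation
    # Camera position on the sphere centered at the origin.
    pos = radius * np.array([np.cos(el) * np.cos(az),
                             np.cos(el) * np.sin(az),
                             np.sin(el)])
    # Look-at rotation: forward axis points from the camera to the origin.
    forward = -pos / np.linalg.norm(pos)
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(up, forward)
    if np.linalg.norm(right) < 1e-6:              # camera at a pole: pick any right
        right = np.array([1.0, 0.0, 0.0])
    right = right / np.linalg.norm(right)
    true_up = np.cross(forward, right)
    R = np.stack([right, true_up, forward], axis=1)  # rotation with forward as 3rd column
    return R, pos
```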
> How are the thresholds for pose outlier removal tuned?
During the early stages of our work, we conducted initial experiments on a few samples from the CO3D dataset and found that a threshold of 0.05 for LPIPS generally worked well. This setting was maintained for all our in-the-wild evaluations on NAVI reported in the paper. We did not tune the threshold further for these results, though a fine-grained search might yield improved performance.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply. The authors' response addressed my major concerns about the evaluation and I decided to raise my rating to acceptance.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate the reviewer for raising the score! We will carefully revise our paper based on the reviewer’s valuable feedback. Thank you again for providing such insightful comments! | Rebuttal 1:
Rebuttal: # General Response
We appreciate the reviewers' insightful comments and valuable feedback. We are glad that the reviewers appreciated the results, the practicality of the setup, and found the paper well written. In this response, we address some of the common points raised by the reviewers (additional NVS comparison and clarifications on inference), and address the more specific comments via separate responses to each review.
### Comparison with SPARF on Novel View Synthesis (NVS)
In addition to the comparison to SPARF for pose estimation in our submission, reviewers ePgG and XaBT recommended a comparison to SPARF via NVS metrics. We report these below, using DUSt3R poses (N=6, 8) as initialization. Due to SPARF's long training time (about 10 hours per instance), we could only include 70 sequences (2 sequences per object) in these two experiments.
| N = 6 | Avg Rot Error | Rot@5° | Rot@15° | CC@10% | Improvement Rate | PSNR | LPIPS | *PSNR | *LPIPS |
|-----------|---------------|---------|---------|---------|--------------|----------|-----------|----------|-----------|
| DUSt3R | 7.90 | 48.5 | 93.3 | 80.2 | / | / | / | / | / |
| SPARF | 17.07 | 47.9 | 68.7 | 67.6 | 0.41 | 12.80 | 0.3201 | 15.30 | 0.2478 |
| Ours | 5.82 | 73.9 | 94.5 | 92.1 | 0.80 | 15.52 | 0.2179 | 16.23 | 0.1844 |
| N = 8 | Avg Rot Error | Rot@5° | Rot@15° | CC@10% | Improvement Rate | PSNR | LPIPS | *PSNR | *LPIPS |
|-----------|---------------|---------|---------|---------|--------------|----------|-----------|----------|-----------|
| DUSt3R | 8.71 | 51.6 | 92.9 | 79.5 | / | / | / | / | / |
| SPARF | 18.17 | 57.3 | 75.7 | 68.6 | 0.54 | 13.42 | 0.3059 | 15.44 | 0.2584 |
| Ours | 5.99 | 81.7 | 95.3 | 92.5 | 0.93 | 17.02 | 0.1874 | 17.48 | 0.1695 |
The results indicate that our method outperforms SPARF in pose accuracy and novel view quality. In fact, we found that SPARF can often make the poses worse compared to the (relatively accurate) DUSt3R initialization (measured via `Improvement Rate` that indicates the percentage of sequences with reduced pose error). We found that this is because the correspondences leveraged by SPARF in its optimization are not robust and are susceptible to false matches -- please see **Fig. 3** of the rebuttal PDF for an example.
As an attempt to compare novel view quality despite the difference in pose accuracy, we report *PSNR and *LPIPS, which are measured *only on the sequences where SPARF improves pose accuracy*, and find that even on these, our approach outperforms it. We also observed that while SPARF works well on novel views close to the input, floaters consistently appear under significant viewpoint changes. In contrast, our generative prior leads to a more consistent 3D representation.
## Inference Time and Details
### (1) Iterative Outlier Identification
Given N input images, our method first reconstructs the 3D using the proposed MV-DG approach. The image with the largest reprojection error is considered a candidate outlier, and we perform another reconstruction after removing it (using the remaining N-1 inputs). This candidate is considered a valid outlier if the average reprojection error in the other views reduces significantly (we use 0.05 in LPIPS as a threshold). If it is an outlier, we remove it and repeat the process to identify outliers among the remaining N-1 images, stopping when no outliers remain. In general, if O outliers are detected, we need to perform O+2 reconstructions.
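A minimal sketch of this loop (ours, for illustration only): `reconstruct` and `reproj_error` are stand-ins for MV-DG reconstruction and per-view LPIPS reprojection error, which are not reproduced here; the 0.05 threshold is the one quoted above.

```python
def identify_outliers(images, reconstruct, reproj_error, thresh=0.05):
    """Iteratively peel off outlier views, as described in the text.

    With O detected outliers this performs 1 + (O + 1) = O + 2 reconstructions,
    matching the count stated above.
    """
    inliers, outliers = list(images), []
    scene = reconstruct(inliers)
    while len(inliers) > 1:
        errors = [reproj_error(scene, v) for v in inliers]
        cand = max(range(len(inliers)), key=lambda i: errors[i])  # worst view
        others = [v for i, v in enumerate(inliers) if i != cand]
        new_scene = reconstruct(others)
        old_avg = sum(e for i, e in enumerate(errors) if i != cand) / len(others)
        new_avg = sum(reproj_error(new_scene, v) for v in others) / len(others)
        if old_avg - new_avg > thresh:    # removing it helps: a valid outlier
            outliers.append(inliers[cand])
            inliers, scene = others, new_scene
        else:                             # no outliers remain
            break
    return inliers, outliers
```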
### (2) Outlier Correction via Render-and-compare
Any outlier identified in (1) requires pose correction. We do this by combining discrete search and continuous optimization. Specifically, we reuse the 3D reconstructed in (1) from the inliers only. For each outlier, we sample pose candidates evenly on a sphere around the reconstructed 3D, each oriented toward it. Next, we optimize these pose candidates via gradient descent on the photometric error against our reconstructed 3D. To select the most likely target pose, we pick the optimized candidate whose rendering has the lowest discrepancy with the target input image, measured by both MSE and LPIPS. As MV-DG outputs a 3D mesh, rendering and propagating gradients to camera poses are fast; each outlier takes roughly one minute of pose correction.
If any outliers were corrected, another reconstruction is performed using the updated poses.
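As a toy illustration of "discrete search + continuous optimization" (our 1-D stand-in, not the actual 6-DoF implementation): the pose is a single angle, `render` maps it to a scalar "image", and gradients are taken by finite differences instead of a differentiable mesh renderer.

```python
import numpy as np

def correct_pose(target, render, n_candidates=8, steps=50, lr=0.1, eps=1e-4):
    """Discrete search over evenly spaced candidates, each refined by descent.

    Toy 1-D analogue of the render-and-compare scheme described above.
    """
    best_pose, best_err = None, np.inf
    for pose in np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False):
        for _ in range(steps):                       # continuous refinement
            err = (render(pose) - target) ** 2       # photometric error
            grad = ((render(pose + eps) - target) ** 2 - err) / eps
            pose -= lr * grad
        err = (render(pose) - target) ** 2           # score refined candidate
        if err < best_err:
            best_pose, best_err = pose, err
    return best_pose
```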
### (3) Inference Time
For N=8 images using a single RTX A5000, one reconstruction with MV-DG takes about 2 minutes to complete, and the 'render-and-compare' for each outlier takes around a minute. Our full pipeline (using RayDiffusion initialization) detected an average of 0.94 outliers per sequence, resulting in an inference time of around 9 minutes.
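For illustration only, a back-of-the-envelope tally of these numbers under our own (hypothetical) accounting — O+2 reconstructions for detection, render-and-compare per outlier, plus a final re-reconstruction approximated by the average outlier count — lands near the reported ~9 minutes:

```python
# All numbers below are the averages quoted above; the accounting itself is
# our assumption, not the authors' exact breakdown.
recon_min, correct_min, avg_outliers = 2.0, 1.0, 0.94

detection = (avg_outliers + 2) * recon_min   # O+2 reconstructions
correction = avg_outliers * correct_min      # render-and-compare per outlier
final = avg_outliers * recon_min             # final re-reconstruction (approx.)
total_minutes = detection + correction + final   # ~8.7, close to the reported ~9
```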
We will update the paper to include these details more clearly as well as release our implementation for reproducibility.
Pdf: /pdf/3877f296fac652f9f9e74c3c9cb46bfc04bebbca.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Stochastic Optimal Control Matching | Accept (poster) | Summary: In this paper, the authors propose a novel learning algorithm, Stochastic Optimal Control Matching (SOCM), to numerically solve general formulations of Stochastic Optimal Control (SOC) problems involving affine-controlled diffusion processes. They build upon the Iterative Diffusion Optimization (IDO) [1] framework, which consists of iteratively refining a parametric controlled diffusion process by minimizing at each iteration (with stochastic gradient descent) a specific objective function with respect to the parametric control. Previous works had for instance considered the relative entropy loss, the cross-entropy loss, the log-variance loss or the moment loss as the objective function. In SOCM, it is a least-squares regression loss (with a multiplicative importance weight) which aims at fitting the parametric control to a vector field that depends on a family of *reparameterization matrices* (also optimized). The design of this objective function relies on standard tools from SOC theory, as well as an original contribution, the *path-wise reparameterization trick*, to compute gradients of conditional expectations of a functional applied to a random process. The authors show that the SOCM loss can be decomposed as the sum of a bias term, which is linked to the cross-entropy loss, and a variance term, which is only affected by the *reparameterization matrices*. Hence, these extra parameters can be seen as a way to reduce the variance of the SOCM loss, which motivates their introduction. Moreover, this loss has the computational advantage of avoiding the computation of gradients of the control along the path measure (which is the main drawback of the relative-entropy loss). Finally, the authors conduct numerical experiments to compare their approach to existing designs of IDO losses.
They consider four different settings ($d\in \{10,20\}$) with access to the ground-truth control, which allows them to compute the $L^2$ error on the control. Using this metric, their results indicate better performance on most of the settings while maintaining a certain training stability.
[1] Solving high-dimensional Hamilton–Jacobi–Bellman pdes using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Nüsken et al. 2021
Strengths: - The paper is very well-written, it is a pleasure to read it. In particular, the authors pay attention to define their notation, introduce with clarity the SOC framework, state clear mathematical statements, recall (and prove) standard results from SOC theory, provide intuition on theoretical results (with details on the proof and meaningful comments). This is really good work.
- The relation to prior work is clearly well established: in particular, the comparison between the SOCM loss and the previous IDO losses (presented in Section 2.2) is well highlighted.
- This paper introduces an interesting contribution that may be applied beyond this framework (in particular, in the generative community, where terms involving gradients of expectations often appear): the path-wise reparameterization trick, which is proved to be decisive in the numerics.
Weaknesses: - In my opinion, the major weakness of this paper is the lack of an additional numerical experiment, which represents a "realistic" setting (for instance, where the expression of the control is not known). For instance (as mentioned by the authors), a significant line of recent research has considered the sampling problem via a SOC perspective, see for example [1,2,3]. I am convinced that the SOCM contribution would have more impact with additional sampling numerics comparing SOCM, relative entropy [1,2] and log-variance [3] losses, for challenging distributions (namely, multi-modal distributions in relatively high dimension). In this case, the quality of the methods would be assessed with sampling metrics. This weakness explains my current score (although I really appreciate the paper).
- I find that the complexity/accuracy tradeoff of the IDO losses (including SOCM) is not well highlighted to me. The table provided by the authors only considers one setting. To have a full picture, it should be given for all settings.
[1] Path Integral Sampler. Zhang et al. 2022.
[2] Denoising Diffusion Sampler. Vargas et al. 2023
[3] Improving sampling via learned diffusions. Richter et al. 2023
Technical Quality: 4
Clarity: 3
Questions for Authors: - Have you tried to restrict the optimization of the reparameterization matrices to scalar matrices? I have the feeling that this choice may align a low computational budget with good expressivity.
- Have you tried another parameterization of these matrices? In particular, have you considered a simple form such as $M_{w}(t,s)=I_d + \gamma(s-t)\tilde{M}_{\tilde{w}}(s,t)$ where $w=(\gamma, \tilde{w})$ and $\gamma(0)=0$? Otherwise, why the choice of the sigmoid?
- I find that the warm-start strategy for the optimization of the control is a very good idea, as it benefits from the tractability of the stochastic interpolants with the lightness of the spline formulation. However, I have the feeling that this strategy works in the presented settings since they are "kind of Gaussian". Do you think it may still be of interest for general SOC problems?
- Could you explain why the second Ornstein-Uhlenbeck setting is called 'hard'?
- Could you provide the results on the $L^2$ error of the control without EMA, as presented in [1]?
- I am quite surprised by the relatively large computational budget induced by the use of the log-variance loss; could you comment on this?
[1] Solving high-dimensional Hamilton–Jacobi–Bellman pdes using neural networks: perspectives from the theory of controlled diffusions and measures on path space. Nüsken et al. 2021
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The main limitation of the approach is discussed in Section 5: it is the variance of the importance weight in the SOCM loss, which may blow up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing an extremely succinct description of our paper. We completely agree with the reviewer’s concerns regarding limitations, but we just want to highlight our perspective that this paper proposes a rather novel way of thinking about constructing methods for SOC problems which we believe will have a big influence on future methods that do scale well to practical applications. That is, we put together this paper as it paints a complete picture of the new proposed approach, even if it is not yet a “silver bullet” that can solve all SOC problems, and chose to further develop scalable variants in a separate follow-up work. Below, we provide detailed responses to the reviewer’s questions.
**“[...] the major weakness of this paper is the lack of an additional numerical experiment, which represents a "realistic" setting [...] I am convinced that the SOCM contribution would have more impact with additional sampling numerics [...]”**
We benchmark SOCM against existing methods on control problems that have been studied before in the literature (Nüsken & Richter, 2021), which allows us to clearly focus our analysis on the loss function instead of other components of the problem. These are control problems where we have access to the optimal control, which allows us to track the control $L^2$ error. Alternatively, we would only be able to compute the control objective, which as shown in Figure 3 (pg. 35), is a less sensitive metric due to numerical errors.
We also want to clarify that, as noted in the global response, the Double Well problem is far from a toy problem and is actually highly non-trivial due to its multimodal nature. We chose this problem because it is a representative challenging problem that appears in SOC interpretations of sampling problems, such as the path integral sampler. By exploiting the specific decoupling nature of the potential, we can reduce it to a 1D problem for reference solutions. We do not test SOCM on additional realistic control problems because it often has gradient variance issues due to the variance of the importance weight $\alpha$ (see Section 3 in the paper). We believe that the strength of our paper is that our framework is completely novel, and it will be the basis for the development of more algorithms that do scale well to realistic problems (ongoing work).
**“I find that the complexity/accuracy tradeoff of the IDO losses (including SOCM) is not well highlighted to me. The table provided by the authors only considers one setting. To have a full picture, it should be given for all settings.”**
In Table 1 we show the time per iteration for each loss for Quadratic OU (easy). For all losses and settings, almost all of the runtime is spent evaluating and backpropagating through the control neural network. Since the number of control computations per timestep for a given loss is the same across settings, and we are using the same neural network architecture for all settings, the time per iteration for all losses in a single setting is actually an accurate reflection of all the settings we’ve considered: the time per iteration for other settings is roughly proportional to the ratio of the number of timesteps that we are using. For SOCM, there is the additional cost of computing and backpropagating through the M function, but that also depends on the specific architecture that is used and can also be reduced by using sparse parameterizations.
**“Have you tried to restrict the optimization of the reparameterization matrices to scalar matrices? I have the feeling that this choice may align a low computational budget with good expressivity.”**
We have not tried to restrict the optimization of the reparameterization matrices to scalar or diagonal matrices, but we agree that this would allow us to trade off fast computation and low variance. We will add an experiment where we use scalar and diagonal reparameterization matrices, and we expect that it will simply be somewhere between full M and the M=I ablation result in terms of performance.
**“Have you tried another parameterization of these matrices? In particular, have you considered a simple form such as $M_{\omega}(t,s) = I + \gamma(s-t) \tilde{M}_{\tilde{\omega}}(s,t)$ where $\omega = (\gamma, \tilde{\omega})$ and $\gamma(0)=0$? Otherwise, why the choice of the sigmoid?”**
We would like to clarify that in line 263, $\gamma$ is a scalar, not a one-dimensional function. Hence, arguably our form for $M_{\omega}$ is simpler than the one proposed by the reviewer. Still, the expression proposed by the reviewer makes sense and is a viable alternative.
**“I find that the warm-start strategy for the optimization of the control is a very good idea [...] this strategy works in the presented settings since they are "kind of Gaussian". Do you think it may still be of interest for general SOC problems?”**
We agree with the reviewer's comment. Warm-starting the control is a strategy that makes sense when the following conditions hold simultaneously:
(i) An arbitrary initialization of the control neural network causes the importance weight $\alpha$ to have high variance. If the variance of $\alpha$ is already low for an arbitrary initialization, there is no need for warm-start. This is why we do not use warm-start for Quadratic OU (easy), Linear OU and Double Well. For Quadratic OU (hard), we try both no warm-start (in the main text) and warm-start (in the appendix): not warm-starting causes learning to happen much more slowly for algorithms that use importance weights, although SOCM is still the best-performing loss by the end of training.
(ii) The warm-started control is close enough to the optimal control, such that the importance weight $\alpha$ has low variance. As the reviewer points out, this holds when the control problem is “kind of Gaussian” (unimodal), but it does not work when multimodality is present, as in the Double Well setting.
---
Rebuttal 2:
Title: Rebuttal (2/2)
Comment: **“Could you explain why the second Ornstein-Uhlenbeck setting is called 'hard'?”**
We refer to the second OU setting as hard because for losses that rely on importance weights (SOCM, SOCM-Adjoint, Cross-Entropy), the importance weight $\alpha$ has a high variance due to the magnitude of the running and terminal costs being larger. When the importance weight $\alpha$ has a high variance, the signal-to-noise ratio for the gradients becomes small, making learning hard (at least initially). For the OU hard setting, SOCM still manages to outperform the other methods at advanced stages of training because as the learned control approaches the optimal control, the variance of $\alpha$ drops. We will show plots for an OU setting with even larger matrices for the costs of the problem (see answer to Reviewer gdJ8), and we will see that learning becomes impossible for SOCM.
**“Could you provide the results on the $L^2$ error of the control without EMA, as presented in [1]?”**
We believe that showing EMA plots is more informative because their variance is lower. At later stages of training, the EMA value depends mostly on the previous 100 values (the EMA coefficient is 0.02), which means that the EMA value is very close to the actual value.
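For reference, the EMA used here is the standard recursive average; a minimal sketch (the 0.02 coefficient comes from the text, everything else is generic):

```python
def ema(values, coeff=0.02):
    """Exponential moving average: new = (1 - coeff) * old + coeff * value.

    The weight on a value k steps in the past is coeff * (1 - coeff) ** k, so
    values more than ~100 steps old carry total weight 0.98**100 ~ 0.13 --
    i.e. the EMA depends mostly on the previous 100 values, as stated above.
    """
    out, avg = [], None
    for v in values:
        avg = v if avg is None else (1.0 - coeff) * avg + coeff * v
        out.append(avg)
    return out
```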
**“I am quite surprised by the relatively large computational budget induced by the use of the log-variance loss; could you comment on this?”**
In Table 1, we report that the variance loss takes 0.086 seconds per iteration, and that the log-variance loss takes 0.117 seconds per iteration. The computation for the two losses is very similar. We attribute the discrepancy to speed fluctuations of the GPUs. We will rerun training for both losses and report an updated number.
---
Rebuttal Comment 2.1:
Title: Answer to the rebuttal
Comment: First, I would like to thank the authors for providing precise answers to my questions and my comments, this is much appreciated. Second, I would like to re-emphasize that I acknowledge the high quality of their work, even if it is not a 'silver bullet' as they call it; I am still convinced that it can be the basis of future impactful works.
Although the authors have addressed all the points I have raised, I would like to return to the first one: the extension of the SOC formulation to sampling tasks (following the PIS/DDS framework), which has become very important in the sampling community. The authors explain that they **did not test SOCM on additional realistic control problems because it often has gradient variance issues due to the variance of the importance weight**. Without being as ambitious as considering "realistic" settings, have you tried to apply your method to sample from a Gaussian mixture with two modes in increasing dimension? Does the "variance" issue appear in this synthetic setting?
---
Reply to Comment 2.1.1:
Comment: We would like to thank the reviewer for their helpful comments. We have taken their suggestion into consideration, and present experimental results on two-mode Gaussian mixture sampling in increasing dimension, using the Path Integral Sampler [81]. Namely, we set $p_0 = \delta_{x_0}$, $b(x,t)=0$, $f(x,t)=0$, $T=1$, and $g(x)=\log (\mu^0(x)/\mu(x)) = - \|x\|^2/2 - d/2 \log (2\pi) - \log \mu(x)$, where $\mu$ is the density of a mixture of two Gaussians with means $\pm 1$ and variance $1$.
Note that we take $\mu$ to be normalized, i.e. $\int \mu(x) dx = 1$, or equivalently, $\log Z := \log (\int \mu(x) dx) = 0$.
For context, if we let $\hat{S}^u(X) = \int_0^T \frac{1}{2}\|u(X_t,t)\|^2 \, dt + \int_0^T \langle u(X_t,t), dB_t \rangle + g(X_T)$, Theorem 4 of the path integral sampler paper states that $- \mathbb{E}[\hat{S}^u(X^u)] \leq \log Z = 0$ for any control $u$, and that equality holds when $u = u^*$. Hence, the quantity $\mathbb{E}[\hat{S}^u(X^u)]$, which we report in the “- ELBO” column, allows us to benchmark different SOC algorithms: the smaller the better. A perfect SOC algorithm would yield zero in this setting. We also track the regular Control Objective, which is equal in expectation to the negative ELBO, because $\mathbb{E}[ \int_0^T \langle u(X_t,t), dB_t \rangle] = 0$ by the martingale property of stochastic integrals.
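As a sanity check on this bound, here is a minimal 1-D Monte Carlo sketch (our own construction, not the paper's experiment): with the trivial control $u=0$, the stochastic integrals vanish, $\hat{S}^0(X)$ reduces to $g(X_T)$, and $X_T \sim \mathcal{N}(0,1)$, so the negative ELBO equals $\mathrm{KL}(\mu^0 \| \mu) \geq 0$.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mu0(x):
    # Standard normal log-density: terminal law of the uncontrolled process
    # (u = 0, x0 = 0, T = 1) in one dimension.
    return -0.5 * x**2 - 0.5 * np.log(2.0 * np.pi)

def log_mu(x):
    # Two-mode Gaussian mixture target with means +/-1 and unit variance.
    return np.logaddexp(log_mu0(x - 1.0), log_mu0(x + 1.0)) - np.log(2.0)

# With u = 0, S-hat reduces to g(X_T) = log(mu0/mu)(X_T) and X_T ~ mu0, so
# E[S-hat] = KL(mu0 || mu) >= 0, i.e. -(-ELBO) <= 0 = log Z, consistent with
# the bound above.
x_T = rng.standard_normal(200_000)
neg_elbo = float(np.mean(log_mu0(x_T) - log_mu(x_T)))
```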
We show the standard error for each quantity. We ran the experiments using the architectures described in the paper, for a total of 40000 iterations.
| Algorithm | Dimension | Control Objective | - ELBO |
|----------|----------|----------|----------|
| Adjoint | 2 | 0.00282 +/- 0.00321 | 0.00515 +/- 0.00041 |
| SOCM | 2 | 0.00317 +/- 0.00320 | 0.00541 +/- 0.00042 |
| Cross-entropy | 2 | 0.00450 +/- 0.00320 | 0.00677 +/- 0.00046 |
| Adjoint | 8 | 0.01560 +/- 0.00435 | 0.01157 +/- 0.00058 |
| SOCM | 8 | 0.01495 +/- 0.00434 | 0.01104 +/- 0.00057 |
| Cross-entropy | 8 | 0.01817 +/- 0.00433 | 0.01400 +/- 0.00064 |
| Adjoint | 16 | 0.02356 +/- 0.00548 | 0.01909 +/- 0.00075 |
| SOCM | 16 | 0.02242 +/- 0.00548 | 0.01802 +/- 0.00073 |
| Cross-entropy | 16 | 0.03288 +/- 0.00544 | 0.02803 +/- 0.00091 |
| Adjoint | 32 | 0.04271 +/- 0.00726 | 0.03544 +/- 0.00102 |
| SOCM | 32 | 0.04013 +/- 0.00726 | 0.03287 +/- 0.00098 |
| Cross-entropy | 32 | 0.07167 +/- 0.00718 | 0.06445 +/- 0.00138 |
| Adjoint | 64 | 0.07669 +/- 0.00991 | 0.06576 +/- 0.00141 |
| SOCM | 64 | 0.07143 +/- 0.00992 | 0.06150 +/- 0.00136 |
| Cross-entropy | 64 | 2.90879 +/- 0.00816 | 2.91517 +/- 0.00846 |
Cross-entropy, which uses the same importance weight $\alpha$ as SOCM, performs worse than the other two losses for all dimensions, and its results are particularly poor for dimension 64. This is because the variance of $\alpha$ is too large for learning to happen. In this case, we see that SOCM has better variance reduction than cross-entropy, despite both using importance weighted objectives for training. Note that $\alpha = \exp(...)$ where the exponent scales linearly with dimension (can be seen from Eq 20).
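A toy calculation (our construction, assuming a Gaussian log-weight with variance proportional to dimension, normalized so that $\mathbb{E}[\alpha]=1$) illustrates why this linear scaling of the exponent is problematic:

```python
import numpy as np

# Our construction: suppose log(alpha) ~ N(-0.5 * c * d, c * d), normalized so
# that E[alpha] = 1. By lognormal moments, E[alpha^2] = exp(c * d), so the
# variance of alpha is exp(c * d) - 1: exponential growth in the dimension d.
def weight_variance(d, c=0.1):
    return np.expm1(c * d)
```

So even a modest per-dimension contribution `c` makes the importance weight's variance explode in high dimension, which is consistent with the failure of cross-entropy at dimension 64.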
We observe that the -ELBO for SOCM is slightly below that of Adjoint for most dimensions, which confirms that our method is better for this range of dimensions, but if we were to keep increasing the dimension, SOCM should eventually also fail due to higher variance of $\alpha$. We are still running the higher dimensions, and will also include a case where Adjoint overtakes SOCM. | Summary: This paper proposes a novel algorithm for approximating the solution to the Hamilton-Jacobi-Bellman (HJB) equation with a neural network control policy. Rather than backpropagating through rollouts of the dynamics, the authors develop a least-squares objective which resembles the score-matching loss used in diffusion models. However, this requires computing gradients of a diffusion process with respect to its initial condition. To address this, the authors develop a novel path-wise reparameterization trick which relies on a family of reparameterization matrices. They show how to optimize these matrices to reduce the variance of the objective estimate. They demonstrate that their method obtains a lower error with respect to the ground-truth control on toy problems, sometimes by an order of magnitude.
Strengths: - The proposed objective function acts as a form of variance reduction for the cross-entropy loss when solving stochastic optimal control problems.
- The novel reparameterization trick for estimating gradients of diffusion processes with respect to its initial condition may be more broadly applicable.
- On toy problems, their method appears to generally outperform other approaches in solving the HJB equation for the optimal controls.
- The paper is well organized and overall written well. It provides a thorough related work section and does a good job explaining the novelty and results.
Weaknesses: - The evaluations only consider simple toy problems. Moreover, they only plot the L2 error with respect to the optimal control. However, this does not necessarily tell us about the actual task performance due to compounding errors.
- On the Double Well system, there is not a clear advantage compared to the variance loss and adjoint method. However, the authors do discuss how their method appears more stable than the adjoint-based ablation.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How do all the methods compare in terms of actual task performance?
- How do these methods perform on more realistic control problems?
- Why does the proposed method not work as well on the Double Well system compared to the variance baseline?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors discuss limitations to scaling the approach up to more challenging problems due to the variance of the importance weight.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments.
**“The evaluations only consider simple toy problems. Moreover, they only plot the L2 error with respect to the optimal control. However, this does not necessarily tell us about the actual task performance due to compounding errors.” “How do all the methods compare in terms of actual task performance?”**
We would appreciate it if the reviewer clarified what they mean by “compounding errors”. Regarding task performance, we would also like to note that in Figure 3 in Appendix F.3, we plot the control objective (the quantity on the right-hand side of Eq. 12). In Figure 3 we see that the control objective is a much less sensitive metric than the control $L^2$ error: it is harder to benchmark the losses using the control objective. From the perspective of measures over processes, the control $L^2$ error and the control objective are two sides of the same coin: the control $L^2$ error can be seen as the KL divergence between the probability measure $\mathbb{P}^{u^*}$ of the optimally controlled process and the probability measure $\mathbb{P}^{u}$ of the process controlled by $u$, while, up to a constant term, the control objective is the reversed KL divergence between the same pair of measures.
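For reference, the standard Girsanov-based identity behind this correspondence (written here for a unit diffusion coefficient; the paper's scaling may introduce a constant factor) is:

```latex
D_{\mathrm{KL}}\!\left(\mathbb{P}^{u^*} \,\middle\|\, \mathbb{P}^{u}\right)
  = \mathbb{E}_{X \sim \mathbb{P}^{u^*}}\!\left[ \int_0^T \tfrac{1}{2}\,
    \bigl\| u^*(X_t,t) - u(X_t,t) \bigr\|^2 \, dt \right],
```

so the control $L^2$ error (taken under the optimal path measure) is exactly this KL divergence, while the control objective corresponds, up to an additive constant, to the reversed divergence $D_{\mathrm{KL}}(\mathbb{P}^{u} \| \mathbb{P}^{u^*})$.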
We would also like to clarify that, as noted in the global response, the Double Well problem is far from a toy problem and is actually highly non-trivial due to its multimodal nature. We chose this problem because it is a representative challenging problem that appears in SOC interpretations of sampling problems, such as the path integral sampler. By exploiting the specific decoupling nature of the potential, we can reduce it to a 1D problem for reference solutions. However, our SOCM method does not utilize this strong prior knowledge and solves it as a generic multimodal high-dimensional problem.
**“On the Double Well system, there is not a clear advantage compared to the variance loss and adjoint method. However, the authors do discuss how their method appears more stable than the adjoint-based ablation. [...] Why does the proposed method not work as well on the Double Well system compared to the variance baseline?”**
The Double Well (Figure 2, right side) setting differs from the others in that the terminal cost has 1024 modes. Hence, in order to obtain a small $L^2$ error, it is necessary to learn the control well in all or almost all of the modes. Each trajectory that we sample will visit at most a few modes. Assuming for simplicity that each trajectory visits a single mode, and grouping trajectories into batches, by the end of the 80000 iterations the “effective” number of batches per mode is 80000/1024 = 78.125, which is too small to get errors close to zero. Yet, this setting is interesting because it shows the behavior of the adjoint loss under multimodality: its $L^2$ error has a decreasing tendency but it wavers substantially, which is undesirable. We attribute this poor behavior of the adjoint loss to the lack of convexity of the problem; SOCM has a clear stability advantage.
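The back-of-envelope count above can be reproduced directly. The single-mode-per-trajectory assumption is the rebuttal's own simplification, not a property of the actual sampler:

```python
# Effective batches per mode under the rebuttal's simplifying assumption
# that each sampled trajectory visits exactly one of the modes.
n_iterations = 80_000  # training iterations, one batch each
n_modes = 2 ** 10      # Double Well terminal cost has 1024 modes

effective_batches_per_mode = n_iterations / n_modes
print(effective_batches_per_mode)  # 78.125
```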
Regarding the comparison to the variance loss, note that the SOCM method converges significantly faster, and we find the variance objective is a poor choice most of the time, with this benchmark being the only exception. We believe SOCM initially learns very fast on this problem due to the training of $M$ (as can be seen by comparison with the $M=I$ ablation) but may find sub-optimal local minima, whereas the variance method has higher variance but may have a chance of finding better local minima given enough training time.
**“How do these methods perform on more realistic control problems?”**
We benchmark SOCM against existing methods on control problems that have been studied before in the literature (Nüsken & Richter, 2021), which allows us to clearly focus our analysis on the loss function instead of other components of the problem.
We want to clarify that, as noted in the global response, the Double Well problem is far from a toy problem and is actually highly non-trivial due to its multimodal nature. We do not test SOCM on more realistic control problems because it often has gradient variance issues due to the variance of the importance weight $\alpha$ (see Section 3 in the paper). We believe that the strength of our paper is that our framework is completely novel, and it will be the basis for the development of more algorithms that do scale well to realistic problems (ongoing work). | Summary: This paper presents stochastic optimal control matching (SOCM), which is an iterative diffusion optimization for optimal control aiming to fit a matching vector field. The authors introduce a new loss function and address the analysis and design of a learning-based control method.
Strengths: The work is nicely motivated in Introduction, showing the drawbacks of traditional works. The proposed control method is supported by the uniqueness analysis of the control logic (Theorem 1) and the sophisticated design methods (Propositions 1 and 2). In the reviewer's understanding, they are technically correct.
Weaknesses: As stated in Algorithm 2 below, reducing noise in the gradient is crucial for the presented algorithm. This weakness is addressed by Lemma 1 and extensions.
Technical Quality: 3
Clarity: 4
Questions for Authors: As stated in Introduction, the work is motivated by stabilizing the unstable training of conventional IDO, which comes from the non-convexity of the loss. Could the authors comment and/or perform some motivating experiments to show the stability of the training by SOCM? They can emphasize the contribution of this paper.
Confidence: 1
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: No limitations in this work. This work is devoted to the theoretical analysis of control system design, and it does not directly bring a negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging rating and for providing us the opportunity to clarify the key importance of the proposed method below.
**“Could the authors comment and/or perform some motivating experiments to show the stability of the training by SOCM? They can emphasize the contribution of this paper.”**
Firstly, SOCM is stable by construction because its loss is convex in function space, unlike the adjoint method, which is not. In particular, we see that in the Double Well setting (Figure 2 right), the adjoint-based methods are quite unstable and have large ups-and-downs in the control error during training.
Secondly, we also introduce the free parameter $M$ in our novel path-wise reparameterization gradient which can significantly reduce variance, allowing us to easily outperform related importance-weighted methods such as the cross entropy method.
We also ablated both of these claims in our experiments: the “SOCM-Adjoint” ablation, which uses the adjoint method instead of our path-wise reparameterization gradient, and the $M=I$ ablation. In all experimental settings, both ablations lead to worse results.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary:
This paper introduces Stochastic Optimal Control Matching (SOCM), a novel algorithm for solving stochastic optimal control problems. Key contributions include:
1. SOCM algorithm, adapting ideas from conditional score matching in diffusion models
2. A new "path-wise reparameterization trick" for gradient estimation
3. Theoretical analysis including a bias-variance decomposition
4. Empirical evaluation showing superior performance on 3 out of 4 benchmarks
SOCM learns a control function by fitting a matching vector field via least squares, while optimizing reparameterization matrices to reduce variance. The method is currently limited to linear Gaussian models and requires knowledge of certain parameters. Experiments demonstrate SOCM's effectiveness on theoretical benchmarks, outperforming existing methods in most cases. The paper provides a solid theoretical foundation but lacks exploration of real-world applications or non-linear systems.
Strengths: The paper introduces Stochastic Optimal Control Matching (SOCM), a novel algorithm for solving stochastic optimal control problems. Its originality lies in adapting ideas from conditional score matching in diffusion models to the domain of optimal control. This creative combination represents an interesting cross-pollination between two active areas of research.
The quality of the theoretical work is notable. The authors provide a comprehensive mathematical foundation for their method, including detailed proofs and a novel "path-wise reparameterization trick". This theoretical rigor is a significant strength of the paper.
In terms of clarity, the paper is well-structured and clearly written. The authors effectively guide the reader from the problem formulation through the theoretical development to the empirical results. The use of illustrative examples and detailed appendices aids in understanding the complex mathematical concepts presented.
The significance of this work lies in its potential to improve the efficiency of solving stochastic optimal control problems. The empirical results, showing improved performance over existing methods on multiple benchmarks, underscore the practical impact of this approach. However, the significance is somewhat limited by the current restrictions to linear Gaussian models.
Weaknesses: The primary weakness of this paper is its limited scope and applicability. The method is currently restricted to linear Gaussian models and requires knowledge of certain model parameters. This significantly narrows its potential impact on the broader field of stochastic optimal control. The authors should discuss potential approaches to extend SOCM to more general settings, such as nonlinear or non-Gaussian systems.
While the empirical results are promising, they are limited to theoretical benchmarks. The paper would be strengthened by including experiments on real-world problems or more complex simulated environments. This would help demonstrate the method's practical utility and potential for broader impact.
The scalability of the method is not thoroughly addressed. As the dimensionality of the problem increases, how does the computational complexity of SOCM compare to existing methods? A more detailed analysis of computational requirements and scaling properties would be valuable.
The comparison with existing methods, while showing SOCM's superior performance, could be more comprehensive. Including comparisons with the most recent state-of-the-art methods would provide a clearer picture of SOCM's relative performance in the current landscape of stochastic optimal control algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How might SOCM be extended to handle nonlinear or non-Gaussian systems? Are there specific challenges you foresee in this extension?
2. The paper focuses on theoretical benchmarks. Have you considered applying SOCM to any real-world stochastic optimal control problems? If so, what challenges did you encounter or do you anticipate?
3. How does the computational complexity of SOCM scale with the dimensionality of the problem? Could you provide a more detailed comparison of computational requirements with existing methods?
4. The path-wise reparameterization trick is an interesting contribution. Could you elaborate on potential applications of this technique outside of stochastic optimal control?
5. The paper mentions that SOCM requires knowledge of certain model parameters. In practical scenarios where these parameters might not be known precisely, how sensitive is SOCM to parameter misspecification?
6. Have you explored the performance of SOCM in settings with sparse or noisy rewards, which are common challenges in reinforcement learning?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for an accurate list of our contributions; however, we think there may be a misunderstanding regarding the scope of applications. We detail our responses to the reviewer’s concerns and questions below:
**“The method is currently restricted to linear Gaussian models and requires knowledge of certain model parameters. [...] The paper mentions that SOCM requires knowledge of certain model parameters. In practical scenarios where these parameters might not be known precisely, how sensitive is SOCM to parameter misspecification?”**
We are unsure about this question and we think there may be a misunderstanding. We would appreciate it if the reviewer could point us to the comment that they refer to.
SOCM does not actually require any explicit assumptions on the model parameters or a linear-Gaussian model. To clarify our interpretation of this concern, the linear-Gaussian model assumption usually refers to state transitions $p(x_{t+h} | x_t, u_t)$ being Gaussian distributed with linear dependence on the state $x_t$ and the control $u_t$. In the continuous-time formulation, the updates under a linear-Gaussian model would be of the form $dX_t = (A X_t + B u_t) \, dt + \sigma(t) \, dW_t$. However, in our formulation we use neural networks to model a control velocity field, resulting in non-linear updates. That is, the updates we have are of the form $dX_t = u_t(X_t) \, dt + \sigma(t) \, dW_t$, which includes the linear-Gaussian model but also generalizes to nonlinear dependencies.
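The contrast between the two update rules can be sketched with an Euler–Maruyama step. The matrices, the constant diffusion coefficient, and the `tanh` control field below are all illustrative stand-ins, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, dt = 2, 0.01
sigma = 1.0  # constant diffusion coefficient (illustrative)

# Linear-Gaussian model: drift is linear in state and control.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.eye(d)

def step_linear_gaussian(x, u):
    """Euler-Maruyama step for dX_t = (A X_t + B u_t) dt + sigma dW_t."""
    return x + (A @ x + B @ u) * dt + sigma * np.sqrt(dt) * rng.standard_normal(d)

# SOCM-style formulation: the control is an arbitrary velocity field u(x),
# e.g. a neural network; here a toy nonlinear stand-in.
def u_field(x):
    return np.tanh(x)  # hypothetical nonlinear control

def step_controlled(x):
    """Euler-Maruyama step for dX_t = u(X_t) dt + sigma dW_t."""
    return x + u_field(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(d)
```

Setting `u_field` to `lambda x: A @ x + B @ u` recovers the linear-Gaussian case, which is the sense in which the second formulation generalizes the first.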
The only assumption SOCM actually makes is that the objective functional depends on the control velocity field only through a quadratic form (equation 1). This is required for the path-integral representation in Theorem 1 but this is a typical assumption made in nearly all stochastic optimal control methods. SOCM requires the same amount of knowledge as the existing methods: we just need to know the base SDE, the running cost and the terminal cost.
**“How might SOCM be extended to handle nonlinear or non-Gaussian systems? Are there specific challenges you foresee in this extension?”**
As we detail in the proof sketch of Thm. 1, SOCM relies on the path-integral representation of the optimal control (eq. 8). As far as we know, such path-integral representations hold for control systems with arbitrary base drift and linear dependency of the drift on the control, and it is a typical assumption made by works in the literature (e.g. Nüsken & Richter, 2021). If path-integral representations exist or are developed for more general control problems, we believe that our technique may be adapted to handle those.
**“The paper focuses on theoretical benchmarks. Have you considered applying SOCM to any real-world stochastic optimal control problems? If so, what challenges did you encounter or do you anticipate?”**
SOCM performs well when the importance weight $\alpha$ defined in eq. 20 has low variance, which holds when its exponent has low variance. In our experimental section, we show that when $\alpha$ has low variance, SOCM outperforms existing methods. However, when $\alpha$ has high variance, the signal-to-noise ratio of the stochastic gradient is low, and the performance of the algorithm is compromised (much like it happens for the existing cross-entropy method, which also contains the factor $\alpha$). We do not regard SOCM as the solution to solve all stochastic optimal control problems, but rather as a new perspective which can lead to the development of more algorithms (ongoing work).
**“How does the computational complexity of SOCM scale with the dimensionality of the problem? Could you provide a more detailed comparison of computational requirements with existing methods?”**
Table 1 in Section F.3 shows the time per iteration for each of the loss functions that we consider: SOCM takes 0.22 seconds, while the adjoint loss takes 0.169 seconds, and all other methods are around 0.1 seconds. The reason that SOCM takes longer is that there is an additional neural network that is trained: the M function. The second paragraph in Section F.2 describes the neural network we use for the M function. In our setup, the total cost of M evaluations per iteration is $O(d^2 K^2)$, where $d$ is the dimension of the control system and $K$ is the number of discretization steps. The dependency $O(d^2)$ stems from the fact that we parameterize the whole matrix $M$; an alternative would be to parameterize $M$ as a diagonal matrix, which would result in a cost $O(d)$ at the expense of higher gradient variance. To further reduce the variance while using more computational resources, it is also possible to choose a function $M$ that depends on the iterate $X_t$ (see Remark 2). We will provide more clarity regarding the computation cost in the paper.
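The $O(d^2 K^2)$ versus $O(d K^2)$ trade-off mentioned above can be illustrated by counting the entries each parameterization of $M$ must produce along a trajectory; the dimensions chosen are arbitrary:

```python
d, K = 10, 100  # state dimension and number of discretization steps (illustrative)

# Full matrix-valued M: a d x d matrix per pair of time indices,
# so evaluating it along a trajectory produces O(d^2 K^2) entries.
full_entries = d * d * K * K

# Diagonal parameterization: only d entries per pair of time indices,
# an O(d K^2) cost, at the expense of higher gradient variance.
diag_entries = d * K * K

print(full_entries, diag_entries)  # prints: 1000000 100000
```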
**“The path-wise reparameterization trick is an interesting contribution. Could you elaborate on potential applications of this technique outside of stochastic optimal control?”**
The path-wise reparameterization trick can be used as a drop-in replacement of the adjoint method when computing gradients of functionals on ODE/SDE trajectories, either with respect to the starting point or with respect to neural network parameters. The path-wise reparameterization trick provides a natural way to enable variance reduction, thanks to the arbitrary function M. One can potentially apply this trick to any setting where neural ODEs/SDEs are used, and also to estimate scores of SDEs with arbitrary drifts.
---
Rebuttal 2:
Title: Rebuttal (2/2)
Comment: **“Have you explored the performance of SOCM in settings with sparse or noisy rewards, which are common challenges in reinforcement learning?”**
In our benchmark problem Linear Ornstein Uhlenbeck, we only assume there is a terminal cost/reward and no intermediate state cost/rewards. This is a type of sparse reward (only one per episode) setting that reinforcement learning typically studies. However, we think the main difference to typical reinforcement learning applications is that control problems often assume the state costs are differentiable, which allows the use of gradient-based / adjoint methods for solving control problems, which can significantly reduce the effect of the sparse or noisy reward problem.
---
Rebuttal Comment 2.1:
Title: Thanks and raise score
Comment: Thank you for your response to my comments. I have read your rebuttal and am happy to raise my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their helpful comments. We would like to clarify and reemphasize our contributions, and to provide a global response to some issues that have been raised by multiple reviewers.
We acknowledge that our method has limitations due to the use of importance weighting and is not yet a "silver bullet" that can handle all non-trivial SOC problems. However, it provides an unconventional view of SOC methods and a new direction: exploring SOC methods as least squares objectives. That is, we believe this paper paints a complete picture in deriving a new framework for least squares objectives, and scalable variants are better covered in a separate follow-up.
Our paper makes two key contributions:
(i) The formulation of least squares objectives that directly regress onto the optimal control, leading to the proposed SOCM objective. We have empirically found that SOCM easily outperforms existing importance-weighted objectives such as the popular cross entropy method (either in faster training, better final result, or both). Compared to adjoint-based methods, we find that SOCM exhibits more stable training (as can be seen on the Double Well) and often much lower control errors (as in OU Quadratic and OU Linear).
(ii) Our proposed path-wise reparameterization gradient is orthogonal to the SOCM objective and is a general method for computing gradients of cost functionals. In particular, the path-wise reparameterization gradient has a built-in variance reduction option in the form of the matrix $M$, which we see significantly improves convergence speed and performance. Both of these claims are closely ablated in our experiments (as “SOCM-Adjoint” and “M=I” in our experimental results).
A common point among reviewers is that the paper only considers toy control problems. The main reasons we chose such problems are that (i) they were used by (Nüsken & Richter, 2021) as a benchmarking suite and (ii) because we are able to compute (or closely approximate) ground truth solutions for the control function, and thus we can assess the $L^2$ error incurred when learning the control. The alternative metric is the control objective functional itself, which is much less informative due to numerical errors (see Fig. 3 in App. F.3).
Furthermore, there is (understandably) some misunderstanding about the difficulty of some of the benchmark problems as it is not emphasized in the paper. In particular, the Double Well problem is actually highly non-trivial, is multimodal, and is also closely related to the SOC interpretations of sampling problems such as path integral sampler (Ref. 81: Zhang & Chen, 2022). The only reason we can produce a "ground truth" control to compare to in this setting is that we use significant knowledge of the problem; we analytically reduce it to a 1D problem and apply numerical methods to solve this 1D problem. It is not a problem where we actually have the ground truth control in closed form.
We hope that these points help clarify the main concerns raised by reviewers. We also provide detailed responses to each reviewer separately. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Linear Causal Representation Learning from Unknown Multi-node Interventions | Accept (poster) | Summary: This paper studies identifiability under unknown multi-node interventions (soft/hard), with general causal models (parametric/nonparametric) and **linear** mixing functions. This work provides both a detailed proof that justifies the main theoretical statement, and a step-by-step algorithm that shows how to achieve identifiability in practice.
Overall, I find this work serves as an important step for interventional CRL towards more realistic settings.
### References
[1] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score-based causal representation learning with interventions. arXiv:2301.08230, 2023.
[2] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score-based causal representation learning: Linear and general transformations. arXiv:2402.00849, 2024.
[3] Julius von Kügelgen, Michel Besserve, Wendong Liang, Luigi Gresele, Armin Kekić, Elias Bareinboim, David M. Blei, and Bernhard Schölkopf. Nonparametric identifiability of causal representations from unknown interventions. In Proc. Advances in Neural Information Processing Systems, New Orleans, LA, December 2023.
Strengths: This paper is extremely well written and clearly structured: it communicates clearly motivations, formulation, technical details, and theoretical implications. The experimental results adequately validate the theory in case of a linear causal model.
Weaknesses: 1. The proposed UMNI-CRL algorithm is claimed to work with *general* non-parametric causal models; however, the simulation experiment only showed results on *linear* structural equation model. It would be great if the authors could report further experimental results on non-parametric causal models, to align with the theoretical claims. If there is a valid reason why it cannot be done, I am also very happy to hear.
2. Following the previous point, since this approach requires density estimation, it might not be scalable on nonparametric models. But to be fair, this seems to be a common limitation in many interventional CRL works [1, 2, 3].
3. Linearity assumption on the mixing function is restrictive, but the authors have acknowledged it and discussed possible future directions to overcome this limitation (sec. 6).
Technical Quality: 3
Clarity: 4
Questions for Authors: See the first point in **weakness** section. I am very happy to raise my rating if this issue is resolved.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discussed the remaining open problems and limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our results an important step towards more realistic CRL settings and noting the clarity of the paper.
**General causal models**:
We did not provide results using non-linear causal models since our algorithm, due to its combinatorial nature, is sensitive to input noise. While we are actively trying to make our algorithms work better using generic score estimators, for this response, in the interest of time, we have elected to provide a complementary analysis, where we investigate the performance of our algorithms under different levels of artificially introduced noise.
For this set of experiments, we adopt a non-linear additive noise model with a score oracle. Specifically, the observational mechanism for node $i$ is generated according to
\begin{align}
Z_i = \sqrt{Z\_{{\rm pa}(i)}^\top {\bf A}_i Z\_{{\rm pa}(i)}} + N_i \ ,
\end{align}
where $N_i \sim {\cal N}(0, \sigma_i)$, and the interventional mechanisms set $Z_i = N_i / 2$. This causal model admits a closed-form score function (see [7, Eqs. (393)–(395)]), which enables us to obtain a score oracle. In our experiments, we use this score oracle and introduce varying levels of artificial noise according to
\begin{align}
\hat{s}_X(x; \sigma^2) = s_X(x) \cdot \big( 1 + \Xi \big) \ , \quad \mbox{where} \quad \Xi \sim {\cal N}(0, \sigma^2 \cdot {\bf I}\_{d \times d}) \ ,
\end{align}
to test the behavior of our algorithm under different noise regimes ($\sigma \in [10^{-3}, 10^{-1.5}]$). Results à la Table 2 versus different $\sigma$ values are provided in Figure 1 in the PDF document attached to the general response.
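The multiplicative perturbation of the score oracle described above can be sketched as follows. The `score_oracle` below is a hypothetical stand-in (the score of a standard Gaussian), not the closed-form score of the additive-noise model in the rebuttal:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5

def score_oracle(x):
    # Hypothetical stand-in for the closed-form score of the additive-noise model.
    return -x  # score of a standard Gaussian

def noisy_score(x, sigma):
    """Perturb the oracle multiplicatively: s_hat(x) = s(x) * (1 + Xi),
    with Xi ~ N(0, sigma^2 I), as in the robustness experiment above."""
    xi = sigma * rng.standard_normal(d)
    return score_oracle(x) * (1.0 + xi)

x = rng.standard_normal(d)
for sigma in np.logspace(-3, -1.5, 4):  # noise levels in [1e-3, 1e-1.5]
    err = np.linalg.norm(noisy_score(x, sigma) - score_oracle(x))
    print(f"sigma={sigma:.4f}  perturbation norm={err:.4f}")
```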
**Score estimation**: We kindly note three things regarding the score functions on nonparametric models.
- Our algorithm is agnostic to the choice of score estimator, and we can adopt popular score-matching methods as mentioned in Line 325, e.g., Song et al. (2020) or Zhou et al. (2020). We also note that score function estimation is finding increasingly more applications in diffusion models, and it is an active research field. Hence, our algorithms can modularly adopt any new score estimation algorithm and benefit from advances in the score estimation literature.
- *Score vs. density estimation*: We also note that score estimation in a high-dimensional setting is generally easier than density estimation, as it avoids the difficulty of finding the normalizing constants.
- Finally, we emphasize that our algorithm only requires **score differences** and does not require the score functions themselves. We conjecture that the direct estimation of score differences can be much more efficient than estimating the individual score functions, à la direct estimation of precision difference matrices [R1]. Furthermore, density ratio estimation methods, e.g., classification-based approaches [R2], can be potentially useful for direct score difference estimation.
[R1] Jiang, Wang, and Leng. "A direct approach for sparse quadratic discriminant analysis." JMLR, 2018.
[R2] Gutmann and Hyvärinen. "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics." JMLR, 2012.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for clarifying and providing additional experiment results. I increased the score correspondingly. | Summary: This paper advances Causal Representation Learning (CRL) by addressing the challenge of using unknown multi-node (UMN) interventions to identify latent causal variables and their structures. The authors develop a score-based CRL algorithm that leverages UMN interventions to guarantee identifiability of latent variables and their causal graphs under both hard and soft interventions, achieving perfect identifiability with hard interventions and identifiability up to ancestors with soft interventions. Their method outperforms existing single-node approaches by ensuring robust recovery of causal structures in more complex, multi-intervention environments.
Strengths: * Extending the causal representation learning to unknown multi-node interventions
* Proofs are provided
* Pseudocode is provided
* Computational complexity is discussed
* Limitations are clearly stated
Weaknesses: * The paper primarily focuses on causal models with linear transformations. This limits its applicability in many real scenarios
* The applicability of the assumptions in real scenarios was not discussed
* The method was not applied on real world-data
Technical Quality: 3
Clarity: 3
Questions for Authors: * Can you please elaborate on the computational complexity and on why it is dominated by step 2?
* Can you please discuss the applicability of the assumptions in real scenarios?
* I think that adding some real world application can increase the impact of this paper. Is it possible to find such an application?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper acknowledges certain limitation. One notable limitation is the assumption of linear transformations in the causal models considered. This restricts the applicability to scenarios where causal relationships are adequately approximated by linear relationships. Additionally, while the paper addresses the challenge of UMN interventions, it acknowledges the complexity involved in identifying intervention targets in such settings, which can affect the ability to fully leverage the statistical diversity inherent in UMN interventions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review and accurate summary. We address the questions as follows.
**Computational complexity**: Stage 1 (score difference estimation) is only performed once before the main algorithm starts. Stage 4 (unmixing procedure for hard interventions) essentially works as a post-processing step that does not pose computational challenges. Below, we elaborate on the computational complexity of Stage 2 and Stage 3.
- *Stage 2*: We check the dimension of $\mathcal{V}$ (line 6 of the algorithm) at most $n \times (2 \kappa + 1)^n$ times in Stage 2 of the algorithm. Here, $\kappa$ denotes the maximum possible determinant of a matrix in $\\{0,1\\}^{(n-1) \times (n-1)}$. In the proof of Lemma 2 in Appendix A.1, we discuss why this choice facilitates the identifiability guarantees. We also discuss how $\kappa$ grows with $n$ in Appendix A.8, and give special cases, such as sparse multi-node interventions, in which $\kappa$ is upper bounded by small numbers even for large $n$ values.
- *Stage 3*: Here, we check the dimension of $\mathcal{V}$ (line 23 of the algorithm) at most $\\|\mathbf{W}\_{:,j}\\|_1 \times \\|\mathbf{W}\_{:,t}\\|_1$ times for every pair $(t,j)$ in Stage 3 of the algorithm. Note that $\mathbf{W}$ is updated within the steps of Stage 3; therefore, the exact computational complexity is a function of the graph structure and of the outputs of Stage 2.
- *Empirical tricks*: Finally, we note two empirical tricks that can reduce the computational complexity greatly. First, even though $\kappa$ can grow quickly as $n$ becomes larger, in practice, setting $\kappa=2$ usually works fine (see additional experiment results in the general response). Second, additional empirical tricks can greatly increase the speed of the algorithm, e.g., dividing the columns of $\mathbf{W}$ by the greatest common divisor of their entries after every step. Since our focus is on establishing the identifiability results, we omit the investigation of empirical tricks, which would disturb the flow of the paper and distract from its main focus.
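The worst-case number of dimension checks in Stage 2, $n \times (2\kappa+1)^n$, can be tabulated to see how quickly it grows and how much the empirical choice $\kappa=2$ helps. The $n$ values below are arbitrary examples:

```python
# Worst-case number of Stage-2 dimension checks, n * (2*kappa + 1)^n,
# evaluated at the empirical setting kappa = 2 mentioned above.
def stage2_checks(n, kappa):
    return n * (2 * kappa + 1) ** n

for n in [3, 5, 8]:  # illustrative latent dimensions
    print(n, stage2_checks(n, kappa=2))
```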
**Toward realistic settings**: We believe this paper serves as a significant step in this direction by removing the stringent assumption of single-node interventions. For instance, genomics datasets such as Perturb-seq (Norman et al. 2019) are used by single-node interventional CRL (Zhang et al. 2023). However, the genomic interventions are known to have unknown off-target effects (Fu et al. 2013, Squires et al. 2020) that violate the single-node intervention assumption. Therefore, establishing unknown multi-node interventional results is fundamental to unlocking the use of CRL in realistic datasets. The implementation of real applications is beyond the scope of this work and is an important future direction.
**References**
T. M. Norman, M. A. Horlbeck, J. M. Replogle, A. Y. Ge, A. Xu, M. Jost, L. A. Gilbert, and J. S. Weissman. Exploring genetic interaction manifolds constructed from rich single-cell phenotypes. Science, 365(6455):786–793, 2019.
J. Zhang, C. Squires, K. Greenewald, A. Srivastava, K. Shanmugam, and C. Uhler. Identifiability guarantees for causal disentanglement from soft interventions. NeurIPS 2023
Y. Fu, J. A. Foden, C. Khayter, M. L. Maeder, D. Reyon, J. K. Joung, and J. D. Sander, “High frequency off-target mutagenesis induced by CRISPR-Cas nucleases in human cells,” Nature Biotechnology, vol. 31, no. 9, pp. 822–826, 2013
C. Squires, Y. Wang, and C. Uhler. Permutation-based causal structure learning with unknown intervention targets. UAI 2020
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response. After reviewing the reviews and considering the responses, I will raise my score to 7. | Summary: This work studies interventional causal representation learning, where one has access to interventional data, to identify latent causal factors and latent DAG in the unknown multi-node interventions regime. The authors consider a setting where the mixing function is linear and the latent causal model is nonparametric. Under the assumption of sufficient interventional diversity, the authors use score function arguments to show that the underlying causal factors of variation (and DAG) can be recovered (1) up to permutation and scaling from stochastic hard interventions and (2) up to ancestors from soft interventions. The authors propose a score-based framework (UMNI-CRL) and evaluate it on synthetic data generated from Erdős–Rényi random graph model.
Strengths: - This work provides significant results in the unknown multi-node intervention setting, which is much more realistic than the common single-node intervention regime. As opposed to other works, this work studies CRL from a more general class of multi-node interventions (stochastic hard and soft).
- The paper is well-written, the concepts are explained well, and the theoretical identifiability results add a lot of value to the current CRL literature.
- The use of score functions and score differences in the observation space to estimate the unmixing function, especially for the UMN setting, is a novel and interesting approach for CRL.
- This work is the first to establish latent DAG recovery in the UMN setting under any type of multi-node intervention for arbitrary nonparametric latent causal models.
Weaknesses: Although the theoretical contribution of this work is strong, the empirical evaluation is quite weak compared to other works in CRL. There are only experiments for n=4 causal variables. There is also no baseline comparison of the proposed framework with other methods in the UMN setting (e.g., [1]). Also, some discussions are a bit abridged and could use more elaboration in the paper (see below for details).
[1] Bing et al. “Identifying Linearly-Mixed Causal Representations from Multi-Node Interventions” CLeaR 2024.
Technical Quality: 4
Clarity: 3
Questions for Authors: - I would like some clarification on the intervention regularity condition. Specifically, why does the additional term ensure that multi-node interventions have a different effect on different nodes? It would be good to elaborate on this condition when introduced since it is a central assumption that needs to be satisfied for the results to hold.
- How do you obtain $\Lambda$ in Eq. (14)? It seems that this matrix encodes the summands with the latent space score differences. However, since the distribution of the latents is unknown, how would you go about estimating $\Lambda$ and score differences $\Delta S_X$ in general cases of nonparametric distributions?
- How do you learn the integer-valued vectors $\mathbf{w}$ in Stage 2 of the algorithm? From Eq (18), it seems that $\mathcal{W}$ is a fixed predefined set and you choose the vectors $\mathbf{w} \in \mathcal{W}$ that satisfy a specific condition in the algorithm. To my understanding, this is central to recovering the approximate unmixing $\mathbf{H}^*$ up to a combination of the rows of the true unmixing $\mathbf{G}^{\dagger}$. I would appreciate it if the authors could elaborate on how this procedure was done.
- From Appendix A.8, it seems that $\kappa$ is determined by the number of causal variables $n$. Could the authors give some more intuition on what $\kappa$ represents in Stage 2 with respect to how the unmixing is recovered?
- Are there any distributional assumptions on the exogenous noise in the latent additive noise causal model?
- It seems that the UMN hard intervention result (Theorem 1) requires a latent model with additive noise. Would perfect recovery still be possible for latent models with non-additive noise under UMN hard interventions?
- The empirical results suggest that increasing sample size improves DAG recovery, which is intuitive. However, what do the results look like as the number of causal variables scales up? Currently, the authors only show results for n=4 latent causal variables. I only offer this as a suggestion due to the short rebuttal period.
- How would the assumptions made need to change to be applied to general mixing functions? I know that generality in one aspect of the model (i.e., general SCM) may require other aspects to take some parametric form (i.e., linear mixing) for identifiability guarantees, but do the authors have any intuition on how to achieve identifiability results for the UMN setting in a completely nonparametric setup?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging our strong theoretical and algorithmic contributions. We address the questions as follows.
**Empirical results**:
- *Increasing $n$*: Thanks for the suggestion. Please refer to the general response for experiment results for up to $n=8$ nodes.
- *Baseline*: We note that Bing et al. use “do” interventions. Hence, the algorithms are not comparable.
**Interventional regularity**: We explain the rationale and details as follows.
- First, we emphasize that **interventional regularity is not a restrictive assumption**. In Appendix A.8 (Lemma 5), we present sufficient conditions under which interventional regularity holds. For instance, under a hard intervention on a linear causal model, if the distribution of the exogenous noise remains the same, then interventional regularity holds.
- Next, we clarify what we mean by the *effect of an intervention*. Essentially, $\frac{\partial\log p_i/q_i}{\partial z_j}$ is the effect of intervening on node $i$ on the *score* associated with node $j$. Note that we use *combinations* of different multi-node environments to generate new score difference functions. Without the additional term in Eq.(12), $\frac{\partial\log p_j/q_j}{\partial z_j}$, the ratio in Eq.(11) considers only a single node intervention on $i$ and its effects on scores of $i$ and $j$.
- To ensure that the effect of a *combined* intervention is different on scores of $i$ and $j$, we need to consider interventions on both $i$ and $j$. As such, the additional term in Eq.(12), $\frac{\partial\log p_j/q_j}{\partial z_j}$, denotes the effect of the additional intervention on $j$ on the node $j$.
**Score function differences**:
- **We do not require estimating $\Lambda$**. It is correct that $\Lambda$ encodes score differences in latent space, and we cannot estimate it. Therefore, the algorithm **only** takes $\Delta S_X$ as input. $\Lambda$ is defined to provide intuition on the rationale of the algorithm and the connection between latent score differences and $\Delta S_X$ in Eq.(17).
- Our algorithm is agnostic to the choice of score estimator, and we can adopt popular score-matching methods as mentioned in Line 325, e.g., Song et al. (2020) or Zhou et al. (2020). We also note that score function estimation is finding increasingly more applications in diffusion models, and it is an active research field. Hence, our algorithms can modularly adopt any new score estimation algorithm and benefit from advances in the score estimation literature.
- We also emphasize that our algorithm only requires **score differences** and does not require the score functions themselves. We expect that the direct estimation of score differences can be much more efficient than estimating the individual score functions, à la direct estimation of precision difference matrices [R1]. Furthermore, density ratio estimation methods, e.g., classification-based approaches [R2], can be potentially useful for direct score difference estimation.
[R1] Jiang et al. A direct approach for sparse quadratic discriminant analysis. JMLR, 2018.
[R2] Gutmann and Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. JMLR, 2012.
**Additive noise models (ANM)**: Please refer to our general response.
**Stage 2 of the algorithm**: We elaborate on learning the vectors $\bf w$ and the role of $\kappa$ as follows.
- Note that $\Delta S_X\cdot\bf w$ is essentially a combination of SN latent score differences via Eq.(15) and (17) (also shown in (30)). Also, since the score difference $\nabla\log p_i(z_i\mid z_{{\rm pa}(i)})-\nabla\log q_i(z_i\mid z_{{\rm pa}(i)})$ is a function of $Z_{{\rm pa}(i)}$ and $Z_i$, the dimension of the image of this function will be 1 if and only if the intervened node $i$ has no parents.
- Leveraging this property, at the first step $t=1$, we search for a linear combination of the given MN environments (via $\bf w$) to emulate a single node intervention on a root node. For such a vector $\bf w$, using Eq.(15) and (17), the image of $\Delta S_X\cdot\bf w$ contains only one vector (up to scaling), which is the encoder $\bf G^\dagger$’s row corresponding to the $i$-th node where $i$ is a root node.
- Subsequently, at each step, we follow the same routine to estimate a row of the true encoder. While checking the dimension of the emulated intervention, we project $\Delta S_X\cdot\bf w$ to the nullspace of the submatrix recovered so far. This ensures that the learned encoder will be full-rank.
- **Role of $\kappa$**: Vector ${\bf w=[D^{-1}]}_i$ is a valid choice that makes $\Delta S_X\cdot\bf w$ correspond to a single node intervention. Then, by constructing a set $\cal W$ that contains the rows of $\bf D^{-1}$, we ensure that the procedure described above will work by an exhaustive search over $\cal W$. We note that the entries of $\bf D^{-1}$ can be found via the cofactor matrix of $\bf D$, for which $\kappa$ denotes the maximum possible entry. For a detailed derivation, please refer to lines 479-490 in Appendix A.1.
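As a sketch of the dimension check discussed above, one could evaluate $\Delta S_X \cdot \mathbf{w}$ at many samples and compute a numerical rank; the array shapes and names here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def image_dimension(score_diff_samples, w, tol=1e-6):
    """Numerical dimension of the image of Delta S_X . w.

    score_diff_samples: (n_samples, d, n) array of the score-difference
    matrix evaluated at sampled points; w: candidate combination vector.
    If w emulates a single-node intervention on a root node, all resulting
    vectors are scalar multiples of one row of the true encoder, so the
    rank is 1.
    """
    vectors = score_diff_samples @ w  # shape (n_samples, d)
    return np.linalg.matrix_rank(vectors, tol=tol)
```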
**Intuition for general mixing functions**: Our core technical idea is using combinations of MN interventions to construct new interventions with desired properties, e.g., sparsity. Our intuition is that we can extend the intervention matrix in Eq.(4) to handle multiple interventional mechanisms. For instance, to represent two interventional mechanisms per node, the columns of the intervention matrix would be in $\\{0,1\\}^{2n}$. Subsequently, the goal is to find combinations of MN environments to create two different SN interventions for each node so that the problem simplifies to the known results for general transformations under two SN int/node (see references [6],[9]). However, building on this intuition to prove identifiability is challenging and is a major direction for future work.
---
Rebuttal Comment 1.1:
Comment: I greatly appreciate the authors taking the time to answer my questions and provide clarifications. My questions and concerns have been addressed quite well in the response. The new empirical results for a larger number of causal variables further strengthen the paper. I believe this is a high-quality submission with significant theoretical results of great interest to the CRL community. Thus, I raise my score to 8. | Summary: This paper extends previous results on using score function for causal representation learning to the settings with unknown multi-node interventions. This new setting poses significant new challenges as opposed to the single node intervention case. The author first present theoretical identifiability result on hard interventions with latent additive noise model and on soft interventions. They then propose an algorithm called (UMNI)-CRL and test it on synthetic linear Gaussian dataset.
Strengths: The paper is clearly written, easy to follow and with good motivations.
Weaknesses: 1. The transformation from latent to observed is noiseless, which could be a limitation.
2. Line 199 says that: “This regularity condition ensures that the effect of a multi-node intervention is not the same on different nodes”. But how realistic or necessary is this condition? It seems very possible that an intervention causes two downstream nodes to have the same effect, even though these two nodes are not influenced in the same way by all types of interventions.
3. The experiments are only on synthetic dataset but I don’t think that is a big issue.
4. Some potential missing citations
[1] Kumar, Abhinav, and Gaurav Sinha. "Disentangling mixtures of unknown causal interventions." *Uncertainty in Artificial Intelligence*. PMLR, 2021.
[2] Jiang, Yibo, and Bryon Aragam. "Learning nonparametric latent causal graphs with unknown interventions." *Advances in Neural Information Processing Systems* 36 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. (UMNI)-CRL requires estimating the score function. How do you ensure a good estimate of the score function so that the algorithm is useful in practice?
2. One small question: on line 141-143, it is mentioned that if a node is not intervened on, perfect identifiability is not possible. But there are cases like A→B where I don’t need to intervene on A?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: 1. Theorem 1 only works for additive noise model.
2. The transformation from latent to observed is noiseless.
3. Experiments are only on synthetic dataset.
4. I am unsure if the algorithm is practical because it needs to estimate the score function.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for acknowledging the challenges of the unknown multi-node intervention setting and noting the clarity of the paper. We address the raised concerns as follows.
**Noiseless transformation:** The current scope of the paper cannot handle noisy transformations. We also kindly note that the closely related CRL literature (see references [3]-[10] in the paper) also considers deterministic transformations $X = g(Z)$. Our paper’s primary goal is to relax the restrictive assumption of single-node interventions. As such, we consider noisy transformations a major future direction.
**Interventional regularity**: We thank the reviewer for raising the need for elaboration on the interventional regularity condition. We explain its rationale and emphasize that it is not a restrictive assumption as follows.
- First, we want to clarify what we mean by the “effect of an intervention.” Essentially, $\frac{\partial}{\partial z_j}\log\frac{p_i(z_i\mid z_{{\rm pa}(i)})}{q_i(z_i\mid z_{{\rm pa}(i)})}$ is the effect of intervening on node $i$ on the *score* associated with node $j$. Therefore, the effect in the score function is not on downstream nodes of $i$ but on the parents of $i$.
- Note that we use *combinations* of different multi-node environments to generate new score difference functions. Therefore, to ensure that the effect of this new “combined” intervention will be different on the $i$-th and $j$-th coordinates of the score function, we require the ratio in Eq.(12) to be not constant.
- Next, we emphasize that **interventional regularity is not a restrictive assumption**. In Appendix A.8., we present some sufficient conditions that make the interventional regularity valid (see Lemma 5). For instance, if we consider a linear latent causal model under a hard intervention and the distribution of the exogenous noise term remains the same after the intervention, then interventional regularity is satisfied. Therefore, interventional regularity is not a restrictive assumption.
- Furthermore, note that if there exist $k\in{\rm pa}(j)\setminus{\rm pa}(i)$, then $\log\frac{p_j(z_j\mid z_{{\rm pa}(j)})}{q_j(z_j\mid z_{{\rm pa}(j)})}$ is a function of $Z_k$, whereas the other terms in Eq.(12) are not. This implies that the ratio in Eq.(12) is not a constant function of $Z$ and again exemplifies that the interventional regularity is not a restrictive condition.
**Estimating score function differences:**
- Our algorithm is agnostic to the choice of score estimator, and we can adopt popular score-matching methods as mentioned in Line 325, e.g., Song et al. (2020) or Zhou et al. (2020). We also note that score function estimation is finding increasingly more applications in diffusion models, and it is an active research field. Hence, our algorithms can modularly adopt any new score estimation algorithm and benefit from advances in the score estimation literature.
- We also emphasize that our algorithm only requires **score differences** and does not require the score functions themselves. We conjecture that the direct estimation of score differences can be much more efficient than estimating the individual score functions, à la direct estimation of precision difference matrices [R1]. Furthermore, density ratio estimation methods, e.g., classification-based approaches [R2], can be potentially useful for direct score difference estimation.
[R1] Jiang, Wang, and Leng. "A direct approach for sparse quadratic discriminant analysis." JMLR, 2018.
[R2] Gutmann and Hyvärinen. "Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics." JMLR, 2012.
**Non-identifiability when missing an intervened node**: In lines 141-143, we meant that “when a node is not intervened on, perfect identifiability is not possible **in general**”, i.e., without imposing additional restrictions such as structural assumptions. Our reference for non-identifiability is Proposition 5 of Squires et al. (2023). Their proof for non-identifiability only requires that the non-intervened node $i$ has at least one parent. We will add this note to the updated manuscript.
We note that the example of $A\rightarrow B$ under an intervention on only $B$ is also discussed by Remark 2 of Squires et al. (2023). Even though the graph $A\rightarrow B$ can be discovered in this specific case, we are not aware of any results for the identifiability of the latent variables. Squires et al. (2023) empirically suggest that the latent variables are not identifiable.
**Additive noise models (ANM)**: Please refer to our general response.
**Additional references**: Thank you for the suggestions; we will include and discuss them in the paper. In summary, Jiang and Aragam (NeurIPS 2023) focus on recovering the latent DAG without recovering the latent variables as opposed to our complete CRL objectives. On the other hand, Kumar and Sinha (2021) focus on an entirely different setting in which the causal variables are observed, and the distributions are given in a mixture. In contrast, our CRL setting focuses on latent variables observed through the same transformation when observing distinct interventional distributions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I have raised my score leaning towards acceptance. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their thorough feedback and thoughtful questions. Below we address some shared questions by the reviewers.
### **Additional experiments**
We address the shared concerns of the reviewers regarding the scalability of the algorithm via the following additional experiment results.
**Setting**: We follow the same setting as in Section 5 of the paper: the Erdős–Rényi model with density 0.5 and linear structural equation models (SEMs) with Gaussian noise.
- **Dimension of observed variables $X$**: We increase $d$ to $50$ in these additional experiments.
- **Number of latent nodes**: We perform experiments for $n \in \\{4,5,6,7,8\\}$ nodes. We also note that $n=8$ is the largest graph size considered in the closely related single-node interventional CRL literature (e.g., Squires et al. (2023) and Buchholz et al. (2023) consider 5 nodes, Varici et al. (2024) consider 8 nodes, von Kügelgen et al. (2023) consider 2 nodes).
- We use $n_{\rm s} = 10^5$ samples for each realization and repeat the experiments 100 times for each $(n,d)$ pair.
The table below shows that the average rate of incorrect mixing entries, captured by $\ell_{\rm soft}$ and $\ell_{\rm hard}$, remains low for increasing values of $n$. Graph recovery metric SHD increases with $n$, since the number of expected edges also increases with $n$ under a fixed edge density.
| $n$ | $d$ | SHD (Soft) | $\ell_{\rm soft}$ | SHD (Hard) | $\ell_{\rm hard}$ |
|:--------:|:---------:|:------------:|:-----------------:|:-----------:|:-----------------:|
| 4 | 50 | 0.44 | 0.72 | 0.04 | 0.11 |
| 5 | 50 | 0.96 | 1.25 | 0.05 | 0.10 |
| 6 | 50 | 2.41 | 3.20 | 0.09 | 0.14 |
| 7 | 50 | 4.22 | 6.00 | 0.11 | 0.16 |
| 8 | 50 | 5.67 | 8.75 | 0.10 | 0.16 |
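For concreteness, the synthetic data-generating process used in these experiments (an Erdős–Rényi DAG with edge density 0.5 and a linear Gaussian SEM) could be sketched as follows; the edge-weight range, unit noise variances, and fixed variable order are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sample_linear_sem(n, density=0.5, n_samples=1000, seed=0):
    """Sample latents Z from a linear Gaussian SEM on a random DAG."""
    rng = np.random.default_rng(seed)
    # Upper-triangular adjacency over a fixed order => acyclic by construction.
    A = np.triu((rng.random((n, n)) < density).astype(float), k=1)
    A *= rng.uniform(0.5, 1.5, size=(n, n))  # random edge weights (assumed range)
    Z = np.zeros((n_samples, n))
    for i in range(n):  # ancestral sampling in topological order 0..n-1
        Z[:, i] = Z @ A[:, i] + rng.standard_normal(n_samples)
    return Z, A
```

Interventional environments would then be obtained by modifying the mechanisms (columns of `A` or the noise terms) of the intervened nodes.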
**Empirical trick**: We recall that our algorithm involves searching for proper $\mathbf{w} \in \\{-\kappa,\dots,\kappa\\}^n$ vectors, where $\kappa$ denotes the maximum determinant of a matrix in $\\{0,1\\}^{(n-1) \times (n-1)}$. Even though $\kappa$ is a function of $n$, e.g., $\kappa=2$ for $n=4$ and $\kappa=5$ for $n=6$, we observe that setting $\kappa=2$ does not noticeably degrade performance. Therefore, we set $\kappa=2$ in all our experiments to reduce runtime.
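The search space described above can be enumerated as in the following sketch; the selection criterion applied to each candidate (the dimension check of $\Delta S_X \cdot \mathbf{w}$) is omitted, and the generator is our illustrative framing:

```python
import itertools

def candidate_vectors(n, kappa=2):
    """Yield all nonzero integer vectors w in {-kappa, ..., kappa}^n.

    The algorithm scans such candidates until Delta S_X @ w emulates a
    single-node intervention; there are (2*kappa + 1)**n - 1 of them,
    which is why capping kappa at 2 greatly reduces runtime.
    """
    for w in itertools.product(range(-kappa, kappa + 1), repeat=n):
        if any(w):  # skip the all-zero vector
            yield list(w)
```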
### **General causal models**
In addition to our experiments with linear causal models, we investigate general causal models. Specifically, we provide a sensitivity analysis where we investigate the performance of our algorithms under different levels of artificially introduced noise.
For this set of experiments, we adopt a non-linear additive noise model with a score oracle. Specifically, the observational mechanism for node $i$ is generated according to
$$Z_i = \sqrt{Z\_{{\rm pa}(i)}^\top {\bf A}_i Z\_{{\rm pa}(i)}} + N_i \ ,$$
where $N_i \sim {\cal N}(0, \sigma_i)$, and the interventional mechanisms set $Z_i = N_i / 2$. This causal model admits a closed-form score function (see [7, Eqs. (393)–(395)]), which enables us to obtain a score oracle. In our experiments, we use this score oracle and introduce varying levels of artificial noise according to
$$\hat{s}_X(x; \sigma^2) = s_X(x) \cdot \big( 1 + \Xi \big) \ , \quad \mbox{where} \quad \Xi \sim {\cal N}(0, \sigma^2 \cdot {\bf I}\_{d \times d}) \ ,$$
to test the behavior of our algorithm under different noise regimes ($\sigma \in [10^{-3}, 10^{-1.5}]$). Results analogous to Table 2 for different $\sigma$ values are provided in Figure 1 of the PDF document attached to the general response.
### **Additive noise models (ANM)**
- We emphasize that the core component of our work – minimizing score differences to estimate the true encoder – does not require ANMs, as shown by Theorem 2 for soft interventions.
- ANM is introduced for the analysis of CI tests in Stage 4 (hard interventions). Specifically, given a different causal model, it may be possible to analyze Stage 4 of the algorithm differently. For simplicity, we adopt ANMs, which are commonly used in both the causal discovery and CRL literature (e.g., for perfect identifiability, Squires et al. (2023), Buchholz et al. (2023), Varici et al. (2024), and Bing et al. (2024) use ANMs).
- Finally, we only require the exogenous noise in the ANM to have full support, which is already implied by the full support of $z$. Hence, we do not make any additional distributional assumptions for the ANM.
Pdf: /pdf/428aef1b4c26ac1f69c7492c6e14b6a9d8f0c2d3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces new identifiability results for CRL in environments with unknown multi-node interventions. It shows that, with sufficiently diverse interventional environments, one can achieve identifiability up to ancestors using soft interventions and perfect identifiability using hard interventions. The paper also provides an algorithm with identifiability guarantees.
Strengths: - The paper tackles the complex and underexplored multi-node intervention setting. The established identifiability can be crucial for extending current CRL theories into more practical contexts.
- The introduced algorithm that leverages score functions with different interventional environments is also interesting and insightful.
- The paper is well-motivated and articulated with high clarity.
Weaknesses: - The proposed algorithm, while theoretically sound, seems computationally demanding. In fact, even a 4-node low-dimensional case requires a large number of environments and samples. The paper could benefit from a deeper discussion on the scalability of the algorithm.
- The current evaluation of the algorithm is limited to synthetic simulations. Expanding it to more realistic datasets would substantively improve its practical significance.
Technical Quality: 3
Clarity: 3
Questions for Authors: How effectively does the proposed algorithm scale to more nodes and higher dimensions?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper acknowledges its main limitations in the reliance on linear transformations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our results crucial for extending CRL into more practical contexts, for finding our algorithm insightful, and for noting the clarity of the paper. We address the raised questions about the algorithm’s scalability as follows.
- **Dimension of $X$**: Our algorithm is readily scalable to arbitrarily high-dimensional observations $X$. Please see the additional experiments reported in our general response, in which we use $d=50$.
- **Dimension of $Z$**: Please see the additional experiments reported in our general response, in which we use $n \in \\{4, 5, 6, 7, 8\\}$ latent nodes.
- **Number of environments**: We note that $n$ environments are *necessary* in general (without further structural assumptions) for identifiability via single-node interventions (shown by Squires et al. 2023, Proposition 5). Since our unknown multi-node intervention setting subsumes the single-node interventions, we require at least $n$ environments as well.
- **Number of samples**: We remark that the main purpose of our algorithm is to establish a framework for identifiability via multi-node interventions. In future work, we aim to address the efficient score difference estimation, which is a separate line of work that can significantly increase the efficiency of our framework. We also kindly note that, even in the much simpler single-node intervention setting, related CRL literature uses a similar number of samples to $10^5$ that we used in our experiments for good performance (e.g., $10^5$ samples for $n=5$ nodes in Squires et al. (2023), $5 \times 10^4$ samples for $n=5$ nodes in Buchholz et al. (2023), and $5 \times 10^4$ samples for $n=5$ nodes in Varici et al. (2024)).
- **Toward realistic settings**: Finally, we acknowledge the need for working with more realistic datasets in the CRL field. We believe this paper serves as a significant step in this direction by removing the stringent assumption of single-node interventions, and we leave addressing realistic applications to future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my question. I increased the score to 7. | null | null | null | null | null | null |
Video Token Merging for Long Video Understanding | Accept (poster) | Summary: - This paper carries out an analysis of token merging[1] in the context of long-form video understanding and proposes learnable video token merging (VTM) to select semantics/saliency guided tokens for merging.
- In token merging, at each layer, the tokens are divided into two sets, a source set S and a target set T, through uniform sampling. Tokens in S are matched to tokens in T based on similarity and merged (usually by average pooling). This paper compares this naive VTM with two other variants where the selection of T is guided by informed heuristics: (1) region VTM, where tokens at the center of each frame are more likely to be retained, and (2) motion VTM, where tokens with high motion are more likely to be retained. Through this analysis, the authors argue that the strategy for selecting T plays an important role in the final performance.
- Motivated by this, authors propose a learnable VTM where it first predicts a saliency score for each input token. The target set T is sampled according to the probability distribution defined by saliency score. Since this partition operation is not differentiable, authors propose a novel training architecture where a parallel auxiliary network is trained alongside. The saliency scores are used to bias the attention score of the aux network, thereby supervising the saliency prediction to focus on important tokens. Aux network can be discarded at test time.
- Authors carry out a fair evaluation of the learnable VTM on LVU, Breakfast and COIN datasets by comparing against several baselines including ViS4mer, S5, D-sprv. Learnable VTM performs better than baselines in almost all evaluation tasks with low GPU memory usage and high throughput.
[1] Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Christoph Feichtenhofer, and Judy Hoffman. Token Merging: Your ViT but faster. In ICLR, 2022.
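For readers unfamiliar with [1], the partition-match-merge step summarized above can be sketched as follows. This is a simplified version using an alternating split and unweighted average pooling; the actual ToMe implementation samples the partition and tracks token sizes for a weighted average:

```python
import numpy as np

def bipartite_token_merge(tokens, r):
    """Merge the r most similar source tokens into their matched targets.

    tokens: (N, D) array of token features; returns (N - r, D) for even N.
    """
    src, tgt = tokens[0::2].copy(), tokens[1::2].copy()  # alternating partition
    a = src / np.linalg.norm(src, axis=1, keepdims=True)
    b = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sim = a @ b.T                                   # cosine similarity
    best_tgt, best_sim = sim.argmax(axis=1), sim.max(axis=1)
    merge_ids = np.argsort(-best_sim)[:r]           # r most similar sources
    keep_ids = np.setdiff1d(np.arange(len(src)), merge_ids)
    for i in merge_ids:                             # pool each source into its target
        tgt[best_tgt[i]] = (tgt[best_tgt[i]] + src[i]) / 2
    return np.concatenate([src[keep_ids], tgt], axis=0)
```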
Strengths: - In learnable VTM, the idea of learnable and auxiliary path is interesting. There is no way to directly supervise the token saliency prediction of the main path because partition operation is non-differentiable. Hence the attention in auxiliary path is influenced by saliency scores of the main path, which encourages the saliency prediction to focus on important tokens.
- The evaluation is fair and consistent. The authors use the same backbone as the prior works to encode input video frames, thereby ensuring a fair evaluation.
- The results of learnable VTM on the LVU and Breakfast datasets are noticeably better than the baselines with less GPU memory usage. However, on the COIN dataset, it does not perform better than the S5 baseline.
Weaknesses: ### Major weaknesses
- One of the cited contributions is the exploration of region-based and motion-based VTM (Section 3.3), but it seems trivial. The effectiveness of token selection is already shown by learnable VTM. In light of that, there is an unreasonable focus on Section 3.3, which is unnecessary.
- Section 3.4 explains little about the details of learnable VTM: how it is trained, how the gradients flow in the presence of the non-differentiable partition function, etc.
- There are some stretched claims based on qualitative and quantitative results. For example,
- In Line 174, authors claim that center VTM performs better than naive VTM. However, according to Table 1, the results are mixed at best.
- In Fig 5, authors also claim that the visualization of merged tokens show saliency based merging. However, the figure doesn't support the claim. There are many merged tokens on important salient features and some background tokens are not merged.
### Minor issues
- Line 19: it should be "into the domain of video computer vision" as all cited papers are video learning papers.
- Is there a difference between the notations C and D? It looks like both are used interchangeably to denote the token dimension.
- Table 2: How is throughput measured? FPS?
Technical Quality: 3
Clarity: 3
Questions for Authors: - How is learnable VTM trained? From Fig 4, it looks like there are two outputs from the network. Do you apply the same loss on both outputs?
- In Fig 4, what does the 'Merge' operation in auxiliary path do? Does it mean that the main and aux - both paths use the same target set sampled by the partition-match of the main path?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive and valuable comments. Please find our responses below.
***
> **Exploration**
Compared to image token merging methods, token merging for video is relatively under-researched. In this work, we investigate various video token merging methods and finally propose a learnable VTM that outperforms all previous techniques. Like our algorithm, many conventional approaches in the video computer vision community are simple but effective, such as 3D-CNN, inflated 3D kernels, and (2+1)D CNN. Hence, we believe that the exploration of various basic token merging methods has its benefits for the research community and that the proposed learnable VTM algorithm will serve as a strong baseline. However, as the reviewer suggested, we will reduce Section 3.3 and include more analysis of learnable VTM.
> **Training of learnable VTM**
Learnable VTM is optimized to reduce the classification losses on both the main path and the auxiliary path. Also, the entire process is differentiable because we merge the matched tokens using average pooling, as in standard token merging. The partitioning and matching processes do not have to be differentiable for end-to-end backpropagation: they do not update the features but just determine which tokens to merge. We will include this explanation in the revision.
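To make this gradient-flow argument concrete, here is a minimal NumPy sketch of a ToMe-style merge step (all names, shapes, and the similarity measure are illustrative assumptions, not the authors' code; a real implementation would run inside an autograd framework): partitioning/matching only selects indices, while the feature values themselves are combined by average pooling, which is differentiable.

```python
import numpy as np

def merge_tokens(x, target_idx, source_idx):
    """Sketch of a ToMe-style merge: each source token is matched to its
    most similar target token (cosine similarity) and merged by average
    pooling. Only the averaging touches feature values, so gradients would
    flow through the merge; matching merely picks indices."""
    tgt, src = x[target_idx], x[source_idx]
    tn = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sn = src / np.linalg.norm(src, axis=1, keepdims=True)
    match = (sn @ tn.T).argmax(axis=1)      # best-matching target per source
    merged, counts = tgt.copy(), np.ones(len(target_idx))
    for s, m in zip(src, match):            # running mean = average pooling
        merged[m] = (merged[m] * counts[m] + s) / (counts[m] + 1)
        counts[m] += 1
    return merged

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))             # 8 hypothetical tokens, dim 4
out = merge_tokens(x, np.arange(4), np.arange(4, 8))
print(out.shape)  # (4, 4)
```

The index-selection step (`argmax`) is non-differentiable, but since it only routes tokens to pooling groups, no gradient needs to pass through it.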
> **Stretched claims**
We agree with the reviewer. We will revise the draft to explain the experimental results precisely.
> **Citations**
We will revise the draft as suggested.
> **Notation C and D**
We will revise the notation more clearly.
> **Throughput**
Yes, we measure the FPS.
> **Training**
In learnable VTM, we apply the same video classification loss to both predictions, from the main path and the auxiliary path. Therefore, the saliency estimation module is optimized to assign high saliency scores to important tokens to reduce the classification loss during training, since tokens with high saliency scores have more influence in the guided attention of the auxiliary path.
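For illustration, here is a minimal NumPy sketch of how a saliency score per key token could bias attention. The shapes, the $D \times 1$ tanh projection (mentioned elsewhere in the reviews), and the additive form of the bias are assumptions for this sketch, not a reproduction of the paper's Eq. (10):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def estimate_saliency(k, u_s):
    # project each key vector with a learned D x 1 matrix, squash to (-1, 1)
    return np.tanh(k @ u_s).squeeze(-1)

def guided_attention(q, k, v, s):
    # add each key token's saliency score to its attention logits, so
    # high-saliency tokens receive more attention weight after softmax
    logits = q @ k.T / np.sqrt(q.shape[-1]) + s[None, :]
    return softmax(logits) @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((6, 4)); k = rng.standard_normal((6, 4))
v = rng.standard_normal((6, 4)); u_s = rng.standard_normal((4, 1))
s = estimate_saliency(k, u_s)
out = guided_attention(q, k, v, s)
print(out.shape)  # (6, 4)
```

Under this design, the classification loss on the auxiliary path back-propagates through the attention weights into `u_s`, so the module learns which tokens deserve high scores.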
> **Figure 4**
Yes, tokens in the auxiliary path are also merged according to the matching results from the main path.
***
We will address all these comments faithfully in the final paper. If you have additional comments, please let us know. Thank you again.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: Thanks for a detailed rebuttal.
### Major comments
- It looks like learnable VTM is very similar to another token sampling method ('Adaptive Token Sampling For Efficient Vision Transformers'). However, that prior token sampling method is parameter-free and can be used on any pretrained transformer. A learnable version of such sampling is the Perceiver module ('Perceiver IO: A General Architecture for Structured Inputs & Outputs'). The authors haven't compared with either of them. Can the authors provide more comments on this?
- The visualization of saliency in Fig. 3 doesn't support the claim that the salient patches are being selected. Rather, it looks like the network is repurposing the unimportant tokens to store information (this has been explored in 'Vision transformers need registers'). This also reaffirms my belief that learnable VTM may be just a resampler and hence doesn't actually learn visual saliency.
### Minor comments
- I agree that the exploration of different token merging techniques is valuable in the supplementary material. However, it would be better if its length in the main paper were reduced.
---
Reply to Comment 1.1.1:
Comment: Thank you for your additional feedback. Please find our follow-up response below.
***
> **Comparison with token reduction methods**
Token merging and token sampling/dropping are two alternatives for reducing the effective number of visual tokens in transformers, which reduces the computational cost of the self-attention layer. However, the scope of adaptive token sampling and similar works differs from ours. They focus on improving the efficiency of the transformer while maintaining similar performance. In this paper, we argue that these methods may not be favorable for long-form video understanding. In images and short-form videos, it is easy to build short-term dependencies within one image or across highly overlapping frames spanning a few seconds, e.g., a basketball and the basketball court, or the sky and clouds. Thus, a parameter-free or lightweight module can produce reasonably good results for merging and sampling. In long-form videos, on the other hand, it is more challenging to capture dependencies between sparsely sampled frames, e.g., leveraging dependencies to predict the genre and production year of a movie.
As this work specifically focuses on token merging, we only include previous merging ideas in the comparison. We believe it would be a great idea to expand the scope of this work in the future to include adaptive token sampling and Perceiver as two representatives in a general token reduction analysis.
> **Figure 3**
Also, as stated in L206-208, we sample the target tokens based on probabilities computed from the estimated saliency scores. Hence, tokens with high saliency scores are more likely, but not guaranteed, to be selected as target tokens. Therefore, in Figure 3, some tokens corresponding to backgrounds are selected as target tokens. However, we can see that target tokens are selected more around the salient objects (the two main characters). Moreover, it is desirable that some background tokens are selected as target tokens so that source tokens corresponding to the background can be merged with similar target tokens. Otherwise, source tokens from background areas would be merged with dissimilar target tokens corresponding to salient objects, causing a significant loss of important information.
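A minimal NumPy sketch of this probabilistic target selection (function and variable names are assumptions for illustration, not the paper's code): target indices are drawn without replacement with probability proportional to the softmax of the saliency scores, so high-saliency tokens are favored yet background tokens can still be sampled.

```python
import numpy as np

def sample_targets(saliency, num_targets, rng):
    """Draw target-token indices with probability proportional to the
    softmax of the estimated saliency scores. Sampling is stochastic, so
    low-saliency (background) tokens can occasionally be selected too."""
    p = np.exp(saliency - saliency.max())
    p /= p.sum()
    return rng.choice(len(saliency), size=num_targets, replace=False, p=p)

rng = np.random.default_rng(0)
scores = np.array([2.0, 0.1, 0.1, 1.5, 0.0, 0.2])   # hypothetical saliency
idx = sample_targets(scores, num_targets=3, rng=rng)
print(sorted(int(i) for i in idx))
```

The remaining (source) tokens would then be matched and merged into these targets as in standard token merging.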
> **Paper organization**
Thanks for your advice. As suggested, we will move the explanation on different token merging methods to Appendix and use the saved space to address the comments from all reviewers.
***
Thank you again for your time and effort in reviewing our paper. We do appreciate it. If you have additional comments, please let us know. | Summary: This paper explores various video token merging strategies in the context of long-form video classification and finally proposes a learnable Video Token Merging algorithm that dynamically merges video tokens based on visually salient areas. The contributions are summarized as follows:
1. Explore various video token merging methods including the naïve VTM, the region concentrated VTM, and the motion-based VTM.
2. Propose the learnable video token merging algorithm, which estimates the saliency score of each token and adaptively merges visual tokens based on those scores.
3. The proposed algorithm achieves the best or competitive results on various datasets.
Strengths: 1. This paper explores various video token merging methods including the naïve VTM, the region concentrated VTM, and the motion-based VTM.
2. Compared with the baseline and rule-based video token merging, the proposed learnable video token merging strategy yields a large improvement.
3. The two-path design to deal with the non-differentiable partitioning process is interesting.
Weaknesses: 1. This paper proposes a learnable video token merging strategy. A similar high-level idea can be found in CTS [1] in the image domain. The novelty is insufficient.
2. This paper focuses on video token merging. However, I do not observe any specific design tailored to the video domain in the methodology, let alone to long videos.
[1] Content-aware Token Sharing for Efficient Semantic Segmentation with Vision Transformers
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Due to the two-path design, has the training time doubled?
2. The paper tries to learn saliency scores using matrix $U_s$. How about using $\sum{QK^T}$ in Equation 8 as saliency scores for each visual token?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We do appreciate your constructive comments and will address them faithfully in the final paper. Please find our responses below.
***
> **Difference from CTS**
CTS is not similar to the proposed algorithm, since it is not even a token merging method. CTS is a semantic segmentation algorithm which shares some neighboring tokens expected to belong to the same class in the segmentation network. Hence, CTS does not have any similar key idea such as learning the token saliency or merging tokens. We will do our best to answer your concerns if you provide more detailed points.
> **Design for video domain**
Compared to images, videos contain much more redundant or noisy information, which deteriorates the quality of video understanding. The proposed learnable VTM is a domain-specific algorithm for the long-context scenario. To reveal the discriminative information among redundant tokens in a video, we develop the learnable VTM, which learns the saliency of each token and keeps the salient tokens, thereby increasing their influence in the attention processes after merging. Also, we explored the motion-based VTM, which selects tokens with large motions as target tokens; this token merging method is only applicable to video.
> **Training time**
Below, we compare the training time. The training time is slightly increased, but not doubled. We use 8 Tesla V100 GPUs and PyTorch for the experiments as stated in L258.
| Naive VTM | Learnable VTM |
|-----------|---------------|
| 4.0h | 3.7h |
> **Attention as saliency score**
The objective of saliency estimation is to assign high scores to tokens corresponding to important objects and low scores to those corresponding to backgrounds or noisy information. However, $\sum QK^T$ would assign high scores to tokens that have many similar tokens, regardless of their semantics. Therefore, we employ the saliency estimation module for the proposed learnable VTM.
***
If you have any additional concerns, please let us know. We will address them faithfully. Thank you again for your constructive comments.
---
Rebuttal Comment 1.1:
Comment: There was a minor error in above Table for training time. The correct table is as below. We do apologize for the confusion.
| Naive VTM | Learnable VTM |
|-----------|---------------|
| 3.7h | 4.0h |
***
Thank you again for your time and effort for reviewing our paper. We do appreciate it.
---
Rebuttal 2:
Comment: 1. Design for video domain
The authors, although exploring motion-based VTM, ultimately adopt a learnable VTM. In the learnable VTM, I do not see any special design for video.
2. Training time
Could you explain why maintaining two paths during training does not double the training time?
---
Rebuttal Comment 2.1:
Comment: Thank you for your constructive review and insightful suggestions, all of which will be addressed faithfully in the final paper. Please find our responses below.
***
> **Design for video domain**
We kindly request Reviewer 7bwU to clarify the definition of ‘special design for video’. In recent transformer-based networks and many foundation models, various inputs, including images, short-form videos, long-form videos, and text, are all processed in the form of tokens, but they just have different modalities and properties. If the absence of special design for video means that our method can be ‘generally’ applied to different modality tokens without raising errors, it is true. However, as stated in Section 1 and our previous response, the proposed algorithm is designed to handle long-contextual information in the video more efficiently and effectively. Compared to relatively well-structured text inputs and images/short-videos with relatively small amounts of information, it is more challenging to capture short-term and long-term dependencies across tokens in the long video since the long video contains many tokens which convey complex semantics individually or collectively. Therefore, we design our algorithm by merging tokens adaptively based on saliency scores for capturing the token dependency more easily while minimizing the information loss. Also, please note that the proposed algorithm achieves better results than all conventional algorithms for long-video understanding on various datasets. We believe that it indicates that the proposed algorithm has efficient design for long-video understanding.
> **Training Time**
We compare the throughput (FPS) in Table below. The throughput of learnable VTM with the auxiliary path (during training) is almost half, compared to that of naive VTM or that of learnable VTM without the auxiliary path (during inference). This is also shown in Table 9 in Appendix B.2. However, when measuring training time, there are various different factors such as data preprocessing, data loading, logging, and back-propagation. These factors take much longer than the forward path, and thus the total training time of learnable VTM is not doubled.
| | Naive VTM | Learnable VTM (training) | Learnable VTM (inference) |
|------------|-----------|--------------------------|---------------------------|
| Throughput | 45.39 | 27.84 | 44.94 |
***
If you have any additional concerns, please let us know. Thank you again for your comments. | Summary: The paper approaches the task of long-video understanding from token reduction perspective. Specifically, Transformer-based approaches suffers from memory bottleneck and quadratic computation complexity with increasing number of tokens, which is even more pressing with long-videos as input. The paper builds on a recently developed token merging mechanism, and proposes a learned saliency measure to modulate what tokens gets merged instead of using a random or hand-crafted saliency measure. The central hypothesis of the work is that typically techniques that use similarity as merging criteria may inadvertently lose out on salient tokens. The paper reports experiments on three conventional long-video benchmarks (LVU, Breakfast and COIN), and shows effectiveness of their approach compared to prior related works both in terms of performance and memory requirement. The paper also ablates the effectiveness of their proposed saliency measure (learned VTM) over hand-crafted measures including motion-based (using optical flow), center-biased and random schemes.
Strengths: - The paper is well-written with most of the information presented for ease of understanding
- The memory requirement is lower than S4, with competitive performance which highlights the importance of token selection in the case of long-videos
Weaknesses: - Comparison to related token saliency approaches
- The paper proposes a scheme to identify salient tokens by using a learned projection matrix $U_s \in \mathbb{R}^{D \times 1}$ with a $\texttt{tanh}$ activation function
- However, learnable token saliency methods have also been used in prior works, such as EVEREST [1], which uses a pair-wise learned saliency at feature-level (equation 2) using $\texttt{Conv3d}$. The resulting approach was shown to be effective in the Masked Autoencoding setup
- Having a motion-based merging scheme is a good baseline, but some variants of learnable token saliency could also be tried to gain better understanding how token saliency gets influenced by different approaches
- Role of $L_1, L_2, L_3$
- The paper proposes to take tokens from $L_i$ consecutive frames for the $i^{th}$ VTM block
- It seems that choosing the values of $L_i$ is quite crucial given its impact on performance and memory requirement (Table 6) that forms the central claim of the paper
- However, the paper highlighted the contribution of token saliency more compared to the choice of $L_i$ hyperparameters
- Did the authors experiment with a rather simplistic setup using a single VTM block and/or with all $L_i$ being 60? It would help the readers to gain better understanding of what works in long-videos
- How saliency changes with tokens from different number of frames?
- It seems that the saliency is being computed at each VTM block. It would be interesting to see how the saliency changes across the three VTM blocks
- On that note, what VTM block’s saliency is being visualized in Figure 5?
### Minor
- Line 145-146: “$i$-th transformer block takes the tokens corresponding to $L_i$ frames without overlapping”
- Confusing when $i$ is referred to as the frame number and the block number of transformer at the same time
- Line 145-147: $j$ is not defined
### Typos
- Line 24-25: “so the tokenizing the”
- Line 36: “selectio”
- Line 114-115: “applications are mostly remained”
- Line 117: “depedencies”
- Line 161: “in the videos, .”
- Line 165: “regarding less of the”
- Line 177: “sailent”, “the the”
- Line 306: “sailencies”
### References
[1] “EVEREST: Efficient Masked Video Autoencoder by Removing Redundant Spatiotemporal Tokens”. Sunil Hwang and Jaehong Yoon and Youngwan Lee and Sung Ju Hwang. ICML 2024.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Line 141: why L >= 60?
- Is saliency projection used in all auxiliary VTM blocks?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review and insightful suggestions, all of which will be addressed faithfully in the final paper. Please find our responses below.
***
> **Difference from EVEREST**
The proposed learnable VTM is quite different from EVEREST. EVEREST measures the cosine similarity of tokens at the same spatial location in consecutive frames for more efficient mask generation, which is used for masked video autoencoder training. Therefore, by design, it does not consider the content of each token for saliency prediction. In contrast, learnable VTM employs the auxiliary path and saliency-guided attention to learn the saliency of each token. Therefore, the saliency estimation module is optimized to assign high saliency scores to important tokens to reduce the classification loss in the auxiliary path.
> **Learnable VTM variants**
Below, we compare the variants of learnable VTM. In 'From $X$', we estimate the saliency scores from the token features $X$ instead of the key vectors $K$. In 'Multiply', we use $Q_\mathrm{aux}K_\mathrm{aux}^T\odot\mathbf{1}S^T$ inside the softmax in Eq. (10) for the saliency-guided attention. Here, $\odot$ denotes the Hadamard product. Both alternatives show decent scores. However, the proposed learnable VTM yields better performance, and thus we employ it as our method.
| Methods | Relationship | Scene | Director | Writer |
|---------------|--------------|-------|----------|--------|
| From $X$ | 59.52 | 75.58 | 63.55 | 51.19 |
| Multiply | 59.52 | 72.09 | 70.09 | 46.42 |
| Learnable VTM | 64.28 | 75.58 | 70.09 | 53.57 |
> **Impact of $L$**
Yes, as the reviewer pointed out, $L_i$ is an important hyper-parameter for both the efficiency and effectiveness of the proposed algorithm. Therefore, we analyze it in Table 6. However, the performance gain of learnable VTM does not come just from the selection of $L$, because the other VTM methods in Table 1 yield low scores under the same $L$ setting. We will discuss this in the revision.
> **$L \geq 60$**
We evaluate the proposed algorithm with $L=60$ since conventional algorithms use 60 frames in their evaluations. Below, we list the performance of learnable VTM with the $L=100$ setting. The proposed algorithm yields good results in this setting as well.
| Methods | Relationship | Speaking | Scene | Director | Genre | Writer | Year | Like | View |
|---------|--------------|----------|-------|----------|-------|--------|-------|------|------|
| $L=60$ | 64.28 | 42.12 | 75.58 | 70.09 | 59.77 | 53.57 | 48.55 | 0.21 | 4.01 |
| $L=100$ | 67.11 | 42.12 | 75.58 | 68.22 | 61.34 | 52.38 | 48.55 | 0.22 | 3.85 |
> **Saliency visualization**
We will include the visualization results in the final copy.
> **Figure 5**
In Figure 5, we visualize the token merging results at the last VTM block, as stated in L298-299. We will revise this more clearly in the revision.
> **L145-147 and typos**
We will revise them thoroughly. Thank you for your suggestion.
***
If you have any additional concerns, please let us know. We will address them faithfully. Thank you again for your positive comments. | Summary: This paper builds on Token Merging (ToMe) to improve its performance. In particular, the authors explore different ways to partition tokens so that the merging operation can lead to better performance while maintaining speed. They explore region-concentrated merging, motion-vector based merging and a learnable selector, and find that the learnable version works best. To make the network trainable, they employ a creative auxiliary path mechanism to make everything differentiable. They find that their learnable VTM obtains good results compared to baselines on long form datasets (LVU, Coin, Breakfast), and that it outperforms the other methods they introduce.
Strengths: The problem this paper addresses is an important one. Videos (especially long ones) have many redundant tokens and reducing their number while maintaining performance is a crucial problem to solve in the field.
The model itself is well designed and uses a creative auxiliary path to handle a non-differentiable partitioning process. Given the premise of the paper, the model is well-designed and seems to address the issue they propose.
I also appreciate the exploration of different methods, and a comparison on which worked better. This kind of analysis is often missing from papers and I am grateful for the authors for including it.
Weaknesses: I don’t really agree with the premise of the paper (and am open to a rebuttal to explain if I’m wrong here). Token Merging already explored merging for video in detail. The reason Token Merging is based on similarity is that by combining tokens that are extremely similar, the weighted average of those tokens should produce an extremely similar result in the attention operation. This was detailed further in the Token Merging follow-up ToMeSD. If you use different criteria such as saliency (which is not really well-defined), this is no longer guaranteed, and from equation (10) it seems like the authors do not use the proportional attention scheme from ToMe (Eq 1 in the original paper). Table 8 doesn’t show the learnable method using this; it seems to just be about the pooling part rather than the attention operation.
I also don’t understand the intuition behind the saliency: shouldn’t we be aiming to combine together tokens that are NOT relevant, so that the transformer can focus more on the relevant tokens, rather than averaging (and thus losing) information from the more salient / important tokens? I’d really appreciate some clarification here. From Figure 3, it doesn’t look like learnable VTM is focusing on visually important tokens: it’s picking ones from the ceiling and wall in addition to the people.
My main issue is with the evaluation. The evaluation seems not quite fair, especially when measuring memory usage and throughput. Shouldn’t it be compared to baseline merging algorithms, like the naive ToMe? My impression is that the memory usage and throughput from VTM will be exactly the same as ToMe because it uses a similar partitioning scheme and constant factor reduction, which is why it may not be included in the results, but this seems important to include for context. Furthermore, the improvement on metrics is quite small, given that the speed is the same as other merging methods. Is this expected?
Also, The paper is motivated by “long-term” video, but evaluates on 64 frames, which isn’t really long and in my view, doesn’t merit only evaluating on LVU, Breakfast and COIN. Kinetics-400 has 300 frames per video, and is a more standard benchmark for evaluating video backbones - in fact, the original ToMe paper includes experiments on those datasets, which would make for a more fair comparison. Furthermore, nothing about the method itself is specific to these longer videos. I think evaluating on more standard datasets is crucial to measuring the actual strength of the method, especially compared to baselines like Token Merging. In particular, the long-form datasets are very compressible.
The paper is not well-written and the grammar needs a lot of revision, making it hard to focus on the content of the paper itself. In addition, a lot of space is spent on methods that are not really used in the final results (center, region, motion vector) and on citing equations from preliminary works (token merging, attention). Given that a claimed contribution is an exploration of these different methods, I would also have expected more detailed ablations and experiments to understand exactly why some of the methods perform better than others.
Technical Quality: 3
Clarity: 2
Questions for Authors: It’s not really expected that VTM (or any merging method) should score better than full attention, as it has strictly less information. This is backed up in the Token Merging and other follow up papers. Why is it expected (as said on L160) that merging should perform better? It’s supposed to be just faster, with a minimal drop in performance.
The motion vector method requires extracting pre-computed motion vectors from the encoded video. However, those are computed for 30 FPS, and for the original video size, meaning they don’t actually apply to downsampled (64 frames, 224x224). Was this taken into account? It’s certainly not fast to re-compute these motion vectors if you’re doing random cropping or frame selection.
Is it possible to know the effect on training wall-clock time of this method? This is the metric that practitioners really care about, so including it would potentially strengthen the results of the paper.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The biggest limitation of this compared to baseline Token Merging is that it requires re-training. ToMe could be applied to a pre-trained network out-of-the-box. Learnable VTM cannot do this, making it impossible to apply to a pre-trained video network. The other proposed methods (region, motion vector) can do this though, and this would be a good thing to note somewhere in the paper.
In my view, the limitation of methods like VTM is that it always reduces a constant number of tokens per video, even though some videos are inherently more compressible than others. For example, Breakfast videos are extremely compressible compared to the average LVU video. However, this is beyond the scope of the paper, and using constant reduction is certainly more convenient, but this would be a good limitation to acknowledge and perhaps address in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We do appreciate your constructive comments and will address them faithfully in the final paper. Please find our responses below.
***
> **Token merging for video**
To the best of our knowledge, there are only a few techniques for video token merging, such as TESTA (EMNLP 2023), ToMe (ICLR 2023), and VidToMe (CVPR 2024). However, all of them are straightforward applications of standard token merging to video input. Specifically, TESTA and VidToMe employ the same token partitioning, matching, and merging scheme as ToMe.
In contrast, we explored multiple token merging methods for video and proposed the learnable token merging method, which shows good performance on various long-video classification datasets.
> **Saliency**
Learnable VTM uses the estimated saliency scores to divide the tokens into target token set and source token set. The saliency score estimation aims to select a greater number of important tokens (e.g. tokens corresponding to important objects in the scene) as target tokens, thereby increasing the influence of these tokens in the following attention processes after merging.
As in the standard token merging, we merge tokens based on their similarity. Hence, even though some tokens corresponding to the important objects are selected as target tokens, they are not merged with irrelevant tokens, and thus we do not lose much important information during the merging process. It is also shown in Figure 5. We can see that tokens are merged more in the background area than in the salient area.
Also, as stated in L206-208, we sample the target tokens based on probabilities computed from the estimated saliency scores. Hence, tokens with low saliency scores are less likely, but still possible, to be selected as target tokens. As shown in Figure 3, some tokens corresponding to backgrounds or unimportant objects can be selected as target tokens.
> **Comparison**
Please note that ToMe (ICLR 2023) does not provide results on long video classification datasets. Kinetics only includes 10-second video clips, which are spatially heavy and contain few temporal dependencies. Previous literature shows it is easy to achieve good performance with a single frame (What Makes a Video a Video: Analyzing Temporal Information in Video Understanding Models and Datasets, CVPR 2018), which is very different from the scope of this paper. As stated in L154-155, our naive VTM is a straightforward application of ToMe that keeps the standard token merging method as intact as possible. As the reviewer pointed out, the memory usage and throughput of the various VTM methods explored in this paper are almost the same. However, with the same memory and speed, learnable VTM shows better performance than the other VTM methods. Moreover, the proposed algorithm outperforms conventional long-video understanding algorithms by large margins while having much better memory efficiency and speed.
> **Long video understanding**
For evaluation, we just follow the standard protocol in the field of long video understanding. For fair comparison, we use the same datasets (LVU, COIN, and Breakfast) and the same input setting as in conventional algorithms (ViS4mer, S5) for long video understanding.
> **High performance of VTM**
In long video understanding datasets such as LVU, there is a lot of redundant and noisy information in a video. Therefore, as shown in Table 1, the transformer network does not yield reliable predictions because the redundant tokens may hinder feature refinement in the attention layers. However, when tokens are merged properly, the influence of important tokens in the attention process increases, and thus the network yields better performance, as shown in our experimental results. This phenomenon has also been observed in recently published token selection work (Selective Structured State-Spaces for Long-Form Video Understanding, CVPR 2023).
> **Motion vector computation**
As stated in L194-195, motion decoding takes only 0.3 milliseconds for each frame which is negligible, since the motion is converted to the token resolution $H \times W$ by using average pooling.
> **Training time**
We list the training time for the LVU and COIN datasets below. The training is finished within a few hours. We use 8 Tesla V100 GPUs and PyTorch for the experiments as stated in L258.
| Methods | LVU | COIN |
|---------------|-----|------|
| Learnable VTM | 4.0h | 3.5h |
> **Re-training, fixed compression ratio**
We agree with the reviewer. We will include this discussion in the revision.
***
If you have any additional concerns, please let us know. We will address them faithfully. Thank you again for your constructive comments.
---
Rebuttal 2:
Title: Raising Rating + Followup response
Comment: Thanks for the detailed rebuttal, I really appreciate your efforts. I will raise my rating for now to a 4. I have some further doubts that I'd appreciate clarification on.
__Saliency__
I think I understand this better now, but I really don't think saliency is the right word. From what I understand, the point is to learn a scoring function to do a better job of partitioning, since the hypothesis of the paper is that the partitioning scheme of ToMe is suboptimal. Maybe "learned partition" could be a better term? Saliency is too broad/vague a term.
__Training Time__
When I asked for the training time, I meant compared to standard ToMe (no learned module) and a baseline with no merging. From the results of the original ToMe paper they demonstrated very large speedups on training time. It seems that learnable VTM would remove those gains, which would be a big reason to not use this method. Can you comment on this?
__Long Video vs Short Video__
I still don't really see why this method is tailored for long video at all, and Reviewer 7bwU agrees. Nothing about learnable VTM specifically is more effective than other methods with more frames. Furthermore, as I mentioned, you can train a model on Kinetics-400 with 64 frames as well, and we should be able to notice some sort of improvement. What exactly about this method is specialized for long videos and thus merits not evaluating on Kinetics (where there are more baseline results)?
---
Rebuttal Comment 2.1:
Title: Response by Authors
Comment: Thank you for your positive response. Please find our follow-up response below.
***
> **Saliency**
We agree with the reviewer. We will revise the draft as suggested.
> **Training time**
In the table below, we list the training time on the LVU dataset. Training the learnable VTM is much faster than training the baseline network without token merging. Compared to the naive VTM, which is the straightforward application of ToMe to the baseline network, the learnable VTM takes slightly longer. However, the gap is small and does not significantly diminish the benefits of the proposed algorithm.
| No Merging | Naive VTM | Learnable VTM |
|------------|-----------|---------------|
| 7.2h | 3.7h | 4.0h |
> **Long video vs short video**
Long-form video understanding tasks not only require modeling more frames but also demand capturing complex short-term and long-term spatiotemporal dependencies among semantics. To facilitate this, the learnable VTM identifies important tokens among the numerous tokens in a video through the learned token partition and increases the influence of these tokens in the attention processes after merging. By doing so, it achieves better understanding than ToMe on long-form video understanding tasks, as shown in the experimental results.
As stated in L51-52, in this work we aim to explore various video token merging methods on the long-video understanding datasets (LVU, Breakfast, and COIN) and to identify an effective video token merging technique for long-video understanding. Thus, we do not use all types of video benchmarks for evaluation. However, we explore various token merging methods (including ToMe) on the long-form video understanding benchmarks and also compete with recent SOTA methods for long video understanding.
As the reviewer pointed out, the proposed algorithm can be evaluated on short-video understanding benchmarks (such as Kinetics-400), even though this is outside the original scope of this paper. Please note that Kinetics-400, as pointed out in "What Makes a Video a Video: Analyzing Temporal Information in Video Understanding Models and Datasets, CVPR 2018", is a relatively simple dataset with low information density; it contains one simple action per video, and thus complex information across many frames is not required to obtain good classification results. Therefore, the proposed algorithm may not yield a meaningful performance gap over other token merging baselines on Kinetics-400, because this dataset may be less sensitive to information loss caused by suboptimal token merging than the long video understanding datasets, which contain an enormous amount of complex information. Due to time limitations, we may not be able to finish this experiment within the rebuttal period. However, we will include performance on short-video understanding benchmarks in the supplementary material of this paper.
***
Thank you again for your time and effort in reviewing our paper. We truly appreciate it. If you have additional comments, please let us know. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their time and effort in providing constructive reviews. We also extend our thanks to the program and area chairs. We have faithfully responded to all comments below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios | Accept (poster) | Summary: The paper presents a novel approach called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) for estimating cortico-muscular dependence by leveraging orthonormal decomposition of density ratios. This method is designed to model the relationship between EEG (electroencephalography) and EMG (electromyography) signals, addressing the challenges of interpretability, scalability, and local temporal dependence in cortico-muscular connectivity. The key contributions include introducing a matrix trace cost optimization for improved stability and efficiency, demonstrating robustness against nonstationary noise and delays, and effectively capturing movement and subject information from EEG features for enhanced classification accuracy. The proposed method outperforms existing baselines, particularly in cross-subject scenarios, and provides insights into channel-level and temporal dependencies, reinforcing its potential applications in brain-computer interface development and neuromuscular disorder diagnostics.
Strengths: 1. Innovative Method: Introduces the Functional Maximal Correlation Algorithm with Trace cost (FMCA-T), providing a novel approach for estimating cortico-muscular dependence.
2. Improved Stability and Efficiency: Utilizes matrix trace cost optimization, which is more stable and computationally efficient compared to traditional log-determinant cost methods.
3. Enhanced Classification Accuracy: Effectively captures movement and subject information from EEG features, significantly improving classification accuracy, especially in cross-subject scenarios
4. Validation on Multiple Datasets: Validated using both simulated and real EEG-EMG datasets, confirming the method’s effectiveness and robustness.
5. Open Data and Reproducibility: Offers open access to datasets and detailed implementation code, facilitating reproducibility and further research in the field.
Weaknesses: The provided baselines are relatively few; future work could expand on this.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. While the paper discusses the improved stability and efficiency of the FMCA-T method, can the authors provide more detailed about the computational resources required for training and inference?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: 1. Limited Dataset Size: The cross-subject classification performance drops, potentially due to the limited dataset size of only 25 participants. Larger and more diverse datasets may be needed to validate the method’s robustness comprehensively.
2. Generalization to Other Modalities: While the method shows promise for EEG and EMG signals, its generalizability to other types of biosignals or broader neural data modalities has not been extensively discussed.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Please find below our replies to the concerns/questions.
**1. Limited baselines.**
We have added EEG-Conformer (available on GitHub) and Deep4 (from the braindecode repository on GitHub) in the attached pdf. All baselines were implemented following the official codes. Results indicate that FMCA-T outperforms the additional baselines on almost all tasks. Deep4 performs slightly better on cross-subject 11-movement classification task. We will update Table 1 in the revised manuscript.
**2. Computational resources for training and inference.**
Experiments on the simulated sinusoidal dataset and the additional simulated EEG and EMG dataset were conducted on an NVIDIA GeForce RTX 3090. Experiments on the experimental EEG and EMG dataset were conducted on an NVIDIA GeForce A5000. All training and inference can be conducted on a single GPU.
**3. Limited dataset size.**
We agree with the reviewer that a larger, more diverse dataset is needed for comprehensive validation. Although such a dataset is unavailable now, we hope our method will encourage the collection of large-scale datasets, which we plan to explore in future work.
**4. Generalization to other modalities.**
We thank the reviewer for proposing the possible extension to broader neural data modalities. This is an interesting question for further investigation. We expect our method can still apply to learning the dependence between other modality data, e.g., between EMG and kinematics or between EEG and visual data. We will continue studying such scenarios in future work.
**5. Ethics review.**
The public EEG and EMG dataset was reviewed and approved by the Institutional Review Board at Korea University (1040548-KU-IRB-17-181-A-2).
---
Rebuttal Comment 1.1:
Title: To Authors
Comment: Thank you for your thorough and detailed responses to my concerns. I appreciate the effort you have put into addressing the points raised. Based on your clarifications and the additional information provided, I am satisfied that my concerns have been addressed.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: We thank the reviewer for the quick response and we are glad to see that our responses addressed the concerns raised by the reviewer. We will make sure to include all the new analysis in the revised manuscript. | Summary: The paper presents a new method to model the relationship between cortical and muscular oscillations using EEG and EMG recordings. Traditional methods like Cortico-Muscular Coherence (CMC) have limitations, so the authors propose using statistical dependence estimators to learn eigenvalues, eigenfunctions, and projection spaces. This approach improves interpretability, scalability, and local temporal dependence. Experimental results show that the method accurately classifies movements and subjects, highlighting specific EEG channel activations during movements, and demonstrates robustness against noise and delays, suggesting its potential for diagnosing neuromuscular disorders and developing brain-computer interfaces.
Strengths: 1. The paper combines statistical dependence estimators with neural network optimization techniques. This fusion of methodologies enhances the ability to capture high-level and contextual connectivity between cortical and muscular oscillations.
2. The paper provides a detailed description of the proposed methodology, including the mathematical foundations, algorithmic implementation, and practical considerations. The inclusion of eigenvalues, eigenfunctions, and projection spaces adds depth to the analysis.
3. The authors conduct comprehensive experiments to validate their method. The results demonstrate the method's robustness against nonstationary noise and random delays, confirming its reliability and practical applicability.
Weaknesses: 1. Mathematical and Algorithmic Complexity: The proposed method involves complex mathematical formulations and advanced statistical techniques that may be challenging for a broader audience to grasp. Simplifying some of the mathematical derivations or providing more intuitive explanations and visualizations could make the paper more accessible.
2. Interpretation of Results: While the method highlights specific EEG channel activations during movements, the physiological and neuroscientific significance of these results could be further elaborated. Providing more detailed discussions on how these findings align with or differ from existing neuroscience research would enhance the interpretability and relevance of the results.
3. Scalability: The scalability of the proposed method to larger datasets or longer signal durations is not thoroughly addressed. Discussing the computational complexity and providing benchmarks on how the method performs with varying data sizes would be valuable.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The method identifies specific EEG channel activations during movements. Could you provide more detailed explanations or references to how these findings align with existing neuroscientific knowledge? What are the physiological implications of the identified activations, and how do they contribute to our understanding of cortico-muscular connectivity?
2. The author mention potential applications in diagnosing neuromuscular disorders and developing brain-computer interfaces. Can you provide concrete examples or case studies where your method has been or could be applied? What specific benefits or improvements does your method offer over existing approaches in these applications?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please see the Weaknesses and Questions sections above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the instructive comments. Please find below our replies to the concerns/questions.
**1. Mathematical and algorithmic complexity.** We thank the reviewer for the suggestion. We will add more explanations of our methodology to the supplementary.
**2. Interpretation of results and physiological implications of the identified activations.** In terms of interpretation, we found that the frontal central areas (FC) are most activated in the spatial-level dependence maps for most subjects. Since we used eigenfunctions that decompose the density ratio for classification, this implies that these FC areas contribute significantly to movement classification. It is reasonable for FC1 to show the most activation in the dependence map. A related previous study also found movement-related cortical potential changes on FC1-FC2 and C2 electrodes [1].
We have additionally added a frequency analysis using event-related desynchronization (ERD) in the beta band. In the general response, we addressed the differences and consistency between frequency topologies and our dependence ratio map.
**3. Scalability.** We appreciate the reviewer's suggestion. Given the limited time frame and the limited availability of paired EEG-EMG datasets, it is difficult to show scalability on new real-world datasets. Instead, we divided the dataset into smaller sub-datasets and ran experiments with increased dataset sizes to demonstrate how our framework scales up. Fig. 4(a)~4(c) in the attached letter show that: (a) the convergence speed of training errors is almost the same, with smaller datasets converging slightly faster; (b) downstream classification accuracies steadily increase when using more data; (c) dependence decreases when using more data.
We observed the expected trend in dependence scores - as more data is used, it implies greater uncertainty in the data.
The computational complexity was not particularly heavy, and we did not see it as an obstacle. We trained the model on an A5000 GPU and did not encounter significant difficulties.
**4. Potential applications.** Corticomuscular coupling is promising for quantitatively measuring movement disorders. This application has been validated in stroke populations [2, 3] and in Parkinson’s disease [4]. Therefore, the proposed dependence measure can be used as an effective and clinically relevant neural marker to evaluate the movement performance of stroke patients and to indicate Parkinson’s disease pathology. Moreover, as demonstrated in the manuscript, EEG eigenfunctions show excellent performance in movement classification. Therefore, the proposed method also has great potential for improving brain-computer interfaces. We will add this discussion to the revised manuscript.
*[1] Spring J N, Place N, Borrani F, et al. Movement-related cortical potential amplitude reduction after cycling exercise relates to the extent of neuromuscular fatigue[J]. Frontiers in human neuroscience, 2016, 10: 257.*
*[2] Chen X, Xie P, Zhang Y, et al. Abnormal functional corticomuscular coupling after stroke[J]. NeuroImage: Clinical, 2018, 19: 147-159.*
*[3] Liu J, Wang J, Tan G, et al. Correlation evaluation of functional corticomuscular coupling with abnormal muscle synergy after stroke[J]. IEEE Transactions on Biomedical Engineering, 2021, 68(11): 3261-3272.*
*[4] Zokaei N, Quinn A J, Hu M T, et al. Reduced cortico-muscular beta coupling in Parkinson’s disease predicts motor impairment[J]. Brain communications, 2021, 3(3): fcab179.*
---
Rebuttal Comment 1.1:
Title: To authors
Comment: Thank you for your effort and response. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's response. The reviewer's concerns on the previous version of the manuscript mainly pertained to the importance of this work and its consistency with physiological evidence. We feel we have addressed these comments specifically, as we briefly summarize below. If the reviewer still has concerns on these aspects, we would be truly grateful if they could kindly specify any remaining concerns, allowing us the opportunity to address them thoroughly.
Below follows a summary of how we addressed the concerns.
In terms of importance, our study measures the dependence and extracts features between two biosignals (EEG from the brain and EMG from muscles) without using any labels, yet the extracted features still capture contextual information such as participant movement. This can be highly beneficial for large, unlabeled datasets in BCI.
In terms of disorder analysis, conditions such as Parkinson's and stroke can affect cortical activation, as patients pathologically experience difficulties in planning and executing movements. The coherence between EEG and EMG signals has been widely used as an identifier for these diseases, as they can induce alterations or disruptions in neuronal pathways, leading to abnormal coherence and dependence between the signals.
In terms of neuroscientific evidence, frontal central area (FC) activation is expected. In neurophysiology, the motor cortex (central areas C3, C4) is linked to motor control. The pre-motor cortex and frontal cortex (FC) are linked to the cognitive process and motor planning. Thus FC activation matches the expectations. Additionally, we included simulated data in a controlled setting, simulating C3/C4 activation during right/left hand movements. Our dependence maps match the ground truth. | Summary: The authors apply novel but already existing (https://www.sciencedirect.com/science/article/pii/S0047259X2300074X, https://arxiv.org/pdf/2212.04631) machinery based approach on the orthonormal decomposition of density to decipher the relationship between cortical brain activity and the electromyographic signals during basic hand movements. The work is based on the publicly available dataset and the code is available. The unknown decomposition is modeled by a pair of neural networks concurrently processing EEG and EMG data in order to arrive at the internal representation for each of the modality. The internal representations are then aligned to minimize the rank of the joint covariance matrix. To guide the learning process, the authors propose a somewhat novel loss function equal to the negative trace of the canonical correlation matrix calculated using latent representations. The authors test their approach on the downstream tasks of classifying movement types in both within and across subject designs. They also apply the obtained representations to distinguish between participants based on their EEG data. The authors provide some interpretation to the obtained solution in the form of channel and temporal maps indicating the electrodes and time moments that contribute to the decoding most.
Strengths: 1. The authors applied novel but existing methodology of orthonormal density decomposition to the EEG+EMG dataset for the first time
2. The authors introduced a novel loss function and showed that it provides better performance in the downstream task of EEG-based classification of movement types
3. The authors used multi subject dataset
4. The authors attempted to provide interpretation of the obtained decision rule
5. The authors present detailed results of their experiments in the appendix
Weaknesses: 1. Several inaccuracies and lack of details in the mathematical expressions:
1.1 Line 100, last expression: an additional p(z) is needed in the integral.
1.2 equation 3 - do the eigenvalues need to be normalized? Does the sum exclude the first normalized eigenvalue?
2. I would argue against the suggested novelty of the proposed loss function as it seems like the loss function that is closely associated with the Canonical correlation analysis (CCA) (equation 4). Generally speaking, the proposed approach boils down to the CCA in the latent variable space with latents computed by means of a CNN.
3. The authors claim that they “..design a specialized network topology to generate features for individual channels and time intervals, ensuring that the internal layers of this network quantify channel-level and temporal-level features, similar to [22–24]” - however unlike for instance the EEGnet, the authors use non-linearities in the temporal network (prior to the spatial) which in my view prevents the straightforward interpretation of the obtained representations at least using simple correlational measures.
See also Q.1 and 2.
4. The authors did not validate their approach to interpreting the decision rule and obtaining spatial and temporal maps with simulated data. This needs to be done, and the simulated data should contain not only the neuronal sources coupled to the simulated EMG but also sources unrelated to the signal of interest (EMG). The authors then need to demonstrate that their method infers the proper spatial patterns corresponding to the task-relevant simulated neuronal sources. Ideally, the obtained maps should be ranked based on their importance for the overall decoding accuracy. If this is not possible within the review cycle, the authors should significantly reduce the proportion of the manuscript dedicated to the physiological plausibility of their solution and instead describe limitations related to the potentially non-physiological origin of the extracted features.
5. It is disappointing that when interpreting the decision rule the authors did not provide information regarding the EEG frequency domain their network got tuned to during the training.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Having significant experience in the domain of recording and analyzing electrophysiological data, I found the obtained maps very suspicious. While EEG electrode FC1 can indeed be implicated and be coherent with EMG, I would expect other electrodes such as C3, C5 to have some significant contribution to the EEG-derived latents that are maximally aligned with EMG. Instead, in addition to FC1 we see the involvement of peripheral electrodes and the frontal electrodes. These electrodes often lose proper contact with the skin and become sensitive to physical movements due to capacitive effects, when slight body displacements during the actual movement cause significant fluctuation in the electrode-skin capacitance and modulate the signals registered by EEG. The analysis of the frequency response of the temporal layer (see W.5) may help to resolve this potential issue.
2. In the dataset used by the authors the reference channel was located in the midline between FC1 and FC2 sensors. Such an arrangement often results in low variance of the signals located close to the reference. The spatial patterns that the authors demonstrate in Figures 5 and 11 show peaks around Fc1 and FC2. The ground electrode was located at the edge of the EEG cap between F1 and F2 and the temporal maps show them as the next best electrodes after Fc1. Could it be that within certain normalization steps the authors explicitly or implicitly divided the data by the channel variance or multiplied the data by a poorly conditioned and not properly regularized inverse covariance that the role of these electrodes got artificially inflated?
3. The authors show spatial maps for several other *selective* subjects and clusters to illustrate across subject reproducibility. What about the spatial maps corresponding to the other latent channels\clusters for SUB3?
4. Why did the authors not follow the EEGNet architecture and decided to use non-linearities between the temporal and spatial processing blocks? Avoiding nonlinearities in the front end would improve interpretability and would help to make the presentation more convincing.
5. How do the authors avoid the trivial training result, i.e. that the two networks will simply learn to generate similar EMG and EEG embeddings regardless of the input data?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: The authors partly addressed limitations but several items, see Weaknesses section, are left out.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed insightful feedback.
**1.1 Novelty over CCA.** We acknowledge the reviewer's observation that the final cost is the trace of a normalized canonical correlation matrix between two multivariate neural networks.
But we emphasize the link between cost optimization and joint density ratio $\frac{p(X,Y)}{p(X)p(Y)} = \sum_{k=1}^K \lambda_k \widehat{\phi_k} (X) \widehat{\psi_k} (Y)$, which the reviewer might have misunderstood.
Vanilla CCA uses linear models to maximize a correlation coefficient. After KICA was introduced (i.e., Kernelized CCA), the problem became minimizing matrix costs using universal approximators, with costs including log determinant, Frobenius norm, and matrix trace. Our trace cost is closer to HSIC. We have thoroughly compared our method with KICA and HSIC as baselines, both of which use kernels while we use neural nets.
Only recently, studies proved that optimizing matrix costs is equivalent to factorizing joint density ratios, including FMCA and Gaussian universal features. An informal proof: if $\widehat{\phi}$ and $\widehat{\psi}$ are two orthonormal sets, optimizing the norm of $\mathbb{E}[\mathbf{\widehat{\phi}}(X)\mathbf{\widehat{\psi}^\intercal}(Y)] = \int \mathbf{\widehat{\phi}}(X)\mathbf{\widehat{\psi}}^\intercal(Y) p(X,Y)dXdY$ decomposes the joint density ratio $\frac{p(X, Y)}{p(X)p(Y)}$ and finds its eigenfunctions.
This paper further shows that all such costs, including log determinant, Frobenius norm, and trace, are fundamentally equivalent, differing only in the convex functions applied to eigenvalues. We consider this a major novelty.
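As a numerical illustration of the cost described above, here is a toy numpy sketch. It is our simplification, not the paper's estimator: we assume the trace cost is the negative trace of the squared, whitened cross-correlation matrix (the sum of squared sample canonical correlations), computed on raw features standing in for network outputs.

```python
import numpy as np

def trace_cost(F, G, eps=1e-6):
    """Negative trace of C C^T, where C is the whitened cross-correlation
    R_F^{-1/2} P R_G^{-1/2} of features F, G of shape (n, K).
    A sketch of the idea only, not the authors' implementation."""
    n, k = F.shape
    RF = F.T @ F / n + eps * np.eye(k)
    RG = G.T @ G / n + eps * np.eye(k)
    P = F.T @ G / n

    def inv_sqrt(R):
        w, V = np.linalg.eigh(R)
        return V @ np.diag(w ** -0.5) @ V.T

    C = inv_sqrt(RF) @ P @ inv_sqrt(RG)
    return -np.trace(C @ C.T)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))
dep = trace_cost(X, X + 0.01 * rng.standard_normal((500, 4)))  # near -K
ind = trace_cost(X, rng.standard_normal((500, 4)))             # near 0
print(dep < ind)  # True
```

Strongly dependent feature pairs drive the cost toward $-K$ (all canonical correlations near 1), while independent pairs leave it near zero, which is why minimizing the cost aligns the two networks with the density-ratio eigenfunctions.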
**1.2 More discussions about novelties.** Clarifying the question on $\mathbf{z}$: it is the variable making $\mathbf{X}$ and $\mathbf{Y}$ conditionally independent, such as movement types, not the network's latent features that approximate eigenfunctions. Lemma 1 explains why eigenfunctions can be used for downstream tasks. Another novelty is analyzing dependence within the network for temporal and spatial activation.
**1.3 Normalization.** Eigenvalues of $\frac{p(X,Y)}{p(X)p(Y)}$ are well-regularized and do not need normalization: (1) all eigenvalues are positive and bounded by 1; (2) $\lambda_1=1$ regardless of dependence; (3) independence holds iff there exists only one non-zero eigenvalue. This is due to the implicit normalization of dividing $p(X,Y)$ by $p(X)p(Y)$, corresponding to the cost's normalization $\mathbf{R}_F^{-\frac{1}{2}} \phi$ and $\mathbf{R}_G^{-\frac{1}{2}} \psi$.
The normalization in the costs also enforces orthonormality constraints on the features, so the outputs cannot collapse to constants (the trivial solution).
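These three properties can be checked on a toy discrete example (our illustration, not the paper's estimator): for discrete variables, the singular values of $D_x^{-1/2} P_{XY} D_y^{-1/2}$ play the role of the density-ratio eigenvalues.

```python
import numpy as np

def dependence_spectrum(Pxy):
    """Singular values of D_x^{-1/2} P_xy D_y^{-1/2} for a discrete
    joint distribution Pxy; these mirror the eigenvalues of
    p(X,Y)/(p(X)p(Y)) (toy check, not the paper's method)."""
    px = Pxy.sum(axis=1)          # marginal p(X)
    py = Pxy.sum(axis=0)          # marginal p(Y)
    M = Pxy / np.sqrt(np.outer(px, py))
    return np.linalg.svd(M, compute_uv=False)

s_dep = dependence_spectrum(np.array([[0.4, 0.1], [0.1, 0.4]]))  # dependent
s_ind = dependence_spectrum(np.outer([0.5, 0.5], [0.3, 0.7]))    # independent
print(s_dep)  # [1.0, 0.6]
print(s_ind)  # [1.0, 0.0]
```

In both cases the top value is exactly 1, all values lie in [0, 1], and the independent joint has only one non-zero value, matching properties (1)-(3).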
**2.1 Simulated dataset.** We have added additional experiments with simulated dataset, shown in Fig. 3 in the general response.
We simulated EEG and EMG signals for left/right motor and sensory activations in 20 subjects using EEGSourceSim. Motor sources were used to simulate the corresponding EMG signals. We trained the networks on paired EEG-EMG samples from 16 subjects' data as training samples, and visualized the spatial-level dependence maps for the remaining 4 subjects as test samples. Then we plotted ground truth brain activations calculated from motor ROI and forward matrices.
The spatial-level dependence is highly similar to ground truth activations, indicating the learned ratio captured real brain activations.
**2.2 Dependence shows major activations around FC areas.** We have added a frequency analysis using event-related desynchronization (ERD) in the beta band (Fig. 2) and complete spatial dependence maps of our method (Fig. 3), both for Subject 3.
The frequency topology shows that left-central areas (e.g., C3) are commonly activated across sessions and movements while frontal-central (FC) and central-parietal areas have unique patterns for each movement. In comparison, our dependence maps show the most activated areas around FC areas.
We argue that the difference in activation occurs because we are quantifying ***nonlinear dependence***, not correlations. It has been experimentally proven that the eigenfunctions of this dependence yield high accuracies in downstream classification tasks, which means that the activation maps should show the areas that make each movement most distinguishable. Thus, the frontal central and central parietal areas are activated in the dependence maps. In the frequency topologies, C3 is activated for all movements, but it cannot serve as an identifier for individual movements. This explains the differences between our maps and the frequency maps. More details can be found in the general response.
The public dataset used has been validated before publication. No activation or deactivation was found on peripheral or frontal electrodes in the frequency analysis-based topographies.
**2.3 Electrode arrangement and normalization steps.** The electrode arrangement is reliable and has been used in commercial equipment. In EEG analysis, if the target electrode is near the reference, a weighted average filter will be employed to enhance the signal. Our study instead had no clear target before analysis. Therefore, we applied an average reference across electrodes, which is commonly used in EEG analysis. During preprocessing, all EEG channels were normalized to the first sample's maximum amplitude. Trials exceeding an absolute amplitude of 5 were discarded. No channel-wise normalization was used that could affect the variance.
**3.1 Choice of nonlinearity differs from EEGNet.** In EEGNet paper, the authors found that nonlinear activations did not improve performance, which is why they chose linear functions, thus making it a matter of preference.
Although linearity may help prevent overfitting in cases of long signal durations, our experiments showed no significant overfitting. Using ReLU or Sigmoid may provide a more stable approximation when computing cross-layer ratios, as the ratios should be non-negative and bounded.
---
Rebuttal Comment 1.1:
Comment: Thanks for the effort and explanation - I did not say you did linear CCA. The real data result is still meaningless. I will keep my rating as is.
---
Reply to Comment 1.1.1:
Comment: We would like to thank the reviewer for the engagement in this dialogue. We hope that we have sufficiently addressed the concerns regarding our algorithm. We understand that the remaining concerns are related to the interpretation of dependence maps. We want to emphasize that since our results are based on nonlinear dependence analysis between modalities, EEG and EMG, we should expect that the localization differs from using frequency analysis for EEG alone or correlation analysis. Indeed, our results incorporate two sources of information by estimating the joint density ratio between the two: if there is coupling, we should observe a better localization. We have made efforts, including frequency analysis, to ensure that this difference is not caused by electrode movements or normalization. Thus we are confident that the observed activation of premotor areas is caused by the EEG-EMG dependence.
We thank the reviewer for the thoughtful analysis and look forward to pursuing this line of thought to further validate our hypothesis. | Summary: This paper introduces a novel approach to analyzing cortico-muscular connectivity using statistical dependence measures based on density ratio decomposition. The authors apply a method called Functional Maximal Correlation Algorithm with Trace cost (FMCA-T) to paired EEG and EMG recordings. The key idea is to decompose the density ratio between EEG and EMG signals into eigenvalues and eigenfunctions, which can capture important contextual information that affects the EEG-EMG dependency such as type of movement or subject without having them labeled. They also use the learned eigenfunctions as feature projectors and train a classifier on top for movement type classification tasks.
The authors test their approach on simulated data (SinWav) and a real EEG-EMG dataset with 25 subjects performing 11 different upper limb movements. They compare FMCA-T against several baseline methods for dependence estimation and classification. They find that the learned eigenfunctions capture factors such as movement type and subject identity. Further, FMCA-T outperforms the baselines, for example by 10% for cross-subject classification of arm-reaching, hand-grasping and wrist-twisting.
Strengths: * very sophisticated method with clear motivation
* original idea cleanly mathematically derived (as far as I can tell)
* produces good results
Weaknesses: * classification baselines, could be stronger, e.g. by also using [EEG Conformer](https://pubmed.ncbi.nlm.nih.gov/37015413/) and [Deep4](https://onlinelibrary.wiley.com/doi/full/10.1002/hbm.23730)
* text is very dense at times; I definitely found some parts hard to read, but not sure how much it can be made easier; possibly you could explain some concepts used in 2.2 in more detail in the supplementary
Technical Quality: 3
Clarity: 3
Questions for Authors: "In scenarios where X and Y are statistically independent, all eigenvalues are zero" > doesn't this make the density ratio 0 then? shouldn't the density ratio be 1 if they are independent?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors could discuss a bit more under what conditions the assumption of conditional independence may be problematic and when it is fine for EEG/EMG. In terms of what one may expect to see in analyses as performed here under different conditions.
Of course, evaluation on further datasets would also be helpful for this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments.
**1. Classification baselines could be stronger.** We have now added EEG-Conformer (available on GitHub) and Deep4 (from the braindecode repository on GitHub) as new baselines in the general response letter. The results show that FMCA-T surpasses the added baselines in almost all tasks, with Deep4 slightly ahead in cross-subject 11-movement classification.
**2. Independence criterion with the eigenvalues.** The reviewer correctly pointed out our mistake. A more precise argument would be: two variables are independent (i.e., $p(X, Y) = p(X)p(Y)$) if and only if the spectrum has a single positive eigenvalue $\lambda = 1$.
Define the linear operator $\mathcal{L}$ by $\psi := \mathcal{L}\phi = \int \frac{p(X,Y)}{p(X)p(Y)} \phi(X) \, d\mathbb{P}_X$, where $\phi$ is a function of $X$ and $\psi$ a function of $Y$. It can be verified that $\phi=1$ yields $\psi=1$, making it an eigenfunction of the operator with eigenvalue $\lambda=1$. Two variables are independent if and only if the only positive eigenvalue is this trivial one. All eigenvalues of the operator are non-negative and bounded by $1$, and more than one positive eigenvalue implies dependence. We will correct the error.
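This criterion is easy to check numerically for discrete distributions. In the sketch below, we take the eigenvalues to be the squared singular values of the "whitened" joint matrix $S_{ij} = p(x_i, y_j)/\sqrt{p(x_i)\,p(y_j)}$; this convention is our assumption for illustration, not code from the paper:

```python
import numpy as np

def dependence_spectrum(joint):
    """Eigenvalues of the density-ratio operator for a discrete joint p(X, Y),
    computed as the squared singular values of the whitened joint matrix
    S_ij = p(x_i, y_j) / sqrt(p(x_i) * p(y_j)); sorted in descending order."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    S = joint / np.sqrt(np.outer(px, py))
    return np.sort(np.linalg.svd(S, compute_uv=False) ** 2)[::-1]

# Independent joint: the only positive eigenvalue is the trivial one, 1.
eigs_indep = dependence_spectrum(np.outer([0.3, 0.7], [0.5, 0.5]))

# Dependent joint: more than one positive eigenvalue.
eigs_dep = dependence_spectrum(np.array([[0.4, 0.1], [0.1, 0.4]]))
```

For the independent joint the whitened matrix is rank one, so the spectrum is exactly $\{1, 0\}$, while the dependent joint has a second positive eigenvalue.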
**3. Applicability and limitations of conditional independence assumption.**
a. We demonstrated the system's ability to capture discrete and easily distinguishable factors like movement types, participants, and sessions. But it struggles with finer-grained factors, such as sub-movements that include arm-reaching in six directions, hand-grasping three objects, and wrist-twisting with two motions.
One possible explanation is that non-invasive techniques like EEG face more challenges in extracting finer movement information than invasive techniques. Consequently, the ratio $\rho(X,z)$ becomes trivial, and finer information may not be easily extracted from the dependent components between EEG and EMG.
b. Future work is needed for scenarios with a continuous conditioned variable $\mathbf{z}$. In Lemma 1, we proposed the factorization $\frac{p(X,Y)}{p(X)p(Y)} = \sum_{z\in \mathcal{Z}} p(z)\, \rho(X, z)\rho(Y, z) = \sum_{k}\lambda_k \phi_k(X)\psi_k(Y)$, which applies when $\mathbf{z}$ is discrete. It may not account for continuous variables such as the force of muscle contractions or kinematics (joint angles).
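For the discrete case, the factorization in Lemma 1 can be verified numerically. The sketch below builds a joint distribution in which $X$ and $Y$ are conditionally independent given $z$ (the lemma's setting) and checks both sides of the identity; all alphabet sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete conditioned variable z with prior p(z); X and Y are
# conditionally independent given z.
pz = np.array([0.2, 0.5, 0.3])
px_given_z = rng.dirichlet(np.ones(4), size=3)   # rows: p(x | z)
py_given_z = rng.dirichlet(np.ones(5), size=3)   # rows: p(y | z)

# Joint p(x, y) = sum_z p(z) p(x|z) p(y|z), and its marginals.
pxy = np.einsum('z,zx,zy->xy', pz, px_given_z, py_given_z)
px = pxy.sum(axis=1)
py = pxy.sum(axis=0)

# rho(x, z) = p(x | z) / p(x), and similarly for y.
rho_x = px_given_z / px[None, :]
rho_y = py_given_z / py[None, :]

# Left- and right-hand sides of the factorization in Lemma 1.
lhs = pxy / np.outer(px, py)
rhs = np.einsum('z,zx,zy->xy', pz, rho_x, rho_y)
```

Under conditional independence, each term $p(z)\rho(x,z)\rho(y,z)$ reduces to $p(z)p(x|z)p(y|z)/(p(x)p(y))$, so the sum recovers the density ratio exactly.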
**4. Density of texts.** We appreciate the reviewer's feedback and will improve the formatting.
---
Rebuttal Comment 1.1:
Comment: Thanks for your answers and thanks for the additional baselines. Keep in mind with density I was not referring so much to the formatting, more to the text itself. As said. I am also not sure how much this can be improved to be easier to read but if you find some way would be great.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their further replies. We will definitely try to reduce the density of the text in the revised manuscript, particularly in Section 2.2, as the reviewer mentioned. We have considered the following possible solutions: (1) moving reference material (such as the log determinant) to the supplementary, which will give us more space to discuss the new cost and the density ratio factorization; (2) adding more explanation of the concepts, for example, the definition of the density ratio, why it is positive definite so that its factorization exists, and a side-by-side comparison with the properties of coherence analysis; (3) we have included pseudocode in the supplementary, which we hope provides more clarification, and we can explain the motivation behind each step in Section 2.2 from an implementation perspective; (4) we will check the other sections and add more explanations where needed.
We appreciate the clarification and insights. We understand the reviewer's concern, as Sections 2.2 and 2.3 are our main proposals. We are confident that we can reduce the density of the text in the revised version. We once again thank the reviewer for the engagement in the discussion. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' feedback. The following responses address their shared concerns.
We have attached a letter containing the additional results, including the requested classification baselines, a frequency analysis of brain activations, full maps for Subject 3, maps for simulated EEG-EMG data, and model scalability with dataset size.
**1. Additional classification baselines.** We have added EEG-Conformer and Deep4 results as baselines. FMCA-T surpasses these supervised methods in most tasks, except Deep4's slight advantage in cross-subject 11-movement classification. This is shown in Table 1 of the attached letter.
**2. Additional frequency analysis.** To address the reviewer's concern regarding frequency topologies, we have performed a frequency analysis of brain activations using event-related desynchronization (ERD) in the beta band.
Fig. 1 shows results for Subject 3 across all movement types and sessions. We observed that the left central area is commonly activated across all movement types, frontal-central (FC) and central-parietal areas have unique patterns for each movement, and no apparent activation or deactivation was found in peripheral or frontal electrodes.
This is consistent with conventional EEG analysis and validates the dataset.
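For concreteness, a beta-band ERD computation of the kind used above can be sketched as follows; the filter order, band edges, and window conventions are illustrative assumptions rather than the exact analysis pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def beta_erd(trials, fs, baseline, window, band=(13.0, 30.0)):
    """Illustrative beta-band ERD per channel.

    trials: array of shape (n_trials, n_channels, n_samples).
    baseline / window: (start, stop) sample-index pairs.
    Returns ERD in percent (negative values = desynchronization)."""
    # Zero-phase band-pass filter in the beta band.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    power = filtfilt(b, a, trials, axis=-1) ** 2
    power = power.mean(axis=0)                       # average over trials
    p_ref = power[:, baseline[0]:baseline[1]].mean(axis=-1)
    p_act = power[:, window[0]:window[1]].mean(axis=-1)
    return 100.0 * (p_act - p_ref) / p_ref
```

A channel whose beta power drops during the activity window relative to baseline yields a negative ERD, which is what an activation in the conventional topography reflects.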
**3. Dependence map interpretation.** We have added dependence maps for all nine clusters from Subject 3 in Fig. 2, as requested by the reviewer. FC areas are consistently activated in most maps, matching our main paper.
To explain why the activated areas (FC) may differ from the frequency topologies (left central), note that we are measuring nonlinear dependence, not correlations. Eigenfunctions of this dependence have been shown to yield high accuracy in downstream classification tasks, so we expect the activated areas in the dependence map to contribute significantly to classifying and identifying movements and participants, thus leading to FC activation.
Frequency topologies potentially support this argument. Since C3 is activated across all movement types, it may not be an effective identifier for distinguishing individual movements. Instead, FC has unique activation patterns for each movement.
**4. Additional results on simulated EEG-EMG dataset.** We have added results from a simulated EEG-EMG dataset to validate the spatial-level dependence map. We simulated EEG and EMG signals for left/right motor and sensory activations in 20 subjects using EEGSourceSim. Motor sources were used to simulate the corresponding EMG signals. As shown in Fig. 3 in the letter, the spatial-level density ratio learned from the dataset shows similar activation patterns to the ground truth motor activations computed from motor ROI and forward matrices.
Pdf: /pdf/76774e7edfeefafefb5e555d30b9bc61477d0983.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GaussianMarker: Uncertainty-Aware Copyright Protection of 3D Gaussian Splatting | Accept (poster) | Summary: This paper proposes GaussianMarker, a novel method for embedding invisible watermarks into 3D Gaussian Splatting (3DGS) models to protect their copyright. The key idea is to use uncertainty estimation to add imperceptible perturbations to 3D Gaussian parameters with high uncertainty. The method enables extraction of copyright messages from both the 3D Gaussian parameters and rendered 2D images, and demonstrates robustness to various 3D and 2D distortions. Experiments on multiple datasets show the effectiveness of the approach in terms of message decoding accuracy and visual quality preservation.
Strengths: * Timely contribution addressing copyright protection for 3D Gaussian Splatting models, an increasingly important 3D asset format
* Clever use of uncertainty estimation to guide watermark embedding in a way that preserves visual quality
* Demonstrates robustness to various 3D and 2D distortions/attacks
Weaknesses: * The decoder is trained per scene, rather than being a generalizable decoder. This makes the watermarking process essentially impractical for real-world use. It's not feasible for people to store a separate watermark encoder and decoder for each scene for the vast number of Gaussians distributed across the internet. Reflecting on the logic of image watermarking, a single watermark encoder and decoder can encode and decode information for any cover image, so the sender and receiver only need to jointly possess one watermark decoder. This is a more reasonable setup.
* Experiments focus mostly on relatively simple scenes - more complex, dynamic scenes could be challenging
* The robustness to more sophisticated attacks (e.g. adversarial perturbations) is not explored
* Discussion of potential negative impacts of the technology could be expanded
Technical Quality: 3
Clarity: 3
Questions for Authors: My primary concern with this paper stems from a fundamental physical challenge: How can a digital watermark be embedded into the 3D Gaussian Splatting (3DGS) representation of a scene in such a way that it can be reliably decoded from any viewing direction?
The volume rendering process that converts 3DGS representations into 2D images is designed to produce geometrically consistent views based on the camera pose. However, the requirement to embed and extract a watermark from arbitrary viewpoints seems to conflict with this underlying principle.
One potential resolution to this contradiction could be as follows: Rather than directly encoding the watermark itself into the 3D representation, the method might embed a geometrically consistent signal that can be detected by a trained network D. This signal could then trigger the generation or retrieval of the actual watermark (be it an image, audio, or text), which has been memorized by the detector D during the training process.
This hypothesis aligns with the paper's description of F as a detector/classifier rather than a decoder. It also explains the need for a separate classification module to guide whether the detector should produce the stored watermark data.
However, this interpretation raises several questions:
* How does the method ensure that the embedded signal remains detectable across different viewing angles and rendering conditions?
What is the information capacity of this approach, and how does it compare to traditional digital watermarking techniques?
* How robust is the embedded signal to various forms of 3D transformations or edits to the 3DGS model?
* Is there a trade-off between the strength of the embedded signal and the visual quality of the rendered images?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have included a brief limitations section that acknowledges potential vulnerabilities to malicious attacks beyond their technical solution. However, this discussion could be expanded to consider more specific limitations of the approach, such as potential challenges with very complex scenes or highly dynamic content. The societal impact is briefly mentioned, focusing on the positive aspects of copyright protection. A more thorough examination of potential negative impacts or misuse scenarios would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer BKx3,
We will address your comments below and in the revised paper.
### Weakness:
> W1: Generalizability of decoders.
We agree that generalization ability is very important for practical use. In our design, we employ a two-layer protection approach to achieve both generalizable and scene-specific protection, with two decoders in 2D and 3D, respectively. The 2D decoder is **never trained per scene**: it is pre-trained and can be used across images rendered from different 3DGS models, providing protection for the **vast number of Gaussians distributed across the internet**. Our second decoder operates in 3D and is trained per scene; it is the second protection wall, offering more specific protection for 3D assets. The 3D asset owners can train their 3D decoder per scene based on the scene's properties. We believe such two-layer protection, combining general and specific safeguards, makes 3D assets safer.
> W2: More complex, dynamic scenes.
In Table 1 of our main paper, we present evaluations of our method across three datasets, including MipNeRF360, widely regarded as the most challenging static-scene dataset. To address your concerns, in this rebuttal we further test our watermarking strategy on Simplicits [1], a recent framework for simulating complex motions. Here, we only need to modify the distortion layers in the original design to incorporate various geometry distortions. As shown in the **left part of rebuttal Figure 4**, our method remains robust in these new and complex scenarios. This also partly shows that our approach can be extended to dynamic 3DGS scenarios.
> W3: Adversarial perturbations.
Following your suggestions, we have designed more experiments to evaluate adversarial robustness. A malicious user can employ an adversarial attack, such as PGD, to compromise hidden messages in rendered images. As shown in the **right part of rebuttal Figure 5**, adversarial attacks can indeed reduce bit accuracy, even with minimal visual distortion (PSNR > 30). However, such attacks can be defended against via adversarial training, i.e., by generating adversarial samples during training of the HiDDeN decoder. With adversarial training of the 2D message decoder, the bit accuracy is significantly improved, while image quality is preserved with a PSNR above 30.
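For reference, an $L_\infty$ PGD attack of the kind discussed here can be sketched as follows. The decoder below is a stand-in that maps an image batch to message logits; it is not the actual HiDDeN model, and the step sizes are illustrative:

```python
import torch
import torch.nn.functional as F

def pgd_attack(decoder, image, message, eps=0.03, alpha=0.01, steps=10):
    """Sketch of an L-infinity PGD attack on a watermark decoder.

    Maximizes the bit-decoding loss so the decoder misreads the hidden
    message, while keeping the perturbation within an eps-ball."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.binary_cross_entropy_with_logits(decoder(adv), message)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # ascend to corrupt bits
            adv = image + (adv - image).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0, 1)                         # keep a valid image
    return adv.detach()
```

Adversarial training then simply mixes such perturbed renders into the decoder's training batches.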
> W4: Potential negative impacts.
Although our method demonstrates robustness against many sophisticated distortions, additional legislative strategies should be implemented to combat such malicious attacks beyond technological measures. We will explore extending our work by integrating legislative actions to provide comprehensive copyright protection for model owners.
### Questions
If our understanding is accurate, BKx3’s primary concern is whether our watermarks can maintain geometric consistency, which is essential for robust extraction from different viewing angles. Our specific designs have equipped our digital watermarks with such consistency. Our method embeds watermarks into the 3DGS model by adding perturbations to 3D Gaussians with high uncertainty.
As shown in Figure 3 of our main paper and supplementary material, these areas cover most object boundaries in the 3DGS scene, regardless of viewing angle. **Rebuttal Figure 6** further illustrates geometry consistency by displaying the incorporated perturbations. These perturbations effectively cover the scene's general geometric structure. The geometry of these perturbations remains consistent across different camera angles and can be transmitted into the rendered images, enabling detection by the 2D message decoder.
The scenarios envisioned by BKx3 can be achieved by some classical information-hiding techniques [2], which use specific strategies to hide images or even videos in the target data and can employ a specific decoder to extract such hidden data. We try to address the newly raised questions related to this envisioned scenario below:
> Viewing angles, information capacity, and comparison to traditional watermarking.
During training, careful selection of training and testing data guarantees that camera angles cover most scene perspectives, enabling successful watermark detection from various viewpoints. The information capacity relies on the model parameters of the decoder since this approach is conducted in an over-fitting manner. Compared with traditional watermarking, the envisioned method can decode different types of watermark data, but it relies on a specific decoder.
> The 3D robustness.
Information hiding seldom considers robustness. Moreover, the envisioned scenario is essentially a fine-tuning approach, meaning the signals are embedded by fine-tuning the original model. In this situation, as Table 2 of the main paper suggests, such a fine-tuning strategy may show degraded 3D robustness.
> Trade-off.
As discussed in **rebuttal Table 2** and in previous methods [2], stronger 3D perturbations can enhance watermark decoding accuracy but may degrade image quality. Thus, a similar trade-off should also hold for the envisioned scenario.
However, we note that our method targets digital watermarking, which is a different topic from information hiding despite some correlation between them. We will investigate how to incorporate your suggestions into our future work and would be happy to discuss this point with you during the discussion session.
> Limitations
We will further incorporate more limitations and potential impacts into our final version by considering the valuable suggestions from each reviewer.
[1] Simplicits: Mesh-Free, Geometry-Agnostic, Elastic Simulation.
[2] StegaNeRF: Embedding Invisible Information within Neural Radiance Fields.
---
Rebuttal Comment 1.1:
Comment: >W1: Generalizability of decoders.
>>We agree that the generalization ability is very important for practical use. In our designs, we employ a two-layer protection approach to achieve generalizable and specific protection. We have two decoders in 2D and 3D, respectively. The 2D decoder is never trained per scene for generalization ability. In our setting, it is pre-trained and can be used across images rendered from different 3DGS models. This can provide protections for vast number of Gaussians distributed across the internet. Our second decoder is in 3D and is trained per scene. This is the second protection wall for 3D assets with more specific protection. The 3D asset owners can train their 3D decoder per scene based on the scene property. We believe such two-layer protection with both general and specific protection can make 3D assets safer.
Thank you for your insightful response. It appears that we may have a slight misunderstanding regarding the concept of generalizability. While using a common decoder to fit all previously encountered scenes is certainly feasible, this approach doesn't necessarily imply true generalization. Genuine generalization refers to the applicability of a model to unseen scenarios.
To illustrate this point, let's consider two analogous examples:
1. In 2D steganography on images, the encoder and decoder should be capable of embedding and extracting watermarks on carrier images that were not seen during training.
1. In machine learning, generalization typically refers to performance on unseen domains and data, rather than maintaining good performance across numerous seen domains.
This distinction raises an important question: How well does the proposed method perform on entirely new scenes or objects that were not part of the training set?
True "generalization" in this context would involve the ability to apply the watermarking technique to novel 3D scenes or objects without requiring retraining or significant adaptation. This capability would demonstrate the robustness and versatility of the approach, making it more applicable in real-world scenarios where encountering new and diverse 3D environments is common.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer BKx3,
Thank you very much for your thoughtful comments and for taking the time to engage in this discussion. We greatly appreciate your insights and the opportunity to clarify our work.
We will incorporate all your valuable suggestions about the generalization ability into our final version. That would be a very interesting part for the future work in this area. As the discussion has come to its end, may we know whether our response has fully addressed your concerns? If possible, we would be very grateful if you can raise your score.
Thanks again for your valuable feedback during the rebuttal and discussion.
Best Regards,
Authors
---
Rebuttal 2:
Comment: Thanks for pointing out this. We agree that a general encoder-decoder framework in image watermarking is indeed highly desirable. However, the unique nature of 3D neural representations presents distinctions that require a different approach when compared to image watermarking.
In 3D neural representations, trainable parameters or networks are utilized to represent 3D scenes. This fundamental difference makes it challenging to directly apply the encoding techniques used in image watermarking to encode those parameters or networks. Consequently, some changes are necessary for watermarking neural representations.
These changes inevitably lead to certain optimizations during the message embedding for neural representations like previous CopyRNeRF for NeRF or our approach for 3DGS. We agree that this may potentially impact the generalization ability. To address this, our current strategy focuses on minimizing the time costs of the message embedding. By reducing the time required for embedding, we can partly mitigate the issues arising from these additional optimizations.
We have conducted extensive experiments here. Our results demonstrate that our method achieves significantly shorter message embedding times, whereas message embedding in established pipelines for neural representations can take about 70 hours. This improvement not only enhances efficiency but also helps to preserve the generalization capability of the watermarking technique.
| Datasets | 3DGS training | Our message embedding |
| :----------| :-------------------------------: | -----: |
| Synthetic datasets (Blender) | 30k steps / 30mins | 1k steps / 3 mins |
| Real-world scenes (LLFF) | 30k steps / 45mins | 1k steps / 5 mins |
Besides, we point out that the images used for training the 2D message decoder are all from the COCO dataset, which has no overlap with the images used to optimize the 3DGS models. Our 2D message decoder has therefore never seen the images used to optimize the 3DGS models and can serve as a generalized message decoder on any novel 3D scene.
What you mention is very insightful. We will incorporate those suggestions into the future work of our final version.
---
Rebuttal Comment 2.1:
Comment: @Reviewer BKx3, authors provided new comments for your questions. could you please check whether they have addressed your concerns or not. thanks!
---
Rebuttal Comment 2.2:
Comment: Thank you for your response. A quick follow-up question: why the training time you provide for 3DGS training: 30k steps / 30mins | 30k steps / 45mins is so big?
---
Reply to Comment 2.2.1:
Comment: Dear Reviewer BKx3,
Our 3DGS training follows the original 3DGS paper [1], which takes 30k steps by default. In our experiments, the typical training times required for training 3DGS ranged from 30 to 45 minutes. The training time can also be found in the original 3DGS paper [1] Table. 1, and for the 30k steps model it also ranges from about 30 to 45 minutes.
Users can also select 7k steps for faster training time, and the optimization can also finish faster since the 7k steps 3DGS model contains fewer parameters and can be trained faster. No matter whether the user uses a 7k or 30k steps 3DGS model, our method can all be applicable to these existing 3DGS models and be optimized within minutes for watermarking purposes.
[1] 3D Gaussian Splatting for Real-Time Radiance Field Rendering.
Best regards,
Authors | Summary: 3D Gaussian Splatting(3DGS) has gradually become the mainstream method for acquiring 3D assets, which has led to a demand for copyright protection of 3DGS. In this paper, a watermarking method based on uncertainty called GaussianMarker is proposed. Firstly, 3DGS is partitioned based on uncertainty, and the watermark is only added to the model parameters with high uncertainty. Subsequently, the corresponding parameters are perturbed using both 2D and 3D watermark encoders, enabling the extraction of watermark information from rendered 2D images as well as directly from 3D model parameters. Experimental results demonstrate the robustness of the proposed GaussianMarker method against 2D and 3D distortions.
Strengths: 1. The paper proposes a method that utilizes uncertainty to partition 3D Gaussian. By embedding watermarks specifically in the parameters with high uncertainty, the method aims to mitigate the impact on the quality of the model.
2. The paper considers the extraction of watermarks in both 2D and 3D scenarios, taking into account the robustness of watermark extraction in these two contexts.
Weaknesses: 1. The paper mentions that the calculation of uncertainty is related to the model parameters, and in 3D Gaussian, each point has multiple parameters such as $\mu, R, S, c, and \alpha$. It would be helpful if the authors could clarify which specific parameters are used in the proposed method. Additionally, the paper provides a formula for calculating model uncertainty, but it is unclear how the uncertainty of each Gaussian is computed and used for partitioning. The authors should provide further explanation or clarification on this matter.
2. The description of the densify function $g(\cdot)$ in the paper states that it randomly samples a new position from a distribution. According to my understanding, the original Gaussian $G_i$ should have been replaced. However, Figure 2 shows that the original Gaussian $G_i$ still exists, which is confusing to me.
3. During the watermark embedding process, it is unclear whether the 2D and 3D watermarks are embedded into the same model parameters. It would be helpful if the authors could clarify which specific model parameters of the 3D Gaussian are used for embedding the watermarks.
4. In the section on "Distilling watermarking knowledge," the authors mention that "the pre-trained feature from 2D space can be distilled to the 3D space." It is important for the authors to provide an explanation of how this is achieved.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In the experimental section, the authors present four baseline methods. How do 3DGS with message and 3DGS with fine-tuning extract messages.
2. Four types of 3D editing methods are listed in the experiment, which parameters of 3DGS are affected by these distortions?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer VdSM,
Thank you for your valuable feedback and constructive comments. We will address your comments below and in the revised paper.
### Weakness
> W1: Parameters for uncertainty calculation and partitioning.
As we have mentioned in the main paper (Section 4.1), the model parameters $\theta$ are used to compute the uncertainty, including all Gaussian parameters $\mu$, $R$, $S$, $c$, and $\alpha$. The uncertainty is estimated by calculating the Hessian matrix as an approximation of the Fisher information. This process can be simplified by computing the gradient for each Gaussian (main paper, lines 148-152) as $\mathbf{H}\left[\mathbf{I} \mid \mathbf{V}, \boldsymbol{\theta}^*\right]=\nabla_{\boldsymbol{\theta}} f\left(\mathbf{V} ; \boldsymbol{\theta}^*\right)^T \nabla_{\boldsymbol{\theta}} f\left(\mathbf{V} ; \boldsymbol{\theta}^*\right)$. Since Fisher information is additive (main paper, lines 153–155), the uncertainty of each Gaussian can be summed by adding the uncertainty of each parameter: $\mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta}^*] = \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{\mu}}] + \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{R}}] + \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{S}}] + \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{c}}] + \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{\alpha}}]$. The high uncertainty 3D Gaussians are partitioned based on the Equation (7).
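The additive per-Gaussian uncertainty can be sketched numerically as follows. Since Equation (7) is not reproduced in this reply, the quantile threshold below is a stand-in for the paper's partition rule, and the array shapes are illustrative:

```python
import numpy as np

def gaussian_uncertainty(grads, threshold_quantile=0.9):
    """Per-Gaussian uncertainty as the sum of squared gradients over all
    parameter groups (the diagonal of H = J^T J), using the additivity of
    Fisher information. The quantile threshold stands in for the paper's
    Equation (7), which is not reproduced here.

    grads: dict mapping parameter names ('mu', 'R', 'S', 'c', 'alpha') to
    per-Gaussian gradient arrays of shape (n_gaussians, dim)."""
    per_gaussian = sum((g ** 2).sum(axis=1) for g in grads.values())
    high = per_gaussian >= np.quantile(per_gaussian, threshold_quantile)
    return per_gaussian, high
```

Each parameter group contributes its own squared-gradient term, mirroring the sum $\mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{\mu}}] + \dots + \mathbf{H}[\mathbf{I}|\mathbf{V},\mathbf{\theta^*_{\alpha}}]$ above.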
> W2: The densify function $g(\cdot)$.
As we have mentioned in the main paper (lines 163-170), we retain the integrity of the original 3D Gaussians, denoted as $\mathcal{G}$. We densify the 3D Gaussians with high uncertainty, and the newly added 3D Gaussians are our proposed 3D perturbations, denoted as $\mathcal{\tilde{G}}$. We embed $\mathcal{\tilde{G}}$ into $\mathcal{G}$ to get the watermarked Gaussians: $\mathcal{\hat{G}} = \mathcal{G} \cup \mathcal{\tilde{G}}$. We keep the integrity of the original Gaussians $\mathcal{G}$ and add the 3D perturbations $\mathcal{\tilde{G}}$ on top, similar to how 2D watermarking methods apply an invisible perturbation message to the original cover image: $x_{w} = x_{o} + \delta$ (main paper, lines 178-180).
> W3: It is unclear whether the 2D and 3D watermarks are embedded into the same model parameters
The 2D and 3D watermarks are both embedded into the same model parameters. As mentioned in lines 207-210 of the main paper, our training contains two phases. In the first phase, we distill the messages into the model parameters. After this distillation, the messages can already be extracted from the 2D rendered images. In the second phase, we only train the 3D message decoder to ensure that the messages embedded in the first phase can be directly extracted from the 3D assets.
> W4: Distilling watermarking knowledge.
As we have mentioned in lines 211-213 of the main paper, previous methods have shown that 2D knowledge can be distilled into 3D radiance fields via additional settings. In our design, we distill the 2D knowledge from a pre-trained HiDDeN decoder into the Gaussian parameters as the embedded watermarks. Based on our proposed two training phases, this distilled knowledge can be extracted from both the 2D and 3D domains.
### Questions
> Q1: Message extraction methods
In our experimental setting, for a fair comparison, the “3DGS with message” and “3DGS with fine-tuning” methods all use the same 2D and 3D message decoders to extract messages. The 2D message decoder is a pre-trained HiDDeN decoder and extracts messages from the watermarked rendered images (main paper, lines 176–177). The 3D message decoder is based on the PointNet architecture and extracts messages from the watermarked 3D Gaussians (main paper, lines 196–197).
> Q2. Four types of 3D editing methods are listed in the experiment, which parameters of 3DGS are affected by these distortions?
The four types of 3D editing (3D Gaussian noise, translation, rotation, crop-out) occur in 3D space and modify the positions $\mu$ of the 3D Gaussians, while the other Gaussian parameters remain unchanged. We treat the 3D Gaussian mean $\mu$ as the point position and the other parameters as the associated point features (main paper, lines 193-196), so that the PointNet-based 3D message decoder can extract the hidden messages directly from the 3D Gaussians even when they are spatially transformed or distorted.
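The decoder input described here can be sketched as follows. This is an illustration of ours: the attribute dimensions (quaternion rotation, per-axis scale, RGB color, scalar opacity) are assumptions, not the paper's exact parameterization.

```python
import numpy as np

def gaussians_to_pointnet_input(G):
    """Treat the Gaussian mean mu as the point position and concatenate the
    remaining attributes as per-point features, so a PointNet-style decoder
    can consume the Gaussians directly, even after spatial transforms of mu."""
    positions = G["mu"]  # (N, 3) point positions
    features = np.concatenate([G["R"], G["S"], G["c"], G["alpha"]], axis=1)
    return positions, features

N = 50
rng = np.random.default_rng(1)
G = {"mu": rng.normal(size=(N, 3)), "R": rng.normal(size=(N, 4)),
     "S": rng.normal(size=(N, 3)), "c": rng.normal(size=(N, 3)),
     "alpha": rng.random(size=(N, 1))}
pos, feats = gaussians_to_pointnet_input(G)  # pos: (50, 3), feats: (50, 11)
```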
---
Rebuttal Comment 1.1:
Title: Some questions
Comment: Thank you for your careful response. Combining it with the feedback from other reviewers, I still have some questions.
1. Does the watermark in question amount to introducing an optimizable perturbation in a 3D scene (a set of densified high-uncertainty Gaussians)? Regardless of the 2D or 3D phase, are all parameters of these Gaussians optimized ($\mu$, $R$, $S$, $c$, and $\alpha$)?
2. I am unclear on how the 2D information is distilled. From my understanding, the scene is rendered from a viewpoint, a watermark is then added to the rendered image, and the corresponding Gaussian parameters are updated using gradients. Is this correct?
3. Regarding the four 3D editing methods mentioned, the authors stated that only $\mu$ was modified. However, as per my understanding, some existing 3D Gaussian editing methods, such as Gaussianeditor[1], modify all parameters of the Gaussian. In this case, does the PointNet-based 3D message decoder proposed in this paper become ineffective?
[1] Chen, Yiwen, et al. "Gaussianeditor: Swift and controllable 3d editing with gaussian splatting." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind reply; we are glad to discuss the technical details further:
1. The optimizable perturbations.
Yes, our proposed method introduces an optimizable perturbation $\tilde{\mathcal{G}}$, which is added to the original Gaussians $\mathcal{G}$ to obtain the watermarked Gaussians $\hat{\mathcal{G}}$ used for the 3D scene representation. The perturbation parameters in $\tilde{\mathcal{G}}$ are all optimized in the 2D phase, including $\mu$, $R$, $S$, $c$, and $\alpha$. In the 3D phase, we use the watermarked Gaussians $\hat{\mathcal{G}}$ (which contain the perturbations $\tilde{\mathcal{G}}$) optimized in the 2D phase to train the 3D message decoder.
2. The 2D watermark distilling.
Yes, you are correct! The watermark is added to the rendered image. Let us review the process of the 2D watermark distillation in our proposed framework. At the beginning we only have the original Gaussians $\mathcal{G}$; after applying our proposed uncertainty-aware perturbation based on Equations (6) and (7), we obtain the 3D perturbations $\tilde{\mathcal{G}}$ as our proposed 3D watermark, and the final watermarked Gaussians are denoted as $\hat{\mathcal{G}} = \mathcal{G} \cup \tilde{\mathcal{G}}$. Under the supervision of the pre-trained 2D message decoder, the corresponding Gaussian parameters in $\tilde{\mathcal{G}}$ are updated using the gradient in Equation (8). When the optimization is finished, the watermarked Gaussian parameters $\hat{\mathcal{G}}$ contain the watermarking information. These watermarks are then propagated to the rendered image pixels $\hat{C}$ through the rendering defined in Equation (2).
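The gradient flow in this loop can be illustrated with a toy numeric sketch of ours. The linear "render" and frozen linear "decoder" below are stand-ins (not the paper's differentiable rasterizer or HiDDeN decoder), purely to show which quantities receive updates: only the perturbation parameters are trained, supervised by a frozen decoder and a target message.

```python
import numpy as np

rng = np.random.default_rng(0)
render = rng.normal(size=(6, 8))    # toy stand-in: perturbation params -> pixels
decoder = rng.normal(size=(4, 6))   # frozen toy stand-in for the 2D message decoder
message = np.array([1.0, -1.0, 1.0, -1.0])  # target bits as +/-1 logits

M = decoder @ render                       # composed linear map params -> logits
lr = 1.0 / np.linalg.norm(M, 2) ** 2       # safe step size for least squares

theta = np.zeros(8)                        # perturbation parameters (trainable)
for _ in range(20000):
    residual = M @ theta - message         # decoded logits vs. target message
    theta -= lr * (M.T @ residual)         # gradient of 0.5 * ||residual||^2

final_err = np.linalg.norm(M @ theta - message)  # decoding error after training
```

Only `theta` changes during optimization; `render` and `decoder` stay fixed, mirroring the frozen pre-trained 2D decoder in the actual pipeline.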
3. Modifying other Gaussian parameters and GaussianEditor.
Thanks for your insightful suggestions. The robustness you mention is indeed a very important aspect of digital watermarking. Since the PointNet-based message decoder is more sensitive to geometry editing, we mainly focused our evaluations on operations like translation, rotation, and crop-out. We further conduct experiments that edit all Gaussian attributes by adding Gaussian noise $n \sim \mathcal{N}(0, \sigma)$ as random perturbations; the results are shown in the table below. They indicate that our 3D message decoder remains robust when other Gaussian attributes are modified:
| Method | None | $c$ | $R$ | $S$ | $\alpha$ | All ($\mu$, $c$, $R$, $S$, $\alpha$) |
| :----------: | :------: | :----: | :------: | :------: | :------: | :-----: |
| Noise ($\sigma=0.1$) | 100% | 97.69% | 98.95% | 98.39% | 99.30%| 96.61% |
| Noise ($\sigma=0.5$) | 100% | 97.15% | 98.94% | 98.13% | 99.23%| 95.82% |
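The perturbation used in this experiment can be sketched as follows (a hedged illustration of ours; the helper name and attribute shapes are assumptions, while the $\sigma$ values mirror the table):

```python
import numpy as np

def perturb_attributes(G, attrs, sigma, rng):
    """Add Gaussian noise n ~ N(0, sigma) to the chosen attributes only,
    leaving the remaining Gaussian parameters untouched."""
    return {k: G[k] + rng.normal(0.0, sigma, size=G[k].shape) if k in attrs
            else G[k] for k in G}

rng = np.random.default_rng(0)
G = {"mu": np.zeros((20, 3)), "c": np.zeros((20, 3)), "alpha": np.ones((20, 1))}
G_c = perturb_attributes(G, {"c"}, sigma=0.1, rng=rng)     # perturb color only
G_all = perturb_attributes(G, set(G), sigma=0.5, rng=rng)  # perturb everything
```

The decoder's bit accuracy would then be measured on `G_c`, `G_all`, etc., as in the rows of the table above.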
Based on your valuable suggestions, we have further tried the modifications in GaussianEditor on the MipNeRF360 flowers scene. We considered color-editing scenarios by adding the prompts "make it blue" and "make it red". Similar to the dynamic scene situations in rebuttal Figures 4 and 5, since our distortion layers are scalable, we can achieve color robustness by adding color jittering to the distortion layers during the 2D message decoder training. As shown in the table below, our method still keeps high bit accuracy even when the scene color is edited.
| Method | Scene | None | "make it blue" | "make it red" |
| :----------:| :------: | :------: | :------: | :-----: |
| GaussianEditor | Flowers | 97.91% | 93.75% | 91.66% |
Achieving robustness under different modifications is very important. If users have specific robustness needs, they can change the distortion layers to achieve higher robustness. However, it is also very difficult to cover all distortions at the same time; making the watermarking system robust is an active research area. We will incorporate your valuable concerns and suggestions into our final version. | Summary: The paper presents a new method for embedding digital watermarks in 3D Gaussian Splatting (3DGS) models to protect the copyright of 3D assets. Traditional watermarking techniques for mesh, point cloud, and implicit radiance fields are not suitable for 3DGS, as they can cause distortions in rendered images. The authors propose an uncertainty-based approach that constrains perturbations to the model parameters, ensuring that watermarks remain invisible while preserving visual quality. The method allows for reliable extraction of copyright messages from both 3D Gaussians and 2D rendered images, even under various distortions.
Strengths: 1. The proposed method ensures that the embedded watermarks do not cause significant distortions in the rendered 3D scenes or 2D images, maintaining the visual quality of the assets.
2. The approach is designed to be robust against various forms of 3D and 2D distortions, such as noise, translation, rotation, cropping, JPEG compression, scaling, and blurring. This enhances the reliability of copyright protection.
3. The method allows for the extraction of copyright messages from both 3D Gaussian parameters and 2D rendered images, providing multiple layers of security and verification.
4. Extensive experiments demonstrate that the method achieves state-of-the-art performance in both message decoding accuracy and view synthesis quality.
Weaknesses: The malicious scenarios considered are limited to traditional distortions. \
More sophisticated scenarios should also be explored. \
For instance, a malicious actor could fine-tune the downloaded 3DGS or use an auto-encoder to remove embedded information ([1],[2],[3]). \
In such cases, how would the proposed method perform?
Additionally, a more complex scenario to consider is when a malicious actor renders Bob's 3DGS and uses it as training data to create their own 3DGS. \
How would the proposed method address these advanced threats?
[1] Fernandez et al., The Stable Signature: Rooting Watermarks in Latent Diffusion Models \
[2] Kim et al., WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models \
[3] Zhao et al., Invisible Image Watermarks Are Provably Removable Using Generative AI
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer QvjQ,
Thank you for your valuable feedback and constructive comments. We appreciate your suggestion about considering more sophisticated scenarios, and we will address your comments below and in the revised paper.
> W1: Model fine-tuning and auto-encoder attack.
Following your suggestions, we have designed experiments for both the model fine-tuning scenario and the auto-encoder attack scenario.
For the **model fine-tuning**, we consider a challenging case where attackers have direct access to the original non-watermarked images used for the creation of the 3DGS. Based on this assumption, we implement the attack by eliminating the message loss and then fine-tuning the model solely through perceptual loss.
As shown in the **left part of Figure 3 in the rebuttal paper**, the bit accuracy still remains relatively high even when such fine-tuning compromises the model quality. This indicates that model fine-tuning cannot significantly reduce bit accuracy without undermining image quality.
For the **autoencoder attack**, we consider two kinds of VAE attacks: VQ-VAE [1], and VQ-GAN [2]. We use different compression rates to alter image quality and evaluate the robustness of the watermarking.
As shown in **Figure 3 of the rebuttal paper**, the message decoding maintains relatively high accuracy and good reconstruction quality (PSNR > 31) when the two kinds of auto-encoder use a low compression rate. Under a high compression rate, these two auto-encoder attacks result in lower bit accuracy and degraded image quality. Although the attacks can undermine the watermark in the rendered images, as shown in the **right part of Figure 3 in the rebuttal paper**, such degraded image quality may lead to multiview inconsistency, making the sharing of the rendered contents difficult.
> W2: Retrain a 3DGS with Bob's rendered images.
We appreciate your insightful feedback. As shown in the table below, our experiments on retraining a new 3DGS model using Bob's rendered images reveal that, even with a slight reduction in message decoding accuracy, the fully converged retrained model can still maintain a relatively high bit accuracy.
| Method | PSNR | SSIM | LPIPS | None | Noise | JPEG | Scaling | Blur |
| :--------------| :---------: | :----------: | :--------: | :---------: | :---------: | :---------: | :---------: | -----: |
| Original | 31.97 | 0.9098 | 0.0759 | 98.94% | 98.17% | 92.41% | 97.27% | 98.20% |
| Retrained | 31.09 | 0.8844 | 0.0781 | 87.21% | 86.98% | 81.35% | 83.88% | 86.76% |
We appreciate all your valuable suggestions and look forward to discussing them further with you during the discussion session.
[1] Neural Discrete Representation Learning.
[2] Taming Transformers for High-Resolution Image Synthesis.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Including this additional work in the revised version will significantly enhance the clarity and completeness of your paper, making it more robust and comprehensive. My concerns are now resolved, and I have decided to increase my score. Thank you for your efforts.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to review our revised submission. We greatly appreciate your recognition of the additional work we've incorporated and how it has enhanced the clarity and comprehensiveness of our paper.
We're pleased to hear that our revisions have addressed your concerns. Your constructive comments help us improve the quality and robustness of our work.
---
Reply to Comment 1.1.2:
Comment: Dear Reviewer QvjQ,
Thank you once again for your thorough review and valuable feedback throughout this process. We are deeply grateful for your recognition of our additional work and how it has enhanced the clarity and comprehensiveness of our paper. We appreciate the time and expertise you've dedicated to reviewing our submission.
Sincerely,
Authors | Summary: This paper proposes an uncertainty-based method to achieve watermarking for 3D Gaussian Splatting. Specifically, the Hessian matrix is used to estimate the parameter uncertainty. Then, the 3D Gaussians with high uncertainty are densified. The densified 3D Gaussians are trained to embed watermarking using a pre-trained 2D message decoder. After that, a 3D message decoder is trained using PointNet. Experimental results show that the proposed method achieves the best performance.
Strengths: 1. This paper is well-written and easy to follow.
2. The experimental results show that the proposed method achieves new SOTA results.
3. The proposed method can decode watermarking both in 2D rendered images and 3D assets.
4. An uncertainty-based method is proposed to select trainable 3D Gaussians, which is reasonable.
Weaknesses: 1. One concern about this paper is its novelty. The major contribution of this paper is the introduction of uncertainty into 3D Gaussians watermarking. As the definition of uncertainty using Fisher Information comes from [42], simply using uncertainty for 3D Gaussians watermarking is quite simple and straightforward. Regarding the message decoders, they are all standard operations. HiDDeN [11] is used for the 2D message decoder, and PointNet [43] is used for the 3D message decoder. Therefore, the major contribution of the proposed method should be further justified.
2. The proposed method utilizes the 3D Gaussians with high uncertainty to embed watermarking. What if an attacker also uses this feature? The attacker could first identify the 3D Gaussians (after training/fine-tuning) with high uncertainty and then only attack these 3D Gaussians using techniques such as Noise, Translation, Rotation, or Cropout. Additionally, the attacker might delete some of the identified 3D Gaussians to compromise the 3DGS assets.
3. The influence of the parameter uncertainty threshold should be included in the experiments to assess the sensitivity of the uncertainty threshold on the proposed method.
4. The results with different bit lengths are missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fyQG,
We will address your comments below and in the revised paper.
> W1. One concern about this paper is its novelty.
Thanks for raising the concern. While our approach incorporates elements from classical techniques, our contributions extend beyond these traditional frameworks. Besides being the **first exploration** of protecting 3DGS via digital watermarking, our primary contributions lie in developing a framework based on **uncertainty** for embedding invisible perturbations into the 3DGS model, and in **message extraction from both 2D images and 3D Gaussians**.
**W1-1. Use uncertainty to achieve invisible watermarking of 3DGS**
One major contribution is the application of uncertainty estimation in invisible watermarking of 3DGS, an exploration that diverges from its traditional use in active learning [1]. We find that uncertainty estimation can inherently identify model parameters that are more tolerant to perturbations, making it well-suited for the purpose of invisible watermarking in 3DGS.
Without considering the uncertainty, naively applying existing watermarking strategies can cause noticeable distortions. As shown in **rebuttal Figure 1**, compared with our method, "3DGS with message" and "3DGS with fine-tuning" both show poor reconstruction quality and reduced bit accuracy. This further demonstrates the superiority of the proposed uncertainty-based strategy. Besides, the HiDDeN decoder mainly focuses on decoding information along the boundary areas. As the **left part of rebuttal Figure 2** shows, these boundary areas all have high uncertainty values, which partly demonstrates the correlation between the uncertainty and the message embedding.
**W1-2. Message extraction from both 2D images and 3D Gaussians.**
Our second contribution is a method that enables copyright message extraction from both 2D images and 3D Gaussians. Existing watermarking techniques for 3D assets (e.g., NeRF) are limited to extracting messages from 2D rendered images, as directly extracting messages from the neural networks used for scene representation in NeRF proves challenging. Consequently, owners of the 3D assets can only assert their ownership through these rendered 2D images, rather than from the underlying 3D neural representation itself. Our approach allows direct message extraction from 3D Gaussians using PointNet. This provides model owners with a novel means of claiming ownership directly from their 3D assets, rather than relying solely on rendered images. This advancement offers a more robust and versatile approach to protecting the copyright of 3DGS model owners.
> W2: The proposed method utilizes the 3D Gaussians with high uncertainty to embed watermarking. What if an attacker also uses this feature?
In response to your concern about this potential risk, we have conducted additional experiments specifically attacking 3D Gaussians with high uncertainty values to assess the robustness of our approach. We assume a scenario where an attacker directly manipulates the high-uncertainty 3D Gaussians. As demonstrated in the table below, our method maintains relatively high bit accuracy despite this attack.
This resilience partly stems from the fact that the uncertainty distribution of the watermarked 3D Gaussians can differ from that of the original 3D Gaussians, as mentioned in the main paper (lines 168–170). If the attacker performs the operation based on the original Gaussians, it remains challenging to attack the watermarked Gaussians, whose uncertainty properties have changed. Additionally, our 3D message decoder randomly samples 3D Gaussians (main paper, lines 197-199), so that even if some of the high-uncertainty Gaussians are deleted, the messages can still be extracted from other 3D Gaussians. Nevertheless, we still suggest keeping the watermarking strategy private.
| Method| Noise | Translation | Rotation | Cropout | Delete |
| :----------| :----------: | :----------------: | :------------: | :---------: | ---------: |
| Normal attack | 99.95% | 98.32% | 95.32% | 91.73% | N.A. |
| High-uncertainty attack | 90.80% | 88.75% | 84.42% | 81.97% | 77.93% |
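The random-sampling behaviour mentioned above can be sketched as follows (an illustration of ours; the helper name and the deletion attack simulation are assumptions, not the paper's code):

```python
import numpy as np

def sample_for_decoding(G, n_samples, rng):
    """Randomly subsample Gaussians for the 3D message decoder, so decoding
    does not depend on any fixed subset surviving an attack."""
    idx = rng.choice(G["mu"].shape[0], size=n_samples, replace=False)
    return {k: v[idx] for k, v in G.items()}

rng = np.random.default_rng(0)
G = {"mu": np.arange(300, dtype=float).reshape(100, 3)}  # 100 toy Gaussians
# Simulate an attacker deleting the first 30 Gaussians, then decode from survivors.
survivors = {k: v[30:] for k, v in G.items()}
batch = sample_for_decoding(survivors, n_samples=16, rng=rng)  # batch["mu"]: (16, 3)
```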
> W3: The influence of the parameter uncertainty threshold should be included.
In our experiments, we set the average uncertainty value as the default threshold. Following your suggestion, we show more results to verify the sensitivity to the uncertainty threshold. As shown in the **right part of rebuttal Figure 2**, a lower threshold enhances the bit accuracy, though it slightly compromises image quality. Conversely, a higher threshold results in better image quality and a more lightweight model, but also slightly compromises message decoding accuracy. In both situations, however, the compromises are moderate.
> W4. The results with different bit lengths are missing.
As discussed in the main paper (lines 273-275), we evaluate the message decoding capacity by setting the bit length to 48 bits, aligned with the maximum length used in 3D model watermarking methods [34, 4]. Shorter message bit lengths typically yield higher decoding accuracy. We evaluate the 16-bit and 32-bit message decoding accuracy on the Blender dataset and show the results in the table below.
| Bit-length | PSNR | SSIM | LPIPS | None | Noise | JPEG | Scaling | Blur |
| :------------ | :-----------: | :----------: | :---------: | :---------: | :----------: | :----------: | :----------: | ----------: |
| 16 | 32.25 | 0.9102 | 0.0758 | 99.53% | 98.66% | 92.35% | 97.63% | 98.56% |
| 32 | 31.97 | 0.9098 | 0.0759 | 98.94% | 98.17% | 92.41% | 97.27% | 98.20% |
[1] Unifying approaches in active learning and active sampling via fisher information and information-theoretic quantities.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response, my concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer fyQG,
We greatly appreciate the time and effort you have invested in evaluating our work and providing insightful comments and questions.
We want to express our thanks again for your expertise and insights. Your feedback has played a crucial role in improving our paper, and we are sincerely grateful for your dedication to the review process.
Best regards,
Authors
---
Rebuttal 2:
Comment: We are pleased to hear that our revisions have addressed your concerns. Your constructive comments help us improve the quality and robustness of our work.
We will incorporate all your valuable suggestions into our final versions. If you have further questions, please feel free to raise them. We would be grateful if you could consider raising your scores. Thanks very much.
Best Regards,
Authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely thank all reviewers for their comprehensive evaluations and valuable feedback. We are pleased to address any additional questions during the discussion period.
Best Regards,
Authors of Paper 3674
Pdf: /pdf/ce627dfca129bbba37fee1203b579757119efd4b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LiT: Unifying LiDAR "Languages" with LiDAR Translator | Accept (poster) | Summary: In this paper, the authors propose a method to help alleviate the domain gaps among datasets captured with different LiDAR sensors, which enables zero-shot detection on a new dataset. The proposed method includes scene modeling for foreground and background reconstruction and LiDAR modeling with statistical and ray-drop modeling. Another contribution is that the authors accelerate the ray casting algorithm using the GPU. The authors conducted single-domain and multi-domain unification experiments on the Waymo, nuScenes, and KITTI datasets, achieving SOTA performance compared to previous works. The authors also provide ablation studies on foreground diversity and LiDAR noise injection. In addition, the authors show the run time performance after the GPU acceleration.
Strengths: Originality: The foreground and background reconstruction in scene modeling, together with the statistical and ray-drop modeling in LiDAR modeling, distinguish the paper from previous works.
Quality: The code is provided. The performance is evaluated on multiple datasets, and achieves SOTA performance compared to previous works, and ablation studies are good. The GPU acceleration is also good.
Clarity: The images in the paper are clear and easy to understand.
Significance: The paper demonstrates the potential of zero-shot detection on a new dataset by 3D reconstruction from multiple different dataset and LiDAR settings and LiDAR simulation.
Weaknesses: In the title of the paper, the use of terms such as "Language," "Translator," and "LiT" appears to be capitalizing on the popularity of the trending terms "LLM", "ViT", and "DiT", potentially misleading readers.
SECOND and PV-RCNN are relatively old detection models, it's better to have experiments on more recent models such as CenterPoint, and other SOTA models to further demonstrate the effect of domain unification on SOTA models and even achieve new SOTA results. This would significantly enhance the paper's persuasiveness and impact.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the multi-domain unified training experiments, have you considered including comparisons with other non-reconstruction semi-supervised learning methods? (like pseudo-labels and so on)
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. Usage of terms and naming of the method
We thank the reviewer for bringing up the point on the naming and terms used in the paper. Regarding the terms "language" and "translator": "language" refers to the particular LiDAR pattern of a specific LiDAR sensor, and "translator" refers to the LiT model that translates the LiDAR pattern from one sensor to another. We will revise the wording to make this clearer and avoid any potential confusion. We will also consider new names that better reflect the core idea of the paper in the revised manuscript, for instance:
- LidarTranslator
- LidarUnifier
- LidarAdapter
- LidarTransformer
We will consider using these new terms in the updated version of the paper.
### W2. Inclusion of newer detection models
We thank the reviewer for the suggestion. We believe this is very valuable. We will include experiments with more recent models such as CenterPoint in the revised manuscript. We will also compare the performance of LiT with CenterPoint and other SOTA models to further demonstrate the effectiveness of domain unification.
### Q1. Comparison to non-reconstruction methods in multi-domain learning
We thank the reviewer for the question. We add an additional experiment to compare LiT with ST3D, which is a model-driven pseudo-label approach. Specifically, we evaluate the "W + N -> K" task, where a mix of Waymo (W) and nuScenes (N) is used for training and KITTI (K) is used for testing. We show AP_BEV and AP_3D for both the SECOND-IOU and PV-RCNN models. Specifically, we compare:
- **Source only**: naive mix of Waymo and nuScenes samples
- **ST3D**: pseudo-label with a mix of Waymo and nuScenes samples
- **LiT (ours)**: LiT translates Waymo and nuScenes samples to KITTI style
- **Oracle**: direct train on KITTI, with full target domain information available
| Training set | SECOND-IOU AP_BEV (↑) | SECOND-IOU AP_3D (↑) | PV-RCNN AP_BEV (↑) | PV-RCNN AP_3D (↑) |
| -------------------------------- | --------------------- | -------------------- | ------------------ | ----------------- |
| Source only (naive mix of W + N) | 67.26 | 22.05 | 77.82 | 34.05 |
| ST3D (pseudo-label on W + N) | 75.91 | 39.08 | 69.69 | 31.08 |
| **LiT (ours, translated W + N)** | **84.45** | **71.58** | **84.15** | **75.50** |
| Oracle (direct train on K) | 83.29 | 73.45 | 88.98 | 82.50 |
The results show that LiT outperforms ST3D in multi-domain unified training tasks, where we jointly train with a mix of Waymo and nuScenes samples. This demonstrates the effectiveness of our data-driven domain unification method in such tasks.
---
Rebuttal 2:
Title: Updates on W2. Inclusion of Newer Detection Models
Comment: ### Updates on W2. Inclusion of newer detection models
> It's advisable to conduct experiments with more recent models such as CenterPoint and other SOTA models to further demonstrate the effect of domain unification on SOTA models and potentially achieve new SOTA results.
Dear Reviewer omtq,
We sincerely thank you for your constructive suggestion to evaluate recent models in W2. We would like to provide an update regarding integrating LiT with the **CenterPoint** model. We perform the Waymo -> KITTI translation tasks, comparing the performance across various setups: baseline (no translation), ST3D (model-based adaptation), our method (LiT, direct translation), and an oracle (direct training on KITTI). The results are summarized below:
| **CenterPoint** | **AP_BEV (↑)** | **AP_3D (↑)** |
|----------------------------------------|----------------|---------------|
| Source only (Baseline) | 75.26 | 45.46 |
| ST3D (Model-based adaptation) | 76.97 | 49.72 |
| **LiT (Ours, Direct translation)** | **77.86** | **62.67** |
| Oracle (Direct training with target) | 81.23 | 71.24 |
These results illustrate that our LiT method surpasses both the baseline and ST3D in the AP_BEV and AP_3D metrics, with a particularly significant boost in AP_3D, showcasing the practical benefits of our approach in domain translation. The performance of LiT closely approaches that of the oracle, highlighting our method's potential to effectively bridge the domain gap.
Thank you again for your valuable feedback. We plan to conduct additional experiments on different translation tasks and evaluate other SOTA models with LiT for the revised manuscript. Please let us know if you have any further suggestions or questions.
Sincerely,
Authors
---
Rebuttal 3:
Comment: Thank you for the detailed rebuttal and updates. Among the names you provided, I prefer "LidarUnifier" or "LidarAdapter," as they capture the essence of your method well and won't cause confusion with other methods. For example, "LidarTransformer" could be confused with [1] in the normal detection/segmentation area instead of the domain adaptation area. The additional experiments with CenterPoint strengthen the paper, demonstrating the effectiveness of your approach across newer models. Maintain the score.
References
[1] Z. Zhou et al., "LiDARFormer: A Unified Transformer-based Multi-task Network for LiDAR Perception," 2024 IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, 2024, pp. 14740-14747, doi: 10.1109/ICRA57147.2024.10610374.
---
Rebuttal Comment 3.1:
Comment: Dear Reviewer,
Thank you for your thorough review and insightful advice, including your suggestion regarding the naming scheme. We are pleased that you find our work valuable for the community. Thank you!
Sincerely,
Authors | Summary: To address the significant gap between different LiDAR datasets (related to sensors, environments, etc.), this paper proposes a solution that differs from the existing model-based adaptation approach. By employing a scene-reconstruction-data-simulation approach, it achieves consistent representation of different LiDAR datasets. This data-driven method partially resolves issues such as domain shift in autonomous-driving-related 3D point cloud learning.
Strengths: - Innovatively analogizing the domain gap between different LiDAR data to that between languages, this paper proposes a data-driven cross-sensor training method from a "translation" perspective.
- The proposed method shows good performance across different datasets, especially in terms of the AP3D metric.
- The paper is well-written with clear logic and comprehensive experiments.
Weaknesses: - Does "foreground" only refer to vehicles? Do pedestrians, bicycles, and similar entities fall into this category?
- Similarly, in background reconstruction, is consideration limited to rigid bodies like the ground? In autonomous driving scenarios, is there no need to consider non-rigid objects such as vegetation?
- In the current version, it seems that scene variations are not significant. Does this mean it's difficult to address zero-shot scenarios? For instance, if the source data are all from residential areas, is it challenging to accurately simulate point clouds from downtown areas?
Technical Quality: 4
Clarity: 4
Questions for Authors: - How does the modeling accuracy of different foreground/background components affect the results of this paper?
- Since the background is static, can it be replaced by other data sources? For example, historical high-precision drone point clouds or three-dimensional maps from scene reconstruction?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The analysis and discussion regarding scene reconstruction need improvement, as suggested by the previous comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1. Does "foreground" only refer to vehicles? Do pedestrians, bicycles, and similar entities fall into this category?
Currently, we focus only on the vehicle category in the foreground modeling. The main motivation of this paper is to demonstrate the effectiveness of data-driven domain adaptation by directly translating the LiDAR patterns. However, we do plan to extend our method to other object categories, such as pedestrians and bicycles, in the revised version. We thank the reviewer for pointing this out.
> W2. Similarly, in background reconstruction, is consideration limited to rigid bodies like the ground? In autonomous driving scenarios, is there no need to consider non-rigid objects such as vegetation?
- **Rigid vs non-rigid background objects.** For background modeling, we consider a point a background point if it does not fall inside any of the annotated foreground bounding boxes. Therefore, the background modeling step includes all rigid objects in the background and potentially includes non-rigid objects such as vegetation.
- **Effects of background inaccuracy.** We have conducted an experiment to study how inaccuracies in the background modeling affect the performance of the translated model. The experiment shows that the model is relatively insensitive to inaccuracies in the background modeling, whereas foreground modeling is more critical for accurate object detection.
| Condition | AP_BEV (↑) | AP_3D (↑) |
| --- | --- | --- |
| Noise in background (std 0.01m) | 78.02 | **61.43** |
| Noise in background (std 0.02m) | 78.70 | 57.99 |
| Noise in background (std 0.05m) | 76.76 | 58.21 |
| Baseline LiT model without noise | **80.54** | 60.13 |
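As a side note, the background/foreground point split described above can be sketched in a few lines. This is a minimal illustration assuming numpy; the helper name `split_foreground_background` is hypothetical, and we use axis-aligned boxes for brevity, whereas real detection annotations are oriented 7-DoF boxes:

```python
import numpy as np

def split_foreground_background(points, boxes):
    """Split an (N, 3) point cloud into foreground/background masks.

    `boxes` is an (M, 6) array of axis-aligned boxes given as
    (x_min, y_min, z_min, x_max, y_max, z_max). This is a simplification:
    real annotations are oriented 7-DoF boxes.
    """
    fg = np.zeros(len(points), dtype=bool)
    for x0, y0, z0, x1, y1, z1 in boxes:
        inside = (
            (points[:, 0] >= x0) & (points[:, 0] <= x1)
            & (points[:, 1] >= y0) & (points[:, 1] <= y1)
            & (points[:, 2] >= z0) & (points[:, 2] <= z1)
        )
        fg |= inside
    return fg, ~fg
```

Any point outside every annotated box is treated as background, matching the rule stated above.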
> W3. In the current version, it seems that scene variations are not significant. Does this mean it's difficult to address zero-shot scenarios? For instance, if the source data are all from residential areas, is it challenging to accurately simulate point clouds from downtown areas?
We can indeed model the "LiDAR pattern" in residential areas and then simulate point clouds in unseen "downtown areas". There is a difference between "scene variations" and "LiDAR pattern variations": we can model the LiDAR pattern regardless of the scene variations.
To illustrate this, we first model the LiDAR pattern with the nuScenes dataset. Then, we simulate the LiDAR points with a scene taken from the Mai City dataset (cite: Poisson Surface Reconstruction for LiDAR Odometry and Mapping), which is unrelated to nuScenes. The simulated LiDAR points closely match the pattern of the nuScenes LiDAR, as if a "nuScenes car" were driving through a "Mai City" environment. Please refer to **Figure R1** in the attached rebuttal PDF for visualizations.
### Questions
> Q1. How does the modeling accuracy of different foreground/background components affect the results of this paper?
To explore how the modeling accuracy of different foreground/background components affects the results, we simulate inaccuracies by adding noise to the foreground and background reconstructed meshes. Specifically, we study the nuScenes->KITTI task, where we train a SECOND-IOU model with LiT-translated nuScenes and evaluate on the original KITTI dataset. The noise is added to the reconstructed mesh vertices, and the translated LiDAR point cloud is generated by ray casting from the noisy mesh. The results are shown in the table below:
- Inaccuracies in foreground modeling
| Condition | AP_BEV (↑) | AP_3D (↑) |
| --- | --- | --- |
| Noise in foreground (std 0.01m) | 77.14 | 57.84 |
| Noise in foreground (std 0.02m) | 77.25 | 56.78 |
| Noise in foreground (std 0.05m) | 69.57 | 35.37 |
| Baseline LiT model without noise | **80.54** | **60.13** |
- Inaccuracies in background modeling
| Condition | AP_BEV (↑) | AP_3D (↑) |
| --- | --- | --- |
| Noise in background (std 0.01m) | 78.02 | **61.43** |
| Noise in background (std 0.02m) | 78.70 | 57.99 |
| Noise in background (std 0.05m) | 76.76 | 58.21 |
| Baseline LiT model without noise | **80.54** | 60.13 |
- Inaccuracies in both foreground and background modeling
| Condition | AP_BEV (↑) | AP_3D (↑) |
| --- | --- | --- |
| Noise in both foreground and background (std 0.01m) | 77.54 | 59.60 |
| Noise in both foreground and background (std 0.02m) | 76.90 | 56.09 |
| Noise in both foreground and background (std 0.05m) | 72.03 | 36.18 |
| Baseline LiT model without noise | **80.54** | **60.13** |
From the tables above, we see that the model is more sensitive to inaccuracies in the foreground than in the background. This is expected, as the foreground objects are more critical for object detection, demonstrating the importance of accurate foreground modeling for the performance of the translated model.
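For clarity, the noise-injection protocol used in these ablations can be sketched as follows. This is a minimal illustration assuming numpy; the helper name `perturb_mesh_vertices` is hypothetical, and the subsequent ray casting against the noisy mesh is omitted:

```python
import numpy as np

def perturb_mesh_vertices(vertices, std, seed=0):
    """Add isotropic Gaussian noise (std in meters) to an (N, 3) array of
    reconstructed mesh vertices, mirroring the std = 0.01 / 0.02 / 0.05 m
    settings above. The translated point cloud would then be re-generated
    by ray casting against the noisy mesh (ray caster not shown)."""
    rng = np.random.default_rng(seed)
    return vertices + rng.normal(0.0, std, size=vertices.shape)
```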
> Q2. Since the background is static, can it be replaced by other data sources? For example, historical high-precision drone point clouds or three-dimensional maps from scene reconstruction?
Yes. We can indeed replace the background with other data sources. We added a new visualization where we put a nuScenes modeled LiDAR in an unseen Mai City scene. Please refer to **Figure R1** in the attached rebuttal PDF.
---
Rebuttal Comment 1.1:
Comment: All my concerns are well addressed.
---
Rebuttal 2:
Comment: Dear Reviewer Q1U8,
We sincerely thank you for acknowledging the strengths of LiT's data-driven approach to bridging the domain gap. We are also pleased to hear that the new experiments we introduced have addressed the concerns you highlighted. This reaffirms the effectiveness of our approach and its applicability across different LiDAR datasets.
We would like to especially _thank you_ for your suggestion to include an analysis of the inaccuracies in foreground and background modeling, respectively. This has improved our insight into how noise in the translated foreground and background can affect the model's performance. This insight is very valuable, and we are grateful for your suggestion.
Sincerely,
Authors | Summary: This paper proposed a unifying LiDAR Translator named LiT to achieve LiDAR domain adaptation. Differing from current model-driven approaches, LiT adopts a novel data-driven approach, embedding disparate LiDAR attributes into a common representation. LiT
proposes a generalizable scene modeling and LiDAR statistical modeling. Besides, an efficient ray-casting engine is proposed to accelerate the above models. LiT also achieves efficient SoTA performance on several LiDAR datasets.
Strengths: S1. LiT adopts a novel data-driven approach instead of the classical model-driven approach, embedding disparate LiDAR attributes into a common representation. This research direction provides much value for real-world applications in autonomous driving industries.
S2. An effective ray-casting engine is proposed to accelerate LiT on GPUs.
S3. Experiments on widely used datasets demonstrate the SOTA performance of LiT.
Weaknesses: W1. This work looks like a data normalization operation, only modifying different datasets into a unified representation.
W2. The authors argue that model-driven approaches introduce considerable costs associated with customizing model structure and training data for new, specific domains. However, this work requires an extra LiDAR statistical modeling step, which also incurs additional cost.
W3. Table 7 shows that LiT may not avoid the problem of model-driven approaches, that is, requiring different configurations for distinct datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. The authors argue that model-driven approaches need extra training for new domains, while the proposed LiT also needs extra LiDAR statistical modeling. Can the authors provide a detailed comparison to prove that data-driven approaches are significantly better than traditional model-driven ones?
Q2. Since datasets will be unified into a common representation, why does LiT need different training hyperparameters for distinct domain adaptation tasks, as shown in Table 7? This seems to contradict the original motivation of this paper, i.e., unifying different types of LiDAR sensors.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Although the proposed data-driven approach seems to be a promising research direction, LiT lacks sufficient comparisons with model-driven approaches. Besides, the training process of LiT appears "ununified" across different adaptation tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. About dataset normalization
- **Dataset normalization is non-trivial.** It is actually non-trivial to normalize different datasets into a unified representation. The LiDAR sensors have different specifications, such as the number of beams, vertical and horizontal resolution, and field of view. These differences make it challenging to directly combine data from different sensors. Our method, LiT, addresses this challenge by modeling the target domain LiDAR pattern with only limited unannotated target domain scenes. We show that LiT outperforms the SOTA data-based domain adaptation method ReSimAD.
- **Data-based and model-based domain adaptation are orthogonal approaches.** We would like to point out that our data-driven approach is actually orthogonal to the previous model-driven domain adaptation methods as it attacks the domain adaptation problem from a different angle. We hope this could inspire future work in this domain, including the potential combination of data-driven and model-driven approaches.
### W2. Cost of LiDAR modeling
We appreciate the reviewer's question about LiDAR modeling costs. We demonstrate that only a small amount of unannotated target domain data is needed. Our experiment compares MMD and JSD metrics on a nuScenes-to-nuScenes translation task using 0, 1, 2, and 4 scenes for LiDAR modeling. The "0" scene baseline relies solely on LiDAR specifications. We evaluate MMD and JSD performance with 50 unseen scenes.
| Num scenes used for LiDAR modeling | 0 | 1 | 2 | 4 |
| ---------------------------------- | --------- | ------------- | ------------- | ------------- |
| Data preprocessing time (s) | N/A | 27.04 | 55.53 | 101.87 |
| Ray drop MLP training time (s) | N/A | 366.68 | 381.68 | 369.46 |
| Statistical modeling time (s) | N/A | 4.08 | 4.09 | 4.43 |
| Total LiDAR modeling time (min) | N/A | 6.63 | 7.35 | 7.93 |
| MMD (↓) | 3.303e-04 | **9.535e-05** | **8.131e-05** | **7.884e-05** |
| JSD (↓) | 0.108 | **0.065** | **0.062** | **0.061** |
- Our results show that only a small amount of unannotated target domain data is needed for effective LiDAR modeling. Even with just 1 scene, there are significant improvements compared to no modeling, with further gains from additional scenes being marginal.
- Total run time (including data preprocessing, MLP training, and statistical modeling) is minimal, typically under 10 minutes.
### W3. Hyperparameter consistency
Thank you for the comment regarding the hyperparameter settings. In fact, the parameters are **the same** across different datasets in Table 7; we only use different parameters for different backbone models, as they require different amounts of memory. We clarify this misunderstanding below.
- **Consistency across datasets**: For each model, the hyperparameters are the **same** across all datasets in Table 7.
- **Model-dependent settings**: The variations in hyperparameters are solely due to the differences between the detection models (Second-IOU vs. PV-RCNN), not the datasets. PV-RCNN, due to its higher memory demands, requires smaller batch sizes compared to Second-IOU.
### Q1. Data-driven vs model-driven
- **Comparison with model-driven approaches**: We compare LiT with the traditional ST3D method, a model-driven domain adaptation approach based on pseudo-labels. Our method outperforms ST3D in all translation scenarios.
- **Cost of LiDAR modeling**: An additional experiment (in W2) demonstrates that modeling the target domain LiDAR is minimal in cost. We require only a small amount of **unannotated** target domain data to model the LiDAR pattern. Details of this experiment are provided in the response to W2.
- **Orthogonality of data-driven and model-driven approaches**: Our data-driven approach is orthogonal to existing model-driven methods, addressing domain adaptation from a different perspective. We hope this perspective will inspire future work and encourage exploring potential synergies between data-driven and model-driven approaches.
### Q2. Hyperparameter consistency
Thank you for the comment regarding the hyperparameter settings. Actually, the parameters across different datasets are **the same** in Table 7. We only use different parameters for different backbone models as they require different amounts of memory. Please kindly see the response to W3 for more details.
---
Rebuttal 2:
Title: Summarized Responses and Gentle Reminder
Comment: Dear Reviewer Kzeh,
Thank you for your detailed review and insightful questions regarding our work. Especially, we would like to take this opportunity to _thank you_ for the suggestion to include an analysis of the _cost of LiDAR modeling (W2)_. With the new experiments added, we believe this suggestion is genuinely valuable for improving the compellingness of our work.
Below, we would like to offer a **summarized version** of our responses to ensure clarity:
- **W1. About Dataset Normalization**
- "Normalizing" datasets into a unified representation is non-trivial due to inherent differences in LiDAR sensor patterns. LiT effectively addresses these challenges by modeling the target domain LiDAR pattern with minimal unannotated data, showing superior performance over state-of-the-art domain adaptation methods like ReSimAD. We show that LiT enables efficient zero-shot detection across diverse datasets (Table 2 in the main paper), and LiT is able to combine data from multiple source domains to achieve better performance on the target domain (Table 3 in the main paper). This data-driven approach is orthogonal to traditional model-based methods and provides a practical solution for real-world applications.
- **W2. Cost of LiDAR Modeling is Minimal**
- We add new experiments to show that LiDAR modeling can be achieved with minimal unannotated target domain data with fast run times, demonstrating cost-efficiency and practicality for real-world applications. The detailed table is provided in the original rebuttal.
- **W3. Hyperparameter Consistency**
- We clarify that in Table 7, for a given model, the hyperparameters are actually _consistent_ across all datasets. The hyperparameters are only different when a different model is used.
- **Q1. Data-driven vs Model-driven Approaches**
- LiT surpasses traditional model-driven approaches (e.g., ST3D) across all tested translation scenarios.
  - Besides, we see LiT's data-driven approach as a _complementary and orthogonal_ strategy to model-driven methods, offering a new path forward in domain adaptation research.
- **Q2. Hyperparameter Consistency**
- We clarify that in Table 7, for a given model, the hyperparameters are actually _consistent_ across all datasets. The hyperparameters are only different when a different model is used.
For more detailed responses, including experiment results, please refer to the original rebuttal.
We hope this summary addresses the key points of your review. Should there be any further details you wish to discuss or additional clarifications needed, please do not hesitate to reach out. We look forward to your feedback and are hopeful for a positive consideration.
Sincerely,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for your rebuttal and the kind summarization of the responses. I have already raised my score.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer,
Thank you for your thoughtful feedback and for updating your scores. We believe that your suggestions, along with our newly added experiments, have greatly enhanced the persuasiveness and completeness of our work.
Sincerely,
Authors | Summary: The paper presents a novel framework designed to unify LiDAR data into a single target “language” and unified domain detection capabilities across diverse LiDAR datasets, marking a step toward domain unification for LiDAR-based autonomous driving systems. Experiments on dataset KITTI, Waymo, and nuScenes demonstrate the superiority of the proposed method in the task of single-source and multi-sources domain adaptation.
Strengths: 1. The paper is novel in introducing the LiDAR Translator (LiT) to enable joint training across multiple datasets. LiT achieves efficient state-of-the-art zero-shot and unified domain detection capabilities across diverse LiDAR datasets.
2. The paper is well-written and easy to follow, especially the part explaining the background.
3. It presents good experimental results and intuitive visualizations, convincingly demonstrating its effectiveness.
Weaknesses: 1. The motivation of this paper is not clear. If it is possible to accurately model the target domain data, why is there a need to translate the source domain data into the target domain data?
2. As the core component of this work, the translator requires more direct experimental validation, such as measuring the distributional differences between the translated data and the target data, rather than solely relying on verification through downstream domain adaptation tasks.
3. It lacks comparative experiments with the latest state-of-the-art methods.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How can we ensure the accuracy of modeling the target data? Will the differences between simulated data and real data have negative impacts?
2. Modeling the target LiDAR data and target scenes requires a lot of prior information. When this information is unknown, how can we use the method proposed in this paper to solve the domain adaptation problem? From my understanding, the objective of domain adaptation is to address the challenge of having limited data or labels in the target domain.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors discussed potential limitations about the data, annotation and the category of object.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### W1. Motivation of the work
We thank the reviewer for highlighting the importance of clarifying the motivation for our work.
- **Background:** Imagine an autonomous driving company that has collected a substantial amount of LiDAR data from different sensors (LiDAR-A and LiDAR-B). The company has also annotated the data for object detection for LiDAR-A and LiDAR-B. However, they want to deploy the object detection model on a new LiDAR sensor (LiDAR-T, T for target) mounted on their production vehicles.
- **Classic approach:** Traditionally, this involves sending out cars equipped with LiDAR-T, collecting a new large dataset, annotating this data for object detection, and retraining the model. This process is expensive, time-consuming, and does not scale well with the deployment of newer LiDAR models, nor does it effectively leverage the existing annotated data from LiDAR-A and LiDAR-B.
- **Our approach:** We aim to utilize the existing annotated data from LiDAR-A and LiDAR-B to train a model that can be directly used on LiDAR-T. To achieve this, we collect a small dataset from LiDAR-T (without needing to label it for object detection), model the characteristics of LiDAR-T, and then use the LiDAR Translator to translate the data from LiDAR-A and LiDAR-B to match the sensor pattern of LiDAR-T. This approach offers three main advantages:
- **Utilization of mixed data:** We leverage historical annotated data from LiDAR-A and LiDAR-B, ensuring that past investments in data collection are not wasted.
  - **Scalability:** Our method demonstrates excellent scalability. It outperforms the baseline in the (train: LiDAR-A, test: LiDAR-T) setting; it performs better still when more training data is involved (train: LiDAR-A + LiDAR-B, test: LiDAR-T); and when the target data is also available (train: LiDAR-A + LiDAR-B + LiDAR-T, test: LiDAR-T), it outperforms training purely on LiDAR-T data. Performance improves as more data collected from different sensors is utilized.
We hope this clarifies the motivation behind our work and the reasons for modeling the target domain LiDAR pattern with a small amount of unannotated data and translating source domains to the target domain. We thank the reviewer for acknowledging the importance of this clarification, and we will include the above explanation in the revised manuscript to more clearly articulate the real-world problem we are trying to solve.
### W2. Direct verification for translation
We have added a new experiment to directly compare the translated data with ground-truth target domain data. We follow LiDARDM and UltraLiDAR to evaluate the distribution differences with Maximum-Mean Discrepancy (MMD) and Jensen–Shannon divergence (JSD). The results are shown below:
- Waymo -> KITTI
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| Waymo (baseline) | KITTI | 8.817e-04 | 0.273 |
| Waymo translated to KITTI (ours) | KITTI | **3.268e-04** | **0.180** |
- Waymo -> nuScenes
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| Waymo (baseline) | nuScenes | 2.310e-03 | 0.380 |
| Waymo translated to nuScenes (ours) | nuScenes | **6.583e-04** | **0.205** |
- nuScenes -> KITTI
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| nuScenes (baseline) | KITTI | 8.725e-04 | 0.220 |
| nuScenes translated to KITTI (ours) | KITTI | **2.107e-04** | **0.164** |
The MMD and JSD metrics are significantly reduced after translation, indicating that the translated data is much closer to the target domain distribution.
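The distribution metrics used above can be sketched as follows. This is a minimal illustration assuming numpy, with a generic RBF-kernel MMD estimator and a histogram-based JSD; the exact feature representations follow LiDARDM/UltraLiDAR and are not reproduced here, and both function names are hypothetical:

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x (n, d) and y (m, d)
    under an RBF kernel. A generic stand-in for the feature-space
    MMD used in LiDARDM/UltraLiDAR."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1])
    between two histograms p and q."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        return np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Lower values of both metrics indicate that the two distributions are closer, which is the sense in which the tables above should be read.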
### W3. Comparison with other methods
In the paper, we have compared against the state-of-the-art ReSimAD (ICLR 2024) method, a recent work closely related to ours. Our method outperforms ReSimAD in domain translation tasks, as shown in Table 2 and summarized below:
| Domains | Method | AP_BEV (↑) | AP_3D (↑) |
| ----------------- | -------------- | ---------- | --------- |
| W -> K | ReSimAD | 81.01 | 58.42 |
| W -> K | **LiT (ours)** | **84.35** | **65.68** |
| W -> N | ReSimAD | 37.85 | 21.33 |
| W -> N | **LiT (ours)** | **38.77** | **23.48** |
### Q1. Data-driven vs model-driven
**Orthogonality of data-driven and model-driven approaches**: We would like to point out that our data-driven approach is actually orthogonal to the previous model-driven domain adaptation methods as it attacks the domain adaptation problem from a different angle. We hope this could inspire future work in this domain, including the potential combination of data-driven and model-driven approaches.
### Q2. Cost of LiDAR modeling
- We agree with the reviewer that domain adaptation aims to tackle limited data or labels in the target domain.
- We added an experiment showing that only a small amount of unannotated target domain data is needed. Using 1, 2, and 4 **unannotated** scenes for LiDAR modeling, we measured the run time for each LiDAR modeling step, as well as the MMD/JSD metrics.
| Num scenes used for LiDAR modeling | 0| 1| 2| 4|
| ---------------------------------- | --------- | ------------- | ------------- | ------------- |
| Data preprocessing time (sec) | N/A | 27.04 | 55.53 | 101.87 |
| Ray drop MLP training time (sec) | N/A | 366.68 | 381.68 | 369.46 |
| Statistical modeling time (sec) | N/A | 4.08 | 4.09 | 4.43 |
| Total LiDAR modeling time (min) | N/A | 6.63 | 7.35 | 7.93 |
| MMD (↓) | 3.303e-04 | **9.535e-05** | **8.131e-05** | **7.884e-05** |
| JSD (↓) | 0.108 | **0.065** | **0.062** | **0.061** |
---
Rebuttal 2:
Title: Updated Response to Q1
Comment: Dear Reviewer tU6b,
Thank you for your time and all of the insightful feedback. We would like to offer an **updated response for Q1**.
### Q1. How can we ensure the accuracy of modeling the target data? Will the differences between simulated data and real data have negative impacts?
- **Q1.1 How can we ensure the accuracy of modeling the target data?**
- **Direct validation of the source -> target translation**: Thanks to your suggestions, we have added a new experiment to evaluate the distributional differences between the translated data and the target data. We follow LiDARDM and UltraLiDAR to evaluate the distribution differences with Maximum-Mean Discrepancy (MMD) and Jensen–Shannon divergence (JSD). We show that the translated data is closer to the target domain data compared to the source domain data.
- Waymo -> KITTI
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| Waymo (baseline) | KITTI | 8.817e-04 | 0.273 |
| Waymo translated to KITTI (ours) | KITTI | **3.268e-04** | **0.180** |
- Waymo -> nuScenes
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| Waymo (baseline) | nuScenes | 2.310e-03 | 0.380 |
| Waymo translated to nuScenes (ours) | nuScenes | **6.583e-04** | **0.205** |
- nuScenes -> KITTI
| Input style | GT style | MMD (↓) | JSD (↓) |
| ------------------------- | -------- | ------------- | --------- |
| nuScenes (baseline) | KITTI | 8.725e-04 | 0.220 |
| nuScenes translated to KITTI (ours) | KITTI | **2.107e-04** | **0.164** |
- **Verification through downstream tasks**: We evaluate the effectiveness of LiT translator through downstream detection tasks, where we train a model with the source domain and test it on the target domain. The results show that our method outperforms previous state-of-the-art methods (Baseline, SN, ST3D, and ReSimAD) in all translation cases. The results are provided in Table 2 and Table 3 of the main paper.
- **Q1.2 Will the differences between simulated data and real data have negative impacts?**
Real target domain data with annotation, if available, is typically better than the simulated data. However, annotated real target domain data can be costly to obtain. LiT's main goal is to address the scenario _where real annotated target domain data is limited_. More importantly, LiT enables scaling up with simulated data from different source domains, and can even surpass real target domain training in some metrics with purely translated data.
We provide an experiment to study the impact of using "real target domain data versus translated target domain data". In particular, we set KITTI as the target domain, and we use different combinations of real and translated data for training the SECOND-IOU model. The results are summarized below:
| | Training set | Test set | Real or translated training data? | AP_BEV (↑) | AP_3D (↑) |
| --- | ------------------------ | -------- | --------------------------------- | ---------- | --------- |
| (a) | Waymo without translation | KITTI | No translation | 67.64 | 27.48 |
| (b) | Waymo | KITTI | Purely translated | 82.55 | 69.94 |
| (c) | Waymo + nuScenes | KITTI | Purely translated | **84.45** | 71.58 |
| (d) | Waymo + nuScenes + KITTI | KITTI | Mix of translated and real | **87.52** | **75.76** |
| (e) | KITTI | KITTI | Purely real | 83.29 | 73.45 |
(Result (a) is from Table 1. Results (b) to (e) are from Table 3 of the main paper. Results better than training on "purely real" target set are highlighted in bold.)
We can conclude that:
- Translation matters: (b), trained with translated data, is significantly better than (a), trained without translation.
- Training on real target data only (e) can be better than training on data translated from a single source domain (b). This shows the difference between real and translated data.
- However, when we add more source domain data in experiment (c), the performance improves: although (c) never sees the real target domain data during training, it already surpasses (e) in AP_BEV. With LiT translation, we can scale up the training data to improve performance, even surpassing training on real target domain data.
- When we have both translated and real target domain data in training (d), the performance is the best, outperforming the purely real target domain data training (e).
Therefore, the differences between simulated data and real data do have an impact. However, with a carefully designed translation strategy, purely translated data can be scaled up to surpass real-data training on some metrics (AP_BEV in experiment (c) is better than (e)), and combining translated and real data performs best (d).
Sincerely,
Authors
---
Rebuttal 3:
Title: Summarized Responses and Gentle Reminder
Comment: Dear Reviewer tU6b,
Thank you for your time and insightful feedback. Especially, we would like to _thank you_ for the suggestion to include a _direct evaluation of translation quality (W2)_. In response, we have added new experiments to measure the distributional differences between real and translated target LiDAR with MMD and JSD metrics. We believe this suggestion is genuinely valuable for improving the compellingness of our work.
Here, we would like to offer an additional summarized version of our responses for your reference:
- **W1. Motivation of the Work**
  - The main question raised is: when we can already "accurately model the target domain data", why do we need to "translate the source domain data into the target domain"? The short answer is that we first model the "target LiDAR pattern" with a minimal dataset, and with this we translate the "source data" into "target-style data". There are two distinct concepts here: what we model is the **target domain LiDAR pattern**, while what we translate is the **source domain data**. The goal is to effectively utilize LiDAR data collected from various source domain sensors to jointly train a model that works on the target domain, enabling the training data to scale up.
- **W2. Direct Evaluation for Translation**
- We add new experiments (result table in original rebuttal W2) to directly measure the translation quality with Maximum-Mean Discrepancy (MMD) and Jensen–Shannon divergence (JSD). The results show that the translated LiDAR data closely matches the target domain data distribution, validating the effectiveness of our translation method.
- **W3. Comparison with Other Methods**
- We provide comparisons with the current state-of-the-art data-based domain adaptation method, ReSimAD (ICLR24). The results show that our method outperforms ReSimAD in various domain adaptation tasks.
- **Q1. Ensuring Accuracy of Target Data Modeling**
- See the **Updated Response to Q1** in the previous official comment.
- **Q2. Cost of LiDAR Modeling**
- We agree with the reviewer that "the objective of domain adaptation is to address the challenge of having limited data or labels in the target domain".
- We add a new experiment (results attached in the original rebuttal) to show that with only very limited data (1-4 target domain scenes without annotation), LiT can effectively model the target domain LiDAR pattern and achieve good performance. We also provide a detailed analysis of the runtime cost and performance of LiDAR modeling, which shows that the cost of LiDAR modeling is quite small.
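For readers unfamiliar with these metrics, a minimal generic sketch of an MMD two-sample check of the kind described in W2 might look as follows (purely illustrative: a fixed-bandwidth RBF kernel on generic feature vectors, not the actual LiDAR evaluation pipeline, whose details are in the original rebuttal):

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Squared Maximum-Mean Discrepancy (biased V-statistic) with an RBF kernel.

    X, Y: (n, d) and (m, d) arrays of feature vectors from the two samples.
    A value near zero suggests the two samples come from similar distributions.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

Identical samples give an MMD of zero, and larger values indicate a larger distributional gap; the bandwidth `sigma` is a free choice in this sketch.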
For more detailed responses, please refer to the original rebuttal and the updated response to Q1 outlined earlier.
We hope our response and additional experiments have addressed your concerns. If you have further questions or need additional clarification, please let us know and we are more than happy to engage in further discussions. We appreciate your feedback and are hopeful for a positive consideration.
Sincerely,
Authors
---
Rebuttal 4:
Title: Gentle Reminder: Review of Rebuttal & Final Score
Comment: Dear Reviewer,
We appreciate the insights and feedback you have provided on our submission, LiT, especially regarding the inclusion of a direct evaluation of translation quality (W2). We have added MMD and JSD metrics to directly evaluate the translation quality, and we believe this has significantly improved the robustness of our work.
Similarly, we have addressed all other concerns raised in your review through detailed responses and additional experiments, and we are eager to **hear your thoughts** on whether these have adequately addressed the issues you highlighted, as well as on **your final score**. We are happy to discuss any further questions or provide additional information if needed. Your feedback is invaluable to us, and we look forward to your response.
Thank you again for your time and consideration.
Sincerely,
Authors | Rebuttal 1:
Rebuttal: Please refer to individual rebuttal comments. The rebuttal PDF is attached.
Pdf: /pdf/97e81d42a8bc29e2986cc2890c567ed34d653215.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Real-Time Selection Under General Constraints via Predictive Inference | Accept (poster) | Summary: The paper proposes a method for online sample selection. The authors introduce the concepts of _individual_ and _interactive constraints_, and demonstrate theoretically and empirically that their method satisfies both.
Strengths: The problem seems important and the formulation and approach novel. The authors provide both theoretical guarantees and empirical evidence, in both synthetic and real-world applications, of the effectiveness of their approach. The mathematical formulation seems sound, and the assumptions and theoretical results are clearly stated.
Weaknesses: I am not familiar with the FDR control literature, and had to read parts of the paper (specifically sections 2.2, 2.3 and 4) multiple times to get a gist for the logic of the method and its empirical performance. This is reflected in my confidence score. If the paper is accepted, I highly recommend the authors revise the paper to make it easier to follow. A flowchart to illustrate the steps of the algorithm and/or to illustrate the differences between the Oracle and Data-driven selection procedures may be helpful; a toy example could also help. I also suggest including a longer description and/or table of the benchmark methods against which the empirical performance of II-COS was compared.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Interactive constraint: Why is this defined only with respect to the correctly selected samples? In the case of, e.g., the diversity of selected samples, is the constraint not intended to represent the diversity of all selections?
- Line 159: Should $R_t = \left( \sum_{i \leq t} \delta_i \right) \lor 1$?
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: The authors discuss limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: I am not familiar with the FDR control literature, and had to read parts of the paper (specifically sections 2.2, 2.3 and 4) multiple times to get a gist for the logic of the method and its empirical performance....I highly recommend the authors revise the paper to make it easier to follow.
**To W1**: Thank you for your thoughtful suggestions, and we apologize for any confusion caused to readers unfamiliar with the related fields.
- First, we indeed have a flowchart of the data-driven II-COS procedure in Figure 4 of Appendix C. Based on your suggestions, we will add an illustration of the oracle version to the flowchart and move it to the main text in a future version. We are also willing to provide a more detailed description of the benchmarks in the main text, include a toy example of our method, and revise the paper to make it easier to follow according to your helpful advice.
- Second, due to the main-text page limit, we have provided more details and related works about our proposed method in the supplementary materials. Additionally, in Appendix B, we discuss related works on online FDR control and predictive inference, offering an overview of the literature on online multiple testing and related topics, which can be helpful for readers unfamiliar with these fields.
Again we greatly appreciate your valuable suggestions for improving the quality of our paper.
> Q1: Interactive constraint: Why is this defined only with respect to the correctly selected samples? In the case of, e.g., the diversity of selected samples, is the constraint not intended to represent the diversity of all selections?
**To Q1**: Nice question! We'd like to offer the following explanations:
- First, in our online selection problem, we are only concerned with the characteristics of the correctly selected samples of interest. In fact, the individual constraint (e.g., FSR) needs to be prioritized because we should select the samples that are of interest (i.e., $Y_t\in\mathcal{A}$). If the interactive constraint is applied to all selected samples, rather than just the correctly selected ones, we may select many uninteresting samples far from the center of $Y_t\in\mathcal{A}$ to achieve a low similarity. This is not desirable in practice; for example, hiring an obviously unsuitable candidate solely for the sake of diversity.
- Second, if in other problems and scenarios where constraints need to be applied to all selected samples (rather than just the correctly selected samples), our method can be easily modified to meet this requirement by replacing $1-\widehat{L}$ in Equation (7) with 1 in the decision rule of II-COS.
We hope the above clarification will resolve your concerns. Thank you!
> Q2: Line 159: Should $R_t=(\sum_{i\leq t} \delta_i)\vee 1$?
**To Q2**: Yes, thank you for your meticulous attention to details. We will incorporate this modification into the final version of our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for these clarifications. | Summary: In this paper, the authors quantify the uncertainty of response predictions using predictive inference; and systematically addressing individual and interactive constraints in a sequential manner. An online selection rule is developed to ensure the above two types of constraints are under control at pre-specified levels simultaneously. Simulated and real-data examples are used to evaluate and illustrate their approach in terms of both online individual and interactive criteria control.
Strengths: This is a nicely and clearly written paper that develops an online selection rule that is simple yet effective. The simplicity of the online selection rule will enhance the potential for this rule to be used in real life. The authors’ claims are well supported via theory, simulations, and application to real data. The paper along with the appendix provides detailed information that allows for replicability. Very nice! I particularly appreciate the comparison with the approaches based on conformal p-values.
Weaknesses: See questions below.
Technical Quality: 4
Clarity: 4
Questions for Authors: Section 4.1 would be good to tell reader how many replications are used.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Comments are needed on the practicality of $\hat\mu$ being a bijection. How critical is this assumption to the method and the theoretical analysis? If critical, then this is a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: Section 4.1 would be good to tell reader how many replications are used.
**To Q**: We have mentioned '500 replications' at the beginning of Section 4 (line 270) when introducing our evaluation measures. To avoid repetition, we did not restate this in Section 4.1.
> L: Comments are needed on the practicality of $\hat\mu$ being a bijection. How critical is this assumption to the method and the theoretical analysis. If critical then this is a limitation.
**To L**: Thank you for your nice advice.
- The assumption is rather mild and widely adopted for the identification of each $X_t$ in the predictive inference framework. Please refer to [1] for similar assumptions. Per your suggestion, we will add a comment about this assumption in the final version.
- Besides, even if this assumption fails, we can view the local FDR as a special weight reflecting the correctness of each selection, and our strategy can still provide desirable cost control and diversity empirically.
Reference:
[1] Wu, X., Huo, Y., Ren, H., and Zou, C. Optimal subsampling via predictive inference. JASA, 2023.
---
Rebuttal 2:
Comment: Thanks for your response. I maintain my rating of 8. | Summary: The paper studies online sample selection with individual and interactive constraints simultaneously. Specifically, the goal is to control (variants of) the false selection rate (FSR) and the expected similarity (ES) under the empirical Bayes framework. Under distributional conditions, the proposed method controls the target quantities asymptotically. The method is evaluated on synthetic and real data.
Strengths: 1. The paper is well-presented and easy to follow.
2. The problem under consideration is of interest and relevant.
Weaknesses: Please see my comments in the questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **The model.** In the motivating examples such as candidate selection, it appears to me that we get to observe the ground truth, i.e., $Y_t$, after time step $t$. This is briefly mentioned in the discussion section, but I think it is reasonable to use the observed $Y_t$'s to update the estimation.
2. **The choice of interaction constraints.** It is not well motivated why changing ES to (4) is reasonable. As illustrated in the simulation, although these two quantities seem to coincide as the time step goes to infinity, they do differ quite a lot with smaller time steps (this could happen when $m$ is small).
3. What is the principle for choosing K in general? In particular, for the real data example, why is K taken to be $1\times 10^{-3}$ in the first example and $6\times 10^{-3}$ in the second?
4. In the numerical experiments, a fairer comparison with offline CP would be with [1], i.e., thresholding the p-values with BH-adjusted p-values as opposed to the fixed threshold.
[1] Jin, Ying, and Emmanuel J. Candès. "Selection by prediction with conformal p-values." Journal of Machine Learning Research 24.244 (2023): 1-41.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have partially addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: In motivating examples such as candidate selection, we get to observe the ground truth after time t. ... it is reasonable to use the observed $Y_t$'s to update the estimation.
**To Q1**: Thank you for the constructive suggestion. We'd like to clarify a few points:
1. **Feedback and Model Updates**: As you mentioned, it is of great interest to explore online sample selection with feedback (past ground truth $Y$), which is part of our ongoing work. However, while using new observations can improve the accuracy of the estimated models, updating these models at every time $t$ can be time-consuming and inefficient for some machine learning algorithms. Additionally, ensuring selection error rate control with feedback requires stability criteria on the model or learning algorithm, which falls into a different regime from our current setting. Our method, II-COS, is model-agnostic and compatible with commonly used learning algorithms. This paper focuses on a general framework without feedback as a first effort in online selection with control of general interactive and individual constraints.
2. **Unavailable Responses in Online Settings**: In many cases, the response is not available in online settings. For instance, in online anomaly detection [1], we cannot access the outlier label throughout the procedure. Responses can also be delayed, effectively making ground truth unavailable within the restricted time of the online process. Candidate selection is a special case of this, as identification occurs after resumes have been passed for a period. Hence, our model is suitable for these scenarios.
We hope this clarification addresses your concerns about our model.
Ref: [1] Gang, B., Sun, W., Wang, W. Structure-adaptive sequential testing for online false discovery rate control. JASA, 2023.
> Q2: It is not well motivated why changing ES to (4) is reasonable....
**To Q2**: Thanks for your question. We would like to offer some explanation.
- First, the motivation for using mES instead of ES is the technical difficulty of directly dealing with the expectation of a ratio. The mES is a ratio of expectations, which is easier to control and also serves as a reasonable measure of similarity.
- In fact, mES and ES are inspired by the mFDR (modified false discovery rate) and FDR in the field of online multiple testing, where mFDR is a ratio of expectations and FDR is the expectation of a ratio. The mFDR is usually employed as a replacement for FDR and is shown to be asymptotically equivalent to FDR [1]. Similar techniques can be applied to establish the asymptotic equivalence of mES and ES.
- In numerical studies, we see that empirically the two similarity measures yield almost identical patterns when there are sufficient samples; an illustrative example can be found in Figure 6. Besides, in Sec. 4.1 we have shown the online ES values against time $t$ in Figure 1 (right), from which we can see that our II-COS guarantees valid online ES control even for small values of $t$. Therefore, when $m$ is small, our method is also empirically valid for controlling ES.
Ref: [1] Sun W, Cai T T. , Oracle and adaptive compound decision rules for false discovery rate control, JASA, 2007.
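Schematically, the contrast above can be written as follows (our own generic notation, not the paper's exact definitions): with $A_t$ a pairwise-similarity numerator over correctly selected samples up to time $t$ and $B_t$ its normalizer,

```latex
% Schematic contrast (generic notation, assumed for illustration):
\[
\mathrm{ES}_t = \mathbb{E}\!\left[\frac{A_t}{B_t}\right],
\qquad
\mathrm{mES}_t = \frac{\mathbb{E}[A_t]}{\mathbb{E}[B_t]},
\]
% mirroring FDR = E[V_t / (R_t \vee 1)] versus mFDR = E[V_t] / E[R_t]
% in online multiple testing.
```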
> Q3: What is the principle for choosing K ?
**To Q3**: In Appendix C.1 (line 512), we provided implementation guidelines for choosing K. To add more clarity, we explain it here.
- For any two i.i.d. observations $X$ and $X'$ with corresponding $\theta$ and $\theta'$, the expected $C_2$ of the individuals of interest is given by ${C_2}=E[g(X,X')\mid\theta=1,\theta'=1]$, which can be estimated by $\widehat{C_2}=\sum\sum_{i<j; i,j\in\mathcal{L}} g({X}_i,{X}_j)/\{|\mathcal{L}|(|\mathcal{L}|-1)\}$, where $\mathcal{L}=\\{i:{Y}_i\in\mathcal{A}\\}$.
- We then set $K=a\widehat{C_2}$, where $a>0$ is user-specified to control the interactive constraint level. Our numerical evidence shows that $a\in (0.1,0.5)$ works generally well, and we set $a=0.4$ in our experiments.
- For real data examples, since $\widehat{C}_2$ differs between the datasets, we set the same parameter $a=0.4$, leading to different K values for each application. Both choices are reasonable.
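As a concrete illustration of this rule, the plug-in estimate could be coded as below (a hypothetical sketch, not the authors' implementation: `g` is a user-supplied pairwise similarity and `in_A` marks the pilot samples with $Y_i\in\mathcal{A}$; we interpret $\widehat{C_2}$ as the average of $g$ over distinct pairs):

```python
import numpy as np

def choose_K(X, in_A, g, a=0.4):
    """Set the interactive-constraint level K = a * C2_hat, where C2_hat is the
    average pairwise similarity g among the samples of interest (Y_i in A)."""
    idx = np.flatnonzero(in_A)          # indices i with Y_i in A
    n = len(idx)
    total = 0.0
    for p in range(n):                  # sum g over unordered pairs i < j
        for q in range(p + 1, n):
            total += g(X[idx[p]], X[idx[q]])
    c2_hat = total / (n * (n - 1) / 2.0)  # mean over the n(n-1)/2 pairs
    return a * c2_hat
```

With the choice $a=0.4$ described above, different datasets naturally yield different K values because their $\widehat{C_2}$ estimates differ.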
> Q4: In experiments, a fairer comparison with offline CP would be with [1], i.e., thresholding the p-values with BH-adjusted p-values as opposed to fixed threshold.
**To Q4**: Thank you for your suggestion. We considered multiple factors when selecting the baselines and chose several that we believe are reasonable. Here are some clarifications:
1. **Conformal-p-values-based baselines**: We have compared several online FDR control methods using conformal p-values in [1], including LOND, SAFFRON, and ADDIS (see Figures 1-2). Notably, LOND is exactly the online version of the BH procedure, and SAFFRON is the online version of the Storey-BH procedure. More details can be found in [2] and [3]. As mentioned in line 298, for real-data experiments, online multiple testing methods based on conformal p-values resulted in few selected individuals. Therefore, we focused on comparing II-COS with SAST, and omitted results for conformal-p-value-based methods.
2. **Online case consideration**: The BH procedure itself cannot be applied in online scenarios as it requires all p-values to be sorted before making a decision, which is not possible when future p-values are unknown. Thus, we compared our method with the fixed-threshold method in real-data examples (see Table 1 and Figure 3), as it can be executed in an online setting and serves as one of our baselines.
We hope these clarifications address your concerns. We would appreciate it if you could re-evaluate our work.
Refs:
[1] Jin, Y. and Candes, E. J. Selection by prediction with conformal p-values. JMLR, 2023.
[2] Ramdas, A., Jordan, M. I. et al., Online control of the false discovery rate with decaying memory. NeurIPS, 2017.
[3] Ramdas, A., Jordan, M. I. et al., SAFFRON: an adaptive algorithm for online control of the false discovery rate. ICML, 2018. | Summary: This paper introduces a framework to perform online sample selection such that the unseen outcomes are in specific target range while also optimizing for constraints like diversity that are dependent on the input covariates. The additional constraint involving input covariates can help ensure properties like the diversity of candidates when selecting individuals for interviews while also guaranteeing that most of the interviewed individuals accept the offer. The paper proposes a data-driven procedure to select the subset of candidates in an online fashion by implementing the proposed algorithm. Under reasonably weak assumptions, the paper provides theoretical guarantees on satisfying both the above constraints in online sample selection. The experiments confirm that this framework ensures low false selection rates (i.e. unseen outcomes are in a specific target range) while optimizing for the additional covariate-dependent constraints like diversity on synthetic and real data.
Strengths: The paper proposes an intuitive way to incorporate covariate-dependent constraints like the diversity of candidates when performing online sample selection to optimize for metrics like false selection rates. This paper solves an important problem in online sample selection and demonstrates that the proposed method improves the covariate-dependent objective while maintaining comparable performance on false selection rates.
Weaknesses: It would be interesting to understand the gaps between an ideal diversity profile and the profile obtained by the proposed method in Fig 3. Analysing the gap w.r.t changing g(X_i, X_j) function choice could be helpful. Would it be helpful to increase the weight of the g(X_i, X_j) term to reduce this gap and understand its implications on the satisfaction of individual constraints?
It is evident that the SAST baseline sometimes outperforms the proposed method in terms of FSR, which is understandable given there's a tradeoff with the interactive constraints (Table 2b, Fig 1). It would be helpful to learn whether we can reduce the gap between SAST and the proposed method by balancing the tradeoff (perhaps using a tunable hyperparameter that balances the two constraints?).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitations and broader impacts are discussed in the last section of paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1: It would be interesting to understand the gaps between an ideal diversity profile and the profile obtained by the proposed method in Fig 3. Analysing the gap w.r.t changing g(X_i, X_j) function choice could be helpful. Would it be helpful to increase the weight of the g(X_i, X_j) term to reduce this gap and understand its implications on the satisfaction of individual constraints?
**To W1**: Thank you for your thoughtful suggestions! We'd like to offer the following explanations:
- First, your suggestion is very reasonable, especially regarding our focus on specific groups such as Bachelor's degrees. As the categorical variable is transformed into one-hot coding to numerically compute the diversity, we can assign greater weights to these specific groups (one-hot variables) of interest. This adjustment allows us to enhance their contribution to achieving diversity.
- Second, defining an ideal diversity profile precisely is challenging, since changing g affects how we define our target diversity criterion. From our perspective, the ideal diversity profile can be defined as achieving maximal diversity in the offline setting while selecting the same number of units and controlling the FSR; a related work is [1]. In this respect, the gap primarily arises from differences in the online procedure, and its relationship with g cannot be easily analyzed.
- Third, we conducted an experiment on the candidate data to demonstrate the diversity performance of our II-COS method, the offline oracle, and your proposed weighting approach, respectively. Here we assign more weight to the Bachelor's-degree variable (five times that of the other education variables). We fix $\alpha=0.2, K=0.001$. The education status composition of the correctly selected samples and the FSR for the three methods are shown in the table below. We observe that this weighting scheme effectively increased the proportion of Bachelor's degrees, aligning more closely with the distribution observed in the offline oracle. However, it also resulted in a decreased proportion of Master's degrees, which is less desirable. Therefore, determining an optimal weighting scheme to achieve the ideal diversity profile remains an interesting question for future investigation.
- As for the implications on individual constraints, we find no significant influence empirically when adjusting the weight of g, since the FSR control of our strategy is quite tight.
We hope these explanations are satisfactory!
| | FSR | No Qual | High School | Matriculation | Bachelor | Master |
| --- | --- | --- | --- | --- | --- | --- |
| Oracle | 0.19 | 0.03 | 0.34 | 0.26 | 0.27 | 0.08 |
| II-COS | 0.19 | 0.03 | 0.44 | 0.26 | 0.21 | 0.07 |
| More weight on Bachelor | 0.19 | 0.02 | 0.39 | 0.29 | 0.28 | 0.02 |
Reference:
[1] Wu, X., Huo, Y., Ren, H., and Zou, C. Optimal subsampling via predictive inference. JASA, 2023.
> W2: It is evident that the SAST baseline sometimes outperforms the proposed method in terms of FSR, which is understandable given there's a tradeoff with the interactive constraints (Table 2b, Fig 1). It would be helpful to learn whether we can reduce the gap between SAST and the proposed method by balancing the tradeoff (perhaps using a tunable hyperparameter that balances the two constraints?).
**To W2**: Thank you for your questions! Your observation is very insightful, and in fact our proposed method can already achieve the trade-off you mentioned.
- First, our method itself provides a sufficiently flexible framework that allows balancing this trade-off by adjusting the parameters $\alpha$ and K as needed. Typically, one can choose $K=+\infty$ for the case where the interactive constraint is inactive and only individual constraint control is considered, and one can set $\alpha=1$ when the interactive constraint is the only concern. The results in Table 2 (Appendix E.2) verify this conclusion.
- Second, as mentioned in Section 2.2 (lines 169-171) of the paper, SAST can be regarded as a special case of our proposed method: when the individual constraint is set to FSR and K is set to $+\infty$, our method reduces to the controlling step of SAST.
- Therefore, no additional tuning parameters are specifically required to achieve this.
We hope this explanation resolves your doubts.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and additional work on experiments.
I am increasing my rating to accept (7). | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Constructing Semantics-Aware Adversarial Examples with a Probabilistic Perspective | Accept (poster) | Summary: This paper tackles the field of adversarial image generation by proposing an unrestricted attack method that can be applied to both targeted and untargeted attacks. The innovative approach considers a probabilistic perspective, treating the victim classifier and geometric constraints as distinct distributions. By drawing adversarial examples from the overlap region, the authors ensure that the semantics of the original image are preserved. The efficacy of this proposed approach is convincingly demonstrated through extensive experiments.
Strengths: 1. I find the probabilistic approach proposed in this paper to be particularly innovative and refreshing. The motivation behind this perspective is clearly articulated, providing a solid foundation for the authors' methodology.
2. I am impressed by the encouraging experimental results presented in this paper. The inclusion of a human annotation experiment is particularly noteworthy, as it adds an important layer of validation to the authors' claims. Moreover, the study's success in handling both transfer attacks and adversarial defense scenarios further underscores the model's robustness and effectiveness.
Weaknesses: While the experimental results of the proposed method show promise, I do believe there is room for improvement. Specifically, I think it would be beneficial for the authors to provide more detailed information regarding the human experiment methodology, such as how the five reviewers for the MNIST experiment were selected. Furthermore, I would suggest that the authors consider conducting a follow-up experiment where human annotators are asked to identify perturbed images in the absence of a reference image for the ImageNet experiment, which is a more realistic scenario in an attack setting.
In addition, I find it intriguing that both NCF and cAdv demonstrated higher success rates in generating adversarial examples compared to the proposed method, as shown in Table 2. This highlights some shortcomings in the proposed approach. While it is expected that NCF would generate images that can be identified as perturbed, I am more surprised that cAdv was able to create perturbed images that are highly similar to the original ones.
Lastly, I think it would be beneficial for the authors to explore targeted attacks on ImageNet, given the success of this approach in previous papers such as "Towards Transferable Targeted Attack". This could provide valuable insights into the robustness and effectiveness of the proposed method.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How were the values of c chosen in Table 2?
2. What are the author's thoughts on the tradeoff between choosing different values of c?
3. How were the human annotators selected?
4. How was the set \tau chosen?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and your recognition of our probabilistic perspective and proposed method. Thank you for indicating the oversights and shortcomings in our submission. Your feedback has effectively improved the quality of this work. We respond to your questions and concerns as follows:
## Response to weaknesses
### Human Experiments
> more detailed information … such as how the five reviewers for the MNIST experiment were selected.
Regarding the selection of human annotators, please refer to our response to Question 3 below. We have included screenshots of the user interfaces for annotators in Appendix E. We apologize for omitting an important detail in our original submission: we use a voting system to determine the final choice from human annotators. In cases of a draw (which can occur in the MNIST dataset), we randomly select from the tied options. We will incorporate this description in the revised version. Thank you for bringing this to our attention.
> … conducting a follow-up experiment …
We appreciate your suggestion for a follow-up experiment on ImageNet without reference images. Given the time required for human experiments, we will add this experiment in the final version of our paper.
We understand your concern regarding the presence of reference images in our experiment. We would like to clarify that this does not compromise our methodology. In our study, annotators are not informed which image is the reference. We realize we inadvertently omitted a crucial detail in Figure 8: the original image and adversarial example are presented in random order, adhering to the A/B test methodology employed in unrestricted adversarial attack papers such as [1] and [2]. We will emphasize this important point in the final version of our paper.
### Comparison with NCF and cAdv
We indeed overlooked emphasizing that our method is optimal in balancing the trade-off between human annotation success rate and attack success rate in the paragraph from lines 258 to 264. In the final version, we will explicitly highlight that, according to Figure 5, our method performs best in this trade-off when appropriate parameters are selected.
We hypothesize that cAdv's lower human annotation success rate may be attributed to the presence of unnatural color spots in the generated images, as illustrated in Figure 4. These artifacts likely make the adversarial examples more noticeable to human annotators.
### Targeted Attacks on ImageNet
We acknowledge that targeted attacks on ImageNet pose greater challenges due to the large number of classes, particularly regarding transferability. Our method does not offer specific advantages for this task. In the related work section, we will expand on this issue, discuss the work you recommended, and suggest it as a potential direction for future research based on our probabilistic perspective.
## Answer to questions
**Q1: How were the values of c chosen in Table 2?**
**Answer:** The parameter c controls the influence of the victim distribution. A larger c results in adversarial samples that are more likely to deceive the victim classifier but may deviate more from the original image's semantics. We initially tested the algorithm on a few images, visually assessing the results. Based on these observations, we selected c values of 5, 10, and 20 as they produced distinct yet reasonable outcomes. We then applied these hyperparameters to attack 1000 ImageNet images, yielding the results shown in Table 2 and Figure 5.
**Q2: What are the author's thoughts on the tradeoff between choosing different values of c?**
**Answer:** Figure 5 illustrates the trade-off between different c values, with the x-axis representing human annotation success rate and the y-axis showing attack success rate. As c increases, adversarial examples become more effective at deceiving the victim classifier but are also more easily identified as adversarial by human annotators. We leave the choice of c to the users of our proposed method, depending on their specific requirements.
**Q3: How were the human annotators selected?**
**Answer:** We conducted human participation experiments through a reputable crowdsourcing company. The five annotators were recruited by this company. Our human experiments passed our institute's ethical review of research, taking into account the crowdsourcing company's qualifications, fair compensation, reasonable workload, and potential risks to workers. Due to the anonymous review process, we cannot share specific details here, but we can provide this information to the Area Chairs if required.
**Q4: How was the set \tau chosen?**
**Answer:** As introduced in Section 4.1, $\mathcal{T}$ represents a set of transformations that we subjectively believe maintain the semantics of the original image. For MNIST, this includes scaling, rotation, and thin-plate spline (TPS) transformations. For ImageNet, we utilize the distribution of the diffusion model after fine-tuning on the original image, which incorporates some implicit natural transformations learned by the model.
If we subjectively believe that appropriate color transformations do not affect semantics, we could incorporate such color transformations into $\mathcal{T}$ when fine-tuning the diffusion model. The primary aim of this work is to propose a probabilistic framework that can embed subjective understanding, rather than to conduct an in-depth exploration of $\mathcal{T}$ selection. We leave this exploratory task for future work.
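As a minimal illustration of a transformation set $\mathcal{T}$ (toy stand-ins only: these are illustrative scale and rotation transforms on a 2-D point cloud, not the paper's actual MNIST pipeline, and the TPS transform is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def scale(pts, s):
    # Uniform scaling of 2-D points.
    return pts * s

def rotate(pts, theta):
    # Rotation of 2-D points by angle theta (radians).
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T

# A toy transformation set T: each entry samples its own parameters
# from a range we subjectively consider semantics-preserving.
T = [
    lambda pts: scale(pts, rng.uniform(0.9, 1.1)),
    lambda pts: rotate(pts, rng.uniform(-0.2, 0.2)),
]

def sample_transform(pts):
    """Apply a transformation drawn uniformly from T."""
    t = T[rng.integers(len(T))]
    return t(pts)

digit = rng.standard_normal((64, 2))  # stand-in for a digit's stroke points
out = sample_transform(digit)
```

For ImageNet, the analogue of drawing from `T` would be sampling from the diffusion model fine-tuned on the original image, which encodes such transformations implicitly.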
**Reference**
[1] Song, Yang, et al. "Constructing unrestricted adversarial examples with generative models." Advances in neural information processing systems 31 (2018).
[2] Bhattad, Anand, et al. "Unrestricted adversarial examples via semantic manipulation." arXiv preprint arXiv:1904.06347 (2019).
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for responding to my comments and concerns. Some of the questions raised by me have been answered, however not all. Furthermore, I see that other reviewers have raised some valid concerns as well. Thus I have decided not to update my scores.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are dedicated to continuously improving the quality of this paper. | Summary: This paper proposes a new type of adversarial attack, which generates adversarial examples by solving a box-constrained non-convex optimization problem. Different from the traditional norm-bounded attacks, this paper focuses on unrestricted adversarial attacks by replacing the geometrical distance measure with a semantically-aware distance measure. Specifically, the authors propose using a Langevin Monte Carlo (LMC) technique to sample adversarial examples from a probabilistic distribution. To preserve semantics, the authors use a learned energy function to guide the generation of adversarial samples. Following this, rejection sampling and refinement techniques are employed to select and improve the quality of the generated samples. Experiments show that this attack can fool classifiers while preserving semantic information better than baseline methods.
Strengths: 1. This paper introduces an interesting perspective on generating adversarial examples, which is significantly different from the traditional norm-bounded adversarial attacks.
2. This paper is theoretically sound and the proposed solution is very intuitive.
3. It is surprising that the proposed attack can achieve a 100% success rate on an adversarially trained model. Adversarial training is often regarded as a SOTA defense method. Therefore, in my view, this work can motivate researchers in this area to design better defense methods.
4. The proposed method can either outperform baseline methods by a notable margin or significantly improve the quality of the generated adversarial examples in terms of preserving semantic meanings.
Weaknesses: 1. Selecting 20 images from each class in the MNIST test set seems too few. I understand that it might be infeasible for human annotators to annotate all adversarial images for the entire MNIST, so I would encourage the authors to report the success rates, excluding human annotation, on the entire MNIST test set. I believe this will make the results more convincing.
2. This paper is missing ablation studies for rejection sampling and sample refinement techniques. Is it necessary to include these techniques? How would it affect the attack success rate if one of them is removed?
3. This paper proposes a new attack method but lacks a discussion on how to defend against it. Although it is not compulsory, I am more willing to see how to defend this attack. Can you provide some intuitions on it?
4. Standard deviations are not reported in this paper. Repeated experiments are encouraged.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the **Weaknesses**.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's thorough evaluation and acknowledgment of our probabilistic approach and proposed methodology. We are grateful for the identification of weaknesses in our submission. The reviewer's insights have significantly enhanced the quality of this work. We address the review points as follows:
**Success rate except for human annotations**
In response to the reviewer's suggestion and to enhance the reliability of our results, we conducted an additional experiment using the MNIST test set, without human annotation. The results are as follows:
| | PGD | ProbCW | stAdv | OURS | OURS (no tech.) | OURS (rej. samp. only) |
|---------------------|------|--------|-------|-------|-----------------|------------------------|
| Human Anno. | N/A | N/A | N/A | N/A | N/A | N/A |
| **White-box** | | | | | | |
| MadryNet Adv | 26.9 | 30.5 | 30.0 | 100.0 | 35.4 | 100.0 |
| **Transferability** | | | | | | |
| MadryNet noAdv | 15.4 | 18.0 | 15.8 | 60.4 | 17.9 | 60.2 |
| Resnet noAdv | 9.8 | 10.0 | 11.9 | 23.2 | 13.3 | 21.2 |
| **Adv. Defence** | | | | | | |
| Resnet Adv | 7.3 | 8.5 | 11.2 | 19.4 | 12.9 | 19.4 |
| Certified Def | 11.2 | 12.1 | 22.3 | 40.6 | 24.7 | 40.9 |
Note that 'OURS (no tech.)' and 'OURS (rej. samp. only)' refer to the ablation study discussed in the subsequent section.
**Ablation study on two techniques**
We have included the ablation study results for the two techniques in the table above.
Rejection sampling, a classical method, aligns naturally with our probabilistic approach to adversarial attacks. By employing rejection sampling, we can generate adversarial examples that achieve a 100% attack success rate in white-box settings. This perfect success rate is possible because we can consistently reject samples that fail to deceive the victim classifier.
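A minimal sketch of this rejection loop (toy stand-ins only: the victim classifier and the proposal step here are illustrative placeholders, not the paper's actual models or LMC sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def victim_predict(x):
    # Toy stand-in for the victim classifier (the real attack queries
    # the adversarially trained model here).
    return int(x.sum() > 0)

def propose_adversarial(x_ori):
    # Toy stand-in for one draw from the adversarial distribution
    # (obtained via Langevin Monte Carlo in the actual method).
    return x_ori + 0.5 * rng.standard_normal(x_ori.shape)

def rejection_sample(x_ori, y_tar, max_tries=1000):
    """Draw proposals until one fools the victim classifier.

    Failed proposals are simply discarded, so every returned sample
    succeeds by construction -- the source of the 100% white-box rate.
    """
    for _ in range(max_tries):
        x_adv = propose_adversarial(x_ori)
        if victim_predict(x_adv) == y_tar:
            return x_adv
    return None  # no success within the budget

x_ori = rng.standard_normal(8)
x_adv = rejection_sample(x_ori, y_tar=1)
```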
On the other hand, sample refinement primarily affects human annotation while having minimal impact on the attack success rate for classifiers.
These two techniques are only used in the MNIST experiments, because generating targeted adversarial samples for an adversarially trained MNIST classifier is relatively hard: as shown in Figure 2(b), the adversarially trained MNIST classifier already has a strong generation ability, meaning that it has memorized the shape of each digit.
**Discussion on defending this attack**
We appreciate the reviewer pointing out this problem. In the final version of the paper, we will include the following discussion:
Adversarial training operates on the principle: 'If I know the form of adversarial examples in advance, I can use them for data augmentation during training.' Thus, the success of adversarial training largely depends on foreknowledge of the attack form. Our method bypasses adversarially trained classifiers because the 'semantic-preserving perturbation' we employ is unforeseen by the classifier designers - they use conventional adversarial examples for training.
Conversely, if designers anticipate attacks from our algorithm, they could incorporate examples generated by our method into their training process - essentially, a new form of adversarial training.
This scenario transforms adversarial attacks and defenses into a game of Rock-Paper-Scissors, where anticipating the type of attack becomes crucial. One might consider training a classifier using all known types of attacks. However, expanding the training data too far from the original distribution typically leads to decreased performance on the original classification task, which is undesirable [1]. We believe that investigating the trade-off between this 'generalized' adversarial training and accuracy on the original task represents a promising avenue for future research.
**Standard deviation**
To calculate the standard deviation in repeated experiments, we need to generate multiple sets of adversarial examples for each method and have them annotated by human annotators. Given the time required for annotation, we commit to providing these results in the final version of the paper.
**Reference**
[1] Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." International conference on machine learning. PMLR, 2019.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their thorough rebuttal. My major concerns have been well-addressed. I am now more confident that this paper should be accepted. I have increased my confidence score from 3 to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback and the increased confidence score. We are pleased that our rebuttal has clarified the major concerns. | Summary: This paper introduces a probabilistic framework for generating adversarial examples, focusing on maintaining the semantic integrity of the original images while implementing substantial pixel-level modifications. Unlike conventional adversarial techniques that rely heavily on minimal geometric perturbations, this approach integrates a semantic understanding into the adversarial example generation process, leveraging energy-based models and diffusion models. The core innovation lies in embedding the semantic interpretation as a probabilistic distribution, which guides the adversarial example generation. This allows for effective deception of classifiers, including those equipped with adversarial defenses, while preserving the semantic content to an extent that remains imperceptible to human observers. Empirical evaluations demonstrate that the proposed method outperforms existing techniques in terms of both effectiveness against defenses and undetectability by humans, establishing a new paradigm for constructing robust and stealthy adversarial attacks.
Strengths: 1. The paper is clear and well-written, effectively highlighting its contributions with accessible explanations of complex ideas.
2. This paper presents a new probabilistic framework for generating adversarial examples that goes beyond traditional norm-bounded methods by integrating semantic distributions. The approach is theoretically robust, with the theoretical analysis providing a solid foundation that supports the model's effectiveness and introduces innovative concepts to the field of adversarial machine learning.
3. The proposed method significantly outperforms baseline methods, particularly in preserving their semantic integrity.
Weaknesses: 1. The assumption in Equation 4 lacks a detailed derivation, leaving it unclear whether $x_{\text{ori}}$ and $y_{\text{tar}}$ need to be independent. Providing a clear derivation and clarifying this assumption would enhance the theoretical rigor of the paper.
2. The training process for the diffusion models is sensitive and requires careful parameter tuning. The paper does not provide enough detail on this sensitivity or potential solutions to mitigate training instability, which impacts the robustness and reproducibility of the method.
3. The paper does not report standard deviations in the performance results. Repeating the experiments is recommended to ensure the reliability and consistency of the findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the aforementioned concerns and questions.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Paper limitations can be found in the above comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's time and valuable feedback, which has significantly contributed to improving our work. We address the identified weaknesses as follows:
### About Equation (4)
Our proposed framework assumes that the adversarial distribution is proportional to the product of the distance distribution and the victim distribution. This assumption separates the distance distribution from the victim distribution, reflecting our understanding that the similarity between objects is not directly related to the classifier being attacked. While this perspective might be debated, we believe it is reasonable within the context of our paper.
Theorem 1 demonstrates that this proposed form of adversarial distribution is consistent with conventional adversarial attacks. In Appendix A, we prove Theorem 1 by starting with the conventional adversarial attack setting (Equation (1)) and deriving the form of the product of $p_\text{dis}$ and $p_\text{vic}$. This derivation can be found above line 423 in our paper.
We appreciate the reviewer's comment and will provide more intuition about separating the distance and victim distributions between lines 71 and 73 to enhance readability.
### About the training process of diffusion models
Please refer to the `README.md` file in the ImageNet folder of our attached anonymous GitHub repository (link provided in the abstract). Our code is based on OpenAI's guided diffusion repository, a mature and widely used diffusion implementation. We fine-tune the weights provided by the guided diffusion repository rather than training a diffusion model from scratch.
The hyperparameters for fine-tuning are specified in our repository. We maintain most of the original training hyperparameters, with two exceptions:
1. Learning rate: 1e-6 (a commonly used rate for fine-tuning)
2. Number of fine-tuning steps: 300 (empirically determined)
We found that after 300 fine-tuning steps on the original image, the fine-tuned diffusion model adequately reflects the original image. These hyperparameters are consistently applied across all images in our evaluation, which we believe is appropriate for our study.
We thank the reviewer for pointing this out. Indeed, merely including this information in the accompanying code is insufficient. In the final version of the paper, we will add an appendix that provides a detailed discussion of the aforementioned parameter selection and fine-tuning process.
### About Repeating the experiments
To calculate the standard deviation in repeated experiments, we need to generate multiple sets of adversarial examples for each method and have them annotated by human annotators. Given the time required for annotation, we commit to providing these results in the final version of the paper. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their recognition and encouragement of the probabilistic perspective and related approaches we proposed. We are also very grateful for the valuable suggestions made by the reviewers. Your insightful suggestions have significantly contributed to the refinement and improvement of this work. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stabilize the Latent Space for Image Autoregressive Modeling: A Unified Perspective | Accept (poster) | Summary: This paper designs a better quantized autoencoder on top of VQGAN. It builds an image autoencoder which is able to both achieve good recognition performance for linear probing, and have a latent space which is suitable for training a generative model. It studies the existing autoencoders from a high-level theoretical perspective and proposes design ideas which are targeted to improve them. The paper claims that the modern autoencoders ignore the fact that they will be utilized for downstream generative modelling tasks and mainly focus on reconstruction. The paper argues that adding recognition losses on top of the encoder would help. To fulfill this desideratum, the model takes DINOv2 features and discretizes them via k-means. Then it trains a translation model into VQ-GAN decoder features. For image generation, it trains an LLM in the discrete token space. For classification, it does linear probing on top of discretized DINOv2 features. As a result, it attains reasonable generative capabilities while being able to keep a latent space suitable for linear probing classification.
Strengths: - In terms of the scores, the paper achieves very good results in the sense of discrimination/generation tradeoff (judging by figure 1).
- It's an interesting finding that one can discretize dino-v2 via K-means and train a strong generative model on top of such tokens.
- The paper studies an important problem of more rigorous understanding of modern autoencoders
Weaknesses: - The paper shows an equivalence between a linear AE and PCA, but it's a well known fact: https://arxiv.org/abs/1804.10253. One can also just google "equivalence between a linear autoencoder and PCA", and find a ton of resources on that.
- "A reconstructive autoencoder does not necessarily establish an advantageous latent space for generative model". That's a very well-known fact in the community (e.g., see Fig 5 in https://arxiv.org/pdf/2312.02116). The paper should not claim this observation as a novel contribution.
- The proposed stability metric is interesting, but it's unclear whether it will correlate with downstream generative models performance
- Proposition 2.4 is extremely vague and seems to be very different from its "rigorous" analogue in the supplementary.
- FID metrics for VQGAN on ImageNet are much higher than in the original paper.
- It's misleading to compare the performance of the developed model against those trained from scratch, since the developed model starts from strong pre-trained models.
- For image generation, the paper shows just 16 random samples, which is extremely little to get a good understanding of the model. It's better to show much more (e.g. stylegan2 provides 100k samples for FFHQ: https://github.com/NVlabs/stylegan2).
- Why DiT-XL/2 is included for comparison but not its guided variants? Why more recent papers are not included for comparison? (e.g., EDMv2).
- The logical transitions in the paper are unclear, e.g., it's unclear why the proposed training improves D^G, it's unclear, why it follows from the propositions that we should improve the stability of the latent space (where stability is also not defined well), etc.
Technical Quality: 1
Clarity: 2
Questions for Authors: - The transition "Drawing from these theoretical insights, we introduce a metric to assess the stability of the latent space induced by different encoder models" on L196 is extremely unclear. How exactly do theoretical results suggest that one should focus on stability of the latent space? Why would LDA lead to a better generative model? Why "separating" distributions class-wise would lead to a better generative model? What exactly do you mathematically define "separation of distributions" for an encoder?
- Is linear probing done on top of discretized or non-discretized DINOv2 features?
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: One limitation that is not explored is whether the model is upper-bounded by the performance of the underlying SSL classifier. In other words, what would be the greater source of improved performance in the future — improving SSL or decoder?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your constructive feedback and would like to clarify several points to enhance the understanding of our contributions.
## 1. Regarding the analysis of AE and PCA and the claim of the observation
We want to emphasize that the analysis of AE and PCA is not the core focus of our theoretical framework. We mention it to provide intuitive insights into the latent space induced by VQ autoencoder and our proposed discriminative tokenizer.
While we acknowledge that the distinction between reconstruction and generation has been explored in prior works, our primary contribution lies in offering a theoretically unified perspective on this issue and proposing a novel method to disentangle the encoder and decoder processes for a more stable latent space, which is neglected in previous work.
## 2. Regarding the stability
To further evaluate the stability of latent space, we analyze the Negative Log-Likelihood loss for AR models associated with different tokenizers. Our results highlight the nondeterministic nature of the latent space, showing an NLL loss of 3.347 for the discriminative tokenizer versus 7.104 for the VQ tokenizer, where a lower NLL loss indicates a more stable latent space, underscoring the efficacy of our approach. The resulting FID score also demonstrates that stable latent space can improve the performance of AR models.
## 3. The FID score of VQGAN
We adopt the VQ tokenizer used in the MaskGIT model, which is an improved version and performs better than the one used in the original VQGAN. We reproduced the **class-unconditional** results ourselves, yielding an FID score of 24.38, which is reasonably within range and slightly worse than MaskGIT's score of 20.7. Moreover, we checked the experiments in the original VQGAN paper, and it only presents **class-conditional** image generation results (FID 15.78), which come from a different experimental setting and should not be directly compared. We will clarify these experimental settings in the revised paper.
## 4. Regarding the pretrained SSL encoder in our method
We recognize the dataset difference used in SSL encoders and have included an experimental comparison of various SSL encoders in Fig 3(a). We adopt three SSL models and **all** of them are trained **only** on ImageNet, which is the same as the VQ model. As detailed in section 4.5, the **small** size autoregressive model trained with tokenizer induced by the iBOT SSL model, which shares a similar learning objective with DINOv2 (patch level discriminative objective), **already achieves superior** performance compared to a much larger model trained with VQ tokenizer. The comparison demonstrates the efficacy of the discriminative tokenizer.
We adopt a more powerful SSL encoder DINOv2 to demonstrate that the discriminative tokenizer with more powerful SSL representations can boost the performance of auto-regressive generative models by a large margin, which is not observed in existing AR + VQ models, highlighting the main limitation of image auto-regressive models is the encoder used for latent space construction.
## 5. Sample Cases
The FID and IS scores are the established metrics for image-generative models. The cases shown in the paper are just for reference. We can provide more samples in the revised pdf.
## 6. Regarding the baselines
There exist too many diffusion variants and most of them are based on the LDM or DiT architecture, therefore we opt to focus on foundational works, such as LDM and DiT, as baselines. Furthermore, our proposed method is built upon a different theoretical framework, aimed primarily at surpassing widely accepted baseline models.
## 7. The logical transitions and question on stability
In section 2.1, we first conduct a theoretical analysis that demonstrates the necessity of considering both $\mathcal{D}^{H}$ and $\mathcal{D}^{G}$ simultaneously in the latent space for generative models. Current VQ-based latent generative models represent one pathway for constructing latent space by optimizing $\mathcal{D}^{H}$, an autoencoder objective. In contrast, our approach focuses on optimizing the latent space with $\mathcal{D}^{G}$ to learn lower-dimensional separable features for generative models, which is an SSL objective. Following this, in section 2.2, we explore the differences in latent space induced by various learning objectives and find that the success of combining iterative generative models with VQ latent space is attributable to the stabilization ability of the iterative decoding strategy. In contrast, our proposed discriminative tokenizer directly stabilizes the latent space, which can naturally benefit auto-regressive models. Therefore, the above analysis and findings provide the foundation for the proposed method: SSL + Kmeans.
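To make the "SSL + K-means" recipe concrete, a minimal toy sketch is given below (illustrative only: the random features stand in for frozen SSL patch features such as DINOv2 outputs, and all names and dimensions are ours, not the paper's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(feats, k, iters=20):
    """Plain Lloyd's k-means on (N, D) features; returns (k, D) centroids."""
    centroids = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each feature to its nearest centroid.
        d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        # Recompute centroids (keep the old one if a cluster empties).
        for j in range(k):
            if (assign == j).any():
                centroids[j] = feats[assign == j].mean(axis=0)
    return centroids

def tokenize(patch_feats, centroids):
    """Map each SSL patch feature to the id of its nearest centroid."""
    d = np.linalg.norm(patch_feats[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy stand-in for frozen SSL patch features of one image.
feats = rng.standard_normal((256, 16))
centroids = kmeans(feats, k=8)
tokens = tokenize(feats, centroids)
```

The resulting discrete token ids can then serve directly as the vocabulary for a standard autoregressive model, with no reconstruction objective involved in building the latent space.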
## 8. Question on Linear probing
The linear probing is done on the hidden states of auto-regressive models, which is the same as the evaluation protocol of iGPT and VIM.
## 9. Question on limitation
As discussed in the paper, the latent generative models are affected by both the encoder and the decoder. Our strategy to disentangle these components aims to strengthen the encoder's capabilities through the integration of unsupervised pre-trained SSL methods. We use the VQGAN model as the decoder to compare with existing VQ-based models. Although improvements to both the encoder and decoder could yield further benefits, our findings indicate that the encoder's ability is currently the primary limiting factor for autoregressive models, since the rFID score of VQGAN is already low enough. Improvements using SSL encoders and K-means considerably elevate performance, as demonstrated in Table 2, Fig 2(a), and Fig 3(a).
**We kindly ask you to reconsider the evaluation of our work in light of these clarifications. We believe that our research contributes significantly to the understanding and development of image tokenizer and latent autoregressive models.**
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. There are still some parts which I do not understand:
- What exactly is "stability of latent space"? How do you define it?
- Modern autoencoders are typically trained with a VAE objective, which is a generative objective and takes care of both $\mathcal{D}^H$ and $\mathcal{D}^G$. Or am I missing something?
Several of my concerns have been resolved: regarding the SSL pretraining, regarding the small amount of qualitative samples (please, include more in the earliest revision), regarding the VQGAN baseline, and more or less regarding the guided DiT scores. However, I'm still concerned that the paper is written in a fashion like it's the one proposing the equivalence between PCA and linear AEs and also the overall writing quality could be improved.
---
Reply to Comment 1.1.1:
Comment: Thank you for your comments. We are happy that we were able to clarify some points and thank you for acknowledging it. We will address each question below
## 1. Definition of Stability in Latent Space
The stability of latent space can be understood from two perspectives:
1. **Stability of Latent Space for Discrete Tokenizers**: This approach, utilized in our paper, can be mathematically expressed as:
$e = \text{Enc}(x + \epsilon), \quad q = \text{Quantizer}(e)$
where $\epsilon \sim \mathcal{N}(0, \sigma)$ and $\sigma$ can be adjusted to vary the signal-to-noise ratio. Here, the Quantizer may refer to the VQ module or K-means module in our method. The stability metrics can be quantified using $\delta \cos(e)$ and $\delta \mathbb{I}(q) $, which correspond to token cosine similarity and token change in Table 1. Our results align with the intuition that SSL models exhibit greater stability against noise, as they capture high-level semantic features of images, unlike traditional autoencoders that focus on low-level appearance features.
2. **Stability of Latent Space for Autoregressive Models**: As reviewer Vc7B suggested, we evaluate the stability of latent space for autoregressive models (learnability) by comparing the negative log-likelihood (NLL) loss of models using different tokenizers, thereby demonstrating the nondeterministic nature of the latent space. We monitor the NLL loss values for both the discriminative tokenizer and the VQ tokenizer, both using a vocabulary size of 1024 and the same model size: 3.347 for the discriminative tokenizer versus 7.104 for the VQ tokenizer. A lower NLL loss indicates a more stable latent space for autoregressive models. We will include the loss curve comparison in the revised version.
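A minimal sketch of the noise-perturbation probe from the first point above (toy stand-ins only: a random linear encoder and random centroids replace the real encoder and quantizer, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Toy linear encoder standing in for Enc.
    return x @ W

def quantize(e, centroids):
    # Nearest-centroid quantizer (VQ codebook lookup or K-means assignment).
    d = np.linalg.norm(e[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

def stability_probe(x, W, centroids, sigma=0.1):
    """Return (mean token cosine similarity, token change rate) under noise."""
    e_clean = encode(x, W)
    e_noisy = encode(x + sigma * rng.standard_normal(x.shape), W)
    cos = (e_clean * e_noisy).sum(-1) / (
        np.linalg.norm(e_clean, axis=-1) * np.linalg.norm(e_noisy, axis=-1))
    changed = quantize(e_clean, centroids) != quantize(e_noisy, centroids)
    return cos.mean(), changed.mean()

x = rng.standard_normal((128, 32))          # toy batch of inputs
W = rng.standard_normal((32, 8))            # toy encoder weights
centroids = rng.standard_normal((16, 8))    # toy codebook
cos_sim, change_rate = stability_probe(x, W, centroids)
```

A higher cosine similarity and a lower token change rate indicate a latent space that is more stable against input noise.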
## 2. VAE Objective in the Framework of $\mathcal{D}^H$ and $\mathcal{D}^G$
In **Remark 2.1** in the paper, we discussed that "the latent space $f(X)$ of the VAE is modeled as a tractable Gaussian distribution, and $D^{\mathcal{G}}(P_{f(X)}, P_{X})$ can be zero by setting the generative model as a Gaussian sampler. If the decoder is sufficiently strong and generates samples independently of the encoder output, $D^{\mathcal{H}}(P_{f(X)}, P_{X})$ can also be zero." While the VAEs can generate images by sampling from the prior Gaussian distribution, it significantly lags behind the learned generative model. Consequently, the VAE has largely evolved into a compression model, losing its generation efficacy, particularly in modern latent diffusion technologies.
## 3. Clarification on PCA and AEs in Our Analysis Framework
As emphasized in our rebuttal, the discussion of PCA and AEs serves merely for intuitive understanding of latent space, aiming to make our work more accessible to readers. This part is **NOT** a core contribution of our theoretical framework.
To summarize our contributions succinctly:
1. A unified perspective on the relationship between latent space and generative models.
2. A novel method to stabilize latent space by disentangling the training processes of the encoder and decoder, leading to a simple yet effective tokenizer.
3. Remarkable performance of our proposed tokenizer in image autoregressive modeling.
We appreciate the response and the discussion and hope we provided information that is helpful to clarifying the points made. | Summary: Latent-based image generative models, such as LDMs and MIMs, have achieved success, but autoregressive models lag behind in image generation. Our research introduces a unified perspective on latent space stability and proposes a discrete image tokenizer, DiGIT, that significantly improves autoregressive image modeling, outperforming LDMs and benefiting from scaling similar to GPT in NLP.
Strengths: - The results beat some baseline models, though under a specific (and somewhat confusing) experimental setting.
- The topic of latent space property is worth investigating.
Weaknesses: The paper has several weaknesses:
1. **Factual Errors**:
1.1. The cited MIM models, such as MaskGIT and MAGE, cannot revise previously predicted tokens. This contradicts the claim in line 53 that "iterative models like LDMs and MIMs can correct errors." I refer the authors to their papers for more details.
1.2. In lines 72-73, the authors state that this work provides "the first evidence that image autoregressive generative models behave analogously to GPT." However, the Parti[1] paper has already demonstrated that image autoregressive models have similar scalability to GPT and successfully scaled the model to 20B. The authors have not cited this work.
2. The writing is poor and lacks rigor. For example, the discussion on the so-called "common misconception" in line 41 is not well-supported. What exactly is meant by the "optimal latent space for reconstruction"? How many studies hold this view? There are no citations provided.
3. The quantitative comparisons are also peculiar. The authors cite many paper results without using CFG, while CFG has become a de-facto practice for augmenting generative models. Why not adopt CFG and perform more apples-to-apples comparisons to other SOTA methods with CFG?
4. Presenting two tables (Table 2 lower and Table 3) for image generation performance is confusing. Why not consolidate the results into a single, clear table?
[1] Yu, Jiahui, et al. "Scaling autoregressive models for content-rich text-to-image generation." arXiv preprint arXiv:2206.10789 2.3 (2022): 5.
Technical Quality: 1
Clarity: 1
Questions for Authors: See above.
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: The writing and presentation of this paper seem too rushed and lack rigor. I recommend the authors refine and polish this paper. The current draft may not be qualified for publication at NeurIPS.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback.
## 1. Factual Clarification
We respectfully disagree with the points raised and would like to clarify our positions.
**1.1** Regarding the MIM models like MaskGIT and MAGE, as well as diffusion models, it is important to note that they can indeed revise the predicted tokens from previous iterations. As outlined in Section 2.2, the decoding mechanism of iterative generative models can be expressed as:
\begin{equation}
p(x^T)=\prod^{T}_{i=1} p(x^i|x^{i-1}),
\end{equation}
where $x^{i}$ represents the predicted tokens (the entire image) in the i-th iteration. In MIM models, tokens with low probability (from the softmax of logits) are replaced with UNK tokens and re-generated in subsequent iterations. For diffusion models, the core mechanism involves iterative denoising, which is an established concept, where predicted tokens in one iteration are further modified and re-generated in the next iteration. Thus, we believe our statement in the paper is accurate and we encourage the reviewer to reconsider the interpretation in light of this clarification.
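For concreteness, the re-masking mechanism described above can be sketched as follows. This is a minimal illustration with hypothetical names; `predict_logits` stands in for the MIM model, and the confidence-based keep schedule follows the MaskGIT-style recipe of keeping progressively more tokens per iteration while re-masking (UNK) and re-generating the rest:

```python
import numpy as np

def iterative_decode(predict_logits, num_tokens=16, num_iters=4, seed=0):
    """Sketch of MIM iterative decoding: low-confidence tokens are
    re-masked each iteration, so earlier predictions can be revised."""
    rng = np.random.default_rng(seed)
    UNK = -1
    tokens = np.full(num_tokens, UNK)                # start fully masked
    for i in range(1, num_iters + 1):
        logits = predict_logits(tokens, rng)         # model call (stubbed)
        probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
        sampled = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)
        # keep only the top-(i/T) most confident tokens; re-mask the rest
        k = int(np.ceil(num_tokens * i / num_iters))
        keep = np.argsort(-conf)[:k]
        tokens = np.full(num_tokens, UNK)
        tokens[keep] = sampled[keep]
    return tokens
```

In the final iteration all positions are kept, so the output contains no UNK tokens; in every earlier iteration, previously committed tokens may be dropped and re-generated, which is exactly the revision ability discussed above.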
**1.2** Regarding the Parti model, it is important to emphasize that it is a VQ-based text-to-image generation model that benefits from extensive training data and a large model size. However, it would be misleading to assert that Parti demonstrates success in image auto-regressive generative models. When we talk about GPT models, we are usually interested in the learning efficiency from data rather than merely enlarging dataset size and model size. By removing the influence of dataset size and model size, we note that the core architecture of Parti, which is a VQ-based AR model, does not perform as well compared to other generative models, limiting its scalability [2]. Our research specifically targets autoregressive image generative models operating without text guidance, evaluated on the ImageNet benchmark under **rigorous** experimental settings. In addition, methods published after Parti, such as MagVITv2 [1] and VAR [2], also claim to be the "first" language models outperforming diffusion models, yet they do not qualify as genuine autoregressive models. In contrast, our proposed DiGIT model is based on a pure GPT architecture, without any modifications to the decoding strategy. Therefore, we do not agree with the assertion that "the Parti paper has already demonstrated that image autoregressive models have similar scalability to GPT".
## 2. The statement of "common misconception"
We cited relevant work [1] in Line 43 to support the statement regarding the "common misconception." This is a well-recognized aspect within the research community, as pointed out by reviewer K39P as well. The "optimal latent space for reconstruction" refers to the latent space achieved by an autoencoder model that can yield the lowest rFID, which is a straightforward statement that does not require further elaboration.
## 3. The CFG in the experiment
We choose not to employ CFG as a default method due to its tendency to sacrifice diversity in generated images. Instead, we compare all models using class labels as conditions. It is worth noting that autoregressive models can benefit a lot from CFG [3] as well. However, autoregressive models are renowned for their prompt engineering capabilities and our philosophy is to revive the GPT model with as few modifications as possible. Therefore, we intentionally do not adopt CFG as the default method for all the models in our experiments.
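For reference, CFG for autoregressive decoding amounts to a simple interpolation of logits at each sampling step; a minimal sketch of the commonly used formulation (function name hypothetical):

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale=1.5):
    """Classifier-free guidance: extrapolate from the unconditional logits
    toward the conditional ones. scale 1.0 recovers plain conditional
    sampling; larger scales trade diversity for fidelity."""
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
```

The diversity loss mentioned above comes from the extrapolation with scales greater than 1.0, which concentrates probability mass on the most condition-consistent tokens.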
## 4. The table in the paper
We attempted to consolidate the tables in the submission version but exceeded the page limit. In the camera-ready version, one additional page will be allowed and we could improve the table presentation if accepted.
**After addressing these potential misunderstandings of the paper, we kindly request a reevaluation of our paper.**
[1] Lijun Yu, José Lezama, Nitesh B. Gundavarapu, Luca Versari, Kihyuk Sohn, David C. Minnen, Yong Cheng, Agrim Gupta, Xiuye Gu, Alexander G. Hauptmann, Boqing Gong, Ming-Hsuan Yang, Irfan Essa, David A. Ross, and Lu Jiang. Language model beats diffusion - tokenizer is key to visual generation. ArXiv, abs/2310.05737, 2023.
[2] Keyu Tian, Yi Jiang, Zehuan Yuan, Bingyue Peng, and Liwei Wang. Visual autoregressive modeling: Scalable image generation via next-scale prediction. 2024.
[3] Sun, Peize and Jiang, Yi and Chen, Shoufa and Zhang, Shilong and Peng, Bingyue and Luo, Ping and Yuan, Zehuan. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation. arXiv:2406.06525
---
Rebuttal 2:
Title: Rebuttal Review Required for Accurate Assessment
Comment: Dear Reviewer Sn2s,
I hope this message finds you well. As the discussion period is ending soon, I am writing to emphasize the importance of your review of our submission. Your score is significantly lower than those of the other three reviewers, and we believe this discrepancy may indicate a misunderstanding or oversight.
We have addressed all the concerns in our detailed rebuttal and would appreciate your prompt attention to it. A thorough reassessment is crucial to ensure a fair evaluation.
Your expertise is highly valued, and we trust that a reconsidered review will reflect the true merit of our work.
Thank you for your immediate attention to this matter.
Best regards, Authors | Summary: This paper tries to understand why latent autoregressive image models perform worse than latent diffusion models. The key insight is that existing tokenizers are trained primarily with the reconstruction objective, whose latent space is unstable and thus may not be easy to model autoregressively. To solve this issue, the authors propose first learning a stable latent space, which autoregressive models can model easily, and then learning to reconstruct pixels from this latent space. Experimental results show that this modification enables latent autoregressive image models to match latent diffusion models' performance in terms of image understanding and image generation.
Strengths: 1. The paper proposed a new perspective—latent space stability—on understanding latent autoregressive image models, which was neglected in previous works. I think this explanation is intuitive, since a fixed-depth autoregressive model may not be able to model very noisy distributions (e.g., language data have high regularity).
2. The proposed solution is straightforward -- just let image features with similar semantics share the same token.
3. The experiments are comprehensive and interesting. Both image understanding and image generation are evaluated; improvements over previous latent autoregressive models are significant. The ablation study also makes sense to me.
Weaknesses: 1. I think there is a tension between how stable the latent space is and how easily we can reconstruct the latent codes to pixels. The impact of the proposed method on reconstruction is not elaborated in this paper. For example, if we only care about reconstruction, how badly does the proposed method perform? This matters greatly if we are modeling high-resolution images and care about the visual details.
2. The theoretical analysis and the proposed algorithm seem loosely connected to me -- I don't see the proposed algorithm as a direct result of the theoretical analysis. The stability analysis is more straightforward, though.
Technical Quality: 3
Clarity: 3
Questions for Authors: How negatively does the proposed method impact reconstruction?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback.
## 1. Performance of the Proposed Method on Image Reconstruction
We conduct an experiment to assess the reconstruction performance of the proposed discriminative tokenizer. We use the golden tokens obtained from the discriminative tokenizer to reconstruct the images and calculate the rFID score, which yields a result of 1.92. In comparison, the rFID score for the corresponding VQ tokenizer is 1.67. This indicates that the impact of our proposed tokenizer on reconstruction quality is minimal. We appreciate your suggestions and will include these reconstruction results in Table 2 of the revised paper.
## 2. Regarding the Stability of the Proposed Method
(1) Stability of Latent Space Induced by Tokenizers
We quantitatively measure the stability of the latent space in Table 1, incorporating metrics such as the rate of token changes and cosine similarity when varying levels of noise are introduced. The results align well with intuition: SSL models demonstrate greater stability against noise, as they learn high-level semantic features of images rather than low-level appearance features as in traditional autoencoders.
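A minimal sketch of the two stability metrics mentioned above (rate of changed tokens and cosine similarity under perturbation); `tokenize` and the feature arrays are hypothetical stand-ins for the tokenizer and encoder outputs:

```python
import numpy as np

def stability_metrics(tokenize, features, noise_std=0.1, seed=0):
    """Perturb (N, D) latent features with Gaussian noise and report:
    (1) the fraction of discrete tokens that change, and
    (2) the mean cosine similarity between clean and noisy features."""
    rng = np.random.default_rng(seed)
    noisy = features + rng.normal(scale=noise_std, size=features.shape)
    t_clean, t_noisy = tokenize(features), tokenize(noisy)
    change_rate = float(np.mean(t_clean != t_noisy))
    cos = np.sum(features * noisy, axis=1) / (
        np.linalg.norm(features, axis=1) * np.linalg.norm(noisy, axis=1))
    return change_rate, float(cos.mean())
```

A more stable latent space yields a lower token-change rate and higher cosine similarity at the same noise level.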
(2) Stability of Autoregressive Models
To evaluate the stability of autoregressive models, we compare the negative log-likelihood (NLL) loss of models using different tokenizers, thereby demonstrating the nondeterministic nature of the latent space. We monitor the NLL loss values for both the discriminative tokenizer and VQ tokenizer, both using a vocabulary size of 1024 and the same model size: 3.347 for the discriminative tokenizer versus 7.104 for the VQ tokenizer. A lower NLL loss indicates a more stable latent space for autoregressive models. We will include the loss curve comparison in the revised version.
## 3. Connection Between the Theoretical Analysis and the Proposed Algorithm
In section 2.1, we first conduct a theoretical analysis that demonstrates the necessity of considering both $\mathcal{D}^{H}$ and $\mathcal{D}^{G}$ simultaneously in the latent space for generative models. Current VQ-based latent generative models represent one pathway for constructing latent space by optimizing $\mathcal{D}^{H}$, an autoencoder objective. In contrast, our approach focuses on optimizing the latent space with $\mathcal{D}^{G}$, which is a SSL objective. Following this, in section 2.2, we explore the differences in latent space induced by various learning objectives and find that the success of combining iterative generative models with VQ latent space is attributable to the stabilization ability of the iterative decoding strategy. In contrast, our proposed discriminative tokenizer directly stabilizes the latent space, which can naturally benefit auto-regressive models. Therefore, the above analysis and findings provide the foundation for the proposed method: SSL + Kmeans.
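The "SSL + Kmeans" recipe above can be sketched with plain k-means over SSL patch features; this is a toy illustration with hypothetical names (random vectors stand in for DINOv2 features), not the paper's exact implementation:

```python
import numpy as np

def kmeans_fit(features, k, iters=10, seed=0):
    """Plain k-means on (N, D) SSL features; returns (k, D) centroids."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        d = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = features[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def tokenize(patch_features, centroids):
    """Discrete token = index of the nearest centroid per patch feature."""
    d = ((patch_features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

Because patches with similar semantics map to the same centroid, the resulting discrete codes inherit the noise stability of the SSL feature space.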
---
Rebuttal 2:
Title: Follow-up on Our Rebuttal Submission
Comment: Dear Reviewer eGbB,
I hope this message finds you well. We are grateful for your valuable feedback on our submission and are pleased to see your positive score. In our responses, we have addressed the points you raised in detail.
As the discussion period is coming to a close soon, we kindly ask if you could review our responses at your earliest convenience. We are eager to know if our explanations have alleviated your concerns. If there are still areas needing improvement, your insights would be greatly appreciated and instrumental in enhancing our work.
Thank you once again for your thoughtful review and support.
Warm regards, Authors | Summary: The paper proposes to disentangle the encoder and decoder learning for the image tokenizer, which will ultimately be used to provide the latent space of an AR generative model. In particular, an SSL model such as DinoV2 is used as the encoder (plus k-means clustering).
Strengths: 1. The idea of disentangling the encoder and decoder learning for the image tokenizer is interesting and novel.
2. Strong empirical results can be obtained from the method. The fact that, by changing only the tokenizer and training the same AR model, FID can be halved is really impressive.
Weaknesses: 1. The motivation for adopting a self-supervised model as the encoder/tokenizer is not very clear. Since the method is simple (DinoV2 + k-means), the motivation for doing so is the most critical part of the paper. However, I don't think this is presented very clearly and explicitly. Large improvements to the presentation are needed.
2. The term "stability" or "stabilize" is a bit confusing. An explicit explanation is needed. When is a latent space not stable? If it means the space is hard for an AR model to learn, a better term such as "learnability" may be preferable.
3. While the argument of "iterative decoding process can stabilize the sampling process by correcting the data falling in the low-density overlap between distributions" makes sense, it still requires justification and evidence, not just conceptual analysis.
4. If you use SSL model as encoder, you need to train a decoder. Not much explicit detail is presented for this part.
5. The metric is not very clearly defined. What's the name of the metric? What is the definition? How to compute it? All these information should be highlighted.
Overall, the presentation and organization are not very clear; a major rewrite is needed.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. In section 2.2, you mentioned that a drawback of autoregressive image modeling is that each iteration only generates a patch, so errors in previously generated patches will accumulate. How is this related to your method? IIUC, your tokenizer is still patch-based, so it does not resolve the issue mentioned here.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful consideration of the paper and your constructive feedback.
## 1. Motivation for Adopting a Self-Supervised Model as Encoder/Tokenizer
In section 2.1, we first conduct a theoretical analysis that demonstrates the necessity of considering both $\mathcal{D}^{H}$ and $\mathcal{D}^{G}$ simultaneously in the latent space for generative models. Current VQ-based latent generative models represent one pathway for constructing latent space by optimizing $\mathcal{D}^{H}$, an autoencoder objective. In contrast, our approach focuses on optimizing the latent space with $\mathcal{D}^{G}$, which is a SSL objective. Following this, in section 2.2, we explore the differences in latent space induced by various learning objectives and find that the success of combining iterative generative models with VQ latent space is attributable to the stabilization ability of the iterative decoding strategy. In contrast, our proposed discriminative tokenizer directly stabilizes the latent space, which can naturally benefit auto-regressive models. Therefore, the above analysis and findings provide the foundation for the proposed method: SSL + Kmeans.
## 2. Clarification of "Stability" and "Learnability" of the Latent Space
We measure the stability of the latent space quantitatively in Table 1, incorporating metrics such as the rate of changed tokens and cosine similarity when varying levels of noise are introduced. It intuitively validates that SSL models exhibit greater noise stability because they learn high-level semantic features of images, rather than low-level appearance features as in traditional autoencoders.
We plan to include the negative log-likelihood (NLL) loss as a measure of learnability in our revised version (3.347 for the discriminative tokenizer compared to 7.104 for the VQ tokenizer, using the same vocabulary size and model size). A lower NLL loss indicates a more learnable latent space.
## 3. Addressing the Argument Regarding Iterative Decoding
The iterative decoding strategy serves as the foundation for iterative generative models, including Masked Image Modeling (MIM) and diffusion models. For MIM [1], low-density tokens are revised during the iterative decoding process. Similarly, in diffusion models [2], low-density data undergo perturbation through Gaussian noise and are subsequently denoised in an iterative manner.
[1] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T. Freeman. MaskGIT: Masked Generative Image Transformer. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2022.
[2] Y. Song, S. Ermon. Generative Modeling by Estimating Gradients of the Data Distribution. In Advances in Neural Information Processing Systems, pp. 11895--11907. 2019.
## 4. The Decoder for Pixel Rendering
The decoder setup is elaborated upon in Section 4.3, where we detail the additional training of a decoder for pixel rendering. To showcase the generalization and robustness of our discriminative tokenizer, we employ both autoregressive models (VQGAN) and MIMs (MaskGIT) as decoders. The experimental results presented in Tables 2 and 3 indicate that the performance gap between AR and MIM decoders is minimal, demonstrating the efficacy of the discriminative tokenizer.
## 5. Clarification of the Metric
The metrics used to assess stabilization in Table 1 are the rate of token changes and the cosine similarity between perturbed and original tokens. We will ensure that these metrics are presented more clearly in the revised version.
## 6. Connection Between Our Method and Error Accumulation in Auto-Regressive Image Models
In Section 2.2, we address the potential for error accumulation due to unstable VQ tokenizers across all latent generative models, including diffusion models, MIMs, and AR models. While iterative models like diffusion models and MIMs can rectify errors through their iterative decoding strategies, autoregressive models do not possess this capability. Therefore, we propose a direct approach to stabilize the latent space for autoregressive models, effectively reducing errors caused by unstable tokenizers in the decoding process.
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks the authors for the response. I will keep my judgement as my concerns are relatively minor and the authors did a good job clarifying.
---
Reply to Comment 1.1.1:
Comment: Thank you for the time and effort you have dedicated to reviewing our paper. Your thorough review and insightful suggestions have significantly contributed to improving the quality of our work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful comments. We appreciate that reviewers highlight the novelty and effectiveness of our method, e.g. "The idea is interesting and novel... Strong empirical results ... really impressive" (Vc7B), "A new perspective...which was neglected in previous works ... explanation is intuitive and the proposed solution is straightforward ... The experiments are comprehensive and interesting ... improvements are significant...ablation study makes sense" (eGbB), "The paper achieves very good results ... It's an interesting finding ... The paper studies an important problem" (K39P).
The main topics the reviewers commented on were the explanation of metrics on stabilization and the connection between our theoretical analysis and the proposed method. We respond to each reviewer individually about these topics and others.
We believe the proposed discriminative tokenizer is a non-trivial leap that indeed explores a new perspective of latent space for image auto-regressive models. The proposed method shows the promising potential of large-scale pre-training with next token prediction akin to Large Language Models (LLMs) in the visual domain.
RoPINN: Region Optimized Physics-Informed Neural Networks | Accept (poster) | Summary: The preprint proposes to replace the collocation-based PINN loss by a sum of local continuous integrals over regions around the collocation points. These continuous integrals are then again discretized using Monte Carlo integration with a single quadrature point. The authors furthermore propose to adapt the region size during training using gradient statistics.
Strengths: The authors report good empirical performance on a number of benchmark problems.
Weaknesses: The introduction of continuous integrals over regions around the collocation points that are subsequently discretized by Monte Carlo integration again seems tautological. After all, the loss function in PINNs is already a Monte Carlo discretization of a continuous integral (over the whole computational domain). Furthermore, the analysis that the authors present for the modified loss in equation (5) should not be carried out with the continuous integral over the regions $\Omega_r$ but with its Monte Carlo approximation. Otherwise, the comparison to the discretized PINN loss is unfair.
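For concreteness, the whole-domain Monte Carlo discretization referred to here can be sketched as follows (a toy 1D illustration with hypothetical names; `residual_fn` stands in for the PDE residual of the network):

```python
import numpy as np

def pinn_residual_loss(residual_fn, domain=(0.0, 1.0), n_points=128, seed=0):
    """Monte Carlo estimate of the continuous residual integral
    (1/|Omega|) * integral over Omega of |r(x)|^2 dx,
    using uniformly sampled collocation points."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(domain[0], domain[1], size=n_points)
    return np.mean(residual_fn(x) ** 2)
```

For example, with `residual_fn = lambda x: x` the estimate approaches 1/3, the exact value of the integral of x^2 over [0, 1].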
Technical Quality: 2
Clarity: 2
Questions for Authors: I struggle to see why the proposed method should work theoretically. I acknowledge the adaptive nature of the regions for the sampling but struggle to see how this might help to accumulate integration points in regions of, e.g., high residual. A visualization that this, or something along these lines that explains why the proposed method works well, happens would be helpful. Furthermore, I am not convinced that the theoretical analysis presented is meaningful, as it analyzes the integrals over the region as continuous objects. Please comment or clarify misconception.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: See questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer zNBp
Many thanks to Reviewer zNBp for providing an insightful review and valuable suggestions.
> **Clarify misconception.**
>
> "After all, the loss function in PINNs is already a Monte Carlo discretization over the whole computational domain."
Firstly, we want to highlight that our theorem **considers the training process (see proof in $\underline{\text{Appendix A}}$)**, which is distinct from previous quadrature-based theorems [Siddhartha Mishra et al., IMAJNA 2023] that only focus on the integral approximation and do not consider how collocation points change during training.
**Thus, it is clearly one-sided to think about our method only from the integral approximation view.** Under the model training context, the reviewer described "whole-domain-discretization PINN loss" corresponds to "(Global sampling) Point optimization" paradigm (defined below), which is different from the "point optimization" in our paper.
| | Paradigms | Implementation |
| - | - | - |
| Our paper | Region optimization | sample within regions around collocation points and **keep changing during training.** |
| Our paper | (Fixed) Point optimization | sample over the whole domain **at the beginning, but fixed during training** (canonical PINN [33 of our paper]) |
| **Reviewer mentioned** | (Global sampling) Point optimization | sample over the whole domain and **keep changing during training** (RAR [46 of our paper]) |
> **Q1:** "The introduction of continuous integrals over regions around the collocation points that are subsequently discretized again seems tautological."
>
> "The analysis should not be carried out with the continuous integral over the regions Ω𝑟 but with its Monte Carlo approximation. The comparison to the discretized PINN loss is unfair."
**(1) Requested theorem: region optimization with MC approximation.**
As requested, we prove the generalization bound with MC approximation in $\underline{\text{Figure 3 of Global Response PDF}}$, which is extended from $\underline{\text{Theorem 3.5 of main text}}$ by incorporating gradient estimation error.
For clarity, we present the convex version here. See the $\underline{\text{PDF}}$ for the non-convex version.
$$
\text{Generalization\ error}\leq ((1-|\Omega_r|/|\Omega|)L+\mathcal{E}_{r, \mathrm{grad}})\frac{2L}{|\mathcal{S}|}\sum\alpha_t
$$
$\mathcal{E}_{r, \mathrm{grad}}$ denotes the gradient estimation error caused by MC approximation.
We can find that the above two point paradigms can be unified in our region optimization formalization:
- **(Fixed) point optimization** corresponds to an extreme case: $|\Omega_r|=0$ and $\mathcal{E}_{r, \mathrm{grad}}=0$.
- **(Global sampling) point optimization** is equivalent to another extreme case: $\Omega_r=\Omega$. Since the larger region is generally harder to approximate, this may cause a large $\mathcal{E}_{r, \mathrm{grad}}$.
- **RoPINN** adopts the trust region calibration to adaptively adjust $|\Omega_r|$ during training.
**Thus, introducing "region" is not tautological**, which
- Provides a general theoretical framework for three paradigms.
- Reveals the balance of generalization bound and optimization error.
- Motivates RoPINN as a practical algorithm for balancing.
**(2) All the experiments are fair.**
Since we only sample 1 point in practice, RoPINN will not bring extra gradient calculation than "discretized PINN loss". **Under a comparable efficiency, RoPINN consistently boosts 5 different PINN models in 19 tasks ($\underline{\text{Tables 2,3,7,8}}$)**, which should not be overlooked.
> **Q2:** "Why proposed method should work theoretically. I acknowledge the adaptive regions but struggle to see how this help to accumulate points in high residual."
RoPINN is not a collocation-point sampling algorithm, whose contribution is orthogonal to RAR [46 of our paper] (see $\underline{\text{Table 3 of main text}}$). Next, we will show why RoPINN works well.
**(1) Theoretical understanding.**
As discussed in **Q1-(1)**, the generalization bound has two parts, the first term $(1-\frac{|\Omega_r|}{|\Omega|})$ is inversely proportional to $|\Omega_r|$ and the second term $\mathcal{E}_{r, \mathrm{grad}}$ is generally proportional to $|\Omega_r|$.
- (Fixed) and (Global sampling) point optimization are just two special cases. Obviously, they **lack flexibility and cannot adaptively balance optimization and generalization**.
- RoPINN presents an adaptive algorithm, which can **adjust the region size for a better balance of the above two terms**. A visualization of the balance process is in $\underline{\text{Figure 2 of main text}}$.
**(2) Experiment statistics.**
We also record the standard deviation of parameter gradients over the last 100 iterations, which reflects training stability. Fixed point optimization leads to very stable training and Global Sampling brings more perturbations, while RoPINN achieves a relatively balanced value.
| 1D-Reaction | Paradigm | rMSE | Gradient Std |
| - | - | - | - |
| **PINN+RoPINN** | **Region** | **0.095** | 0.310 |
| PINN Vanilla | (Fixed) Point | 0.981 | 1.262$\times 10^{-10}$ |
| PINN+RAR | (Global Sampling) Point | 0.981 | 1.128 |
> **Q3:** "I am not convinced that the theoretical analysis presented is meaningful, as it analyzes the integrals over the region as continuous objects."
Following your suggestion, we have extended our theorem to practical algorithms (see $\underline{\text{Q1-(1)}}$). Our original theorems are analyzed under a perfect approximation, that is, $\mathcal{E}_{r, \mathrm{grad}}=0$, which is still meaningful in:
- Providing a unified framework for three paradigms.
- Considering the optimization process, and thereby being easy to extend to practical algorithms.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their replies. I still struggle to see the value of the proposed region optimization.
Could the authors explain to me how the proposed method "informs" the sampling procedure? All sampling techniques for PINN type loss functions I am aware of reallocate points to regions that are challenging to learn for the network. This is informed by quantities that relate to how well the PDE is currently solved -- typically the PDE residual. I cannot really see that in your proposed method. Does this somehow implicitly happen?
I acknowledge that Figure 2 in the main text illustrates that the regions change in size during the training process, but this does not answer my question.
I remain critical of the work.
---
Rebuttal 2:
Title: Thanks for your response and more explanations about how RoPINN works
Comment: We sincerely thank the reviewer's response and detailed descriptions of the question.
We kindly remind you that our previous rebuttal has included the requested theorem for practical algorithm in $\underline{\text{Q1-(1)}}$, which shows that introducing "region" is not tautological and ensures a fair theoretical comparison w.r.t. discretized PINN loss. Thus, we think the theoretical value of introducing "region" is clear.
Next, we will explain how RoPINN helps the PDE-Solving in practice.
> Could the authors explain to me how the proposed method "informs" the sampling procedure?
Firstly, we list the comparison between RoPINN and a sampling method RAR to clarify "what RoPINN informs the sampling" and "why it works". More details are included in the following.
| | Information to sampling | Intuitive understanding of benefits |
| - | - | - |
| RAR [46 of our paper] | PDE residual | Make the algorithm aware of which area is hard to solve. |
| RoPINN | Gradient estimation error (gradient variance of successive iterations) | Make the algorithm aware if the current sampling range can bring a "good" optimization, which refers to a relatively stable training for convergence and is sufficient to make the model "explore" new areas for generalization. |
**(1) RoPINN "informs" the sampling procedure with "gradient estimation error" (gradient variance of successive iterations).**
As shown in $\underline{\text{Algorithm 1 of main text}}$, RoPINN sets the sampling region size $r$ of each iteration as $r/\sigma_t$, where $\sigma_t$ denotes the gradient variance of successive iterations. In $\underline{\text{Theorem 3.9, 3.11 of main text}}$, we have also proved that $\sigma_t$ can be used to approximate gradient estimation error. Thus, RoPINN's design can introduce the gradient estimation error (gradient variance) into the sampling procedure.
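A minimal sketch of this calibration step under stated assumptions (names are hypothetical; `grad_fn` stands in for the loss-gradient evaluation, and the sliding window of past gradients approximates $\sigma_t$):

```python
import numpy as np

def ropinn_region_step(x_colloc, grad_fn, grad_history, r0=1e-3, seed=0):
    """One region-optimization step (sketch): the region size is set to
    r0 / sigma_t, where sigma_t is the gradient std over recent
    iterations, so unstable gradients shrink the sampling region."""
    rng = np.random.default_rng(seed)
    if len(grad_history) >= 2:
        sigma = float(np.std(np.stack(grad_history), axis=0).mean())
    else:
        sigma = 1.0                       # no history yet: use the base size
    r = r0 / max(sigma, 1e-8)             # trust-region calibration
    # Monte Carlo with a single sample: one virtual point per region
    x_sample = x_colloc + rng.uniform(-r, r, size=x_colloc.shape)
    g = grad_fn(x_sample)
    grad_history.append(g)
    return g, r
```

This illustrates the mechanism only: when successive gradients disagree (large $\sigma_t$), the region contracts toward point optimization; when they are consistent, the region expands to "explore" unseen areas.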
Why is introducing the gradient estimation error beneficial? Here are the explanations.
- **Theoretical analysis:** As shown in $\underline{\text{Q1-(1) of previous rebuttal}}$, we have proved that in practice, the generalization bound is also affected by gradient estimation error. The detailed explanations for the effect of $r$ are included in the previous rebuttal. One key point is that a proper sampling region size $r$ can better balance $(1-|\Omega_r|/|\Omega|)L$ and $\mathcal{E}_{r, \mathrm{grad}}$, bringing a better performance. What RoPINN does is change $r$ adaptively.
- **Intuitive understanding:** **The gradient estimation error of MC can be used to represent the “consistency” of optimization direction within a region** ($\underline{\text{Theorem 3.9 of main text}}$). A larger gradient estimation error (corresponding to a lower consistency of gradients in successive iterations) will lead to an unstable training process, which may overwhelm the model or even fail to converge. On the other side, too stable training is also insufficient to make the model "explore" new areas and damage the generalization. RoPINN can achieve a balanced result, which is supported by our new statistics in $\underline{\text{Q2-(2) of previous rebuttal}}$.
The above "intuitive understanding" explanation has been partially discussed in $\underline{\text{Lines 207-210 of main text}}$. In the revised paper, we will rephrase this paragraph for a more detailed explanation.
**(2) RoPINN can be combined with the "sampling techniques" that you mentioned.**
Actually, in $\underline{\text{Q2 of previous rebuttal}}$, we have pointed out that the contribution of RoPINN is orthogonal to previous sampling methods. The key idea of RoPINN is to extend optimization targets from collocation points to their regions. Thus, RoPINN can be combined with the "sampling techniques" you mentioned. Here is part of the results; please see $\underline{\text{Table 3 of main text}}$ for the full results.
| rMSE | 1D-Reaction | 1D-Wave | Convection |
| - | - | - | - |
| PINN | 0.981 | 0.335 | 0.840 |
| +RAR | 0.981 | 0.126 | 0.771 |
| +RoPINN | 0.095 | 0.064 | 0.720 |
| **+RAR+RoPINN** | **0.080** | **0.030** | **0.695** |
To better explain how our model works, we will revise our paper by:
- Incorporating the requested theorem in $\underline{\text{Q1-(1) of previous rebuttal}}$ in the main text to illustrate the balance between optimization and generalization.
- Incorporating the new statistics in $\underline{\text{Q2-(2) of previous rebuttal}}$ to show the balanced results of RoPINN.
- Explaining that region optimization is a general framework for Fixed or Global sampling point optimization.
- Adding a new section to explain the relation and difference w.r.t. previous sampling methods.
We hope these new results can resolve your concerns and we are happy to answer any further questions.
---
Rebuttal 3:
Title: More discussions about sampling-based methods
Comment: **We believe that accumulating points in high-residual areas is NOT the only principle to design sampling-based methods.**
Actually, although previous sampling methods (e.g. RAR) can make collocation points accumulate in high-residual areas, **they may also face optimization difficulty due to too many hard-to-optimize points.** For example, if one point has a high residual but is extremely hard to optimize, the model optimization may be over-attracted to this point and misguided.
RoPINN is based on a distinct idea, which **considers the model optimization process**. Specifically, the calibrated region size $r$ can finely balance optimization difficulty and "exploration" of unseen areas (i.e. larger sampling regions).
| | Design | Pros | Cons |
| - | - | - | - |
| Previous methods e.g. RAR | Sampling high-residual areas | Make the model solve hard areas better | optimization difficulty |
| RoPINN | Adaptive sampling region size | Balance optimization difficulty and "exploration" of unseen areas | No specific optimization for hard areas |
The Cons of RoPINN can be solved by integrating RoPINN and RAR ($\underline{\text{Table 3 of main text}}$). Note that we are not saying that RoPINN is better than RAR. They contribute orthogonally.
**We do hope that the reviewer can set aside the high-residual sampling design (which is distinct from ours) and consider our paper based on our proposed "optimization-based" theorems.**
Many thanks for your dedication to our paper.
---
Rebuttal 4:
Comment: Thank you for the detailed answer. In fact, I can understand your point more clearly now and like the idea of informing the sampling based on gradient statistics instead of or in combination with residual-based methods.
I still find the explanation in your write-up hard to digest, introducing regions around sampling points which are treated like integrals and then again discretized by one collocation point was confusing to me (and also to another reviewer) and I apologize it took some time before I could understand the underlying idea.
Furthermore, I disagree that you propose an optimization method. You propose an adaptive sampling method with different criteria than the residual.
To conclude, I will raise my score but I think the manuscript would benefit from a different presentation that puts the sampling viewpoint as the core novelty. I am still against publication in the current form but now mainly because of the presentation.
---
Rebuttal 5:
Title: Thanks for your reply and acknowledging our "good" contribution
Comment: We sincerely thank you for your response, for appreciating our idea, and for acknowledging our "good" contribution.
> "I disagree that you propose an optimization method."
**We do NOT attempt to say that RoPINN is an optimization method**. What we want to do is to design a new training paradigm considering the optimization process of PINN models, **which is why we name RoPINN as "region optimized PINN" not "region optimizer" in the title**.
Thus, we respectfully point out that there may exist some misunderstandings in this comment.
> "You propose an adaptive sampling method with different criteria than the residual."
As per your request, we explained RoPINN in a sampling-based style in the previous reply to aid your understanding.
However, we have to emphasize that **the contribution of region optimization is more than a different criterion than the residual**. As we stated in the $\underline{\text{Clarify misconception in previous rebuttal}}$, both the theorems and the algorithm are built on the idea of considering the training process.
**Thus, our key contribution is taking the optimization process into consideration and building a complete theory and practical algorithm to implement this idea.** We believe that this new theoretical framework can complement previous methods well, which is also acknowledged by you.
> "I think the manuscript would benefit from a different presentation that puts the sampling viewpoint as the core novelty. I am still against publication in the current form but now mainly because of the presentation."
We do appreciate the reviewer's suggestion on the paper presentation, which gives us another perspective from which to think about our paper. Many thanks for the detailed descriptions in your question, which help us a lot in understanding your concern.
**However, we have to clarify that the sampling-based explanation was given only at your request. We do hope that you can rethink our paper from our original "optimization-based" insights (see $\underline{\text{Q1-(1) in original rebuttal}}$).** In this context, the current writing about the region optimization theorems and practical algorithm flows more naturally than a sampling-based presentation would. We believe that this disagreement is only a matter of preference between academic perspectives.
Besides, we also want to highlight our experimental results, **consistently improving 5 base models on 19 different PDEs with over 50% gain in most cases**. These significant results strongly support that our idea may be more foundational than sampling.
## Our promise in revision and respectful request to reconsider our paper presentation
As we stated in the previous response, we will polish our presentation by incorporating the **newly proved theorem, experimental statistics and discussion w.r.t. previous sampling methods** into our paper, which will clearly present our paper from an "optimization-based" view. We will consider your valuable suggestions and do our best to make our paper easy to understand.
With the greatest respect, we also request that you reconsider your score. We believe that **both the sampling-based and optimization-based viewpoints can give a reasonable explanation of our design**, where the optimization-based writing is also $\underline{\text{accepted by the other two reviewers}}$. Thus, **we kindly suggest that your concern about the presentation may stem from these two different academic perspectives.**
Greatest thanks for your time in reviewing our paper and discussion.
---
Rebuttal 6:
Title: More Discussion about Presentation
Comment: Many thanks for your time in reviewing our paper and for your further comments on the writing.
For a convenient overview, we respectfully list some key reasons why we think our current writing from an "optimization" view is more suitable than a sampling view:
**(1) All the theoretical analyses and proof of our paper are based on an optimization-based view. Previous sampling papers are under different theoretical frameworks from us.**
Here is a comparison of two different perspectives. We believe that the previous theoretical framework of sampling methods is distinct from ours, which has also been clarified in $\underline{\text{Clarify misconception of original rebuttal}}$.
| Theoretical framework | Our Paper | Previous Sampling Paper (e.g. RAR) |
| ------------ | ------------------------------------ | ------------------------------------------------------------ |
| Key Question | Will the PINN model be well-trained? | Will the sampling approximate the integral well? |
| Theorem | **Optimization-based theorem** | **Quadrature-based theorems** [Siddhartha Mishra et al., IMAJNA 2023] |
**(2) The optimization-based view can better present how our method works.** In our current paper, all the analyses are based on an optimization perspective. Since we do not change the collocation points (center of region) during training and only adjust the region size, the visualization of point distribution change is not suitable for us. Here is a comparison.
| Analysis | Our paper | Previous Sampling Paper (e.g. RAR) |
| ------------------------- | -------------------------------------------------- | ---------------------------------- |
| Training Curve | $\underline{\text{Figures 2,3 of main text}}$ | N/A |
| Gradient std Statistics | $\underline{{\text{Q2-(2) of original rebuttal}}}$ | N/A |
| Point distribution change | N/A | The most widely used analysis tool |
**(3) Our contribution is orthogonal to previous sampling papers, with significant improvement**. In $\underline{\text{Table 3 of main text}}$, we show that RoPINN can be combined with RAR to further boost performance by a large margin, which means RoPINN stems from a new idea (possibly a more foundational one). Here is part of the results.
| rMSE | 1D-Reaction | 1D-Wave | Convection |
| ------------------------ | ----------- | ------- | ---------- |
| PINN vanilla | 0.981 | 0.335 | 0.840 |
| +RAR | 0.981 | 0.126 | 0.771 |
| +RAR+RoPINN | 0.080 | 0.030 | 0.695 |
| **Improvement w.r.t. RAR** | **92%** | **76%** | **10%** |
Since the discussion period will end in a few hours, we do hope that the reviewer can reconsider our presentation.
Sincere thanks for your time and active discussion. | Summary: This paper extends the optimization process of PINNs from isolated points to their continuous neighborhood regions, which can theoretically decrease the generalization error, especially for hidden high-order constraints of PDEs. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm, which is implemented by a straightforward but effective Monte Carlo sampling method. By calibrating the sampling process into trust regions, RoPINN finely balances sampling efficiency and generalization error. Experimentally, RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation.
Strengths: 1. The idea of extending the optimization process of PINNs from isolated points to their continuous neighborhood regions is novel.
2. Theoretical results on generalization error, convergence rate and estimation error of sampling are provided.
3. A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from the region optimization paradigm and associated theoretical results.
4. RoPINN consistently boosts the performance of diverse PINNs on a wide range of PDEs without extra backpropagation or gradient calculation.
Weaknesses: 1. It is better to include the main proof idea of theoretical results in the main text.
2. Although generalization error bound is provided, an intuitive explanation of the reason behind the success of region optimization is desirable. For example, when sampling one point in each region, why is the total loss decreased compared with point optimization?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Which results in section 4 are for comparisons with the losses with high-order terms and variational methods?
2. How tight is the generalization error bound?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The case of sampling more than one point in each region is not discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer KCE6
Many thanks to Reviewer KCE6 for providing the insightful review and questions.
> **Q1:** "It is better to include the main proof idea of theoretical results in the main text."
Following your suggestion, we will add the following descriptions into the main text as a brief proof for $\underline{\text{Theorem 3.5}}$:
"The proof is based on an optimization perspective [13 of our paper]. Firstly, based on the Lipschitz assumption, we can transform the generalization bound into $L$ times the expectation of the parameter distance. Further, the region optimization paradigm brings a more 'consistent' gradient direction than point optimization at each iteration, thereby benefiting the generalization bound."
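Schematically, the Lipschitz step in this proof sketch can be written as follows (a restatement in the rebuttal's notation, where $\theta_{\mathcal{S}}$ and $\theta_{\mathcal{S}'}$ denote parameters trained on neighboring datasets):

```latex
% Lipschitz assumption turns the generalization gap into a parameter distance:
|\mathcal{L}(\theta_{\mathcal{S}}) - \mathcal{L}(\theta_{\mathcal{S}'})|
  \le L\,\|\theta_{\mathcal{S}} - \theta_{\mathcal{S}'}\|
\quad\Longrightarrow\quad
\text{Generalization error}
  \le L\,\mathbb{E}\big[\|\theta_{\mathcal{S}} - \theta_{\mathcal{S}'}\|\big].
```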
> **Q2:** "Although generalization error bound is provided, an intuitive explanation of the reason behind the success of region optimization is desirable. For example, when sampling one point in each region, why is the total loss decreased compared with point optimization?"
We will clarify this question from both theoretical and optimization views.
**(1) Theoretical view: generalization bound for practical algorithm.**
As the reviewer mentioned, the generalization bound is based on an ideal assumption, namely that we can accurately obtain the loss over a region. In practice, however, we have to use approximation methods.
To provide a more direct understanding, we also prove the generalization bound for the practical algorithm in $\underline{\text{Figure 3 of Global Response PDF}}$. This new theorem extends $\underline{\text{Theorem 3.5 of main text}}$ by considering the gradient estimation error caused by the Monte Carlo approximation.
For clarity, we present the convex version here. See the $\underline{\text{Global Response PDF}}$ for the non-convex version and proof.
$$
\text{Generalization\ error}\leq ((1-|\Omega_r|/|\Omega|)L+\mathcal{E}_{r, \mathrm{grad}})\frac{2L}{|\mathcal{S}|}\sum\alpha_t
$$
$\mathcal{E}_{r, \mathrm{grad}}$ denotes the gradient estimation error caused by approximation methods, which is generally proportional to $|\Omega_r|$. We have the following observations:
- **Canonical point optimization corresponds to an extreme case:** $|\Omega_r|=0$ and $\mathcal{E}_{r, \mathrm{grad}}=0$. In this case, the above equation degenerates to the bound that we proved in $\underline{\text{Theorem 3.3}}$.
- RoPINN presents a trust region calibration algorithm, which can **finely balance the benefit from the region loss** (the first term, $(1-\frac{|\Omega_r|}{|\Omega|})L$) **and the gradient estimation error** (the second term, $\mathcal{E}_{r,\mathrm{grad}}$), leading to a lower generalization bound.
Thus, the success of RoPINN stems from our trust region calibration, which achieves a better balance between the generalization bound and the gradient estimation error. We have also visualized the balancing process (the change of region size during training) in $\underline{\text{Figure 2 of main text}}$.
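The balance can also be visualized with a toy computation of the bound's shape. The constants below and the quadratic growth model for $\mathcal{E}_{r,\mathrm{grad}}$ are hypothetical choices for illustration only; they are not derived from our theorems.

```python
import numpy as np

# Hypothetical constants for illustration: Lipschitz constant L, an error
# growth coefficient c, and the (2L/|S|) * sum(alpha_t) prefactor.
L, c, prefactor = 1.0, 2.0, 1.0

def toy_bound(ratio):
    """Shape of the bound ((1 - ratio) * L + E_grad) * prefactor, with the
    gradient estimation error E_grad modeled (as an assumption) as c * ratio**2."""
    return ((1.0 - ratio) * L + c * ratio ** 2) * prefactor

ratios = np.linspace(0.0, 1.0, 101)          # candidate |Omega_r|/|Omega| values
best_ratio = ratios[np.argmin(toy_bound(ratios))]
# The minimum sits at an intermediate region size, not at either extreme.
```

Under these toy constants, neither point optimization (ratio 0) nor whole-domain optimization (ratio 1) minimizes the bound; an intermediate region size does, which is exactly what the trust region calibration searches for adaptively.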
**(2) Optimization view: A multi-iteration understanding of why sampling one point works.**
Although we only sample one point at each iteration, $\underline{\text{Lemma 3.10 of main text}}$ shows that during training, the gradient difference between sampling points drawn in $\underline{\text{successive iterations}}$ becomes close to that between different sampling points within the $\underline{\text{same iteration}}$.
This means (informally) that if we view the training process across multiple iterations, the optimization increasingly resembles sampling multiple points per region, with the sampled points dispatched across successive iterations.
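As a toy numerical illustration of this multi-iteration view (our own construction, not from the main text): averaging single-point gradients over many iterations recovers the region-average gradient.

```python
import numpy as np

np.random.seed(0)
r = 0.5  # illustrative region half-width, centered at 0

# Toy per-point "gradient" over the region; cos stands in for the gradient
# of a residual loss evaluated at a sampled point.
g = np.cos

# One sampled point per iteration, accumulated over many iterations...
samples = np.random.uniform(-r, r, size=1000)
multi_iter_mean = float(np.mean(g(samples)))

# ...approaches the true region-average gradient (1/2r) * int_{-r}^{r} cos(x) dx.
true_region_mean = float(np.sin(r) / r)
```

So even though each iteration sees a single point, the aggregate optimization signal over successive iterations approximates the region integral, consistent with the informal argument above.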
> **Q3:** "Which results in section 4 are for comparisons with the losses with high-order terms and variational methods?"
As stated in $\underline{\text{Line 232 of main text}}$, gPINN uses high-order regularization and vPINN is based on a variational formulation. We have compared against them in every experiment of $\underline{\text{Table 2 of main text}}$, which provides solid support for RoPINN's advantage over them.
> **Q4:** "How tight is the generalization error bound?"
Firstly, we would like to thank the reviewer for this great question, which led us to recalibrate the proof of $\underline{\text{Theorem 3.5}}$ and obtain a tighter generalization bound by refining a one-step derivation.
See $\underline{\text{Figure 2 of Global Response PDF}}$ for the complete refined theorem and proof. In short, the refined bound is:
$$
\text{Generalization\ error}\leq (1-|\Omega_r|/|\Omega|)\frac{2L^2}{|\mathcal{S}|}\sum\alpha_t
$$
We think this generalization bound is quite tight: every inequality in the proof can hold with equality in some cases. Furthermore, the bound behaves perfectly in two extreme cases:
- When $|\Omega_r|=0$, this bound degenerates to the bound of point optimization ($\underline{\text{Theorem 3.3 of main text}}$).
- $\Omega_r=\Omega$ means directly optimizing the whole domain, which corresponds to the ideal loss function. In this case, our proved generalization bound is exactly 0. Note that, as discussed in $\underline{\text{Q2}}$, this case will still face a large optimization error in practice.
We will also update $\underline{\text{Theorem 3.5}}$ as the refined version in the revised paper.
> **Q5:** "The case of sampling more than one point in each region is not discussed."
Actually, we have discussed sampling more points in $\underline{\text{Figure 3 of main text}}$ as one of the main analysis experiments.
Further, during the rebuttal, we also provide more comprehensive results in $\underline{\text{Figure 1 of Global Response PDF}}$. The key findings are that (1) computation cost grows linearly with the number of points, and (2) more points bring better performance, but the gain saturates around 10 points.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their replies. After reading the authors' responses to me and other reviewers, especially why introducing gradient estimation error is beneficial and the comparison between RoPINN and other sampling methods, I got the rough idea of why region optimization works, though I did not check the proof details. "A larger gradient estimation error (corresponding to a lower consistency of gradients in successive iterations) will lead to an unstable training process": this seems to be related to the loss landscape of PINNs. How does this gradient consistency promote optimization convergence, considering the smoothness property of the loss landscape?
I am satisfied with the answers to my other questions. I would like to keep my score, and suggest the authors explain in detail the intuition behind RoPINN and the difference w.r.t. previous sampling methods in the updated submission.
---
Rebuttal 2:
Title: Thanks for your response and for acknowledging our rebuttal
Comment: We would like to sincerely thank you for your valuable response and suggestions.
(1) As for "A larger gradient estimation error will lead to an unstable training process", as we stated in the response to Reviewer zNBp, this is just an intuitive understanding. Here are more details.
Please recall the optimization algorithms SGD and GD. SGD generally faces more optimization fluctuations and thereby converges more slowly than GD. Similarly, if the gradients across successive iterations have low consistency, the optimization direction of PINN models will fluctuate, making it harder to reach a convergence point. If the gradients across successive iterations are consistent, the optimization direction is stable, leading to faster convergence.
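This analogy can be checked with a toy experiment (an illustrative construction, not from our paper): on a simple least-squares problem, successive full-batch gradients stay far more directionally consistent than small-batch ones.

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(256, 4)
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true

def grad(w, idx):
    """Least-squares gradient computed on the rows selected by idx."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def successive_cosine(batch_size, steps=50, lr=0.01):
    """Average cosine similarity between gradients of successive iterations."""
    w, prev, sims = np.zeros(4), None, []
    for _ in range(steps):
        idx = np.random.choice(len(X), batch_size, replace=False)
        g = grad(w, idx)
        if prev is not None:
            sims.append(g @ prev / (np.linalg.norm(g) * np.linalg.norm(prev) + 1e-12))
        w -= lr * g
        prev = g
    return float(np.mean(sims))

cos_gd = successive_cosine(256)   # full batch: highly consistent directions
cos_sgd = successive_cosine(4)    # small batch: fluctuating directions
```

Here `cos_gd` stays near 1 while `cos_sgd` is clearly lower, mirroring the claim that low gradient consistency across iterations corresponds to a more unstable optimization direction.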
(2) As for "considering the smoothness property of loss landscape", we think the loss landscape affects the gradient consistency, i.e. the gradient consistency is a dependent variable. If the target PDE is easy to solve, RoPINN can learn to explore a larger region size $r$ in pursuit of a better balance between the generalization bound and optimization (gradient consistency).
Following your suggestion, we will add more discussion about the intuition behind RoPINN (including the newly proved theorem for practical algorithm) and the difference w.r.t. previous sampling methods in the revised paper.
Sincerely thanks for your time in reviewing our paper. | Summary: The authors developed a region optimized PINN to improve the prediction accuracy compared to the scatter-point based PINN.
Strengths: The authors proposed the region optimization paradigm and conducted a theoretical analysis.
Weaknesses: The practical application scope is limited.
Technical Quality: 4
Clarity: 3
Questions for Authors: (1) It is suggested to add some descriptions of training difficulty factors for the canonical PINN on 1D-Reaction in Section 4.2.
(2) Can the proposed method find a good number of sampling points, well balancing the computational cost and convergence speed, in Figure 3?
(3) The motivation for using Monte Carlo approximation should be elaborated. Why don't the authors choose some other, more advanced methods to provide better accuracy?
(4) The authors should add more details about the possible practical applications with the canonical loss function of L-Lipschitz-β-smooth.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Some initial guess methods can be developed to efficiently determine the preferable region size and sample number.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer mhfV
We sincerely thank Reviewer mhfV for providing valuable feedback and suggestions in new experiments.
> **Q1:** "Add some descriptions of training difficulty factors for the canonical PINN on 1D-Reaction in Section 4.2."
We will add "Previous research [22 of our paper] demonstrates that 1D-Reaction contains sharp transitions, which is hard to approximate" into $\underline{\text{Section 4.2}}$.
> **Q2:** "Can the proposed method find a good number of sampling points well balancing the computation and convergence."
The number of sampling points is set manually; our method can only adaptively adjust the region size.
**(1) The official design of RoPINN is sampling 1 point in each region, which is already a good and well-verified choice.**
Note that RoPINN is proposed in the spirit of boosting PINNs **without extra backpropagation or gradient calculation (see $\underline{\text{Abstract}}$).** Thus, as described in $\underline{\text{Section 3.2 and Algorithm 1}}$, **we only sample 1 point in practice and adopt this as the official design.**
$\underline{\text{Tables 2,3}}$ demonstrate that this official setting achieves significant improvement and comparable efficiency w.r.t. diverse PINNs in all 19 tasks. So, as for your question, we think sampling 1 point is already a good choice for balancing performance and efficiency.
**(2) Discussion about sampling more points.**
Since the convergence of deep models is affected by many factors (e.g. task, base model, optimizer), we cannot give a universal choice for the number of sampling points. But we have experimented with RoPINN using more sampling points and plot an overview curve in $\underline{\text{Figure 1 of Global Response PDF}}$. Here is part of the results.
| 1D-Reaction | RMSE | 100 iters time |
| - | - | - |
| Vanilla PINN | 0.981 | 18.47s |
| RoPINN 1 Point | 0.095 | 20.04s |
| RoPINN 5 Points | 0.050 | 46.48s |
| RoPINN 9 Points | 0.033 | 67.98s |
| RoPINN 13 Points | 0.035 | 92.48s |
| RoPINN 30 Points | 0.037 | 196.41s |
We can observe that adding points brings better performance, but the gain saturates around 10 points. The performance fluctuations at 9, 13 and 30 points are within three times the standard deviation (Appendix D.4).
Thus, according to our experiments, we first recommend sampling 1 point (our official design). If users want better performance, they can try 1-10 points.
> **Q3:** "The motivation of using Monte Carlo. Why don’t the authors choose some other more advanced methods."
**Again, RoPINN is proposed to boost PINNs without extra backpropagation or gradient calculation (see $\underline{\text{Abstract}}$).** The Monte Carlo approximation works well with 1 sampling point, is easy to implement, and requires no extra gradient calculation. Other, more advanced methods usually need more points to achieve an accurate approximation.
We experiment with 2D-space Gaussian quadrature, which requires a square number of points; its 1-point case degenerates to the center value. Here are the results of PINN+RoPINN.
| 1D-Reaction rMSE | Monte Carlo | Gaussian Quadrature |
| - | - | - |
| 1 point | **0.095** | 0.109 |
| 4 points | 0.066 | **0.059** |
| 9 points | 0.033 | **0.030** |
We can find that under our official setting (sampling 1 point), Monte Carlo is better. Although Gaussian quadrature is better with more points, those settings defeat our purpose of "no extra gradient calculation".
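For concreteness, here is a minimal sketch of the two region-averaging schemes being compared. The toy residual function and the region below are illustrative choices, not our implementation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

np.random.seed(0)

def region_integral_mc(f, center, r, n_points):
    """Monte Carlo estimate of the mean of f over the box [center-r, center+r]^2."""
    pts = center + np.random.uniform(-r, r, size=(n_points, 2))
    return float(np.mean([f(p) for p in pts]))

def region_integral_gauss(f, center, r, order):
    """Tensor-product Gauss-Legendre estimate of the same region mean.
    Needs order**2 evaluations; order=1 degenerates to the center value."""
    nodes, weights = leggauss(order)      # 1D nodes/weights on [-1, 1]
    total = 0.0
    for xi, wi in zip(nodes, weights):
        for yj, wj in zip(nodes, weights):
            total += wi * wj * f(center + r * np.array([xi, yj]))
    return total / 4.0                    # the 1D weights each sum to 2

f = lambda p: np.sin(p[0]) * np.cos(p[1])   # toy stand-in for a PDE residual
center, r = np.array([0.3, 0.7]), 0.1
mc_1 = region_integral_mc(f, center, r, 1)        # RoPINN's cheap 1-point estimate
gauss_1 = region_integral_gauss(f, center, r, 1)  # collapses to f(center)
```

With one point, Monte Carlo still gives an unbiased estimate of the region mean, while 1-point Gaussian quadrature collapses to the center value, i.e. plain point optimization, matching the 1-point row of the table above.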
> **Q4:** "The practical application scope is limited." "More details about the possible practical applications with the canonical loss function of L-Lipschitz-β-smooth."
**(1) Theoretical assumption will not affect the practicability of algorithms.**
A typical example is Adam, whose convergence can only be analyzed under strict assumptions, e.g. convexity or L-Lipschitz-β-smoothness [1]. Nevertheless, it is widely used as a foundational optimizer.
In our paper, the L-Lipschitz-β-smooth assumption is only in theoretical analysis for a basic understanding of our paradigm and will not affect the practicability of RoPINN.
[1] Implicit Bias of AdamW: ℓ∞ Norm Constrained Optimization, ICML 2024
**(2) RoPINN achieves consistent promotion for 5 diverse backbones, covering 19 tasks ($\underline{\text{Tables 2,3,7,8}}$).**
Our experiments are much more extensive than those of the latest papers, such as PINNsFormer (ICLR 2024) and KAN (arXiv 2024). As described in $\underline{\text{Appendix C.1}}$, we experiment with diverse PDEs and do not constrain the loss function to be L-Lipschitz-β-smooth in practice. We believe that such extensive experiments strongly support the practicability of RoPINN.
**(3) Our assumption is widely used in other theoretical papers.**
The papers that we cited in $\underline{\text{Theorem 3.3 of main text}}$ for point optimization analysis are based on L-Lipschitz-β-smooth assumption [13, 47 in our paper]. Other papers that we cited in $\underline{\text{Section 4 of main text}}$ also assume the smoothness of loss function [20 in our paper] or even "infinitely wide neural networks" [42 in our paper].
> **Q5:** Some initial guess methods can be developed to efficiently determine the region size and sample number.
**As stated in $\underline{\text{Section 3.2 and 4 of main text}}$, we adopt the same hyperparameter setting (initial region size $r=10^{-4}$, sample 1 point) for all benchmarks, across 19 different tasks and 5 different base models, which works well consistently.** Thus, this configuration can be a good initial guess.
Besides, we can obtain some empirical guidance from:
- $\underline{\text{Figure 2}}$: Region size $r$ will be progressively adjusted by RoPINN. Setting $r$ in $[10^{-6}, 10^{-4}]$ can work well.
- $\underline{\text{Figure 3}}$ and results in **Q2**: Sampling 1-30 points can gain consistent promotion.
As discussed in $\underline{\text{Appendix G}}$, some tools (e.g. Weights and Bias) can also be helpful.
---
Rebuttal Comment 1.1:
Title: Summary of rebuttal to Reviewer mhfV
Comment: We sincerely thank your dedication in reviewing our paper.
Since this is the last day of the review-author discussion period, we summarize the key points of our rebuttal as follows for a convenient overview:
- **Clarify the concern about practical application scope:** (1) We highlighted that "theoretical assumptions will not affect the practicability of algorithms". (2) The extensive experimental results (5 base models on 19 PDEs) strongly support RoPINN's practicability. (3) Our theoretical assumption is widely used in other papers.
- **Add new experiments with more sampling points:** We highlight that sampling only 1 point is already a good choice, and also add experiments with more points to provide an overview of RoPINN.
- **Add comparison w.r.t. advanced quadrature methods:** We show that the advanced methods need more points to achieve better performance, which defeats our purpose of "no extra gradient calculation".
- **Give practical guidance to hyperparameters** based on analysis experiments in the main text.
**Sincerely thank you for ranking our paper with "excellent" soundness, "good" presentation and "good" contribution**. We do hope our rebuttal can fully resolve your questions to your satisfaction. If so, with the greatest respect, we also hope you can kindly consider raising the score correspondingly.
Many thanks for your time and looking forward to your response. | Summary: The paper proposes a novel optimization method for training physics-informed neural networks (PINNs): Region optimization, which extends a regular pointwise optimization of PINNs to neighborhood regions, named RoPINNs. The paper provides theoretical analysis explaining the decrease of generalization error with RoPINNs and high-order PDE constraints satisfaction. Then the paper presents a practical algorithm to enable the RoPINNs paradigm by introducing Monte-Carlo approximation of region integral and a region calibration scheme. Lastly, the paper assesses the performance on several well-known benchmark problems and showed the improved performance over the considered baselines.
Strengths: - The paper is well-motivated and tackles the important problem in training PINNs (leveraging more information than just a point-to-point type mapping).
- The paper presents theoretical analysis on benefits of RoPINNs, decreased generalization errors and satisfaction of higher-order PDE constraints.
- The experimental results show that the proposed algorithm is effective in solving some challenging benchmark problems (known as failure modes) and is capable of producing more accurate solution approximates.
Weaknesses: - Although shown to be very effective in several benchmark problems, the paper does not seem to provide general guidelines on how to set some important hyper-parameters such as initial region size and the number of sampling points. (While acknowledging that the authors indicate this as one of the limitations,) it would be great to see some experts’ guidelines.
- If the authors could provide some analysis with regard to computational wall time, that would provide a more complete picture of how the proposed method performs. For example, it would be informative to see a scatter plot of computational wall time versus final rMSE, where each point corresponds to a different hyper-parameter setting (that is, the number of sample points).
- [minor] there is a typo in the second paragraph of Section 4.2: line 289 Figure 2 => Figure 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Eq (5) seems to suggest that region optimization is applied to the boundary condition as well as L = L_bc + L_ic + L_pde. Is this the correct understanding or is it a typo?
- The proposed optimization method seems to benefit significantly in the 1D reaction case, while the benefit in the 1D Wave or 1D convection cases is not as significant. That is, the rMSE in Table 2 for 1D reaction achieves an order (or orders) of magnitude improvement over the second-best performing methods. Do the authors have an explanation for why?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - some more discussions on practical guidelines would be needed for users who want to utilize this method in different applications
- some additional experiments (regarding computational wall time) would be needed to provide a complete picture of the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # To Reviewer aU22
We would like to sincerely thank Reviewer aU22 for providing a detailed review and insightful questions.
> **Q1:** "The paper does not seem to provide general guidelines on how to set some important hyper-parameters. It would be great to see some experts’ guidelines." "More discussions on practical guidelines would be needed."
Many thanks for the reviewer's valuable suggestions.
**First, we would like to highlight that we adopt the same hyperparameter setting (initial region size $r=10^{-4}$, sampling 1 point) for all benchmarks, across 19 different tasks and 5 different base models, and it works well.** This verifies the consistent effectiveness of our algorithm.
Next are some guidelines for RoPINN. We will include them in the revised paper.
**(1) Experiment guidance: recap hyperparameter analyses in our paper.**
We have provided analyses on every hyperparameter of RoPINN in $\underline{\text{our original submission}}$, which deliver the following empirical guidance:
- $\underline{\text{Figure 2 of Section 4.2}}$: Initial region size $r$ will be progressively adjusted during training.
- $\underline{\text{Figure 3 of Section 4.2}}$: Increasing the number of sampling points will speed up the convergence but will bring more computation costs (see $\underline{\text{Q2}}$ for efficiency comparison).
**(2) Theorem guidance: a new generalization bound for practical algorithm.**
To provide a more direct understanding of the practical algorithm, we prove a new theorem in $\underline{\text{Figure 3 of the Global Response PDF}}$, which considers the gradient estimation error:
$\text{Generalization error}\leq \big((1-|\Omega_r|/|\Omega|)L+\mathcal{E}_{r, \mathrm{grad}}\big)\frac{2L}{|\mathcal{S}|}\sum_t\alpha_t$
$\mathcal{E}_{r, \mathrm{grad}}$ denotes the gradient estimation error. Here is the guidance derived from this new theorem:
- Increasing the initial region size can benefit the first term of the bound, but may increase gradient estimation error. Fortunately, it can be adaptively adjusted by RoPINN during training, making this hyperparameter easier to tune.
- Sampling more points can reduce the gradient estimation error.
**(3) Practical suggestions.**
Based on experiments and the new theorem, we suggest that
- Firstly, researchers can use the same configuration as us, whose effectiveness has already been verified in 19 tasks and diverse base models. Specifically, as stated in $\underline{\text{Section 3.2 and 4 of main text}}$, initial region size $r=10^{-4}$ and number of sample points $=1$.
- Further, they can adjust the initial region size according to the training loss curve: a jittery curve indicates that the initial region size should be decreased. Besides, the number of sampling points should be set according to the efficiency demand, where more points will generally bring better results.
- As discussed in $\underline{\text{Limitations in Appendix G}}$, some tools (e.g. Weights & Biases) can be helpful.
> **Q2:** "It would be informative to see a figure depicting a scatter plot showing computation time versus rMSE." "Experiments (computational wall time) would be needed to provide a complete picture of the proposed method."
Firstly, we want to emphasize that **RoPINN is proposed in the spirit of boosting PINNs without extra backpropagation or gradient calculation (see $\underline{\text{Abstract}}$).** Thus, we only sample 1 point in all experiments, which already achieves significant improvement with comparable efficiency w.r.t. PINNs.
As per the reviewer's request, we plot the performance and efficiency under different numbers of points in $\underline{\text{Figure 1 of the Global Response PDF}}$. Here are the results for 1D-Reaction; 1D Wave is also included in the $\underline{\text{PDF}}$.
We find that (1) computation costs grow linearly with the number of points; (2) more points bring better performance, which saturates around 10 points. The performance fluctuations of 9, 13, and 30 points are within three times the standard deviations (Appendix D.4).
| Number of Points | RMSE | 100 iters time | GPU Memory |
| - | - | - | - |
| PINN | 0.981 | 18.47s | 1.44GB |
| RoPINN 1 Point | 0.095 | 20.04s | 1.48GB |
| RoPINN 5 Points | 0.050 | 46.48s | 4.24GB |
| RoPINN 9 Points | 0.033 | 67.98s | 6.74GB |
| RoPINN 13 Points | 0.035 | 92.48s | 9.24GB |
| RoPINN 30 Points | 0.037 | 196.41s | 19.80GB |
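For concreteness, the multi-point sampling compared above can be sketched as follows. This is a hypothetical NumPy illustration of the Monte Carlo region-loss estimate discussed in this thread, not the authors' implementation; `region_loss` and the toy residual are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def region_loss(residual_fn, xs, r, n_samples=1):
    """Monte Carlo estimate of a region-averaged PINN loss.

    For each collocation point x, sample n_samples points uniformly
    from the hypercube [x - r, x + r] and average the squared PDE
    residual over them (n_samples=1 matches the default setting
    discussed above).
    """
    estimates = []
    for _ in range(n_samples):
        perturbed = xs + rng.uniform(-r, r, size=xs.shape)
        estimates.append(np.mean(residual_fn(perturbed) ** 2))
    return float(np.mean(estimates))

# Toy stand-in for the PDE residual of the current network.
residual = lambda x: np.sin(np.pi * x).sum(axis=-1)

xs = rng.uniform(0.0, 1.0, size=(128, 1))  # collocation points in [0, 1]
loss = region_loss(residual, xs, r=1e-4)   # r = 1e-4, 1 sample, as in the thread
```

With `r = 0` this reduces exactly to standard point optimization, consistent with the view of point optimization as a special case of region optimization.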
> **Q3:** "A typo in the second paragraph of Section 4.2: line 289 Figure 2 => Figure 3."
Many thanks for your detailed review. We will correct this in the revised paper.
> **Q4:** "Eq (5) seems to suggest that region optimization is applied to the boundary condition as well as L = L_bc + L_ic + L_pde. Is this the correct understanding or is it a typo?"
Your understanding is correct. As described in $\underline{\text{Lines 113-115}}$, region optimization is applied to the inner domain, boundary, and initial conditions. However, we restrict all the sampled points to lie within the domain of definition of the PDE. We will include these details and rephrase Eq (5) in the revised paper.
> **Q5:** "The proposed method seems to benefit significantly in the case of 1D reaction while the benefit in 1D Wave or 1D convection cases are not as significant as that of 1D reaction."
Since we experiment with RoPINN on diverse base models and various PDEs, **the value of the relative improvement is affected by both base model capability and PDE difficulty,** which leads to large variation when comparing across different tasks.
For example, for the base model PINN, the improvement on 1D-Reaction is larger than on 1D-Wave and Convection. In contrast, for KAN, the improvements on 1D-Wave and Convection are larger than on 1D-Reaction. Improvements for PINNsFormer on the three tasks are close to each other.
Thus, comparing relative improvement values among different tasks may not be meaningful.
---
Rebuttal Comment 1.1:
Comment: Thank you for sharing the new results. They provide useful insight into the tradeoff between additional computation and accuracy. I remain positive about the paper and will maintain the current score.
---
Rebuttal 2:
Title: Thanks for your response and positive support
Comment: Thanks for your prompt response and for acknowledging our new experiments in computation-accuracy balancing. Your positive support and valuable suggestions help us a lot in revising our paper. We will include all the new results in the revised paper. | Rebuttal 1:
Rebuttal: ## Global Response and Summary of Revisions
We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us to improve our paper further.
This paper proposes and theoretically studies a **new training paradigm, region optimization**, which benefits both the generalization bound and the hidden high-order constraints of PDEs. RoPINN is derived from the theory as a practical training algorithm, which **consistently boosts the performance of 5 different PINN models on 19 PDEs without extra backpropagation or gradient calculation**. Detailed visualizations and theoretical analyses are provided.
The reviewers generally held positive opinions of our paper, in that the proposed method is "**novel**", "**practical**", "**well-motivated**", "**tackles an important problem**", and "**effective in solving some challenging problems**"; we have experimented "**on a wide range of PDEs**" and shown "**good empirical performance**".
The reviewers also raised insightful and constructive concerns. We made every effort to address all the concerns by providing detailed clarification, requested results and theorems. Here is the summary of the major revisions:
- **Provide practical guidelines for the initial region size and the number of sampling points (Reviewer aU22, mhfV):** Firstly, we highlight that all our experiments use the same hyperparameter setting, which can serve as a good and widely-verified initial guess. Secondly, we recap the hyperparameter analysis experiments and a newly proved theorem to give both empirical and theoretical guidance. The requested experiment on more sampling points is provided for an overview of RoPINN.
- **Experiment with more advanced approximation methods (Reviewer mhfV):** Firstly, we emphasize the design principle of RoPINN, avoiding extra gradient calculations, which motivates our choice of the easy-to-implement Monte Carlo method. Secondly, we provide experiments on Gaussian quadrature, which needs more sampling points for an accurate approximation.
- **Explain how tight the generalization bound is (Reviewer KCE6):** Following the reviewer's question, we refine the proof of our theorem and obtain a tighter bound by updating only one step of the derivation. The refined theorem is quite tight and seamlessly covers two extreme cases. We will include the refined theorem in the revised paper.
- **Explain why our method works theoretically (Reviewer KCE6, zNBp):** Following the reviewer's request, we prove a new theorem for the practical algorithm. By considering the gradient estimation error caused by MC approximation, we demonstrate that point optimization is just a special case of our theorem that lacks flexibility. In contrast, RoPINN can adaptively balance generalization and optimization errors.
- **Explain the meaning of our region optimization theorem (Reviewer zNBp):** Firstly, we clarify that our theorem considers the training process, which may have caused some misconceptions. Secondly, we show that our region optimization theorem provides a unified theoretical framework for different optimization paradigms and can be easily extended to practical algorithms.
The valuable suggestions from the reviewers are very helpful for revising the paper into better shape. All the above revisions will be included in the final paper. We'd be very happy to answer any further questions.
Looking forward to the reviewers' feedback.
### **The mentioned materials are included in the following PDF file.**
- **Figure 1 (Reviewer aU22, mhfV)**: Efficiency & performance w.r.t. number of sampling points.
- **Figure 2 (Reviewer KCE6, zNBp)**: Refined Theorem 3.5: A tighter generalization bound.
- **Figure 3 (Reviewer KCE6, zNBp)**: New Theorem: Generalization bound under Monte Carlo approximation.
Due to compilation limitations, we can only provide brief descriptions of the new theorems in the OpenReview reply. Formal versions are in the PDF.
Pdf: /pdf/e0e7c3046845f725d5393f8735c58f0d82ab6d8e.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Incentivizing Quality Text Generation via Statistical Contracts | Accept (poster) | Summary: The authors formulate a theoretical setup for an LLM text generation service, aiming to incentivize the service to output high-quality text to the consumer. In this setup, the service has a set of models whose quality (as rated by an evaluator on the consumer's end) increases with the cost of running the LLM. The goal is to derive a framework for paying the LLM service based on the quality of the generated text that incentivizes the service to always use its best model. The authors formalize the definition of a contract in this setting and define various metrics (max payment, avg. payment, etc.) that the consumer aims to optimize. They show that the set of contracts that incentivize the LLM service to generate text with the best model can be derived from the set of optimal hypothesis tests that distinguish, from the evaluator outputs, which model is being used. They derive how the optimal contract can be formulated from these hypothesis tests when only bounds on the costs of running each model are known to the LLM service's counterparty.
Strengths: The theoretical setup and monotone assumption of model performance, cost, etc. is quite reasonable and tackles and interesting and relevant problem with LLM queries. The results are simple and intuitive, and connect nicely with previous work on contract theory and hypothesis testing.
Weaknesses: The main issue is the theoretical setup does require an assumption that the bounds on cost are known, which seems somewhat impractical. It might be useful to explicitly comment on how the different metrics degrade with increased looseness of the cost bounds (linearly, it seems like), since one can always pick extremely conservative cost bounds.
Minor issue:
- In Definition 3, it would be helpful to illustrate why $B_R^*$ and $B^*_\rho$ are defined as they are. Further, the $b \geq c_n$ case in the definition is a bit strange, since the definition of the minimax hypothesis test does not involve a worst case over cost vectors, so it doesn't seem correct to use $b$ to derive the minimax contract (it should instead remain a function of $c_n - c_1$) --- maybe dropping the $b \geq c_n$ case would be more accurate, since it is used in the definition of cost-robustness later.
Technical Quality: 4
Clarity: 4
Questions for Authors: Should the principal know the outcome distribution for each generator? This seems slightly unrealistic, since consumers only ever have black-box access to the API and never know precisely which model their responses are from. They do know that previous agent actions could only be mixed over the old (worse) models, though.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: I think the limitations are sufficiently addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful and encouraging review! We address your questions and remarks below:
> **The main issue is the theoretical setup does require an assumption that the bounds on cost are known, which seems somewhat impractical.**
This is a very good point. We note that virtually all robust optimality results in the mechanism design literature assume the principal knows something about the setting (e.g., a subset of available actions in [1], or the first moments of the distributions in [2]), and we believe that supporting uncertainty regarding cost is certainly better than assuming full knowledge as in the standard contract setting. We hope that the characterization of cost-robust contracts can serve as a building block towards the development of contracts which are robust in additional ways.
> **It might be useful to explicitly comment on how the different metrics degrade with increased looseness of the cost bounds (linearly, it seems like), since one can always pick extremely conservative cost bounds.**
We will gladly add the analysis you suggest of degradation with looseness of the cost bounds (it is indeed linear).
> **Minor issue in Definition 3: it would be helpful to illustrate why $B_R^*$ and $B_\rho^*$ are defined as they are...**
We will also revise Definition 3 - thank you for these suggestions!
> **Should the principal know the outcome distribution for each generator? This seems slightly unrealistic since consumers only ever have black box access to the API, and never know precisely which model they're responses are from. They do know that previous agent actions could only be mixed over the old (worse) models though.**
This is an excellent question, which suggests a very interesting direction for future research. In our model, the principal is indeed assumed to know the score distribution of the generators, and indeed can never know (without further assumptions) whether the agent “cheated” in the past to influence this knowledge.
It seems that to gain from such cheating, responses should come from a weaker generator, to appear as if easy tasks require more effort from the agent than they actually do, and thus justify more compensation from the principal. In mechanism design, this is somewhat similar to systematically underbidding in auctions, so that the auctioneer will charge less in a future auction, never finding out the true value distribution of the bidder. From the game-theoretic perspective, formulating and studying a model with such non-myopic agent behavior is a very interesting question for future research. From the statistical learning perspective, it will be very interesting to identify distributional assumptions which are reasonable in practice, and also provide meaningful guarantees for learning from samples.
--
Again, we would like to thank you for the insightful feedback! If any additional questions or thoughts arise, please do not hesitate to let us know.
References:
* [1] Carroll, Robustness and Linear Contracts, AER 2015.
* [2] Dutting et al., Simple vs Optimal Contracts, EC 2019. | Summary: This paper addresses the issue of moral hazard in pay-per-token pricing for large language model (LLM) services. Firms may use cheaper, lower-quality models to cut costs, compromising text quality. Moreover, the firm's costs may be unknown to the clients. To counter this, the authors propose a pay-for-performance framework using cost-robust contracts that incentivize high-quality text generation under uncertainty about the firm's costs. These contracts are designed based on, and have a one-to-one correspondence to, optimal composite hypothesis tests. Approximation guarantees are provided with respect to the optimal contracts. Empirical evaluations show that these contracts provide robust incentives with small cost increases.
Strengths: The results of characterizing the forms of optimal cost-robust contracts using hypothesis testing, as well as the approximation guarantees, are interesting and have valuable contributions.
Weaknesses: 1. The model's complexity is unnecessary. The problem could actually be studied in the most basic contract setting.
2. The authors do not discuss the computational complexity of finding the optimal cost-robust contract.
3. The authors do not discuss the cases where the action with the highest cost may not be the best action to incentivize.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. what is the computational complexity of finding the optimal cost-robust contract that incentivizes $c_n$ (equivalently, the complexity of finding the optimal test)?
2. Can the characterization results and computational efficiencies be directly applied to the cases where actions with lower costs may be the best action to incentivize?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful review, and for the excellent questions!
> **What is the computational complexity of finding the optimal cost-robust contract that incentivizes (equivalently, the complexity of finding the optimal test)?**
This is a very good question. Optimal cost-robust contracts (and equivalently, optimal hypothesis tests) can be found by solving linear programs, and can therefore be computed in $\mathrm{poly}(n,m)$ time, where $n$ is the size of the action space (number of possible text generators), and $m$ is the size of the outcome space (number of possible evaluation outcomes). This is a corollary of Theorem 1 and Lemmas 2-3: By Theorem 1, the optimal cost-robust contract is equivalent to an optimal statistical contract, and by the lemmas this in turn is equivalent to a min-budget or min-pay contract in an appropriate uniform-cost setting. Equations (1) and (3) encode min-budget and min-pay contract LPs, respectively.
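As a minimal illustration of such a contract LP (a sketch with made-up outcome distributions and costs, not the paper's exact formulation), the following computes a min-pay contract that incentivizes the costlier of two generators under limited liability:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative outcome distributions over m = 3 evaluation scores.
F = np.array([[0.6, 0.3, 0.1],   # cheap generator
              [0.1, 0.3, 0.6]])  # costly target generator
c = np.array([0.0, 1.0])         # action costs

# Variables: payments t >= 0 per outcome (limited liability).
# Minimize the expected payment under the target action, subject to the
# incentive constraint  F[1]·t - c[1] >= F[0]·t - c[0],
# rewritten as  (F[0] - F[1])·t <= c[0] - c[1].
res = linprog(
    c=F[1],
    A_ub=(F[0] - F[1])[None, :],
    b_ub=[c[0] - c[1]],
    bounds=[(0, None)] * 3,
)
payments = res.x
```

In this toy instance the optimal contract pays only for the highest evaluation score, with expected payment 1.2.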
Further expanding on this point, it is also worth noting that while all optimal contracts presented in this paper can be computed in polynomial time, a common theme in the contract design literature suggests that complexity may change when restricting optimization to contracts with a "simple" functional form (for example, when restricting optimal contracts to have only two levels of payment). Following your remark, we looked into the hardness results of Saig et al. [1] for the full-information min-budget setting, and found that their result is also applicable to cost-robust contracts when their functional form is restricted to have only two levels of payment (Theorem 3 in Saig et al. [1] shows that computing a min-budget contract with two levels of payment is NP-hard in the full-information setting). We are thankful for this remark, and will include this additional insight in the paper.
> **Can the characterization results and computational efficiencies be directly applied to the cases where actions with lower costs may be the best action to incentivize?**
This is an excellent question. Our model indeed relies on the assumption that the output of costlier LLMs has higher quality in general. While it doesn’t capture scenarios such as LLM fine-tuning, we believe that this is a reasonable starting point.
A generalization of this approach currently appears in footnote 4 in the paper. Expanding upon that point, we note that our contract design scheme can be naturally generalized to scenarios where the principal seeks to incentivize any LLM above a certain quality threshold, rather than the single most costly LLM. Taking OpenAI's products as an example, this corresponds to cases where the principal would like to incentivize text generation using GPT-3 and above, rather than GPT-3 strictly. This generalization still maintains the correspondence between cost-robust contracts and hypothesis tests, and the contract is computed by iteratively solving cost-robust design problems with a single target action. We will update the paper to include a detailed discussion of these aspects.
Beyond this extension, our main analysis technique is not directly applicable, as it requires a separation between the interval covering the costs of target actions and the interval covering the costs of alternative actions (see Lemmas 2-3). In this regard, extending cost-robustness to problems with cost interval overlap seems to be a very interesting direction for future research. We will add to the paper a discussion of our theoretical techniques and their potential applicability in more general scenarios, as well as counterexamples that illustrate limitations.
> **The model's complexity is unnecessary. The problem could actually be studied in the most basic contract setting.**
We are not sure what "most basic" means here and apologize in advance if we misunderstood, but if you mean a setting in which the costs are known, we note that many factors affecting cost (such as model architecture, batching policy, and exact energy costs) remain undisclosed in practical commercial applications, motivating the development of cost-robust contracts. If this wasn't the intention, please let us know. There is always a trade-off between simplicity and expressiveness when defining models, and having a concrete reference for a basic model will help us make a more precise comparison and clarify the relations.
–
Please let us know if our response has addressed your questions regarding computational complexity, multiple target actions, and the conciseness of the model. If you have any further questions or thoughts, we are more than happy to clarify and discuss!
Reference:
* [1] Saig et al., Delegated Classification, NeurIPS 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. The computational efficiency is indeed helpful. Here are my further responses and questions:
1. Just to clarify: do you make any assumptions on the outcome distributions associated with actions of different costs? Or do you simply take the action with the highest cost to be the action with the highest quality? If so, why do your results only rely on incentivizing the action with the highest cost, without any assumptions on the distributions? Could you provide more intuition and explanation?
2. By "basic contract problem", I mean that the problem can be studied in the basic contract setting without the modeling of LLMs and text generation. The problem is actually a robust contract design setting where the action costs are uncertain, and LLMs are just an application. Using the language of contract design with moral hazard would significantly simplify the notation.
One minor:
1. It seems that the references are not updated; the reference to Saig et al. '23 is still the arXiv version.
---
Rebuttal 2:
Comment: Thank you for the thoughtful response! We address your remarks below:
> **Do you make any assumptions on the outcome distributions associated with actions of different costs? Or do you simply take the action with the highest cost to be the action with the highest quality? If so, why do your results only rely on incentivizing the action with the highest cost, without any assumptions on the distributions? Could you provide more intuition and explanation?**
This is a great question. Our main theoretical results only assume that the target action is implementable - i.e., that there exists some contract incentivizing it (see, e.g., Appendix B.2). Intuitively, implementability is equivalent to the assumption that the observed quality of the target LLM differs (in distribution) from the observed quality of smaller, cheaper models [1]. One intuition for why no further distributional assumptions are required is the equivalence to optimal composite hypothesis testing (by Theorem 1), which doesn't require further assumptions either.
We also note that the implementability assumption can be verified in polynomial time given outcome distributions, by checking the feasibility of the corresponding linear programs (i.e. equations (1,3)). Additionally, the assumption was verified to hold in the empirical datasets we analyze. In scenarios where the highest-cost action cannot be implemented by any contract (i.e., when the observed quality of the costly LLM is identical to the quality of cheaper ones), this action can be ignored, and the next highest-cost action effectively becomes the highest-cost one and can be targeted instead. We will further emphasize these points in the paper.
Further expanding on this remark, we also note that another common theme in the literature is providing stronger guarantees on the resulting contracts by making stronger assumptions about the structure of outcome distributions (see, e.g., [1,2]). Connecting to this theme, in Proposition 2 we show that the MLR structural assumption (Monotone Likelihood Ratio, Def. 5) implies a threshold functional form for the optimal cost-robust contracts. However, from the empirical perspective, it is also worth noting that the outcome distributions we observe in our empirical study don't seem to satisfy the theoretical assumptions currently available in the literature (for example, see Figure 2 middle right, which shows the non-trivial outcome distributions of the MT-Bench dataset). In this context, we hope that our empirical observations will motivate future theoretical research with refined structural assumptions. We will emphasize this point in the paper as well.
> **By saying basic contract problem, I mean that the problem can be studied in the basic contract setting without the modeling of LLM and Text Generation. The problem is actually a robust contract design setting where the action costs are uncertain, and LLM is just an application. Using the language of contract design with moral hazard would significantly simplify the notations.**
Thanks for the clarification, this is also a very good point. The cost-robustness model and our theoretical results indeed apply more broadly, and extend beyond the context of LLMs and text generation - We view it as one of the paper’s main strengths. Through the use of "application specific" language (i.e. focusing on LLMs), our hope is to promote discussion between different scientific communities, which will eventually increase the applicability of contracts in this setting. The LLM market is nascent and evolving - for example, more players are joining, and different pricing (contract) schemes are emerging. The current naïve pricing schemes, which do not yet tie payments to performance, are likely to be reshaped and improved. Experience from the sponsored search market shows that economic theory informs better pricing [3], and we expect that through contractual payments this will be the case for LLM pricing too. In any case, we will also update the paper to further clarify the significance of our results to general contract design theory.
> **One minor: It seems that the references are not updated. the reference of Saig et al'23 is still the Arxiv one.**
Thank you for this remark! We will gladly fix this.
–
Please let us know if our response has addressed your questions regarding distributional assumptions, and the relation to general contract design theory. Also, if the discussion so far increases the favorability of your assessment, we would greatly appreciate it if you would consider increasing your score. In any case, your remarks are very helpful, and we are more than happy to discuss any additional questions or thoughts that come up.
References:
* [1] Dutting et al., Simple vs Optimal Contracts, EC 2019.
* [2] Saig et al., Delegated Classification, NeurIPS 2023.
* [3] Ostrovsky and Schwarz, Reserve Prices in Internet Advertising Auctions: A Field Experiment, JPE 2023. | Summary: * The paper concerns the problem of incentivizing LLM providers to use the most costly model, which is assumed to be the model with the best performance. Without a proper incentive, the LLM company has an incentive to charge customers the highest payment but deliver the service using a lower-cost model, because the performance of the model is usually not verified. Therefore, the paper proposes to use contract design to solve this problem. In particular, an automatic detector first gives an integer score for the performance of an LLM. The task is then to design a payment to the company for each of the integer scores. The goal is to minimize the total payment conditioned on incentivizing the best-performing model.
* The main contribution is the discussion of the cost-robust contract, meaning how to design the optimal contract while the costs of LLMs are unknown. Empirical evaluations have shown how to use the theory in practical settings given LLM performance data.
Strengths: * (I’m not an expert in contract design.) I appreciate the theoretical contributions of the paper. To me, section 4 has several interesting insights into connecting cost-robust contracts with hypothesis tests. As claimed, this is the first paper considering cost-robust contract design. However, the real contribution should be better evaluated by experts.
* In general, incentive issues of LLM uses have been very critical and challenging. I also like the connection between contract design and the production of LLMs.
Weaknesses: Although I believe contract design can speak with the production of LLMs, I’m not fully convinced that the proposed model is a good idea to solve the considered problem.
* In practice, each company prices its own AI models, so who should be the principal? In other words, the paper assumes there is a trustworthy third party who can run the quality detector and commit to a contract with the LLM companies. I'm not sure this is feasible in practice. I hope the authors can explain the application scenarios of their theory more carefully.
* Furthermore, there is no evidence in the paper (and, I guess, on the Internet) that can prove LLM companies are cheating about their service quality. I also don’t think this is very likely because LLM companies have other incentives to provide high-quality services, e.g. their reputation. So, how do we know we are not solving a problem that does not exist?
* Even though we go with the assumption that there is a contract that the company (agent) agrees on, I doubt cost-robustness is the first-order concern. The cost data is usually publicly obtainable from energy reports, as the authors did in their experiments. Even though this data is not public, the energy cost is usually easy to estimate. Therefore, I don’t think incentivizing LLM production is a suitable application for cost-robust contracts.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Limitations are reasonably stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful review! We address the points below:
> **In practice, each company prices its own AI services, so who should be the principal? In other words, the paper assumes there is a trustworthy third party who can run the quality detector and commits to a contract with the LLM companies.**
Thank you for this question. One application our solution applies to is cases in which the principal (a consumer, or a group) has some bargaining power. While this doesn’t cover all possible scenarios, in many cases the principal is in fact in a strong position: Large organizations typically have bargaining power as they license software in bulk (through "enterprise licensing"). Additionally, our results also suggest that users with common interests are likely to benefit from negotiating together, or purchasing LLM services through agencies with negotiation power as in the online ad market. We will update the paper to emphasize these aspects.
Regarding the need for third-party evaluation, it is first important to note that evaluation can be performed locally by the principal in a verifiable way, and thus there is no need for a third-party. This applies naturally in important use-cases such as LLM-assisted code generation. For evaluation settings which are more intricate, following your remark we have extended our framework to support contracts with principal-side sampling, where the principal only evaluates a fixed proportion of results that are chosen uniformly at random (e.g. 1% of responses at random). This results in a contract which requires more budget but demands less evaluation resources. Allowing this trade-off can eliminate the need for a trusted third-party in additional key use-cases, such as evaluation based on an LLM-as-a-judge that runs locally. We will add a full formal description to the paper, together with supporting proofs.
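To make the principal-side sampling idea concrete, here is a minimal sketch (not from the paper; the function names, the 1% rate, and the scoring interface are illustrative assumptions) of how a principal could estimate service quality while evaluating only a uniformly random fraction of responses:

```python
import random

def estimate_quality(responses, evaluate, sample_rate=0.01, seed=0):
    """Estimate average service quality by scoring only a random subset
    of responses (principal-side sampling).

    `evaluate` scores a single response in [0, 1]; only about
    `sample_rate` of responses are evaluated, so evaluation cost drops
    proportionally while the estimate remains unbiased.
    """
    rng = random.Random(seed)
    k = max(1, round(sample_rate * len(responses)))
    sampled = rng.sample(responses, k)
    scores = [evaluate(r) for r in sampled]
    return sum(scores) / len(scores)

# Toy usage: 10,000 responses, 90% of which are "high quality".
responses = ["good"] * 9000 + ["bad"] * 1000
estimate = estimate_quality(responses, lambda r: 1.0 if r == "good" else 0.0)
```

Because the subset is chosen uniformly at random, the estimate is unbiased, and the contract can compensate for the added estimator variance with a larger budget, as described above.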
Lastly, we also note that evaluator integrity is a common assumption in the contract design literature, and any economic/cryptographic solution to the evaluator integrity problem can be applied in our case as well.
> **There is no evidence in the paper (and, I guess, on the Internet) that can prove LLM companies are cheating about their service quality. I also don’t think this is very likely because LLM companies have other incentives to provide high-quality services, e.g. their reputation.**
While strategic misreporting in LLMs has not yet been officially acknowledged, there is ample evidence for strategic misreporting in similar industries with more mature markets, and related evidence suggesting that strategic behavior is also plausible in the context of LLMs:
* In related industries, there are certainly examples of trust violations from companies who provide black box services - an extreme example is Theranos, and another example is throttling by internet service providers, which purposefully and intermittently slow down service.
* For LLMs, there has already been a documented period of alleged low-quality service from ChatGPT around January 2024, dubbed “ChatGPT gone lazy”, which OpenAI acknowledged but never fully explained (see The Guardian, “What is going on with ChatGPT?”, Jan 12, 2024). With the current pricing schemes, it is not clear what will prevent such periods from recurring in the future.
While indirect systems such as reputation or the litigation system allow for some level of quality assurance, contracts are increasingly being applied in real-world applications - such as pay-for-performance healthcare, or revenue sharing for internet content creators. For reputation, one of the issues is that quality is non-trivial to assess, and quality measures are noisy. Considering contracts in conjunction with complementary systems like reputation appears very promising for future work, and we are very grateful for this insightful remark.
> **The cost data is usually publicly obtainable from energy reports, as the authors did in their experiments. Even though this data is not public, the energy cost is usually easy to estimate.**
To our knowledge, fine-grained cost analysis is only publicly available for open-source models. Our experiments use data from the open-source Llama-2 models with publicly-disclosed computational setups. In contrast, the full architecture and computational setup of the commercial models common today are not publicly disclosed (for example, GPT-4, Gemini, and Claude are all closed-source). For closed models, many factors affect the cost (architecture, batching policy, GPU type, exact energy prices, etc.), and many of them remain undisclosed, motivating cost-robustness.
Moreover, there is no doubt that simplicity and explainability are important practical concerns in pricing, and our contracts based on hypothesis tests achieve exactly these properties, while also having a deep theoretical justification as optimal cost-robust contracts. This mirrors seminal results in contract theory that justify the use of simple linear contracts due to their robust optimality - whether to the full action set [1] or to the distributional details [2].
–
We would like to thank you again for the insightful questions and remarks! We hope this discussion clarifies the plausibility of moral hazard problems in LLMs, the applicability of contract design to them, and the significance of the contribution to the foundations of contract design theory.
Please do not hesitate to follow up if you feel some questions remain unanswered.
References:
* [1] Carroll, Robustness and Linear Contracts, AER 2015.
* [2] Dutting et al., Simple vs Optimal Contracts, EC 2019.
---
Rebuttal Comment 1.1:
Title: post rebuttal
Comment: I appreciate the authors' rebuttal. Although I'm still not fully convinced by the response to the second point, which concerns the applicability of the theory, I can see some value in the model and theoretical analysis. I'm raising my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for the improved assessment! We appreciate your feedback, and will certainly expand on this discussion in the paper to further facilitate the applicability of our model. We are also more than happy to continue the discussion if additional questions arise. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rethinking Patch Dependence for Masked Autoencoders | Reject | Summary: This paper reveals the role of inter-patch dependencies in the decoder of MAE on representation learning. The paper shows that MAE achieves coherent image reconstruction through global representations learned in the encoder rather than interactions between patches in the decoder. Based on this, the authors propose CrossMAE, which only utilizes cross-attention in the decoder.
Strengths: - The approach of analyzing the reconstruction process through self-attention between mask tokens and cross-attention between mask and visible tokens is intriguing.
- The writing is clear and easy to follow, with main messages that are solid and insightful.
Weaknesses: 1. Idea/Novelty
- The claim that MAE reconstruction is achieved through global representation learning within the encoder rather than interactions between patches needs more support. Recent studies linking MAE to contrastive learning have found that the receptive field of specific mask tokens in the decoder is relatively small. Could the role of mask tokens in the decoder be to capture local area information? This might explain the smaller attention magnitude of masked tokens compared to visible tokens in Figure 1(b).
- There is a concern that without self-attention (i.e., with the proposed method), the observation that the authors made on the vanilla MAE may no longer be valid. Additional explanation on this point is necessary, as this observation is the main motivation for suggesting CrossMAE.
2. Additional justification
- Effectiveness of using a subset of mask tokens as queries: Unlike the traditional architecture, this method uses only a subset of mask tokens as queries. Detailed analysis and interpretation are needed on why this is effective.
- Performance differences when using the entire set of mask tokens versus a subset (and what percentage of mask tokens is used) should be reported.
3. Experiment
- For a fair comparison, CrossMAE's performance should be evaluated using the same setting as the original MAE, especially regarding the fine-tuning recipe.
- The current experimental results do not convincingly demonstrate the effectiveness of the method. For classification tasks, only the linear-probing and fine-tuning results on IN1K are reported. Following the previous works, classification on various downstream datasets should be also considered.
- For generalizability, evaluation on another task like semantic segmentation (e.g. on ADE20K) would be useful to verify that the suggested method learns the generalizable feature representation.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have not discussed limitations of this work except in the very last sentence of Section 5, despite indicating in questionnaire item #2 that limitations are discussed in that section. It is strongly recommended to disclose the limitations of the proposed work in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and we appreciate these suggestions. We have performed the requested experiments and will revise the paper accordingly.
> [W1] “The claim that MAE reconstruction is achieved through global representation learning within the encoder rather than interactions between patches needs more support.”
This interpretation differs from what we propose in the work. We demonstrate that interactions between masked and visible patches are the primary contributors to MAE reconstruction, while interactions among masked patches contribute only marginally. Figure 1(b) supports this, showing masked tokens attend much more strongly to visible tokens than to other masked tokens.
We further validated this with two test-time ablations:
1. Disabling self-attention among masked tokens still produces valid reconstructions (Figure I for Reviewer yYWM).
2. Disabling cross-attention fails to reconstruct the image (Figure II for Reviewer yYWM).
These results confirm that vanilla MAE primarily uses cross-attention for reconstructions, not self-attention between masked patches.
> [W1] “Could the role of masked tokens in the decoder be to capture local area information? This might explain the smaller attention magnitude of masked tokens compared to visible tokens in Figure 1(b).”
We apologize for the potential misunderstanding. The hypothesis that each masked token attends to tokens nearby is also supported by our empirical analysis in Figure 1. However, this observation **does not contradict our claims** that **1)** all the encoder features of visible patches collectively construct a representation of the whole image, including the parts that were masked out, and **2)** the masked patches attend to visible patches more than they attend to other masked patches.
For the first claim, we note that each masked patch can be reconstructed by attending to the encoder features of the visible patches, indicating the encoder features for the visible parts **collectively** contain information for **the whole image**. In this way, **we can independently decode each masked patch without considering the pixel values of other masked patches**. We will revise the wording to reduce confusion.
For the second claim, although the masked tokens often attend to tokens nearby, they still attend to visible tokens much more strongly, justified by Figure 1(b).
> [W2] “There is a concern that without self-attention (i.e., with the proposed method), the observation that authors made on the vanilla MAE may no longer be valid.”
We observed that in vanilla MAE, self-attention among masked tokens is much weaker than cross-attention between masked and visible tokens. Converting the self-attention MAE to a cross-attention-only decoder changes the attention mask to allow only attention from masked tokens to visible tokens (i.e., **equivalent to setting self-attention magnitude to 0**), aligning with our previous observation.
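As a minimal illustration of this equivalence (a NumPy sketch with hypothetical shapes, omitting the learned query/key/value projections and the residual/MLP sublayers of a real decoder block):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_decode(masked_q, visible_kv, d):
    # Masked tokens act only as queries; keys/values come only from
    # visible tokens, so the masked-to-masked attention weight is
    # exactly 0 (the effect of masking that block out with -inf).
    attn = softmax(masked_q @ visible_kv.T / np.sqrt(d))  # (n_masked, n_visible)
    return attn @ visible_kv

rng = np.random.default_rng(0)
d = 16
visible = rng.standard_normal((49, d))   # e.g. 25% of 196 patches visible
masked = rng.standard_normal((147, d))   # queries at the masked positions
decoded = cross_attention_decode(masked, visible, d)
```

Each masked position is decoded from visible-token features alone, so zeroing masked-to-masked attention does not change which information a cross-attention-only decoder can use.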
> [W3] “Detailed analysis and interpretation are needed on why (using only a subset of masked tokens as queries) is effective.”
The intention and justification for using a prediction ratio that is different from the mask ratio are presented in the general response.
> [W4] Performance differences when using the entire set of masked tokens versus a subset (and what percentage of masked tokens is used) should be reported.
We will revise to emphasize that the downstream performance tradeoff is provided in Table 1 and ablated in Table 3(c). The FLOPS efficiency tradeoff is provided in the general response and will be added to our work.
> [W5] For a fair comparison, CrossMAE's performance should be evaluated using the same setting as the original MAE, especially regarding the fine-tuning recipe.
We use the same fine-tuning and evaluation recipe as outlined in MAE. Hyperparameters such as decoder block width and the number of training epochs are the same as MAE for a fair comparison.
> [W6] “Following the previous works, the classification on various downstream datasets should be also considered.”
We provide more experiments on iNaturalist2019 and Places365 (ViT-B, pretrained for 800 epochs). **We find that CrossMAE performs comparably to MAE for transfer learning.**
| | MAE | CrossMAE (0.25) | CrossMAE (0.75) |
|-|-|-|-|
| iNaturalist2019 Accuracy | 79.8 | 79.4 | **80.1** |
| Places365 Accuracy | **57.9** | **57.9** | 57.8 |
> [W7] “For generalizability, evaluation on another task like semantic segmentation (e.g. on ADE20K).”
In addition to COCO instance segmentation, we provide semantic segmentation results on ADE20K, which further demonstrates that CrossMAE learns generalizable features.
| | MAE | CrossMAE (0.25) | CrossMAE (0.75) |
|-|-|-|-|
| ADE20k mIoU | 47.7 | 47.7 | **48.1** |
> [Limitations] “The authors have not discussed the limitations of this work except for the very last sentence of section 5”
We will add more limitations in the revision:
While CrossMAE improves the efficiency of pretraining, it still inherits the limitations of MAE. For example, although CrossMAE performs on par or better than MAE with the same hyperparameters, as ablated in Table 3, finding the optimal masking and prediction ratio can require more experimentation on new datasets. In addition, since CrossMAE still follows a masked reconstruction objective, the learned model predicts content based on the training dataset and will reflect biases in those data. Finally, both MAE and CrossMAE only work on vision transformers and their variants, while adapting them to CNN-based methods is non-trivial and may require custom CUDA kernels to maintain efficiency.
**Given the clarifications we’ve provided and the promising results in many requested experiments, we kindly ask if you would consider raising your assessment score. Additionally, please let us know if there are any new concerns or further questions we can address for you!**
---
Rebuttal Comment 1.1:
Title: A gentle reminder - 2 days left for the author-reviewer discussion
Comment: Dear reviewer,
We wanted to ask if you had a chance to check our response with additional details and clarifications, analyses on transfer learning to different datasets and task, and an updated limitations section.
Please also consider checking our general response, which includes a more detailed analysis of the improved efficiency of CrossMAE.
Please let us know if we addressed your concerns and/or if any further information or experiments would be helpful. We would be happy to provide them.
Many thanks!
Authors
---
Reply to Comment 1.1.1:
Title: Gentle reminder - 1 day left for the author-reviewer discussion
Comment: Dear reviewer,
Please let us know if our response addressed your concerns and/or if you have any additional questions. We would be happy to answer them before the end of the discussion period.
Thanks!
Authors | Summary: The paper introduces a novel pre-training approach called CrossMAE. Instead of concatenating the masked and visible tokens for the decoder, the authors add cross-attention to decode the masked tokens by using them and the visible patch embeddings as separate inputs to the decoder. Further, the authors introduce a method to only partially reconstruct the masked patches, and leverage inter-bock attention to fuse feature across different layers.
Strengths: - The paper is well motivated through a practical observation
- The authors propose a useful technical contribution which seem intuitive given the described observations
- The paper is well written and technically sound
- All visualizations provide additional value, I especially like Figure 5. It describes the effect of the contributions well
- Judging from the experiment section, the presented approach mostly improves over the vanilla MAE and other MAE-like follow-up works
Weaknesses: - I feel like the paper is missing a more structured ablation of the individual contributions. I think the paper would benefit from having a simple table where all contributions are added sequentially to better identify the performance effect of each individual contribution, as in:
MAE X.X
+ Cross-Attn X.X
+ Partial Reconstruction X.X
+ Inter-Block Attn X.X
- As can be observed from Table 3 c), the final setting (underlined) of the prediction ratio, 0.75, turns out to be exactly the same as the optimal masking ratio, 0.75. If I understood correctly, this means that in practice, CrossMAE works best when it predicts all tokens that were masked, not just a fraction of them. Only predicting part of the masked tokens was previously listed as a contribution. Therefore, I don’t understand how this additional hyperparameter provides any benefit for better downstream performance. Maybe I’m missing something and this can be cleared up by answering the previous point.
- All models are only trained for 800 epochs. The original MAE reaches peak performance at 1600 epochs. For a thorough comparison, it would be necessary to also train CrossMAE for 1600 epochs and see if the performance gains sustain, or if performance has peaked at 800 epochs.
- Table 1 is missing the CrossMAE ViT-H with 75% masking ratio
- Contribution 2 and 3 don’t seem to be as well motivated in the introduction in comparison to Contribution 1
- Better performance is listed as a contribution. IMO this is not a contribution, rather a result of the technical contributions
Technical Quality: 3
Clarity: 3
Questions for Authors: I like the idea and motivation of the paper. It starts from an interesting observation of the vanilla MAE, and aims to improve this aspect. Unfortunately, it is not fully clear which of the proposed contributions actually have an impact on performance. Table 3a) shows that adding Cross-Attn improves downstream performance. But since the authors choose the masking ratio to be the same as the prediction ratio, there doesn’t seem to be an improvement resulting from the second contribution. The effect of improved computation complexity only exists if prediction ratio < masking ratio. Lastly, according to Table 3 d), with the right number of fused feature maps, inter-block attention CAN improve the model, but the authors choose 12 as their default number of fused feature maps, which doesn’t improve performance over just adding Cross-Attn.
Concretely, I think the following additions could improve the paper:
- Introduce a comprehensive analysis of the individual contributions’ impact on performance, and also computational complexity if you want to highlight that, in a similar manner as proposed above
- Train both models for 1600 epochs and evaluate if the performance increase can be sustained
I’m willing to increase my score if my concerns are adequately addressed, and/or if the other reviewers list further convincing arguments for accepting the paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have sufficiently addressed the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to thank the reviewer for the detailed review. We provide responses via the discussion below.
The most critical concern that the reviewer had was outlined in **Weakness 2**:
> [W2] I don’t understand how the prediction ratio provides any benefit for better downstream performance.
We apologize for the confusion regarding why we introduced a varying prediction ratio (partial reconstruction). Partial reconstruction does not intend to improve the performance in terms of downstream accuracy. Instead, it **allows our user to control and significantly reduce the computation required during training in terms of runtime and GPU memory usage with minimal impact on downstream performance**, as explained in L196-197. **Additionally, in the general rebuttal above, we present a derivation of the computational complexity as well as runtime statistics.**
In addition to the absolute downstream performance that the reviewer focused on throughout the review, one of the ways to improve model performance is making training more **efficient** (i.e. reaching a similar downstream classification accuracy and segmentation accuracy as baselines with less training time and compute).
Practically, the outlined improvement in efficiency not only **allows longer and more exploratory experiments to be run for improved downstream accuracy within the same compute budget** but also **makes pre-training more feasible in settings where compute resources are more constrained** (Table 10). We hope that the findings we had in the paper along with the improved pre-training efficiency can lead to more future research in improving visual pre-training.
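As a back-of-the-envelope illustration of this efficiency argument (a sketch under our own simplifying assumptions: we count only attention query-key pairs in one decoder block and ignore MLPs and constant factors), the prediction ratio directly scales the number of decoder queries:

```python
def attention_pairs(n_patches=196, mask_ratio=0.75, pred_ratio=0.25):
    """Rough count of attention query-key pairs in one decoder block."""
    n_vis = round(n_patches * (1 - mask_ratio))
    n_mask = n_patches - n_vis
    # Vanilla MAE decoder: joint self-attention over all tokens.
    mae_pairs = n_patches ** 2
    # CrossMAE decoder: only pred_ratio of the masked tokens act as
    # queries, and they attend only to visible-token keys.
    crossmae_pairs = round(n_mask * pred_ratio) * n_vis
    return mae_pairs, crossmae_pairs

mae_pairs, crossmae_pairs = attention_pairs()  # 38416 vs 1813
```

Under these toy numbers, a 0.25 prediction ratio computes roughly 5% of the query-key pairs of the vanilla decoder, which is where the runtime and memory savings come from.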
Given this, we address each of the questions individually:
> [W1/Q1] “missing a more structured ablation of the individual contributions”
We refactored Table 3, the individual contribution’s impact on performance. All time trials are averaged over 10 epochs with FlashAttention 2 enabled. Note that the runtime is benchmarked with 2x NVIDIA A100 GPUs while Table 10 uses 4x A100 (L 584), so the runtime roughly doubles compared to Table 10.
| Method | Accuracy | Runtime (mins per epoch) |
|-------------------------------|----------|---------------------|
| MAE | 83.0 | 9.35 |
| +Cross Attention | 82.9 | 7.38 |
| +Inter-block Attention* | **83.3** | 8.41 |
| +Partial Reconstruction | 83.2 | **6.32** |
\* In cross-attention, the keys and queries can take different values, which enables each decoder block to use different encoder features through inter-block attention. This would otherwise not be possible with the self-attention decoder blocks in vanilla MAE.
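As a simplified sketch of this mechanism (a NumPy illustration with hypothetical shapes; in the actual method the mixing weights are learned end-to-end):

```python
import numpy as np

def inter_block_features(encoder_feats, weights):
    """Fuse per-block encoder features into one key/value map per
    decoder block via per-decoder-block mixing weights.

    encoder_feats: (n_enc_blocks, n_tokens, d) stacked encoder outputs
    weights:       (n_dec_blocks, n_enc_blocks), rows sum to 1
    """
    return np.einsum('kb,bnd->knd', weights, encoder_feats)

rng = np.random.default_rng(0)
enc = rng.standard_normal((12, 49, 64))  # 12 encoder blocks, 49 visible tokens
w = rng.random((8, 12))
w /= w.sum(axis=1, keepdims=True)        # normalize mixing weights
fused = inter_block_features(enc, w)     # one (49, 64) key/value map per decoder block
```

Because keys/values are decoupled from queries in cross-attention, decoder block `k` can consume its own fused map `fused[k]`; a self-attention decoder offers no such slot for per-block encoder features.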
We will incorporate this table to improve the clarity of our work’s contributions and advantages.
> [W3/Q2] “Train both models for 1600 epochs”
We provide an experiment at 1600 Epoch in the table below. Both MAE and CrossMAE are trained with the same hyperparameters and on the ViT-B architecture. **CrossMAE still performs comparably while being faster at pre-training.**
| | MAE | CrossMAE (0.25) | CrossMAE (0.75) |
| - | - | - | - |
| ImageNet Acc | 83.6 | **83.7** | **83.7** |
| Runtime (mins per epoch) | 9.35 | **6.32** | 8.41 |
> [W4] Table 1 is missing the CrossMAE ViT-H with 75% masking ratio
Due to resource constraints and limited time for the rebuttal, we were not able to finish training ViT-H at the 75% prediction ratio. However, we did show that **even at a 25% prediction ratio, our ViT-H performance surpasses that of MAE** (86.3% vs 85.8%), a setting requiring **even less compute** than the 75% prediction ratio.
> [W5,6] “Contribution 2 and 3 don’t seem to be as well motivated in the introduction in comparison to Contribution 1; better performance (contribution 3) is a result of the technical contributions”
We have restructured the list of contributions as below:
1. (left unchanged) **We present a novel understanding of MAE.** Our findings show that MAE reconstructs coherent images from visible patches not through interactions between patches to be reconstructed in the decoder but by learning a global representation within the encoder. This is evidenced by the model’s ability to generate coherent images and maintain robust downstream performance without such interactions, indicating the encoder effectively captures and transmits global image information.
2. Given our discovery that the encoder in MAE already captures a comprehensive global representation, **we propose replacing self-attention layers with cross-attention** to aggregate the output tokens of the encoder into each input token within the decoder layers independently, thereby eliminating the need for token-to-token communication within the decoder.
3. Finally, **we leverage additional properties of cross-attention to achieve an even better performance-efficiency trade-off**. CrossMAE's ability to independently reconstruct masked tokens allows us to process only a fraction of the masked patches during training, significantly improving efficiency. Furthermore, the use of cross-attention enables different decoder blocks to utilize distinct encoder features, enhancing the performance-compute trade-off through inter-block attention. This approach achieves comparable or superior results in image classification and instance segmentation tasks across various model sizes (from ViT-S to ViT-H) while reducing computational demands compared to MAE.
**In light of the clarifications and analyses we provide, we would like to ask if you might be open to increasing your assessment score, and if there are any additional concerns or questions that we can address for you!**
---
Rebuttal 2:
Title: Additional experiment for CrossMAE ViT-H with 75% masking ratio
Comment: > [W4] Table 1 is missing the CrossMAE ViT-H with 75% masking ratio
Our CrossMAE ViT-H run with a 75% prediction ratio just finished. The updated comparison for ViT-H is shown in the table below. **Our method, with either a 25% or 75% prediction ratio, surpasses MAE in terms of ImageNet fine-tuning performance on ViT-H, demonstrating the scalability of our method.** The result further justifies that reconstructing only 25% of the masked tokens brings large efficiency gains, as analyzed in the general response, but has a marginal impact on downstream performance.
| Method | ViT-S | ViT-B | ViT-L | ViT-H |
|-----------------|-------|-------|-------|-------|
| Supervised | 79.0 | 82.3 | 82.6 | 83.1 |
| MAE | 78.9 | 83.3 | **85.4** | 85.8 |
| CrossMAE (25%) | 79.2 | 83.5 | **85.4** | 86.3 |
| CrossMAE (75%) | **79.3** | **83.7** | **85.4** | **86.4** |
---
Rebuttal Comment 2.1:
Title: A gentle reminder - 2 days left for the author-reviewer discussion
Comment: Dear reviewer,
We wanted to ask if you had a chance to check our response with additional details and clarifications, a more structured ablation study, a comparison of model performance at 1600 epochs, CrossMAE ViT-H with 75% masking ratio, and an updated list of contributions.
Please also consider checking our general response, which includes a more detailed analysis of the improved efficiency of CrossMAE.
Please let us know if we addressed your concerns and/or if any further information or experiments would be helpful. We would be happy to provide them.
Many thanks!
Authors | Summary: This paper presents CrossMAE, a methodology for improving pre-training efficiency over that of MAE for an encoder. The paper motivates its approach by presenting visual evidence that, in standard MAE pre-training, masked tokens attend to other masked tokens significantly less than to non-masked (aka, visible) tokens. Using this motivation, the paper then presents CrossMAE, which differs from MAE largely in that it replaces the MAE self-attention with cross-attention between the masked tokens and a learnable weighted combination of the encoder feature maps. This aspect decouples queries from keys and values (which is not the case in MAE), which the paper then exploits to allow only some (but not necessarily all) mask tokens to be used during reconstruction to pre-train the model. The paper presents an analysis of which encoder block features are optimal to cross attend with each decoder block, and it presents ablation studies on multiple design decisions. Finally, it presents visual and fine-tuning results showing comparable performance to MAE and similar methods.
Strengths: This paper motivates CrossMAE well by showing evidence of a potential inefficiency in MAE (self-attention) and then presenting an approach to remedy it (cross attention). I particularly like how the paper delves even deeper, though: instead of stopping at the level of replacing self-attention with cross-attention, it then points out that this choice allows for a significantly fewer number of masked patches to have to be reconstructed, which reduces flop count significantly. The ablations in Table 3 are fairly thorough and answered some questions I have developed. The performance of CrossMAE appears comparable to other SOTA methods but with significantly more efficient pretraining.
Weaknesses: 1) In Fig 1b, IIUC, for one particular mask token, the two $\mu$'s are the respective attention values averaged over all transformer blocks and all masked/non-masked tokens. If this is the case, my concern is that by averaging over all transformer blocks, variations in the attention are being hidden. Naively, I would think that for early blocks, the attention due to masked tokens would be small (as the paper concludes) but would become larger for the later blocks (since by then the masked tokens carry actual useful signal). Did you consider this?
2) I do not follow why CrossMAE does not need an MLP at the end to convert final decoder tokens back to raw pixels. Line 218 says that the inputs to the first encoder block are included in the feature maps for cross attentions. Does this cause a final MLP to not be used?
3) Less critical:
3a) Fig 1b should point the reader to Section A.1. I spent much of my reading confused about what $\mu$ is.
3b) Fig 4a should have a different number of decoder layers than encoder layers. When I saw this figure, I immediately wondered why a decoder block wasn't being paired with feature maps from its "partner" encoder. I had to wait until lines 204-207 to get an explanation of why this doesn't work.
3c) Line 187 references a "second question" in Sec 3.1, which doesn't exist as far as I can tell.
3d) Fig 4a shows the "vanilla" version of Cross MAE, where the final encoder layer feature maps are attended with all decoder layers. But the paper presents results exclusively (?) on the version that uses a learned combination of the feature maps. Anyway, the figure confused me. Maybe I just didn't understand what the solid arrows vs dotted ones are supposed to represent.
Technical Quality: 3
Clarity: 3
Questions for Authors: See "Weaknesses".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No weaknesses are specifically addressed. But as this paper is essentially an optimization of MAE, I'm not sure this question is relevant.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable questions and suggestions! We provide responses via the discussion below:
> [W1] “By averaging over all transformer blocks, variations in the attention may be hidden. Naively, (the reviewer) would think that for early blocks, the attention due to masked tokens would be small but becomes larger for the later blocks.”
This is indeed an interesting hypothesis! We test this hypothesis on the pre-trained MAE and report the cross-attention and self-attention magnitude at each individual block in the table below.
| Block | Attention to visible tokens | Attention to masked tokens |
|-------|---------------------------|--------------------------|
| 1 (closest to the encoder) | 0.196 | 0.042 |
| 2 | 0.149 | 0.058 |
| 3 | 0.181 | 0.047 |
| 4 | 0.153 | 0.057 |
| 5 | 0.193 | 0.043 |
| 6 | 0.161 | 0.054 |
| 7 | 0.207 | 0.038 |
| 8 (closest to the reconstruction) | 0.179 | 0.048 |
| Sum (reported in the paper) | 1.418 | 0.388 |
Based on these results, we do not observe a significant increase in the magnitude of the attention to masked tokens for the later decoder blocks. In addition, our observed pattern that “the magnitude of attention is larger in masked tokens’ cross-attention to visible tokens than in masked tokens’ self-attention” is shared across all decoder layers.
> [W2] “I do not follow why CrossMAE does not need an MLP at the end to convert final decoder tokens back to raw pixels.”
> Line 218 says that the inputs to the first encoder block are included in the feature maps for cross attentions. Does this cause a final MLP to not be used?
Sorry for the confusion! We **do** need a final MLP to convert the final **decoder** tokens back to raw pixels, which is consistent with MAE.
Line 218 refers to the process of passing the aggregated feature of each **encoder** block to each decoder block, rather than converting the final decoder features to raw pixel values; here we treat the input feature after patchification and linear projection as one of the encoder features to be aggregated. We will update the paper to clarify this further.
> [W3] 3.a-d
Thank you so much for pointing these out! We have made the following changes to the paper to provide clarifications.
> For 3.a: “Fig 1b should point the reader to Section A.1. I spent much of my reading confused about what μ is.”
We updated the third paragraph to clarify what μ is and updated the figure captions. Now the third paragraph and the caption read as below (with main changes in **bold**):
Line 28-33:
We decompose the decoding process of each masked token into two parallel components: self-attention with other masked tokens, as well as cross-attention to the encoded visible tokens. If MAE relies on self-attention with other masked tokens, its average should be on par with the cross-attention. Yet, the quantitative comparison in Figure 1.(b) shows that **the average magnitude of masked token-to-visible token cross-attention** (μ=1.42) in the MAE decoder, evaluated over the entire ImageNet validation set, far exceeds **that of masked token-to-masked token self-attention** (μ=0.39). **We describe the attention calculation in Section A.1.**
Figure 1 Caption:
(B) MAE reconstructs a masked token (marked by a blue star) by attending to both masked tokens (B.Left) and visible tokens (B.Right). A quantitative analysis of the ImageNet validation set reveals that masked tokens in MAE attend disproportionately to visible tokens compared to other masked tokens (**average attention magnitude** μ=1.42 vs μ=0.39, respectively). This observation raises questions about the necessity of attention among masked tokens themselves.
> For 3.b/d: “Fig 4a: should have a different number of decoder layers than encoder layers … Fig 4a shows the "vanilla" version of CrossMAE, but the paper presents results exclusively on the version that uses a learned combination of the feature maps.”
Thanks for the suggestions. We have updated Figure 4a accordingly and attached the updated figure in the PDF in the general rebuttal (see the figure for Reviewer gKk3).
> For 3.c: “Line 187 references a "second question" in Sec 3.1, which doesn't exist as far as I can tell.”
Now Line 187 reads: “Rather than decoding the reconstruction for all masked locations, we only compute the reconstruction on a random subset of the locations and apply the loss to the decoded locations.”
**Thank you once again for your positive feedback! If you have any further questions, please don’t hesitate to ask!**
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses to my concerns. From W1, I conclude that my intuition is far from always correct. I appreciate the study you made. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all the reviewers for their thoughtful reviews as well as encouraging feedback. We are especially glad that the reviewers believe that our ablations are **“fairly thorough”** (Reviewer gKk3), the paper is **“well motivated through a practical observation”** and **“well written and technically sound”** (Reviewer cyeA), and our work offers **“main messages that are solid and insightful”** (Reviewer yYWM). We respond to some common concerns in the general response below. We are more than happy to discuss with the reviewers to address any additional questions during the discussion period.
## The goal of decoupling the prediction ratio from the masked ratio (partial reconstruction) [Reviewer cyeA, yYWM]
Partial reconstruction does not intend to improve the performance in terms of downstream accuracy. Instead, it **allows users to control and significantly reduce the computation required during training, in terms of runtime and GPU memory usage, with minimal impact on downstream performance**, as explained in L196-197. We refer the reviewers to the response below for a detailed analysis of computational complexity.
Concretely, we compare the ImageNet classification accuracy, runtime, and memory requirements in the table below (the setting follows Table 1 and the runtime is measured on 2x A100 80GB with Flash-Attention 2 [1] enabled for all three models; gradient accumulation is set to 2):
| Method (prediction ratio) | MAE (0.75) | CrossMAE (0.75) | CrossMAE (0.25) |
|-----------------------------|------------|-----------------|-----------------|
| Accuracy | 83.0 | **83.3** | 83.2 |
| Runtime in mins per epoch | 9.35 | 8.41 | **6.32** |
| Memory (MB per GPU) | 68386 | 57987 | **36805** |
Furthermore, **we would like to underscore that partial reconstruction depends on our proposed use of cross-attention decoder blocks and is not directly applicable to vanilla MAE for improved efficiency.**
In vanilla MAE, each masked token attends to other masked tokens in the decoder blocks. Consequently, removing any masked tokens will change the decoded values of the remaining ones. However, CrossMAE generates each masked token based solely on the visible tokens, without considering other masked tokens. This means the reconstruction of a masked patch is not affected by which subset of masked tokens is decoded. As a result, the loss applied to this subset serves as an unbiased estimate of the original loss that would be applied to all reconstructed tokens. This approach minimizes the performance impact while significantly reducing computational requirements and runtime.
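To make the independence argument concrete, the following minimal numpy sketch (an illustration added for clarity, not the actual CrossMAE implementation; token counts and dimensions are arbitrary) shows that with a cross-attention-only decoder, decoding a random subset of masked tokens reproduces exactly the corresponding rows of the full decoding:

```python
import numpy as np

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: each query row attends only to
    # the visible-token keys/values, never to the other query rows.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

rng = np.random.default_rng(0)
d = 16
visible = rng.standard_normal((49, d))   # encoded visible tokens
masked = rng.standard_normal((147, d))   # mask-token queries

full = cross_attention(masked, visible, visible)
subset_idx = rng.choice(147, size=49, replace=False)
partial = cross_attention(masked[subset_idx], visible, visible)

# Decoding only a subset reproduces the corresponding rows of the full
# decode, so a loss on the subset is an unbiased estimate of the full loss.
assert np.allclose(partial, full[subset_idx])
```

With a self-attention decoder this equality would not hold, since removing masked tokens changes the attention pattern of the remaining ones.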
## Analysis on computational complexity and effectiveness of small prediction ratio [Reviewer cyeA, yYWM]
Since vanilla MAE uses a self-attention decoder, the computational complexity of the decoder is quadratic with respect to the total number of tokens. Using cross-attention reduces the computational complexity to scale with the product of the number of visible tokens and the number of decoded masked tokens. By varying the number of decoded masked tokens through partial reconstruction, the complexity can be further reduced by only decoding a subset of the masked tokens.
Formally, let the total number of tokens be $N$ and the latent dimension be $d$, with a mask ratio $p$ and prediction ratio $\gamma$. The complexity of attention in the MAE decoder is on the order of $N^2d$. In the CrossMAE decoder, the masked tokens (of length $\gamma N$) serve as the queries, and the visible tokens (of length $(1-p)N$) serve as the keys and values. The resulting attention complexity in the CrossMAE decoder is $(1-p)\gamma N^2d$. In Table 1, the CrossMAE variant that decodes all masked tokens, or CrossMAE (0.75), uses $p=0.75$ and $\gamma=0.75$ (i.e., full prediction with $p=\gamma$), which gives $\frac{3}{16}N^2d$. On the other hand, CrossMAE (0.25) uses $p=0.75$ and $\gamma=0.25$, which gives $\frac{1}{16}N^2d$, further reducing the complexity.
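The fractions above can be checked with a few lines of exact arithmetic (illustrative only; `crossmae_cost` is a hypothetical helper, not part of our codebase):

```python
from fractions import Fraction

def crossmae_cost(p, gamma):
    # Attention cost relative to N^2 * d: the gamma*N decoded masked
    # tokens attend to the (1-p)*N visible tokens.
    return (1 - Fraction(p)) * Fraction(gamma)

mae_cost = Fraction(1)  # MAE: self-attention over all N tokens costs ~N^2 d

full_pred = crossmae_cost(Fraction(3, 4), Fraction(3, 4))     # CrossMAE (0.75)
partial_pred = crossmae_cost(Fraction(3, 4), Fraction(1, 4))  # CrossMAE (0.25)

assert full_pred == Fraction(3, 16)
assert partial_pred == Fraction(1, 16)
assert partial_pred < full_pred < mae_cost
```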
The reduced computational complexity does translate to **faster training and lower memory requirements**, as shown in the table above, and is orthogonal to other acceleration techniques such as Flash-Attention 2 [1].
[1] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. 2023.
Pdf: /pdf/0269dcc66329bd869ef6c2f053b00394fbd10512.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Sober Look at the Robustness of CLIPs to Spurious Features | Accept (poster) | Summary: The authors aim to investigate spurious correlations learned by CLIP models. For this, they curate a novel dataset where animals are organized into common and uncommon backgrounds, e.g. a polar bear is more likely encountered in snow than on grass. The authors then perform experiments where they benchmark various CLIP and ImageNet models on the curated dataset. They observe that CLIP models suffer from spurious correlations which stem from changing the background.
Strengths: I think the issue of spurious correlations is important and one needs to understand how and whether VLMs learn spurious features. The paper presents many experiments and shows that scale or backbone capacity do not improve the effective robustness on CounterAnimal which is interesting.
Weaknesses: The paper has many issues, both in terms of writing and the methodology which need to be fixed.
### Major:
**The authors missed important previous works**: The paper “Recognition in Terra Incognita” is very related to this work and also proposes a “dataset designed to measure recognition generalization to novel environments” based on camera traps. The dataset is sorted according to difficult environments for different animals, which makes it very similar to CounterAnimal. I think the authors need to cite and discuss this paper. Currently, I do not understand the benefit of having a new dataset in addition to the already present one. The waterbirds dataset is also highly similar and should be discussed (https://arxiv.org/pdf/1911.08731). The authors cite that paper, but do not discuss it in the Related Work section, nor put it into context with CounterAnimal. The backgrounds challenge (https://github.com/MadryLab/backgrounds_challenge) is also highly related and should be discussed. In general, the related work section is very weak, given how extensively spurious correlations and worst-group-accuracy have been studied. Another important work to be discussed would be "Finding and Fixing Spurious Patterns with Explanations" (https://arxiv.org/abs/2106.02112).
**The naming of the common vs counter groups is misleading**:
Line 165: “Photos with the highest CLIP accuracy are assigned to the common group, and those with the lowest CLIP accuracy are assigned to the counter group.” I have a major understanding issue here. As far as I understood the paper before this line, the goal was to put images with common backgrounds into the common group and images with uncommon backgrounds into the counter group. This is also depicted in Fig. 1 or Table 1. The caption in Fig.1 says that “Most ice bears appear in a snow background (i.e., common), while it also is reasonable to find some ice bears in a grassy environment (i.e., counter)”. But here, the authors write that accuracy has actually been used to separate images into these groups? But then the frequency of the co-occurrence of certain backgrounds and classes has not been taken into account, or rather, it is a conjecture that those backgrounds where the CLIP model has higher accuracy on are more “common”?
**The terms "effective robustness" and "robustness" are used interchangeably which is wrong and confusing**:
I think the paper conflates the terms “robustness” and “effective robustness”, which is confusing. When looking at effective robustness plots, such as in Fig. 2, we are interested in the residual difference between the measured value and the value predicted by the linear fit. As far as I can see, all plotted markers (CLIP and ImageNet) lie on their respective linear fits, and none of the interventions, such as CLIP-DC or CLIP-DFN, offer any effective robustness benefits. It is true, though, that the **absolute** robustness numbers are overall higher for the CLIP-DFN models, for larger models, or for models trained on more data. I am, however, confused by the authors' discussion of this observation. On the one hand, they write that larger CLIP models are more robust but that increasing the dataset size does not yield improvements. First, I am confused about whether they mean “effective robustness” or “robustness” here. Second, I do not see the effect the authors are describing: both more data and larger backbones yield higher absolute robustness but the same effective robustness as the other models. The statement “CLIP models trained on high-quality data are more robust” is also confusing, because it is not clear whether “robustness” or “effective robustness” is meant.
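To make the distinction concrete: effective robustness is the residual of a model's OOD accuracy above a linear trend fitted over a family of baseline models. A minimal sketch with made-up accuracy numbers (ignoring the probit axis scaling typically used in such plots):

```python
import numpy as np

# Made-up (in-distribution, out-of-distribution) accuracy pairs for a
# family of baseline models that happen to lie on one linear trend.
id_acc = np.array([60.0, 70.0, 80.0, 90.0])
ood_acc = np.array([40.0, 48.0, 56.0, 64.0])

slope, intercept = np.polyfit(id_acc, ood_acc, 1)

def effective_robustness(model_id_acc, model_ood_acc):
    # Residual above the fitted trend: a model sitting on the line has zero
    # effective robustness even if its absolute OOD accuracy is high.
    return model_ood_acc - (slope * model_id_acc + intercept)

# A larger model further along the same line: higher absolute robustness,
# zero effective robustness.
assert abs(effective_robustness(95.0, 68.0)) < 1e-9
# A model beating the trend by 4 points: positive effective robustness.
assert abs(effective_robustness(80.0, 60.0) - 4.0) < 1e-9
```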
**Due to methodology issues, results on CLIP models cannot be compared to results on ImageNet models (or other advanced LVLMs):**
Line 60: “d) Spurious discovering: preserving classes and associated data based on the decrease in zero-shot performance (i.e., evaluating based on pre-trained CLIP models without fine-tuning) when shifting the backgrounds.” This step is really unclear. Do the authors curate the dataset based on the zero-shot accuracy of a CLIP model? From the introduction and the abstract, it sounds like the authors want to benchmark the robustness of CLIP vs ImageNet models on this custom dataset. But then, it is strange that CLIP models also seem to be used during the curation process. After reading the more detailed description in line 156, I think the statements made in line 85 are misleading. The authors write “ImageNet models are more robust to spurious correlations captured by CounterAnimal” and “Compared with CLIP models (colored in blue), surprisingly, we find that ImageNet models exhibit a stronger robustness to the spurious correlations in CounterAnimal.” Given that CounterAnimal has been curated based on the performance drop of a CLIP model, I find it very unsurprising that CLIP models perform worse on it compared to ImageNet models. I think that if CounterAnimal had been curated based on an ImageNet-trained ResNet50, the trend would have been reversed. I think all statements comparing CLIP and ImageNet trained models on CounterAnimal need to be relaxed and I think that this comparison is quite meaningless because of the described selection bias. I think that the whole Section 3.3. is misleading for this reason and statements such as the following cannot be made given the methodology issues: “Surprisingly, we find that ImageNet models are more robust to spurious features in the CounterAnimal dataset. 
This finding may contradict the common belief [Radford et al., 2021, Shi et al., 2023] that the CLIP models tend to be more robust to spurious correlations than single-modal supervised learning.” Similarly, the conjecture paragraph from line 265 onwards is wrong and cannot be made.
For the same reason, the comparison to advanced LVLMs in line 273 onwards cannot be made.
Figure 1: Are these examples cherry-picked or are they representative of the data points present in CounterAnimal? I am asking this because of the Winoground dataset [A]. This dataset tests the compositionality of VLMs by forcing a model to match two captions to two respective images. Winoground was later criticized because the two images in the choice process are not equally hard [B]. For example, the model needs to match “the glass is on the grass” and “the grass is in the glass” to the corresponding images. However, there is much more grass in the image matching the first caption, and the model likely picks that image for both captions just because there is more grass and it makes the decision in a bag-of-words manner. To summarize, Winoground did not control for object size, orientation, and other confounders. In Fig. 1, it appears that the main objects (the polar bears) are equal in size, so size could be excluded as a possible confounder? Did the authors consider this possibility, i.e., that the drop in performance could be explained by other differences in the images from the respective domains?
[A] https://arxiv.org/abs/2204.03162
[B] https://arxiv.org/abs/2211.00768
### General:
Line 25: please cite CLIP
Line 64: “The resulting dataset covers a total of 45 animal classes, ends up with 7,174 common photos and 5,926 counter photos, aligning with the standard size as an evaluation dataset [Recht et al., 2019, Hendrycks et al., 2021].” -> I do not understand this statement. Different test sets have different numbers of test images. ImageNet Val has 50k images for example. In what sense are the presented numbers standard?
Line 94: “Overall, larger CLIP backbones (i.e., larger markers) can improve the effective robustness, implying that scaling up backbones may enhance robustness against spurious features.” -> I do not see this in Fig. 2. The larger markers appear to be on the fitted line, same as the smaller markers. Effective robustness measures the difference with respect to the linear fit, and there is none for the larger CLIP backbones. Please clarify this point.
Line 146: “feature noise involves severe feature corruptions” -> Please be more specific here. What do you mean with feature noise? Do features refer to animals features such as missing ears or such? Or to the images themselves?
Line 147: “clarity issues arise when animal objects are not in major positions” -> unclear formulation: what is a major position? Do the authors mean that the animals are too small or not in the center of the image?
Line 153: “Note that the class space of backgrounds as above is not entirely orthogonal with each other due to the inherent ambiguity of the real-world situations. Nevertheless, we try our best to discern the assigned background labels within each animal class.” -> This is unclear. How many images would be ambiguous? I could imagine that many images would have two backgrounds, such as e.g. grass and sky or snow and water. For example, the last image in Fig. 1 on the left has both snow and water. It is not clear to me that only picking the snow background and ignoring the water is correct here. Further, at least for CLIP, the caption can contain several background keywords.
Further, I imagine animals occur in all kinds of environments, but there are only two backgrounds for each animal. Were the other images also discarded?
Line 214: “Therefore, we conclude that our CounterAnimal dataset possesses some realistic shifts that are generally contained in large-scale pre-training data, regardless of backbones.” This conclusion cannot be drawn from this experiment since the backbone has not been varied here.
### Section 4:
The proposed experiment is very similar to the well-known ShiftMNIST [D] or ColoredMNIST [E] datasets, which test the influence of spurious correlations. The findings here are not novel and should be brought into perspective with previous work. I do not understand how Fig. 11 relates to the text. What is “supervised”, “obj”, “objbkg”?
[D] https://arxiv.org/pdf/1811.00401
[E] https://arxiv.org/pdf/1907.02893
### Typos, grammar:
The quality of the text is poor on some occasions which makes reading and understanding the paper difficult. The manuscript would benefit from a round of proof-reading. Some statements and formulations should be made more precise.
Line 32: “The performance boosts over ImageNet models seem to suggest that CLIP resolves distribution shifts and thus spark a rich discussion about its rationale.” Strange formulation. How can “distribution shifts be resolved”? Please rephrase for clarity.
Line 112: “More specifically, [Yang et al., 2023] report that CLIP models may misaligned frequently co-occured objects with the corresponding texts.”
Line 115: “[Tong et al., 2024] find that CLIP misaligned samples will further cause the hallucination of LVLMs.” I do not understand this statement, grammar errors.
Line 132: “Meanwhile, many existing datasets, e.g., DomainBed and Wilds, do not have overlapped label space with ImageNet, making the comparison between ImageNet and CLIP models hard.” There is a version of DomainBed [C] where the dataset has been filtered to only include classes compatible with ImageNet, such that an evaluation of ImageNet models is possible out-of-the-box.
[C] https://openreview.net/pdf?id=LiC2vmzbpMO
Line 171: “Recalling that, when CLIP models resort to the shortcut of data, the model performance will heavily correlate with the backgrounds presented in the common group yet is compromised when coming to the counter group.” Grammar errors, I do not understand this sentence. What is “the shortcut of data”?
Line 208: “It suggests that the CounterAnimal dataset captures some general spurious shifts that at least commonly present in the pre-train dataset of LAION400M.” grammar
Line 213: “Here, the spurious features degenerate the zero-shot robustness of CLIP models trained on both LAION2B and by OpenAI.” Typo? “degenerate”?
Line 243: “In Figure 7, we consider two pre-train datasets, namely, LAION2B and the close-soured data from OpenAI” typo
Line 297: “Nevertheless, in the following theorem, we justify that CLIP remains learning to use spurious features, aligned with our experimental observations on the CounterAnimal dataset.” grammar
Strange space break between line 310 and 311.
# Summary of the review:
We could fix the naming convention from "common" and "counter" to something like "hard" and "easy", since accuracy rather than the frequency of certain backgrounds has been used to classify backgrounds into groups. Based on my arguments above, I believe we cannot compare CLIP models to ImageNet models on the proposed dataset in any sensible way due to the introduced selection bias. I believe the very title of the paper is misleading, since the posed question cannot be answered given the methodology issues. But if we remove the claims comparing ImageNet models and CLIP models, then the main point of the paper is that, given certain classes, there exist backgrounds which are harder for CLIP models and other backgrounds which are easier. I don't think this observation is particularly interesting on its own. The authors did not relate the hardness of the backgrounds to their frequency in the pretraining dataset or anything else. The observation that backgrounds matter is also not novel but quite well-known, and the authors do not offer a solution. Further, the writing is quite poor and confusing on many occasions; I provided many examples of incorrect and confusing sentences above.
Technical Quality: 1
Clarity: 2
Questions for Authors: I have written a very detailed review above. I expect clarifications with respect to the raised points, at the very least in the "Major" paragraph.
Confidence: 5
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: The limitations discuss the comparison in performance of the CLIP vs ImageNet models which I believe cannot be made due to the methodological issues in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below.
>Q1. The authors missed important previous works.
A1. Many thanks for your suggestion. CounterAnimal utilizes high-quality data available on the internet, whereas TerraIncognita relies on camera-trap data, which are not as widely available. We believe CounterAnimal is preferable to TerraIncognita, as CounterAnimal aligns much more closely with the training / deployment setups of CLIPs. Moreover, to compare with ImageNet models, we need to align with the label space of ImageNet, which is satisfied by CounterAnimal but not by TerraIncognita. Waterbirds also has this problem, and is further limited to simple binary classification. As for the Backgrounds Challenge, it studies robustness from another perspective, i.e., with or without the background; however, to find spurious features within CLIP datasets, we need to compare model performance across different backgrounds. Lastly, SPIRE masks a part of the objects in the original image, which may introduce new factors that degrade model performance.
>Q2. The naming of the common vs counter groups is misleading.
A2. Using accuracy instead of frequency makes the data collection procedure more effective in finding spurious features that make CLIP models fail. We will follow your suggestion and change the names of groups.
>Q3. The terms "effective robustness" and "robustness" are used interchangeably which is wrong and confusing.
A3. We agree that we need to clarify the term “robustness” in Section 3. We have further depicted the linear fits for effective robustness and found that the conclusions remain the same. Moreover, the improvement from increasing model scales and improving data quality is much larger than that from simply scaling up the datasets. We will add the related discussion in our revision.
>Q4. Due to methodology issues, results on CLIP models cannot be compared to results on ImageNet models (or other advanced LVLMs).
A4. In general, we do not intend to claim that ImageNet models are generally more robust to spurious features. Instead, our goal is to emphasize that the spurious features within CounterAnimal may not be as influential for ImageNet models, supporting that CounterAnimal captures spurious features specific to CLIPs and underscoring the significance of CounterAnimal. Previous works assess CLIP robustness using OOD datasets built primarily for ImageNet. Such a practice may not fully reflect CLIP robustness, as spurious features within ImageNet datasets may not be learned by CLIP models. Our CounterAnimal fills this gap, offering a more comprehensive perspective when studying CLIP robustness.
>Q5. Did the authors consider this possibility, i.e. that the drop in performance could be explained by other differences in the images from the respective domains?
A5. We have tried our best to do the quality control and rule out the influence of other factors. We also evaluated some potential confounders, such as gestures (as in the response to Reviewer iLSC), and found that they are relatively evenly distributed across groups. Please kindly let us know if you feel there is another potential factor that may affect the performance.
>Q6. Different test sets have different numbers of test images. ImageNet Val has 50k images for example. In what sense are the presented numbers standard?
A6. We aim to clarify that our CounterAnimal (about 13K images) aligns with the standard size of an evaluation dataset, resembling the scales of many other popular OOD evaluation datasets such as [1-2]. We will make our discussion clearer in our revision.
[1] The many faces of robustness: A critical analysis of out-of-distribution generalization.
[2] Do ImageNet Classifiers Generalize to ImageNet?
>Q7. Unclear description in the dataset construction procedure in Section 2.1.
A7. In data curation, feature noise refers to cases where some pixels are disrupted or missing. Clarity issues refer to cases where animal objects are largely occluded by backgrounds or other irrelevant objects; they also include cases where animal objects do not occupy most of the picture. In background labeling, there are two cases where one image can be assigned more than one background label: some backgrounds can be ambiguous, and some images may contain more than one background. Due to the space limit, we will further clarify our data construction procedure in our revision.
>Q8. Further, I imagine animals occur in all kinds of environments, but there are only two backgrounds for each animal. Were the other images also discarded?
A8. Yes, we only preserve two groups for each animal, aiming to better capture the spurious features learned by CLIPs. In Sec. 3.1, we further show that the identified spurious features are general across different CLIP setups, justifying that these spurious features are not largely biased towards a particular CLIP checkpoint.
>Q9. “Therefore, we conclude that our CounterAnimal dataset possesses some realistic shifts that are generally contained in large-scale pre-training data, regardless of backbones.” This conclusion cannot be drawn from this experiment.
A9. In Fig. 7, we have shown that different backbones consistently lead to poor performance on the counter group. We will make our discussion clearer in our revision. Thank you for your suggestion.
>Q10. The proposed experiment is very similar to the well-known ShiftMNIST [D] or ColoredMNIST [E] datasets, which test the influence of spurious correlations.
A10. The experiments are presented to echo our theoretical analysis that CLIP learns to align spurious features with object captions. For the legends in Fig. 11, “supervised” refers to the results of supervised trained models, while “obj” and “objbkg” refer to using different prompts to fine-tune CLIPs. We will add more discussion about the legends in Fig. 11.
>Q11. Typos and grammar issues.
A11. Thanks for pointing out our typos. We will correct them in our revision.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I apologize for the late response. I have read the rebuttal, the other reviews and the comments.
**Q3. The terms "effective robustness" and "robustness" are used interchangeably which is wrong and confusing."**
The authors write "*We further depict the lines of effective robustness, where we found that the conclusions remain the same. However, the improvement from increasing model scales and improving data quality is much higher than simply scaling up the datasets.*" When we look at effective robustness, we are interested in the residual difference between the measurement and the linear fit. I do not see any interventions in Fig.2 which go beyond the linear fit. Could you please clarify more directly what is meant here?
**On the common vs unusual background issue**: It is good that the authors agreed to rename the groups to "easy" and "hard". Reading the other reviews, all reviewers "misunderstood" the naming convention. I find it very important to emphasize in the updated manuscript that the groups were created based on accuracy and not on occurrence frequency.
**On analyzing the backgrounds frequency**: Since writing that the "hard" group is the "uncommon" one feels so natural, this should be analyzed. I would like to suggest to the authors to perform a caption based analysis to check whether "hard" examples actually do contain uncommon backgrounds in their captions. I understand that parsing the LAION dataset for the images themselves is unfeasible. I of course do not expect results on this until tomorrow; I merely think that performing this experiment would actually justify the authors in writing "common" and "uncommon" backgrounds and would also link the learned spurious correlations nicely to the training data. As it stands, the authors have not offered an explanation for why CLIP models perform worse on the "hard" backgrounds, which has been asked by Reviewer **2MXa**. Reviewer **2MXa** wrote: *I think the claim is somewhat "obvious": there exists a relatively strong correlation between the object captions and the parts of image backgrounds, CLIP will learn to align the backgrounds, i.e., spurious features. If the training dataset contains many examples of spurious correlations, then models will tend to be biased.* This is a natural question which can be analyzed.
**On reframing the paper and the new title**: I agree with the other reviewers that reframing the paper to follow the spirit of ImageNet-A is a good idea. I agree with the other reviewers that a major revision might be necessary because the new title and the new framing effectively makes it a different paper. I think it might be helpful if the authors could post their reworked **abstract** here as it could be a good discussion base for the next reviewer-AC discussion stage.
I greatly appreciate the new experiments and the authors' willingness to update their submission in such a major way. I am raising my score to 4, but would like to stress that I am very borderline on this. I am looking forward to the discussion with the other reviewers and AC on this submission. I felt in agreement with most of the other reviewers' comments and hope we can reach a collective decision together. I currently vote for a "weak reject" because I think that the paper lacks an analysis for why certain backgrounds are easy and other backgrounds are hard. I believe that the naming convention of "common" and "uncommon" aimed to (subconsciously) fill this gap by providing an untested hypothesis for the observed spurious correlations. I believe that merely presenting a dataset where CLIP trained models underperform is a bit weak and could be linked to the training data, as suggested by reviewer **2MXa**.
---
Reply to Comment 1.1.1:
Title: Background frequency results and further clarifications
Comment: Dear Reviewer BaJ3,
Thank you for providing further feedback about our work and for raising the rating. We believe all your remaining concerns are addressable! **Although there is limited time before the end of the discussion, we have conducted an investigation of the background frequency to clarify your main concern about the observed spurious correlations.** Please find our responses below:
> Residual differences in the effective robustness
We kindly refer Reviewer BaJ3 to the uploaded figure, where we separately draw the linear fits in terms of each intervention:
- In the uploaded Figure 1, we compare the effective robustness of CLIP models with `ViT/B/16` and `ViT/L/14` backbones, shown as the blue and red lines, respectively. In the rightmost part of the x-axis, two red dots lie above the linear fit of the blue line, indicating an improvement in the effective robustness of CLIP models with the larger `ViT/L/14` backbone.
- In the uploaded Figure 2, we compare the effective robustness of CLIP models trained on `LAION2B` and `OpenAI` data, shown as the blue and red lines, respectively. Since both are web data with simple filtering, they have similar data quality, and CLIP models trained on either `LAION2B` or `OpenAI` data show no improvement in effective robustness over one another.
- In the uploaded Figure 3, we compare the effective robustness of CLIP models trained on high-quality data (`HQ`) and relatively low-quality data (`LQ`), shown as the red and blue lines, respectively. In the rightmost part of the x-axis, multiple red points lie above the linear fit of the blue line, indicating an improvement in the effective robustness of CLIP models trained on high-quality data (`HQ`).
We have supplemented the aforementioned discussion in our revised manuscript.
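For clarity, the effective robustness we refer to is the residual of a model's OOD accuracy above a linear fit over a baseline family of models, computed in logit space following Taori et al. A minimal sketch with hypothetical accuracy values (the function and data are illustrative, not our actual evaluation code):

```python
import numpy as np

def effective_robustness(id_acc, ood_acc, baseline_id, baseline_ood):
    """Residual of a model's OOD accuracy above the linear trend
    fit to a baseline family of models (Taori et al., 2020).
    Accuracies are logit-transformed before fitting, as is standard."""
    logit = lambda p: np.log(np.asarray(p, dtype=float) / (1 - np.asarray(p, dtype=float)))
    # Fit OOD ~ a * ID + b on the baseline models (in logit space)
    a, b = np.polyfit(logit(baseline_id), logit(baseline_ood), deg=1)
    predicted = a * logit(id_acc) + b
    return logit(ood_acc) - predicted  # > 0 means above the trend line
```

A model whose (ID, OOD) accuracy pair lies above the fitted line has positive effective robustness, which is what the red points beyond the blue linear fit indicate in the uploaded figures.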
> Group naming issue
We have revised our manuscript to avoid the confusion of the group naming.
> Backgrounds frequency
Thank you for your insightful question. We need to clarify that **our theoretical analysis in Section 4 indeed explains why CLIP models perform worse on the "hard" backgrounds, as we clarified [in the response to Reviewer 2MXa](https://openreview.net/forum?id=wWyumwEYV8&noteId=BNamSMaVj3)**.
To provide more support for our theoretical explanation, we conducted an investigation of the background frequency following your suggestion! Specifically, we adopt the search tool `Have I Been Trained`, based on `clip-retrieval`, to retrieve images from `LAION5B` that closely match a given class name.
- For each animal class, we obtain 500 images and count the frequencies of our considered backgrounds.
- We ensure the sampled images align with the distribution of natural animal photos. For example, we filter out images with multiple distinct animal subjects or multiple distinct backgrounds. Due to time limits, we currently present three classes as follows and are extending our studies for the revision:
| class name | easy bkg | count | hard bkg | count |
|------------|----------|-------|----------|-------|
| Ice Bear | Ice | 83 | Grass | 7 |
| Black Swan | Water | 101 | Earth | 38 |
| Flamingo | Water | 111 | Sky | 16 |
In general, it aligns with our conjecture that hard examples do contain uncommon backgrounds in the CLIP training data, e.g., `LAION5B`.
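The tallying step above can be sketched as a simple keyword count over retrieved captions. Note this is only illustrative: the keyword lists and captions below are hypothetical, and the actual study determines backgrounds from the retrieved images themselves.

```python
from collections import Counter

# Hypothetical keyword lists for two backgrounds of the Ice Bear class.
# Substring matching is coarse; it only serves to illustrate the tally.
BACKGROUND_KEYWORDS = {
    "ice": ("ice", "snow", "glacier"),
    "grass": ("grass", "meadow", "field"),
}

def count_backgrounds(captions, keywords=BACKGROUND_KEYWORDS):
    """Tally how many captions mention each background's keywords."""
    counts = Counter({bkg: 0 for bkg in keywords})
    for caption in captions:
        text = caption.lower()
        for bkg, words in keywords.items():
            if any(w in text for w in words):
                counts[bkg] += 1
    return counts
```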
We hope the supplemented results above can address Reviewer BaJ3's concern about our work.
---
Reply to Comment 1.1.2:
Title: The updated abstract
Comment: > Revision to the paper
Thank you for acknowledging our revisions. Here we provide our reworked abstract for your reference:
```
Large vision-language models, such as CLIP, demonstrate more impressive robustness to spurious features than single-modal models trained on ImageNet. However, existing test datasets are typically curated based on ImageNet-trained models, aiming to capture the spurious features inherent in ImageNet. Benchmarking CLIP models on ImageNet-oriented spurious features may not be sufficient to reflect the extent to which CLIP models are robust to the spurious correlations inherent in CLIP training data, e.g., LAION. To this end, we craft a new challenging dataset named CounterAnimal, designed to reveal the reliance of CLIP models on realistic spurious features. Specifically, we split animal photos into groups according to their backgrounds, and then identify a pair of groups for each class where a CLIP model shows large performance drops across the two groups. Our evaluations show that the spurious features captured by CounterAnimal are generically learned by CLIP models with different backbones and pretraining data, yet have limited influence on ImageNet models. We provide theoretical insights that the CLIP objective cannot offer additional robustness. Furthermore, we re-evaluate strategies such as scaling up parameters and using high-quality pretraining data, and find that they still help mitigate the spurious features, providing a promising path for future developments.
```
---
Summary: This paper presents CounterAnimal, an evaluation dataset featuring two subsets: animals with common backgrounds and those with unusual backgrounds. The images were sourced from iNaturalist. Data with high CLIP accuracy are categorized as "Common", while those with low CLIP accuracy are labeled as "Counter". Results show that CLIP models experience a greater accuracy drop than ImageNet models when tested on this dataset.
Strengths: - This paper analyzes multiple factors affecting CLIP accuracy, including model size and training data quality.
- The paper combines both experimental results and theoretical analysis. The analysis in Section 5 is interesting and novel.
- The paper is well-written and easy to follow.
Weaknesses: - The proposed dataset is not sufficiently robust to analyze the influence of spurious bias, as this is not the only difference between the common and counter datasets.
- To analyze the accuracy drop caused by spurious features such as background, the background should be the only difference between common and counter image pairs. Prior work [4,5] has proposed such datasets focusing on background.
- In the proposed dataset, other factors may influence the model accuracy gap besides background. For instance, as shown in Figure 1, the more varied gestures of ice bears on the right compared to the left could be a contributing factor to the accuracy drop.
- Current experiments cannot conclusively show that ImageNet models generalize better than CLIP.
- As the common and counter groups are selected according to the CLIP accuracy (see line 165 in the paper), they indicate easy and hard samples for CLIP. Since ImageNet models have different training characteristics, it is natural that hard cases for these models may differ from those for CLIP, resulting in a smaller performance drop for ImageNet models. This result cannot support that ImageNet models are more robust than CLIP models.
- The accuracy drop from common to counter group can be greatly influenced by the model used to divide the common and counter dataset. Using the combined proposed common and counter dataset, a new Common' and Counter' dataset can be created based on the accuracy of ImageNet models. What is the impact of this dataset division on the accuracy drop for different models?
- Prior studies [1,2,3,4,5,6] have proposed datasets specifically to analyze the influence of background, which are not discussed in this work. These datasets can be used for CLIP evaluation as they do not overlap with the CLIP training set. Additionally, creating datasets based on model accuracy in this work is similar to the approach in [6].
[1] Noise or Signal: The Role of Image Backgrounds in Object Recognition.
[2] Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models, NeurIPS 2019.
[3] Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation.
[4] ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing, CVPR 2023.
[5] LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images, NeurIPS 2023.
[6] ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object, CVPR2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper has discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below.
> Q1. The proposed dataset is not sufficiently robust to analyze the influence of spurious bias, as this is not the only difference between the common and counter datasets.
A1. In our study, we primarily focus on real-world spurious bias caused by backgrounds, while also trying our best to avoid the influence of other factors. As mentioned in Sec 2.1, during data curation we took measures to control the influence of other covariates, e.g., ensuring that the main objects occupy the dominant position in the image. To further check whether other factors such as gestures introduce unintentional bias into the dataset, we count the proportions of different gestures within the Ice Bear class. The results are listed as follows and do not show an obvious difference in the distribution of gestures between the common and counter scenarios. We will add the related discussion and explore other unintentional factors that may affect model predictions in our revision.
| Ice Bear | Common | Counter |
|----------|--------|---------|
| Stand | 89% | 84% |
| Lying | 13% | 16% |
> Q2. Current experiments cannot conclusively show that ImageNet models generalize better than CLIP.
A2. We apologize for any confusion in our description. In general, our conclusion that ImageNet models generalize better than CLIP models is limited to CounterAnimal. This is a desired observation, as it aligns with **our main goal of crafting a dataset that characterizes the spurious features that may widely exist in CLIP setups**. Prior to our work, there was no specialized benchmark curated for CLIP models to study their robustness to real-world spurious correlations. Instead, previous works predominantly focus on the spurious features in ImageNet. Studying the OOD robustness of CLIP models on ImageNet-specialized benchmarks is biased and may create illusions about the robustness of CLIP models.
> Q3. Prior studies [1,2,3,4,5,6] have proposed datasets specifically to analyze the influence of background, which are not discussed in this work. These datasets can be used for CLIP evaluation as they do not overlap with the CLIP training set. Additionally, creating datasets based on model accuracy in this work is similar to the approach in [6].
A3. Many thanks for the suggested papers. Most of them have shown that CLIP models are more robust to the associated spurious features than ImageNet models, indicating that they did not intend to capture spurious features within CLIP setups. CounterAnimal remains the first dataset that captures spurious features predominant in CLIP setups that might not be as strong in ImageNet benchmarks, which is the uniqueness of our paper.
---
Rebuttal 2:
Title: Response
Comment: Thanks to the authors for providing the rebuttals, which partially addressed my concerns.
* Regarding Q2, I appreciate the adjustment to emphasize the spurious features found in CLIP setups. However, I did not find any discussion related to my original second point in Q2, which is quoted below:
> The accuracy drop from common to counter group can be greatly influenced by the model used to divide the common and counter dataset. Using the combined proposed common and counter dataset, a new Common' and Counter' dataset can be created based on the accuracy of ImageNet models. What is the impact of this dataset division on the accuracy drop for different models?
* Regarding Q2, could you please clarify the following point? For example, I would appreciate some examples of how prior work has focused on the spurious features in ImageNets.
> Prior to our work, there is no specialized benchmark curated for CLIPs to study the robustness of CLIPs to the real-world spurious correlations. Instead, previous works predominantly focus on the spurious features in ImageNets.
* Regarding Q3, the difference between this paper and prior studies remains unclear, which also affects the clarity of the paper's contribution. As also noted by reviewers KCA4 and BaJ3, the dataset creation makes it unsurprising that ImageNet models show a smaller gap between the two subsets. Therefore, it is confusing to claim that:
> The CounterAnimal remains the first dataset that can identify some spurious features predominant in CLIP setups **while these features might not be as strong for the ImageNet benchmarks**, which is the uniqueness of our paper.
* Regarding Q3. Prior studies have pointed out the spurious bias of CLIP models regarding backgrounds. Therefore, a direct way to find CLIP-specific spurious bias is to split a prior test set into two datasets—Counter and Common—based on CLIP accuracy. Could you further clarify how the proposed dataset differs from splitting prior test sets?
---
Rebuttal Comment 2.1:
Title: Further clarification on the remaining questions
Comment: Dear Reviewer iLSC,
Thank you so much for engaging in the discussion! Now we provide more clarification on the remaining questions:
> A new Common' and Counter' dataset can be created based on the accuracy of ImageNet models
We apologize for not clearly responding to this question in the rebuttal. As also requested by Reviewer KCA4, we are currently collecting the new dataset based on the accuracy of ImageNet models and will update you once we have preliminary results, very soon!
Nevertheless, we would like to clarify that, since previous ImageNet variant test sets are curated based on the accuracy of ImageNet models, the focus of this work is to provide a corresponding one for the CLIP models.
> Examples of how prior work has focused on the spurious features in ImageNets.
We briefly introduce some examples here and refer Reviewer iLSC to [a] for a detailed discussion of the examples:
- ImageNetV2[b]: a reproduction of the ImageNet test set, where the authors introduce distribution shifts by rerunning the ImageNet curation pipeline to collect a new test set;
- ObjectNet[c]: a test set of objects in a variety of scenes with 113 classes that overlap with ImageNet, where the authors introduce the distribution shifts by curating images with varying object poses, locations, etc.;
- ImageNet-Sketch[d]: the authors curate images from Google Image with queries "sketch of __", where __ is the standard class name;
- ImageNet-R[e]: the authors curate images containing various renditions (e.g., paintings, embroidery, etc.) of ImageNet object classes;
- ImageNet-A[f]: the authors curate a hard version of the ImageNet test set with adversarial filtration based on the correctness of an ImageNet-trained model.
All the aforementioned works introduce distribution shifts against the original ImageNet test sets in a way that they find that ImageNet models perform badly on the newly curated test sets. Based on those ImageNet variant test sets, [a] conducted extensive experiments and found that the performance gains of ImageNet models on the original ImageNet test sets can hardly be generalized to the ImageNet variant test sets. Therefore, it is suspected that the ImageNet models learn some spurious features from the original ImageNet train set, that do not hold on the ImageNet variant test sets.
> The differences between this paper and prior studies
We apologize for any confusion. The uniqueness of this work compared to those referred by Reviewer iLSC is that:
- [1,2] are mainly designed for ImageNet models. Especially, in ObjectNet[2], CLIP models have been shown to be more robust than ImageNet models;
- [3,4,5,6] mainly consider **synthetic distribution shifts instead of natural distribution shifts**. We acknowledge the value of the synthetic distribution shifts in debugging neural networks, and have already revised our manuscript to discuss them. Natural distribution shifts may better reflect the robustness against real-world spurious features [e].
> How this paper differs from splitting prior test set into two datasets—Counter and Common—based on CLIP accuracy
Thank you for this insightful question!
- First, merely splitting test data according to accuracy, without considering the group information (e.g., backgrounds), will also involve additional factors, since the low-accuracy group will contain samples affected by label noise or model misspecification [f]. Consequently, **the accuracy difference between the two groups may not uniquely reflect the influence of spurious features**. Note that our goal is to study the influence of spurious features. Therefore, we first labeled the backgrounds and then measured the differences between background groups, in order to isolate the influence of spurious features in the backgrounds;
- As for the previous datasets specially designed for Imagenet models such as [1,2] referred by Reviewer iLSC, splitting them according to CLIP accuracy can not get rid of influence by other factors such as label noises, and may not reflect the real-world spurious features **uniquely in the CLIP training data**, as ImageNet models and CLIP models will both perform badly on the "counter/hard" splits.
- As for the synthetic datasets [3,4,5,6], they may not reflect the **real-world spurious features** in the CLIP training data.
**References**
[a] Measuring Robustness to Natural Distribution Shifts in Image Classification, NeurIPS'20.
[b] Do ImageNet Classifiers Generalize to ImageNet? ICML'19.
[c] ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models, NeurIPS'19.
[d] Learning Robust Global Representations by Penalizing Local Predictive Power, NeurIPS'19.
[e] The many faces of robustness: A critical analysis of out-of-distribution generalization, ICCV'21.
[f] Natural adversarial examples, ECCV'21.
[g] ZIN: When and How to Learn Invariance Without Environment Partition? NeurIPS'22.
---
Reply to Comment 2.1.1:
Title: Follow-up experiments on splitting prior test set into two datasets based on CLIP accuracy
Comment: Dear Reviewer iLSC,
Following your suggestion on the last question, we conduct an additional ablation study that uses ImageNet-A [f] to create common and counter splits based on `CLIP-ViT/B/32-LAION400M`. Meanwhile, as the original ImageNet-A is curated based on a `ResNet-50` trained on ImageNet, the accuracy of `ResNet-50-ImageNet` is 0.
In the table below, we present the accuracies of CLIP models and ImageNet models across the common and counter groups. It can be found that
- CLIP models demonstrate a larger performance drop from common to counter groups, which is as expected;
- However, **none of the ImageNet models achieves better generalization performance than the CLIP models, despite showing a smaller performance drop**. In contrast, on CounterAnimal, ImageNet models achieve competitive performance in the Common groups while suffering smaller performance drops, i.e., better effective robustness [a].
- The results demonstrate that the new splits are challenging to generalize to for both CLIP and ImageNet models. It is hard to dissect the influence of spurious features from other factors such as label noise or model misspecification [g].
| **CLIP** | | common | counter | drop | **ImageNet** | common | counter | drop |
|:------------:|-----------|:------:|:-------:|:-----:|:----------:|:------:|:-------:|--------|
| RN50 | OPENAI | 0.502 | 0.144 | 0.358 | AlexNet | 0.04 | 0.012 | 0.028 |
| RN101 | OPENAI | 0.606 | 0.199 | 0.407 | VGG11 | 0.032 | 0.011 | 0.021 |
| RN50-4 | OPENAI | 0.672 | 0.299 | 0.373 | VGG13 | 0.048 | 0.012 | 0.036 |
| RN50-16 | OPENAI | 0.748 | 0.449 | 0.299 | VGG19 | 0.046 | 0.015 | 0.031 |
| RN50-64 | OPENAI | 0.807 | 0.584 | 0.223 | ResNet18 | 0.022 | 0.008 | 0.014 |
| ViT/B/16 | LAION400M | 0.749 | 0.216 | 0.533 | ResNet34 | 0.039 | 0.013 | 0.026 |
| ViT/B/16 | OPENAI | 0.778 | 0.384 | 0.394 | ResNet50 | 0 | 0 | 0 |
| ViT/B/16 | DATACOMP | 0.812 | 0.38 | 0.432 | ResNet101 | 0.09 | 0.037 | 0.053 |
| ViT/B/16 | LAION2B | 0.734 | 0.256 | 0.478 | ViT/B/16 | 0.3767 | 0.1699 | 0.2068 |
| ViT/B/16 | DFN2B | 0.832 | 0.386 | 0.446 | ViT/B/32 | 0.1937 | 0.07 | 0.1237 |
| ViT/B/32 | LAION400M | 1 | 0 | 1 | ViT/L/16 | 0.286 | 0.138 | 0.148 |
| ViT/B/32 | OPENAI | 0.665 | 0.205 | 0.46 | ViT/L/32 | 0.232 | 0.093 | 0.139 |
| ViT/B/32 | DATACOMP | 0.689 | 0.192 | 0.497 | CONVNEXT-S | 0.473 | 0.264 | 0.209 |
| ViT/B/32 | LAION2B | 0.704 | 0.15 | 0.554 | CONVNEXT-B | 0.527 | 0.297 | 0.23 |
| ViT/L/14 | LAION400M | 0.858 | 0.354 | 0.504 | CONVNEXT-L | 0.538 | 0.343 | 0.195 |
| ViT/L/14 | OPENAI | 0.872 | 0.608 | 0.264 | | | | |
| ViT/L/14 | DATACOMP | 0.902 | 0.62 | 0.282 | | | | |
| ViT/L/14 | LAION2B | 0.856 | 0.441 | 0.415 | | | | |
| ViT/L/14 | DFN2B | 0.914 | 0.593 | 0.321 | | | | |
| ViT/L/14-336 | OPENAI | 0.893 | 0.691 | 0.202 | | | | |
| ViT/H/14 | LAION2B | 0.869 | 0.477 | 0.392 | | | | |
| ViT/H/14 | DFN5B | 0.926 | 0.626 | 0.3 | | | | |
| ViT/H/14-384 | DFN5B | 0.961 | 0.745 | 0.216 | | | | |
| ViT/G/14 | LAION2B | 0.877 | 0.494 | 0.383 | | | | |
| ViT-bigG/14 | LAION2b | 0.908 | 0.597 | 0.311 | | | | |
| CONVNEXT-B | LAION400M | 0.509 | 0.13 | 0.379 | | | | |
| CONVNEXT-BW | LAION2B | 0.546 | 0.189 | 0.357 | | | | |
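The splitting procedure behind this table can be sketched as follows: samples that the reference model (`CLIP-ViT/B/32-LAION400M` here) classifies correctly form the common split, the rest form the counter split, and every model is then evaluated on both splits. A minimal sketch with hypothetical correctness vectors:

```python
def split_and_evaluate(ref_correct, eval_correct):
    """Split samples by the reference model's per-sample correctness and
    report the evaluated model's accuracy on each split, plus the drop.

    Both arguments are boolean sequences over the same ordered samples."""
    common = [e for r, e in zip(ref_correct, eval_correct) if r]
    counter = [e for r, e in zip(ref_correct, eval_correct) if not r]
    acc = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return acc(common), acc(counter), acc(common) - acc(counter)
```

By construction, the reference model itself scores 1 on the common split and 0 on the counter split (a drop of 1), which is exactly the `ViT/B/32 | LAION400M` row above.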
---
Rebuttal Comment 2.2:
Title: Preliminary results of suggested experiments are released
Comment: Dear Reviewer iLSC,
Following your suggestion, we curate a new test set `CounterAnimal-i` according to the accuracy of ImageNet models. The detailed results and discussions about `CounterAnimal-i` are presented in [the latest general response](https://openreview.net/forum?id=wWyumwEYV8&noteId=4IR8MTyJIm), as the experiments were also suggested by Reviewer KCA4.
The results of the experiments on `CounterAnimal-i` show that **curating OOD test data according to different models will reveal different spurious features**, and one needs to be cautious when selecting the proper OOD test data to evaluate robustness against spurious features. Since most previous OOD test sets are designed for ImageNet models, this again highlights the necessity and significance of a test benchmark like CounterAnimal specifically for CLIP models.
Please kindly let us know whether the aforementioned responses address your concerns. We would sincerely appreciate it if you could jointly consider our responses above when making the final evaluation of our work. Thank you again for your time and insightful suggestions about our work!
---
Rebuttal 3:
Title: Final rating update
Comment: Thanks a lot to the authors for the detailed response and new experiments. I appreciate the changes to the title and the additional experiments, which have improved this work a lot in the rebuttal. I also appreciate the contribution of creating a real-world dataset about backgrounds. Considering all the comments and discussions, I am raising my rating to 5.
I share Reviewer 2MXa's concern that the paper needs significant revisions due to the shift in title and focus. I want to emphasize the modifications needed in the experimental results under the new framing. The original paper takes test accuracy as the main experimental result to support the old title. However, test accuracy is not solid enough to serve as the main result for the new title, as it is heavily influenced by the model used for dataset splitting. More analysis and results on spurious features are needed.
Thanks again to the authors for their efforts during the rebuttal and discussion.
---
Rebuttal Comment 3.1:
Title: We used **effective robustness** instead of test accuracies as the main supporting experimental results
Comment: Dear Reviewer iLSC,
Thank you for acknowledging our responses and updating the rating toward acceptance.
We feel it necessary to clarify that **we do not use test accuracy as the main supporting experiment in our original manuscript**.
- **We use effective robustness[1] in the main figures to support our claims about the robustness of CLIP models on `CounterAnimal`**. The test accuracy results are adopted to provide more details.
- In response to Reviewer BaJ3, we also provide **more figures of effective robustness** (as attached in [the general response](https://openreview.net/forum?id=wWyumwEYV8&noteId=yKMULf9RNp)) to support our claims that scaling up model parameters and increasing data quality help improve robustness on `CounterAnimal`.
Since our focus is to study the robustness of CLIP models under spurious features, rather than to compare whether ImageNet models are more robust than CLIP models, the results and analysis in our original paper sufficiently address our main research question, `Is there a benchmark that reflects the exact reliance on spurious features of CLIP?` in line 41, including
- the curation procedure of `CounterAnimal`;
- the experimental results of effective robustness of CLIP models on `CounterAnimal`;
- the influence of factors such as parameters of models, and the quality of data to the effective robustness of CLIP models on `CounterAnimal`;
- the theoretical analysis of why CLIP models still learn spurious features;
**Therefore, it does not need significant revisions to accommodate our new title, as all the original contents sufficiently address our main research question**. The revisions we need to make (most of which have already been done) are to:
- Adjust the abstract and introduction following [2], to avoid potential misunderstanding and precisely present the motivation of our work: We do not focus on the comparison of CLIP and ImageNet models but on curating a test set specifically for CLIP models to study the robustness of CLIP models against spurious features;
- Supplement the ablation studies with `CounterAnimal-i` to strengthen our motivation;
Please kindly let us know if our aforementioned revisions could address your concerns especially about the focus and the revisions of our work. We again thank you for your time and constructive comments!
**References**
[1] Measuring Robustness to Natural Distribution Shifts in Image Classification, NeurIPS'20.
[2] Natural adversarial examples, ECCV'21.
---
Summary: In this work, the authors create an evaluation dataset comprising two groups, one with animals in usual backgrounds (the common group) and another with unusual backgrounds (the counter group). They then evaluate a suite of models with different backbones, model sizes, and training datasets. They find that CLIP models do worse than ImageNet-trained models, and that higher-quality data or larger model sizes generally improve counter-group accuracy.
Strengths: 1. The CounterAnimal dataset is a nice contribution that can be of value to the community.
2. The authors have evaluated a number of models on the dataset and that too could be of value to the community.
Weaknesses: Please see questions for more information.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. **Biased dataset:** The dataset is split into common and counter groups using a CLIP model. Therefore, by construction, the CLIP models will perform poorly, and it is no surprise that the ImageNet-trained models do a bit better. One could construct a split of this dataset where ImageNet models do worse than the CLIP models. If the dataset had been collected in a model-agnostic way, the conclusions could potentially be more interesting.
2. **On the premise:** As such, the premise or primary question seems a bit vacuous. Models arguably learn different features, and there will exist some type of evaluation where one does better than the other. But are there useful tasks/evaluations where ImageNet models are preferred over CLIP models? That is an interesting open question. This work doesn't necessarily start from there and create a benchmark that represents a task. The authors rather create a biased dataset that by design makes CLIP models perform poorly. Therefore the primary premise of the work seems erroneous. There is some value in the other evaluations, so maybe the paper could be rewritten by positioning things differently.
3. **Lack of novelty:** Keeping the primary result aside, other conclusions like that better datasets or model sizes improve robustness are not new. Please see [1, 2, 3, 4].
[1] [Geirhos 2021] https://proceedings.neurips.cc/paper/2021/hash/c8877cff22082a16395a57e97232bb6f-Abstract.html
[2] [Idrissi 2022] https://arxiv.org/abs/2211.01866
[3] [Fang 2022] https://arxiv.org/abs/2205.01397
[4] [Nguyen 2022] https://arxiv.org/abs/2208.05516
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have addressed some limitations. For the rest, please see my questions block.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
---
Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below.
> Q1. Biased dataset: The dataset is split into common and counter groups using a CLIP model. Therefore, by construction, the CLIP models will perform poorly, and it is no surprise that the ImageNet-trained models do a bit better. One could construct a split of this dataset where ImageNet models do better than the CLIP models. If the dataset was collected in a model agnostic way then the conclusions could potentially be more interesting.
A1. We need to clarify that, **one of the major objectives of this work is to construct a benchmark specifically to reflect the spurious correlations learned by CLIPs**. To characterize the spurious features captured by CLIP, it is reasonable to use a CLIP model to curate the data. Our experiments in Section 3.1 justify that spurious features captured by CounterAnimal are general across different CLIP setups, and our experiments in Section 3.3 verify that the spurious features within CounterAnimal may not be so influential for ImageNet benchmarks. These results justify that our crafted CounterAnimal satisfies our original goal.
Furthermore, we would like to note that **previous comparisons between CLIP and ImageNet models on ImageNet variant test sets are also biased**. However, biases are not entirely bad: the past few years have witnessed many developments built upon the ImageNet variant test sets. Therefore, **our benchmark shares the same goal of providing a testbed for developing more advanced CLIP and vision-language models**.
> Q2. On the premise: Are there useful tasks/evaluations where ImageNet models are preferred over CLIP models? That is an interesting open question. Therefore the primary premise of the work seems erroneous. There is some value in the other evaluations so maybe the paper could be rewritten by positioning things differently.
A2. As explained in A1, our benchmark indeed captures spurious features learned by CLIP models. Future works can be developed upon our benchmark to mitigate the spurious correlations learned by CLIP models, similar to those built upon ImageNet variant testsets.
Moreover, we need to clarify that the comparison between ImageNet and CLIP models is not meant to argue which model is universally best. Rather, **we would like to highlight the biases existing in previous evaluations of CLIPs’ OOD robustness using ImageNet variant test sets**. Benchmarking models with improper test sets would create the illusion that CLIP models resolve the spurious correlations, especially compared with ImageNet models. However, the experiments with CounterAnimal provide a sober look at the vulnerability of CLIP models to spurious correlations, and provide a platform for future developments of more advanced and robust CLIP and vision-language models.
Finally, we understand that our original title may bring unnecessary misunderstandings about our work. We thus propose to revise it to `A Sober Look at the Robustness of CLIPs to Spurious Features` to more precisely reflect the contents of this work. Please kindly let us know if you have better suggestions to resolve the misunderstandings!
> Q3. Lack of novelty: Keeping the primary result aside, other conclusions like that better datasets or model sizes improve robustness are not new.
A3. As clarified in A1 and A2, CounterAnimal is more suitable than ImageNet variant test sets for benchmarking the OOD robustness of CLIP models. As our main comparison results between CLIPs and ImageNet models already differ from previous studies, extending the benchmarking of CLIP models to more variants is necessary to verify the conclusions of previous works. **We do not claim we are the first to discover those findings; rather, we are verifying those findings in order to provide insights into developing more robust CLIP and vision-language models**.
---
Rebuttal 2:
Comment: Thank you for your response and the new title—it’s definitely an improvement. I believe this paper warrants acceptance, but additional revisions are necessary. The paper would benefit from aligning more closely with the direction of Hendrycks et al. (2019) in terms of the motivation. Specifically, the ImageNet/ResNet comparison needs to be reframed, as its current presentation is potentially misleading.
One potential experiment that could add value is identifying a common/counter split of the same ~13K dataset for ImageNet-trained models. Since you have the classes and backgrounds marked, it could be interesting to compare the differences between CLIP-based splits and ResNet-based splits. This comparison might reveal insights into the nature of spurious correlations in these two models. Incorporating these experiments and suggested changes could significantly enhance the manuscript. Hoping that the authors could make these changes, I am increasing my score.
Hendrycks et al. (2019) Natural Adversarial Examples
---
Rebuttal Comment 2.1:
Title: Thank you and we are working on the required experiments
Comment: Dear Reviewer KCA4,
Thank you for acknowledging our revisions to the title and considering the paper an improvement that warrants acceptance!
In addition to the existing revisions, we promise that we will revise our manuscript to align more closely with [1], following your suggestion.
Meanwhile, we are also working on creating a similar common/counter split based on the ImageNet-trained models. We will share more details during the discussion period very soon once we have some preliminary results!
**References**
[1] Natural adversarial examples, ECCV'21.
---
Reply to Comment 2.1.1:
Title: Preliminary results of suggested experiments are released
Comment: Dear Reviewer KCA4,
We would like to thank you again for your time and constructive comments about our work. Following your suggestion, we curate a new test set `CounterAnimal-i` according to the accuracy of ImageNet models. The detailed results and discussions about `CounterAnimal-i` are present in [the latest general response](https://openreview.net/forum?id=wWyumwEYV8¬eId=4IR8MTyJIm), as the experiments are also suggested by Reviewer iLSC.
The results of the experiments on `CounterAnimal-i` show that **curating the OOD test data according to different models will reveal different spurious features** and one needs to be cautious when selecting the proper OOD test data to evaluate the robustness against spurious features. Since most of the previous OOD test sets are designed for ImageNet models, it again highlights the necessity and significance of a test benchmark like CounterAnimal specifically for CLIP models.
Please kindly let us know if our additional experiments could address your concerns. Thank you again for your valuable suggestions about our work! | Summary: This work asks one interesting question: "Do CLIP models always generalize better than ImageNet models?" Driven by this question, this work proposes a new benchmark dataset named CounterAnimal. This dataset consists of a) the common group: comprising animals in common backgrounds, and b) the counter group: including animals in plausible yet unusual backgrounds. The main idea is that the performance drops from the common to counter groups quantify the reliance on spurious background features for animal predictions. The main observation is that CLIP models exhibit notable performance drops when tested on the counter group. In comparison, ImageNet models can be more robust than CLIP models.
Strengths: - It is always good to see a new and novel dataset proposed for evaluating CLIP and ImageNet-trained models. The proposed dataset CounterAnimal is complementary to existing datasets that cannot reflect the robustness of CLIP models to spurious correlations.
- The dataset construction is well-presented. The statistics, curation, background labeling, and spurious discovery are well introduced in Section 2
- The analysis around spurious correlation is good. This work tries to give insights from several aspects, such as pre-trained datasets, scaling up, and learning paradigms. The observations are sound to me.
Weaknesses: - I found the analysis of why CLIPs rely on spurious features interesting. However, I think the claim is somewhat "obvious": there exists a relatively strong correlation between the object captions and the parts of image backgrounds, CLIP will learn to align the backgrounds, i.e., spurious features. If the training dataset contains many examples of spurious correlations, then models will tend to be biased.
- I am curious about why ImageNet models may not be so influenced by the spurious bias in CounterAnimal. Is this because the ImageNet training set does not have too many spurious correlation examples? Or ImageNet has a spurious bias but such bias is different from the one in CounterAnimal? Please provide a discussion or share some insights on this question.
- This paper adopts the absolute performance drop in Section 3.3. Such a metric may not be so robust. For example, model A drops from 40 to 39, and model B drops from 90 to 89. They drop the same amount, but I would say model B is better. Please comment on this, and discuss the metric of absolute performance drop.
Technical Quality: 3
Clarity: 3
Questions for Authors: - ImageNet models are not so biased toward spurious correlations compared with CLIP models. Why? Is this because the ImageNet training set does not have too many examples that exhibit spurious correlations?
- While I appreciate that this work includes results on ColoredCOCO and ColoredMNIST, it would be great if some other spurious correlation benchmarks (e.g., WaterBirds) were also included.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The dataset is proposed, so a discussion on the potential bias/ privacy is needed. I appreciate this work highlights the future improvement of expanding semantic scope, data source, and ImageNet testbed.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent', 'Ethics review needed: Data quality and representativeness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and suggestions! Please find our responses below.
> Q1. I think the claim is somewhat "obvious": there exists a relatively strong correlation between the object captions and the parts of image backgrounds, CLIP will learn to align the backgrounds, i.e., spurious features. If the training dataset contains many examples of spurious correlations, then models will tend to be biased.
A1. We need to clarify that, our theoretical analysis has unique and significant values for its implications to both theory and practice:
- From the theoretical perspective, it complements the literature of theoretical studies about CLIPs. Prior to this work, previous works such as [1,2] mainly focus on developing theories and experiments justifying the superior OOD generalization capabilities of CLIPs in ImageNet variant tests. To the best of our knowledge, **our theory is the first to provably demonstrate the drawbacks of CLIPs in OOD generalization**, providing the foundation for future developments tackling the issue.
- As a direct implication, the theory shows that CLIP training objectives cannot offer additional robustness against spurious correlations. Consequently, the biases in large-scale multimodal training data such as LAION-5B are the major source of the spurious correlations learned by CLIPs. **However, prior to our work, there was no proper benchmark capturing the spurious correlations of large-scale multimodal training data**.
[1] Identifiability Results for Multimodal Contrastive Learning, ICLR’23.
[2] Does CLIP’s generalization performance mainly stem from high train-test similarity? ICLR’24.
> Q2. I am curious about why ImageNet models may not be so influenced by the spurious bias in CounterAnimal. Is this because the ImageNet training set does not have too many spurious correlation examples? Or ImageNet has a spurious bias but such bias is different from the one in CounterAnimal? Please provide a discussion or share some insights on this question.
A2. Thank you for the insightful question. First, we need to clarify that ImageNet models still learn spurious correlations, as evidenced by the performance gaps on the CounterAnimal benchmark. One reason for the lower performance gaps of ImageNet models is that **CounterAnimal is specifically designed to reflect the biases of the CLIP pretrained datasets**, similar to the ImageNet variant test sets which are designed for ImageNet models [3,4]. A key takeaway from this phenomenon is that we need to carefully choose suitable benchmarks to evaluate the OOD robustness of different models.
[3] Do ImageNet Classifiers Generalize to ImageNet? ICML’19.
[4] Natural Adversarial Examples, CVPR’21.
> Q3. This paper adopts the absolute performance drop in Section 3.3. Such a metric may not be so robust. For example, model A drops from 40 to 39, and model B drops from 90 to 89. They drop the same amount, but I would say model B is better. Please comment on this and discuss the metric of absolute performance drop.
A3. Thank you for the suggestion. We further plot the lines of effective robustness and find that the conclusions remain the same. However, the improvement from increasing model scales and improving data quality is much higher than that from simply scaling up the datasets.
> Q4. While I appreciate that this work includes results on ColoredCOCO and ColoredMNIST, it would be great if some other spurious correlation benchmarks (e.g., WaterBirds) were also included.
A4. We would like to note that the WaterBirds benchmark is similar to ColoredCOCO, in that it synthesizes unnatural images by composing birds with different image backgrounds. Essentially, it can be considered a binary classification variant of ColoredCOCO and ColoredMNIST, hence the results on ColoredCOCO and ColoredMNIST also generalize to WaterBirds.
In addition, we need to clarify that the main focus of this work is to identify natural spurious correlations learned by CLIPs instead of the synthetic ones. CounterAnimal essentially captures the desired real-world spurious features, differing from previous synthetic benchmarks like WaterBirds.
> Q5. The dataset is proposed, so a discussion on the potential bias/ privacy is needed.
A5. As discussed in question 12 of the checklist (see Appendix), we use the publicly available data from the internet, following the CC BY-NC license, which permits scientific use. We have revised the Broader Impacts in our manuscript to include a discussion regarding the issue.
---
Rebuttal 2:
Comment: Dear Authors,
Thank you for providing the response. I share the concern about using a CLIP model to curate the data. The newly added CounterAnimal-i dataset is helpful, but it could introduce differences in some of the observations made in the submission. For example, it raises questions about whether the title, “Do CLIP Models Always Generalize Better than ImageNet Models?” can still be fully addressed.
The authors now emphasize that “the focus of this work is to construct a test set for studying the robustness of CLIP models against real-world spurious features.” While this is a valid focus, it necessitates significant revisions to reflect this shift. For instance, the abstract and introduction currently do not align with this focus.
Given these points, I am actually on the borderline. I would like to point out that the motivation behind the work is always good—whether it is to answer the title’s question or to study the real-world spurious features of CLIP models.
However, I would like to ***draw the Area Chairs’ attention*** to the fact that this submission needs significant modification, and some observations should be carefully adjusted based on the newly curated dataset. The presentation also needs to be revised to align with the goal of studying real-world spurious features of CLIP models.
Best regards,
Reviewer 2MXa
---
Rebuttal 3:
Title: The original manuscript does not need significant modifications
Comment: Dear Reviewer 2MXa,
Thank you for engaging in the discussion and acknowledging our motivation. We need to clarify that **as our focus is to study the robustness of CLIP models, the additional experiments do not necessarily introduce significant modifications**.
Meanwhile, to avoid any potential misunderstandings, we have changed the title to `A Sober Look at the Robustness of CLIPs to Spurious Features` to **more precisely reflect the research problem that our original abstract and introduction are addressing**.
To accommodate the concern about adopting CLIP models to curate the splits, we have revised our abstract and introduction to align more closely with the corresponding work for ImageNet [1], which curates an OOD test set `ImageNet-a` according to the accuracy of ImageNet models. **We have revised our manuscript to clarify that it is a common practice in the literature to curate an OOD test set to evaluate the robustness of neural networks. The revisions are also acknowledged and suggested by Reviewer KCA4.**
Please kindly let us know if you still feel any additional revisions are needed. We would sincerely appreciate it if you could jointly take our clarifications above when making the final evaluation of this work. | Rebuttal 1:
Rebuttal: We sincerely appreciate all the reviewers for their careful reviews and constructive feedback. We are also grateful for reviewers’ recognitions of our efforts on dataset constructions, the empirical findings, as well as the theoretical analysis. In response, we would like to emphasize the contributions of this paper and clarify the potentially misleading content.
**Dataset Contribution**. We would like to highlight the biases existing in previous evaluations of CLIPs’ OOD robustness using ImageNet variant test sets. **Benchmarking models with improper test sets would create the illusion that CLIP models resolve the spurious correlations, especially compared with ImageNet models**. It motivates us to craft the CounterAnimal dataset, specifically capturing the spurious correlations within CLIP setups. The experiments with CounterAnimal provide a sober look at the vulnerability of CLIP models to spurious correlations and provide a platform for future developments of more advanced and robust CLIP and vision-language models.
**Comparative Experiments**. In general, **it is not our intention to claim that ImageNet models are inherently more robust to spurious features than CLIP models**. The comparative experiments between CLIP and ImageNet models in our study demonstrate that the features captured by CounterAnimal are generalizable across different CLIP setups. Furthermore, our experiments detailed in Section 3.3 suggest that the spurious features identified by CounterAnimal may have limited influence on ImageNet benchmarks.
Moreover, following the suggestions from the reviewers, we have applied measures of effective robustness to further support our conclusions in Section 3.3. These results are detailed in the appendix. Overall, our previous conclusions are upheld, with additional findings indicating that **improvements from increasing model scales and enhancing data quality significantly outweigh the benefits of merely scaling up datasets**.
Finally, we understand that our original title may bring unnecessary misunderstandings about our work. We thus propose to revise it to `A Sober Look at the Robustness of CLIPs to Spurious Features` to more precisely reflect the contents of this work. Please kindly let us know if you have better suggestions to resolve the misunderstandings!
Pdf: /pdf/eb323ca74502bdac51af834661255f3bf3e05f87.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Reflective Multi-Agent Collaboration based on Large Language Models | Accept (poster) | Summary: The paper introduces COPPER, a novel framework designed to enhance collaboration in multi-agent systems using a learnable self-reflection mechanism. COPPER utilizes a shared reflector fine-tuned to adjust actor model prompts via a counterfactual PPO mechanism. This approach includes counterfactual rewards to address the credit assignment problem and enables the reflector to customize reflections based on agent roles, optimizing computational resources and training stability. The framework's efficacy is validated through experiments in multi-hop question answering, mathematics, and chess, demonstrating improved reflection capabilities and generalization across various actor models.
Strengths: This paper is clearly written and explores a new setting — multi-agent reflection. It also shows improved performance on all three tasks. Using counterfactual rewards to perform PPO training sounds straightforward.
Weaknesses: - My main concern is this paper involves a combination of various components and I could not clearly infer from the paper which part is most important. This makes the improvement for each part look marginal. Generally, this paper proposes a novel training method to enhance reflection, as well as use reflection-based multi-agent discussion to improve agent reasoning. I believe the method could be directly applicable to single agent scenario as reward for each agent is updated independently. Could you perform ablation in terms of single-agent?
- The test scenario focuses on single-step tasks, can this framework be applied to multi-step agent tasks like AlfWorld?
- How is the performance of COPPER compared to shared parameter & loss training for all LLMs?
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weakness.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To Reviewer tdnV:
Thanks for your comments. We will try to alleviate your concerns one by one in the following.
**Q1: My main concern is this paper involves a combination of various components and I could not clearly infer from the paper which part is most important. This makes the improvement for each part look marginal. Generally, this paper proposes a novel training method to enhance reflection, as well as use reflection-based multi-agent discussion to improve agent reasoning. I believe the method could be directly applicable to single agent scenario as reward for each agent is updated independently. Could you perform ablation in terms of single-agent?**
Thanks for this comment. In our paper, we propose the problem of multi-agent reflection, which, to the best of our knowledge, is the first time in the area of LLM-based agents. For this problem, we have actually designed several tailored techniques to solve its special challenges, for example, designing the counterfactual rewards to obtain the real effect of different agent reflections, and so on.
In our setting, the reward is determined by the trajectory composed of all the agents' actions, and the reflector also needs to consider all the agent actions to reflect. As a result, one agent's action may influence the final reward and reflection, and further impact the other agents' updates.
To follow your suggestion, we use the episode difference reward as the reward for all the agents, and update the agent independently. We refer to the method as Multi-Retroformer and present the experiment results in Figure 2 of the PDF.
**Q2: The test scenario focuses on single-step tasks, can this framework be applied to multi-step agent tasks like AlfWorld?**
Thanks for the question. Actually, HotPotQA is a multi-step agent task, where agents can retrieve relevant knowledge by invoking search engines multiple times to ultimately answer questions. Each agent in the environment follows the ReAct framework and generates actions sequentially. The main experiment results on HotPotQA can be found in Section 5.4.
To further improve our study, we additionally conduct experiments on ALFWorld. We follow the same multi-agent collaboration setting as [1] and test the model with 134 instances. The experiment results are presented in Figure 4 of the PDF.
From the results, we can observe that COPPER achieves better reflection performance than the original LongChat and GPT-3.5. Besides, compared to the initial success rate, COPPER brings an improvement of 37.2% after 4 rounds of reflection.
**Q3: How is the performance of COPPER compared to shared parameter & loss training for all LLMs?**
Actually, we have compared the shared and non-shared versions of our reflector in Section 5.8. To further improve this part, we have added more experiments on the other datasets, which are shown in Figure 3 of the PDF.
The experiment results indicate that in multi-agent collaboration scenarios, training a shared reflector enables higher-quality reflections, both when fine-tuned with SFT and RLHF techniques.
**References:**
[1] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen LLM applications via multi-agent conversation framework. *CoRR*, abs/2308.08155, 2023.
---
Rebuttal 2:
Comment: Dear reviewer tdnV,
Thanks so much for your constructive comments, which can definitely improve our paper.
We believe most of your concerns are about the experiments. To alleviate your concerns, we have added a large number of experiments to make our claims more convincing (see the one-page pdf).
The discussion ddl is fast approaching, if you have further questions, we are very happy to discuss them.
---
Rebuttal 3:
Comment: Dear reviewer tdnV,
We deeply appreciate all the insightful comments you have posted, as they have greatly enhanced our paper. To follow your advice, **for each point in the Weaknesses, we have added extensive experiments (see the one-page pdf)**.
Since the rebuttal deadline is approaching rapidly, we would like to kindly inquire if we have adequately addressed your concerns. If there are more remaining issues, we would appreciate the chance to address them and work towards achieving a positive score.
Really hope our efforts can be considered and alleviate your concerns.
Thanks
---
Rebuttal Comment 3.1:
Title: Thanks for the rebuttal
Comment: Thanks for the rebuttal. After reading the rebuttal, I am inclined to keep my original score.
---
Reply to Comment 3.1.1:
Comment: Thanks so much for your feedback.
We have taken a lot of time (more than five days) and money (e.g., collecting data using GPT, renting servers to speed up the experiments) to conduct extensive experiments for each of your concerns in the Weaknesses (see the one-page PDF).
We would like to kindly ask if these experiments have alleviated your concerns. If not, we would appreciate the chance to continue working towards a positive score. | Summary: The paper proposes a multi-agent reflection framework COPPER to solve reasoning tasks on several datasets such as HotPotQA, GSM8K, and Checkmate in One Move. The two main contributions are:
1. designing counterfactual rewards to alleviate the credit assignment problem;
2. training a shared reflector to personalize the reflection for each agent.
Strengths: 1. Novelty: The paper introduces counterfactual rewards from RL to LLM agents, to deal with the credit assignment problem in multi-agent cooperation.
2. Soundness: The authors conducted extensive experiments to thoroughly analyze the proposed mechanism.
Weaknesses: 1. The motivation of the shared reflector may not align with reality. Embodied scenarios do not allow complete information sharing with a central reflector.
2. The computation of counterfactual rewards can be very high. Every agent demands two times of simulation to calculate the rewards, and the computational costs could be much higher when the number of agents increases.
3. The claims of personalized reflection may not be completely conducted. For the Cooperative Debate Paradigm, there are no roles for the debaters.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How to determine the number of agents for each task?
2. Can you present the computational cost? Including training and inference stages.
3. Can you provide more case studies, especially for the other two datasets?
4. Does the shared reflector take all the agents' trajectories together to reflect?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The efficiency of data collection and the length of reflection memory may limit the application of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To Reviewer ogh3:
Thanks for your comments. In the following, we try to alleviate your concerns one by one.
**Q1: The motivation of the shared reflector may not align with reality. Embodied scenarios do not allow complete information sharing with a central reflector.**
Thanks for this comment. As an initial study for multi-agent reflection, we focus on the settings where the agents' information can be fully observed, which we believe indeed simplifies the real scenarios.
However, we believe our study is also meaningful since it formally proposes the direction of considering agent reflection under multi-agent settings. Based on our study, one can easily extend to the settings where the information between agents is not fully observable.
To improve our paper, we conduct more experiments to investigate the settings with partial information. In specific, we introduce two models. For the first one, we remove the information of the other agents. For the second one, we use a proxy model to predict the information of the other agents. We present the experiment results in Figure 6 and Figure 7 of the PDF, respectively. We can find from the results that our COPPER still achieves better reflection performance under these partial settings.
**Q2: The computation of counterfactual rewards can be very high. Every agent demands two times of simulation to calculate the rewards, and the computational costs could be much higher when the number of agents increases. Can you present the computational cost? Including training and inference stages.**
Following your suggestion, we have incorporated the computational costs of the training and inference stages.
In the training stage, we first collect training reflection data and then train the reflector model via RLHF offline. So, although constructing counterfactual rewards requires multiple simulations, the process does not incur additional computational cost. For example, when training the reflector on the HotPotQA dataset with one NVIDIA A800-80G, both the SFT and reward model training stages can be finished in about 1 hour, while the PPO training requires about 4 hours (for reference only; training time may vary with different hyperparameters).
In the inference stage, we load the fine-tuned reflector model on GPU and call GPT API as the actor model. The duration of the stage mainly depends on the speed of calling GPT.
**Q3: The claims of personalized reflection may not be completely conducted. For the Cooperative Debate Paradigm, there are no roles for the debaters.**
Thanks for this comment. We believe personalization should be considered a general term: any differences between the agents can actually be regarded as "personalization" and written into the profile. For the Cooperative Debate Paradigm, the stances, the goals of each agent, and the knowledge owned by different agents can be regarded as personalized.
**Q4: How to determine the number of agents for each task?**
For the HotPotQA dataset, we design a teacher-student collaboration framework, in which an intuitive number of agents is 2 (with one student agent and one teacher agent). In GSM8K and Checkmate in One Move datasets, we follow [1] to set the number of agents and debating rounds. The paper explores the best number setting in each scenario through empirical experiments.
**Q5: Can you provide more case studies, especially for the other two datasets?**
Following your suggestion, we have added more case studies in the PDF(Figure 8 and Figure 9), which will be incorporated into the final version.
From the cases, we can find that in the context of the multi-agent debate, compared to GPT-3.5, COPPER can accurately identify key issues where mistakes occurred in previous trials and generate more specific reflections to guide agents in improving their responses. For example, in the case of GSM8K, COPPER identified that previous errors mainly stemmed from an incorrect analysis of equal relations and emphasized the need to pay attention to "every two weeks". In the case of Checkmate in One Move, COPPER pointed out the importance of focusing on the weaknesses in one's own position.
**Q6: Does the shared reflector take all the agents' trajectories together to reflect?**
Yes, in our model, the reflector takes all the agents' trajectories to reflect. To further improve our study, we also conduct more experiments on the settings where the agent only takes its own trajectories to reflect, see the answers to Q1 for more details.
**References:**
[1] Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. *CoRR*, abs/2305.14325, 2023.
---
Rebuttal 2:
Comment: Dear reviewer ogh3,
Thanks again for your detailed comments, which will definitely improve our paper.
In our rebuttal, we have tried our best to alleviate your concerns and have added many experiments inspired by your constructive suggestions.
The discussion deadline is fast approaching; if you have further questions, we are very happy to discuss them.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed response. I acknowledge the efforts to clarify the key concepts and add new results. However, regarding W2 and Q2, although the data is collected offline, the collection process itself requires more API calls due to the construction of the counterfactual reward. Therefore, the tokens used should be included as part of the cost analysis. I will raise my score to 6.
---
Rebuttal 3:
Comment: Thanks so much for your feedback.
In the following, we show the token costs of building our datasets to make our training process more clear.
In HotPotQA dataset, each trajectory of the multi-agent system comprises approximately 28,672 input tokens and 3,584 output tokens, costing 0.01925 dollars. Constructing training data with original LongChat for CF SFT training costs 290.12 dollars in total, and the training data generated by LongChat fine-tuned with SFT for PPO training costs 293.24 dollars.
In GSM8K dataset, a single trajectory consists of around 5,376 input tokens and 1,536 output tokens, with a cost of 0.004875 dollars. The total cost for creating training data with the original LongChat for CF SFT training is 35.08 dollars, and the training data constructed by LongChat fine-tuned with SFT for PPO training costs 35.45 dollars.
In Checkmate in One Move dataset, each trajectory includes about 11,520 input tokens and 2,304 output tokens, which requires 0.009 dollars. Total expenses for generating training data with the original LongChat for CF SFT training amount to 227.59 dollars, while the cost for data constructed by LongChat fine-tuned with SFT for PPO training is 232.70 dollars.
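As a sanity check on the figures above, the cost arithmetic can be expressed as a small helper; the per-1K-token prices passed in below are hypothetical placeholders, not the actual rates behind the dollar amounts reported:

```python
def trajectory_cost(input_tokens, output_tokens,
                    price_in_per_1k, price_out_per_1k):
    """Dollar cost of one multi-agent trajectory."""
    return (input_tokens / 1000) * price_in_per_1k + \
           (output_tokens / 1000) * price_out_per_1k

def dataset_cost(num_trajectories, input_tokens, output_tokens,
                 price_in_per_1k, price_out_per_1k):
    """Total dollar cost of constructing a training set of trajectories."""
    return num_trajectories * trajectory_cost(
        input_tokens, output_tokens, price_in_per_1k, price_out_per_1k)

# Hypothetical prices applied to a HotPotQA-sized trajectory
# (28,672 input / 3,584 output tokens).
cost = trajectory_cost(28_672, 3_584, 0.0005, 0.0015)
```

With these illustrative prices, a HotPotQA-sized trajectory comes to about $0.0197, close to but not exactly the reported $0.01925, since the true per-token rates are not stated here.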
In the final version, we will definitely incorporate the above cost analysis to make our paper clearer. | Summary: This paper proposes COPPER to enhance the collaboration ability of multi-agent systems through a learnable self-reflection mechanism. It involves reflections from different agent-specific profiles. The contribution of each agent-specific reflector is measured based on its marginal reward. This reflector is shared among agents and generates personalized reflections according to agents' roles. Experimental results on several datasets demonstrate its effectiveness.
Strengths: 1. This paper explores the reflection on multi-agent collaboration. Previous work on reflection mainly focuses on a single LLM, ignoring the complex environment and interaction in the multi-agent system.
2. The introduction of the counterfactual reward in PPO training assigns the reward to rate each agent's reflection, helping the credit assignment problem.
3. The comprehensive analysis of the counterfactual reward, the shared reflector, and different LLMs for reflectors provide a deep insight into the proposed method.
Weaknesses: 1. Including the Retroformer under the multi-agent setting as one of the baselines would be better.
2. When the environment provides a sparse reward, the credit for the reflections of different agents may become very similar. For example, the removal of all reflections may result in a counterfactual reward of 0 because both trials fail. Then the counterfactual reward may degrade to the episode reward in Retroformer.
3. With the complex PPO training, COPPER's performance is not very impressive, especially when the trial number is small (in GSM8K and Checkmate in One Move of Figure 4 and Figure 8).
Technical Quality: 3
Clarity: 3
Questions for Authors: * The left part of Figure 2 is a little confusing due to the position of Step 1 to 4. The execution order is unclear. The text in Figure 2 is too small to be seen, especially the right part.
* Will there be any negative counterfactual reward? For example, the removal of a specific reflection will improve the performance.
* What is the impact of the agent profile on the reflector? Will a personalized reflector be better than a general reflector?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss their potential limitations in their paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: To Reviewer gdf9:
Thanks so much for your positive comments on our manuscript. In the following, we try to alleviate your concerns in detail (we combine all the questions in the weaknesses and questions).
**Q1: Including the Retroformer under the multi-agent setting as one of the baselines would be better.**
Following your suggestion, we have implemented Retroformer under the multi-agent setting and compared it with our model. The experimental results are presented in Figure 2 of the PDF.
From the results, we can see that our model outperforms Multi-Retroformer on all the datasets. We speculate that the improved performance comes from our special designs for multi-agent settings, such as the counterfactual reward.
**Q2: When the environment provides a sparse reward, the credit for the reflections of different agents may become very similar. For example, the removal of all reflections may result in a counterfactual reward of 0 because both trials fail. Then the counterfactual reward may degrade to the episode reward in Retroformer.**
To begin with, the counterfactual reward aims to evaluate "the effectiveness of conducting reflection on each agent". For agent $i$, we compare the final rewards when agent $i$ uses and does not use reflections. The reward change is regarded as the real effect of the reflector on agent $i$. Basically, we decompose the overall reward improvement into sub-rewards, which are more tailored to different agent reflectors.
According to the above explanation of the counterfactual reward, we cannot remove all the reflections: for each agent, we only remove its corresponding reflection to obtain the counterfactual reward, while all the other reflections remain valid. This corresponds to the basic meaning of the counterfactual effect: "What is the effect of one variable when all the other variables remain the same?"
Here, we present a toy example. Suppose we have two agents $A$ and $B$, and their reflectors are $X$ and $Y$, respectively. For $X$, we first run a task with $(A+X, B+Y)$ and obtain the reward $r(A+X, B+Y)$, and then we run the same task with $(A, B+Y)$ to get the reward $r(A, B+Y)$; the counterfactual reward for $X$ is then $r_X = r(A+X, B+Y) - r(A, B+Y)$. Similarly, the counterfactual reward for $Y$ is $r_Y = r(A+X, B+Y) - r(A+X, B)$.
Since $r(A, B+Y)$ and $r(A+X, B)$ are generally very different, the counterfactual rewards differ across agents. For example, in the HotPotQA dataset, after a failed trial, the student agent $A$ generates a reflection $X$: "The student did not find the correct answer due to incomplete search scope and insufficient specificity...", and the teacher agent $B$ generates a reflection $Y$: "I think the main reason for failure was the unclear pronoun usage...". We first add each reflection to the corresponding agent's memory and run the next trial to calculate the episode difference score $r(A+X, B+Y)=1.0$. Then we run the same task with $(A, B+Y)$ and $(A+X, B)$ respectively and obtain $r(A, B+Y)=0$ and $r(A+X, B)=0.211$. The counterfactual rewards of $X$ and $Y$ can be calculated as follows:
$$
r_X = r(A+X, B+Y) - r(A, B+Y) = 1.0
\notag
$$
$$
r_Y = r(A+X, B+Y) - r(A+X, B) = 0.789
\notag
$$
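The toy computation above can be written out in code; `run_task` is a hypothetical stand-in for executing one trial of the multi-agent system and returning its episode reward:

```python
def counterfactual_rewards(run_task):
    """Credit each reflector by the reward drop when only its own
    reflection is removed, while all other reflections are kept."""
    r_full = run_task(use_X=True, use_Y=True)   # r(A+X, B+Y)
    r_no_X = run_task(use_X=False, use_Y=True)  # r(A,   B+Y)
    r_no_Y = run_task(use_X=True, use_Y=False)  # r(A+X, B)
    return r_full - r_no_X, r_full - r_no_Y     # (r_X, r_Y)

# Rewards from the HotPotQA example above, used as a stand-in for real trials.
rewards = {(True, True): 1.0, (False, True): 0.0, (True, False): 0.211}
r_X, r_Y = counterfactual_rewards(
    lambda use_X, use_Y: rewards[(use_X, use_Y)])
# r_X = 1.0, r_Y ≈ 0.789
```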
**Q3: With the complex PPO training, COPPER's performance is not very impressive, especially when the trial number is small (in GSM8K and Checkmate in One Move of Figure 4 and Figure 8).**
The trial number in these figures indicates how many times, for the same task, the agents reflect on their behaviors to achieve better task success rates.
We believe this could be a characteristic of the RLHF method: when using RLHF, the agents need to reflect more times to achieve better performance.
**Q4: The left part of Figure 2 is a little confusing due to the position of Step 1 to 4. The execution order is unclear. The text in Figure 2 is too small to be seen, especially the right part.**
Steps 1 to 4 on the left part of Figure 2 describe the process where the multi-agent system generates actions in response to problem $k$ at time $t$, which corresponds to lines 136-138 in Section 4.1. First, the system computes the identifier $i$ of the agent that responds at the current time (Step 1). Then, agent $i$ updates its memory, including reflections on previous trials and the current trial's historical interactions (Step 2), and perceives the environmental state, such as the question and current task scores (Step 3). Finally, agent $i$ generates the action based on its own memory and the environment state (Step 4).
To make Figure 2 clearer and more accurate, we will add explanations of Steps 1 to 4 and replot the figure in the final version. The replotted version can be found in Figure 1 of the PDF.
**Q5: Will there be any negative counterfactual reward? For example, the removal of a specific reflection will improve the performance.**
Yes, sometimes removing the reflection of a certain agent can lead to an improvement in the task performance. To verify this, we further count the number of negative reflections in the constructed training data. The statistics are shown in Table 1 and Table 2 of the PDF.
**Q6: What is the impact of the agent profile on the reflector? Will a personalized reflector be better than a general reflector?**
Thanks for this very interesting question. Inspired by your comments, we conducted additional experiments by removing the personalized profiles. The results are presented in Figure 5 of the PDF.
From the results, we can observe that when using pre-trained LMs (LongChat and GPT-3.5) to reflect, the removal of the agent profile has a greater impact on the GPT-3.5 reflector. This may be due to GPT-3.5's better contextual understanding ability. Our COPPER can further improve the model's reflection ability under non-profile settings. However, the results are slightly worse than the setting with agent profiles.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' detailed responses, and I would like to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your feedback. Your comments are very constructive, and we will incorporate them in the final version. | null | null | Rebuttal 1:
Rebuttal: Dear reviewers:
Thanks for your detailed reviews. Additional tables and figures mentioned in the rebuttals are shown in the submitted one-page pdf.
Pdf: /pdf/5f5f64c530b1a9d18678029e435df512c30efe7d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Experts: Mixture of Experts for Implicit Neural Representations | Accept (poster) | Summary: This paper proposes a mixture of experts (MoE) approach for INRs, which allows the learning of local piece-wise continuous functions by subdividing the domain and fitting locally. The incorporation of a MoE architecture enhances speed, accuracy, and memory efficiency. They also propose a novel manager architecture and initialization that enable domain subdivision without ground truth.
Strengths: 1. The paper is well-written and easy to follow.
2. The proposed MoE INR has a good performance compared to baselines.
3. The idea of using MoE as a learnable partition of regions for INR fitting with randomized initialization is novel to me. From the ablation study, the randomized initialization improves the performance a lot.
Weaknesses: 1. Missing some closely related works. I encourage the authors to have a detailed discussion of previous MoE INRs [1,2] and decomposition/partition-based INRs [3,4].
2. Lacking some key comparison experiments with decomposition/partition-based INRs. The authors only compare their method with the baseline SoftPlus and SIREN (and their wider versions). However, related work [4] has also shown that an INR based on pre-determined masks can outperform the wider version of SIREN. I encourage the authors to experimentally compare your method with [4] to illustrate the necessity of learnable partition regions.
3. A detailed ablation study on the hyper-parameters of the MoE INRs is missing, such as the number of layers in the encoder, manager, and experts. Given a fixed number of parameters, how to allocate the parameters among the three modules remains unknown.
[1] Zhao, Jianchen, et al. "MoEC: Mixture of Experts Implicit Neural Compression." arXiv preprint arXiv:2312.01361 (2023).
[2] Wang, Peihao, et al. "Neural implicit dictionary learning via mixture-of-expert training." International Conference on Machine Learning. PMLR, 2022.
[3] Rebain, Daniel, et al. "Derf: Decomposed radiance fields." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.
[4] Liu, Ke, et al. "Partition speeds up learning implicit neural representations based on exponential-increase hypothesis." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Could you please discuss how to choose the number of experts given a fixed number of parameters and iteration time? Is it better to have more experts with fewer parameters each, or fewer experts with more parameters each?
2. I wonder whether it is possible to apply your method to NeRF since there are no supervised signals for the 3D ground truth.
3. When comparing the Vanilla MoE INR and your Neural Experts, have you kept their total parameters similar (smaller experts for your Neural Experts due to the extra parameters needed by the encoders)?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations and potential negative societal impact of their work have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: Thank you for bringing these important works to our attention, we will address them in our paper.
W2: We compare with [4,new] on their dataset (300 test images from LSUN bedrooms). We use the same learning rate and same measure (PSNR after 300 steps for SIREN). Note that these have not converged at 300 iterations (see also Fig. 5 in [4,new]), hence for the experiments in our paper we report after 30k steps (also done for the last row in the table).
| **Method** | **Mean** | **Std** | **Parameters** | **Steps** |
|-----------------------------------|----------|---------|----------------|-----------|
| SIREN [4,new] | 21.211 | - | 790.0k | 300 |
| SIREN-PoG [4,new] | 23.864 | - | 793.6k | 300 |
| SIREN-PoS [4,new] | 24.485 | - | 793.6k | 300 |
| Ours Neural Experts SIREN | 30.50 | 2.73 | 366.0k | 300 |
| Ours Neural Experts SIREN | 54.98 | 2.56 | 366.0k | 30000 |
W3:
While keeping the number of overall parameters fairly constant, we experiment with how to allocate parameters between the components. The first column is
`width x layers for the input encoding, num experts x (width x layers) for the experts themselves, width x layers for the manager encoding, width x layers for the manager` and the second column is
`% parameters for input encoding, % parameters for the experts, % parameters for the manager`.
We can see that as long as the majority of the parameters is not for the manager (one third of the total params or less) our neural experts SIREN model performs similarly.
| **Layers** | **Percentages** | **Parameters** | **PSNR** |
|-----------------------------------|----------------|----------------|-----------------|
| 128x2, 4x(128x2), 128x2, 128x2 | 14%, 55%, 31% | 366.0k | 89.35±7.10 |
| 196x2, 4x(85x2), 128x2, 128x2 | 32%, 34%, 34% | 368.0k | 87.86±8.98 |
| 128x2, 4x(102x2), 128x2, 196x2 | 14%, 38%, 48% | 366.1k | 83.05±10.11 |
| 256x2, 4x(68x2), 96x2, 68x2 | 54%, 29%, 17% | 368.3k | 88.12±6.92 |
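For readers who want to reproduce such budget splits, dense-layer parameter counts follow directly from the layer widths; the input/output dimensions below are illustrative assumptions, and the totals are not meant to reproduce the ~366k figures in the tables:

```python
def mlp_params(widths):
    """Parameter count of a dense MLP given its layer widths,
    e.g. [2, 128, 128] = input dim 2 followed by two layers of width 128.
    Each layer contributes (w_in * w_out) weights + w_out biases."""
    return sum(w_in * w_out + w_out
               for w_in, w_out in zip(widths, widths[1:]))

# Illustrative split for "encoder / 4 experts / manager" (assumed dims):
encoder = mlp_params([2, 128, 128])
experts = 4 * mlp_params([128, 128, 128, 1])          # experts with scalar head
manager = mlp_params([2, 128, 128]) + mlp_params([256, 128, 4])
total = encoder + experts + manager
shares = [round(100 * p / total) for p in (encoder, experts, manager)]
```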
Q1: We now experiment with changing the number of experts vs. the number of layers within the experts. As in the previous table, we keep the parameter count as similar as possible (~366k) by changing the layer width of the experts. The results show that fewer layers per expert and a moderately high number of experts is best, with 6 experts and 1 layer performing the best out of all the configurations we tried. Our chosen configuration of 4 experts and 2 layers is somewhat consistent with this rule of thumb.
| Number of experts \ Number of layers | 1 | 2 | 4 | 8 |
|--------------------------------------|-------------|--------------|-------------|------------|
| 2 | 79.39±8.74 | 77.78±8.18 | 72.23±6.92 | 65.32±6.51 |
| 4 | 92.32±7.80 | 89.35±7.10 | 81.19±8.04 | 74.36±7.31 |
| 6 | 99.15±5.32 | 92.84±7.95 | 87.80±8.98 | 75.26±9.02 |
| 8 | 96.13±7.95 | 90.72±10.61 | 83.05±10.31 | 66.57±8.72 |
Q2: It is not possible to use our method for NeRFs, for the reason the reviewer points out, see our discussion in the General Comments section above.
Q3: The parameter counts are similar: 349,584 for the vanilla SIREN MoE vs. 365,968 for our Neural Experts SIREN (the vanilla model has 95.5% of the parameters of ours, i.e., our model is a 1.05x increase in parameters).
---
Rebuttal 2:
Title: Official Comment by Reviewer ekS9
Comment: I appreciate the authors' effort in providing such a detailed rebuttal.
One of my major concerns is the comparison of your paper with "MoEC: Mixture of Experts Implicit Neural Compression". It seems that their model is the vanilla MOE in your paper, right? Furthermore, I find that Random Pretraining for manager is very important to performance. When you conduct the vanilla MOE experiment, have you also applied the Random Pretraining for manager strategy? If not, I wonder whether the Random Pretraining for manager strategy can improve the performance of vanilla MOE.
---
Rebuttal Comment 2.1:
Comment: **Results with Random Pretraining the Vanilla SIREN MOE and discussion of MoEC**
We did not apply the random pretraining method for the Vanilla SIREN MOE as it is a contribution of our method that we proposed. We give results when adding the random pretraining to the Vanilla SIREN MOE in the updated Table 1 (put as a new general comment above). We see a significant increase in performance when adding random pretraining to the Vanilla SIREN MoE (62.98 vs 74.28 PSNR) while still having lower performance than our full model (89.35 PSNR). This shows that both random pretraining and the shared input encoder are important parts of the method.
While the MoEC architecture is similar to our Vanilla SIREN MoE baseline, it does not follow the pure MoE framework from [16,18,28] that our method uses. Instead, they use the "sparsely-gated mixture of experts layer" from [35] ([28] in their paper) where conditional computation is done within the network to produce a single output (note the shared decoder in their method, compared to the expert decoders in ours). Thus, they cannot use the pure MoE loss, so similar to [35] they use a balancing loss to ensure load balancing and sparsity. Furthermore, they only show the efficacy of their method on a single domain (one biomedical dataset) and one base architecture, while we show the efficacy of our method on multiple domains and multiple base architectures. However, their motivation is similar to ours: show that using spatial parameters with learnable partitions can improve performance over using them with fixed partitions when constraining the number of parameters. | Summary: This paper proposes a new architecture for implicit neural representations (INRs) based on the mixture-of-experts (MoE) architecture. This new architecture differs from traditional MoE architectures in that now all the experts have a shared encoder and expert-specific decoders, while the manager also now has an encoder-decoder architecture, with the manager decoder taking as input both the manager encoder's representation as well as the experts' shared encoder representation. The authors also provide a pre-training method for the manager. The method is evaluated on image reconstruction, audio reconstruction, and 3D shape reconstruction by fitting signed distance functions (SDFs).
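The wiring just summarized (shared expert encoder, expert-specific decoders, and a manager whose decoder sees both the manager encoding and the shared encoding) can be sketched as follows; the layer sizes and single sine layers are placeholder assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HID, N_EXPERTS = 2, 16, 4  # assumed sizes, not the paper's

def layer(d_in, d_out):
    """One random linear layer with a sine activation, as a stand-in
    for a trained sub-network."""
    W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
    return lambda x: np.sin(W @ x + b)

shared_enc = layer(D_IN, D_HID)                      # shared expert encoder
experts = [layer(D_HID, 1) for _ in range(N_EXPERTS)]  # expert decoders
manager_enc = layer(D_IN, D_HID)                     # manager encoder
# Manager decoder takes both the manager encoding and the shared encoding.
manager_dec = layer(2 * D_HID, N_EXPERTS)

def forward(x):
    h = shared_enc(x)
    logits = manager_dec(np.concatenate([manager_enc(x), h]))
    p = np.exp(logits - logits.max()); p /= p.sum()  # softmax gating
    outs = np.array([e(h)[0] for e in experts])
    return p @ outs  # soft mixture over expert outputs

y = forward(np.array([0.3, -0.7]))  # scalar signal value at a coordinate
```

At inference, a hard argmax over the gating probabilities would select a single expert per coordinate, which is how the domain subdivision arises.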
Strengths: This paper proposes a novel MoE-based architecture for INRs together with a novel pre-training strategy for the MoE manager.
The empirical results are good and there is an ablation study on the major components.
The paper is well-written and easy to understand.
Many details relating to reproducibility are provided in the supplemental material.
Weaknesses: One of the major weaknesses of this paper is the experimental evaluation. The method does not compare against other methods that propose a new INR architecture (e.g. Gaussian activation function [1], WIRE [2], FINER [3]) or a standard MLP with positional encoding. The experimental evaluation is also not very robust, as only small datasets are used (Kodak, only 3 audio recordings, only 4 shapes) and more complicated tasks investigated by similar works are not considered (for example, WIRE [2] and FINER [3] both include evaluation on neural radiance fields).
The proposed MoE architecture also did not show improvements when used with softplus activation.
This paper also did not include other metrics normally used to evaluate the tasks, such as SSIM and LPIPS for 2D image fitting (e.g. FINER [3]).
Minor point: including the number of parameters in Table 1 may be helpful since comparisons of parameter counts are discussed in the text.
References
1. Ramasinghe, Sameera, and Simon Lucey. "Beyond periodicity: Towards a unifying framework for activations in coordinate-mlps." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
2. Saragadam, Vishwanath, et al. "Wire: Wavelet implicit neural representations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
3. Liu, Zhen, et al. "FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 1
Clarity: 3
Questions for Authors: 1. Can this work be combined with traditional ReLU MLPs with positional encoding or MLPs with activation functions other than sine?
2. How does this work compare to other novel INR architectures (e.g. WIRE, FINER, etc)?
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 1
Limitations: Limitations and negative impact are addressed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **One of the major weaknesses of this paper is the experimental evaluation ...**
- **vs other architectures**: All the listed architectures are about changing the activation function. Our proposal is orthogonal to that as it can work with any activation function, and we have shown our method with two activation functions. We have added two more, Softplus+FF and FINER, in the updated Table 1 (see General Comments). We also explain why we cannot test our method on NeRFs in the General Comments section above.
- **Dataset Size**: Our experiment dataset sizes are similar to the papers you listed. In each of those, relevant experiments (Image, SDF, Audio) are done on similar or smaller sizes (only FINER's SDF experiments have more: 5 shapes to our 4 shapes). In fact, many experiments and comparisons are done on as few as one instance (FINER: 16 2D images and 5 SDF shapes, Gaussian activations: comparisons on 1-3 images per property, WIRE: 1 image, 1 3D shape, comparisons on 1 image per property). We compare on 24 images, 4 SDF shapes, and 3 audio samples.
**The proposed MoE architecture also did not show improvements when used with softplus activation.**
Softplus activations alone are not expressive enough. We give new results with Softplus+FF (Fourier Features) in the updated Table 1 (see general comment).
**This paper also did not include other metrics normally used to evaluate the tasks, such as SSIM and LPIPS for 2D image fitting (e.g. FINER [3]).**
- We have added these metric in the updated Table 1 (see General Comments)
**Q1. Can this work be combined with traditional ReLU MLPs with positional encoding or MLPs with activation functions other than sine?**
Yes, we give new results with Softplus+FF (Fourier Features) in the updated Table 1 (see general comment).
**Q2. How does this work compare to other novel INR architectures (e.g. WIRE, FINER, etc)?**
As mentioned before, those are orthogonal contributions as those methods change the activation function, and we can use any activation function in our framework. See our new results with FINER in the updated Table 1. | Summary: The paper presents a novel INR framework that leverages the Mixture of Experts (MoE) technique. The proposed strategy consists of an expert and a manager branch. Each branch has an encoder that processes the input coordinate and extracts an embedding. By processing the two encoder embeddings, the manager predicts the probability of which of the N experts should be used for extracting the signal. They show how the proposed INR framework achieves better reconstructions than SIREN on several modalities, such as audio, image, or 3D surfaces. They also propose a manager pre-training strategy, which is necessary to exploit all the experts effectively.
Strengths: - The original idea might be conducive to new research in this direction.
- The paper is well-written and easy to follow.
- Supervising experts with semantic losses and obtaining networks specialized in specific semantic areas of the input signal might unlock several applications for INRs and ease their interpretability.
- The proposed framework achieves good reconstruction performance.
- The ablations in Table 4 and Table 5 are very insightful.
Weaknesses: W1) The major weakness of the paper is the misalignment between the experimental results and the motivations of this research:
a- In the introduction (L26-28), the authors correctly point out that, in traditional INRs, each coordinate needs to be processed by the whole network. Even though they claim that this problem can be solved by MoE INRs, in the proposed architecture, the input coordinate needs to be processed by the full manager and the encoder of the expert branch. The only saved computation is the one from the final expert, a much smaller network than the others. Thus, the saved computation looks minimal compared to standard SIREN (considering the same total weights). Moreover, no experiments regarding computational efficiency or the advantages of parallelized inference are provided, though such experiments are necessary to motivate this claim. Maybe, instead of talking about the absolute efficiency of the proposed approach, it is better to show better trade-offs between efficiency and reconstruction quality than SIREN.
b- In the introduction (L28-30), the authors claim that standard INRs extract features vastly different for close spatial coordinates (i.e., locality problem). I am unaware of studies that formally investigate this INR property. Thus, I suggest adding a reference work or validating it with experiments. Moreover, the authors claim MoE INRs can learn local piece-wise functions (L259 in the conclusion section). Thus, they do not suffer from the problem above. Yet, the experiments show something different. For instance, by looking at Figure 3c, different experts predict audio signals for close temporal coordinates. I can notice the same behavior in the last column of Figure 6, in which many distinct experts predict pixels in the upper part of the image.
W2 ) The idea of using MoE resembles the idea of KiloNeRF [1]. In that case, the routing strategy is not learned and depends only on the input 3D coordinate, and each expert focuses on a pre-determined spatial region. I think the authors should add this reference, explaining the pros and cons of the two kinds of approaches.
W3) In recent years, INR frameworks have been proposed to be faster and more efficient than MLP-only techniques such as SIREN. For instance, hybrid INRs such as InstantNGP [2] (hash grid + MLP) can be used as the base framework to speed up computation when learning INRs of images and surfaces or NeRFs. The paper should also include more recent competitors than SIREN.
[1] Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. ICCV 2021.
[2] Instant neural graphics primitives with a multiresolution hash encoding. ACM transactions on graphics (TOG) 2022.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1) In Table 1, does the vanilla MoE baseline employ Softplus activations, while the final strategy uses Sine activations? In this case, the comparison is unfair and does not validate the superiority of their approach to the vanilla one.
Q2) Can the authors also include the Chamfer Distance metric in Table 3?
I like the paper's core idea. However, the author's response to my concerns will greatly influence my final rating.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The paper includes a discussion on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1)a: The limitation we are mainly interested in is learning capacity rather than computation time. For traditional INRs, as each coordinate needs to be processed by the whole network, all parameters have to contribute to the output of every point in the domain, making parameter optimization difficult. For our proposed network, we have subnetworks that are specialised for particular regions of the domain, and thus it is easier to optimise towards better solutions (as our results show). We will make this more clear in our revised version. As the reviewer mentions, saved computational time would be minimal (and highly dependent on hardware architecture), hence we have not explored this.
W1)b: We apologize for the poor wording regarding locality limitation. What we meant is that a desirable property of INRs is for features to change rapidly (to allow modelling of sharp boundaries). This can't be achieved in networks without explicit spectral bias (which we mention in lines 62-66). We will update the paper to clarify this. Our method gives an alternative way for handling sharp changes, i.e., by changing the expert, which is a more natural way to handle discontinuous signals. This is consistent with your observations in Figures 3c and 6.
W2): We will add this reference. Apart from the obvious difference (ours is an MoE so can learn different regions, while they have fixed regions in a grid structure), KiloNeRF's priority is post optimization rendering speed, hence their simple structure. Their architecture leads to insufficient performance when optimizing directly (see their Figure 3), so they rely on distillation to train. Our method on the other hand is an alternative for the original network that optimizes directly with better signal reconstruction performance, which is not the case for KiloNeRF. Furthermore we show improvement over grid based methods in Table 5.
W3): See general comment. We leave combining our MoE approach with hash based methods as future work. Also note that our updated Table 1 includes a comparison with FINER from CVPR 2024 (see general comment).
Q1): No, the vanilla MoE was with sine activations; we will change the name to Vanilla SIREN MoE to make it clearer.
Q2): We have added the Chamfer distance now (see table in general comment).
---
Rebuttal Comment 1.1:
Comment: Regarding W3, could you provide the results using InstantNGP as the reviewer VtKc requested? Considering the potential impact of this work, we look forward to seeing this comparison.
---
Rebuttal 2:
Comment: **Results with InstantNGP**: We give results with InstantNGP in the updated Table 1 (put as a new general comment above). The results show that our Neural Experts SIREN and Neural Experts FINER achieve superior performance compared to InstantNGP (89.35 and 90.84 PSNR vs 84.56 PSNR) with an order of magnitude fewer parameters (366k vs 7.7M). In the InstantNGP experiment, we use their default architecture for image experiments (16 encoding levels, 2 parameters per level, maximum cache size of 2^19, and a 2 hidden layer decoder with 64 neurons). Note that most of InstantNGP's parameters are spatial parameters (hash encoding), specifically 99.86% of all parameters. This comparison between our Neural Experts SIREN/FINER and InstantNGP shows that the spatial parameters (parameters dedicated for specific spatial regions, so the experts and the hash encoding) are effective, but if used with patterns and heuristics (like InstantNGP or the other methods discussed in lines 70-76) many of the spatial parameters become redundant and underutilized quickly. This motivates the learning of the spatial regions to achieve a parameter efficient approach (our Neural Experts method).
Extending InstantNGP to include the MoE architecture is non-trivial, as the hash encoding parameters, which make up over 99% of the total parameters, are used at the beginning of the network (to provide a specialized encoding that then goes through a tiny MLP decoder). The obvious Neural Experts extension (which requires a shared input encoding) is to apply MoE to that tiny decoder, which we call Naive Neural Experts InstantNGP. Results are reported in the updated Table 1, and we observe that the baseline InstantNGP performs better than this variant. However, the shared input hash encoding parameters are still over 99% of the total parameters. As shown in our response to Reviewer ekS9, the parameter allocation ratio between the input encoder, the experts and the manager plays an important role in the MoE's performance, with the best performing ratio being 14%, 55% and 31% respectively. So this naive result is to be expected. Constructing a fairer distribution of parameters would require a fundamental change to the InstantNGP backbone: either we greatly increase the parameters of the experts and manager (which would make the total parameter count another order of magnitude larger), or we need a way to split the hash encoding parameters across the experts. Both of these extensions are outside the scope of the current work.
Overall, these results show that learning regions for spatial parameters, as in our Neural Experts, achieves better performance with significantly fewer parameters.
---
Rebuttal Comment 2.1:
Comment: I thank the authors for the responses. I appreciated the experiments with InstantNGP. Most of my concerns are addressed. Thus, I will raise my score to Weak Accept (6).
I expect the authors to insert the new results with InstantNGP and update the introduction to align more with the considerations in W1a and W1b. | Summary: This paper introduces an MoE architecture for INRs, enhancing scalability, local fitting, and parallelization. Traditional INRs use a single network, imposing global constraints, while this method learns several local expert functions, subdividing the domain. Key contributions include a novel manager architecture and initialization method for better convergence and domain subdivision without ground truth. It shows improved performance and efficiency over traditional methods across image, audio, and 3D surface reconstruction.
Strengths: - It demonstrates a neat architectural design along with a robust ablation study.
- It consistently shows performance improvement across various tasks, and the performance with respect to the number of parameters is also superior.
- It is interesting that random initialization, which includes no inductive bias in the manager pretraining process, outperforms initializations like SAM.
Weaknesses: - It requires more parameters compared to the vanilla model. While it is fair to compare it with the wider version of the vanilla model, it is unclear if the proposed model still performs well when it has the same number of parameters as the vanilla model (not wider version). Tab.3 alleviates this concern to some extent.
- In addition to comparing with the vanilla model, it would be good to include a discussion on various INR methods that apply locality bias (e.g., spatial functa).
- As shown in the convergence graphs in the appendix, it shows more unstable convergence compared to the baseline.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please see the above weaknesses.
- In Tab.1, what is the PSNR of Neural Experts SIREN with the same number of parameters as SIREN?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1: We have added results in the updated Table 1 in General Comments (see "Ours Neural Experts SIREN small **(New)**"). This version has a width of 68 in each layer (encoding, experts and manager) instead of 128, leading to 98,686 parameters. Note that it still outperforms the SIREN baseline, which has 99k parameters.
W2: See comment in General Comments.
W3: Yes, the convergence is unstable, as we are optimizing both the parameters and the regions. However, our results show that even with unstable convergence we obtain a consistent performance increase over our baselines, which shows the promise of this line of work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response. After reading all the reviews and their responses, I found several potential weaknesses I had not noticed. But I think the authors addressed most of the major things pointed out. Their new experimental results addressed my concerns and the contribution about 'benefit of optimizing the regions and how to make this work in practice' is convincing to me.
I agree that recent INR methods that apply locality bias mentioned in the reviews have different main goals, but I'm still unsure if this method can have a meaningful impact on INR-based applications without such comparisons.
Therefore, I'd like to maintain my rating of 'weak accept'. | Rebuttal 1:
Rebuttal: ## General Comments
We thank the reviewers for their insightful comments. We include the requested additional experimental results here, along with responses to common questions. Specific questions are addressed for each reviewer below.
**New Table 1** (page 5 of submission) now updated to have comparison with Softplus + FF (Fourier features [38]) based networks and FINER based networks [1, new], a version of our method with similar parameter count to SIREN, and SSIM and LPIPS metrics.
| **Method**| **Parameters** | **PSNR Mean**| **PSNR Std**| **SSIM Mean**| **SSIM Std**| **LPIPS Mean** | **LPIPS Std** |
|---------------------------------------------|----------------|----------------|---------------|----------------|---------------|----------------|---------------|
| Softplus| 99.8k| 19.51| 2.95| 0.7158 | 0.1106| 4.08e-1| 8.33e-2 |
| Softplus Wider| 642.8k | 20.91| 3.12| 0.7798 | 0.0899| 3.38e-1| 8.44e-2 |
| Ours Neural Experts SoftPlus| 366.0k | 20.62| 3.12| 0.7628 | 0.0946| 3.53e-1| 8.47e-2 |
| Softplus + FF **(New)** | 99.8k| 28.97| 3.30| 0.9433 | 0.0193| 7.59e-2| 1.70e-2 |
| Softplus + FF Wider **(New)** | 642.8k | 29.48| 3.91| 0.9436 | 0.0245| 7.74e-2| 2.30e-2 |
| Ours Neural Experts SoftPlus + FF **(New)** | 366.0k | 31.66| 3.16| 0.9652 | 0.0180| 4.13e-2| 1.57e-2 |
| SIREN | 99.8k| 57.23| 2.46| 0.9991 | 0.0005| 5.78e-4| 5.09e-4 |
| SIREN Wider | 642.8k | 77.50| 5.32| 0.9996 | 0.0005| 3.08e-4| 3.04e-4 |
| Vanilla SIREN MOE | 349.6k | 62.98| 4.16| 0.9993 | 0.0005| 4.53e-4| 3.88e-4 |
| Ours Neural Experts SIREN small **(New)** | 98.7k| 63.42| 7.09| 0.9992 | 0.0007| 9.96e-4| 2.40e-3 |
| Ours Neural Experts SIREN | 366.0k | 89.35| 7.10| **0.9997** | 0.0004| 2.49e-4| 2.93e-4 |
| FINER **(New)** | 99.8k| 58.08| 3.04| 0.9991 | 0.0005| 7.47e-4| 1.31e-3 |
| FINER Wider **(New)** | 642.8k | 80.32| 5.40| 0.9996 | 0.0004| 2.49e-4| 2.46e-4 |
| Ours Neural Experts FINER **(New)** | 366.0k | **90.84**| 8.14| **0.9997** | 0.0004| **2.46e-4**| 2.48e-4 |
Note that for the new results, our method outperforms the baselines.
1 (new). Liu, Zhen, et al. "FINER: Flexible spectral-bias tuning in Implicit NEural Representation by Variable-periodic Activation Functions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
**Table 3** We also add Chamfer results to Table 3 (page 7 of the submission).
| **Method** | **# params** | **Armadillo** | **Dragon** | **Lucy** | **Thai Statue** | **Mean** |
|-------------------------------|--------------|-----------------|-----------------|-----------------|-----------------|-----------------|
| SIREN Large | 1.5M | 1.4983e-05 | 2.2367e-05 | 1.2030e-04 | 5.8465e-05 | 5.4029e-05 |
| Our Neural Experts Large | **1.3M** | **1.4975e-05** | **2.1340e-05** | **1.1322e-04** | **5.4084e-05** | **5.0905e-05** |
| SIREN Small | 396k | 2.9165e-05 | 2.3944e-05 | 1.2583e-04 | 6.7273e-05 | 6.1553e-05 |
| Our Neural Experts Small | **323k** | **1.5086e-05** | **2.2102e-05** | **1.1047e-04** | **5.9721e-05** | **5.1845e-05** |
Consistent with Trimap IoU (reported in the submission), our method outperforms the baseline on Chamfer distance.
**Discussion about newer INR methods that apply locality bias (such as hybrid methods)**: We discuss such methods in lines 70-76. These methods greatly improve signal reconstruction performance and are often also faster, as reviewer VtKc mentions. However, because they choose regions in a deterministic or heuristic way [7,15,24,27,33,37], many regions are required to achieve good task performance, making these methods very parameter inefficient (see InstantNGP [27] Fig 2 and 4). Rather than looking for ways to make more efficient patterns or heuristics for assigning spatial parameters, we consider fixing the number of experts (region-specific parameter sets) and allowing their region of influence to be optimized, so that for a smaller number of parameters we can get better performance. Thus, while our method is of the same category as those methods (specializing parameters spatially), it has a different goal: to show the benefit of optimizing the regions and how to make this work in practice. In fact, we show that what is required to make this work can be counterintuitive (it needs an encoder and random pretraining), so we believe this work is of great interest to the community.
**Using our approach with NeRFs**
Extending our MoE method to NeRFs is an interesting future direction, as NeRF cannot natively fit within an MoE framework. In an MoE framework, we change a per-instance loss $\mathcal{L}_i=\ell(f(x_i),y_i)$ to a weighted sum of losses over experts $\mathcal{L}_i=\sum_j q_j \ell(f_j(x_i),y_i)$, where $f$ is our model, $f_j$ are the expert models, $q_j$ are the expert weights, and $x_i,y_i$ are the current instance's input and ground truth. Thus, as reviewer ekS9 points out, we need a 3D ground truth signal. However, NeRF only has supervision per ray (a color for each pixel), and the rendering model accumulates points along a ray. Thus there is no way to specify a loss per expert, as the points along a ray may be assigned to different experts. While we could then specialise experts for each view direction, we cannot specialise experts for each 3D point with this formulation. This may be addressed by modifying the formulation, e.g., through a variational bound, but is beyond the scope of the current work. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
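The weighted MoE objective discussed in the rebuttal, $\mathcal{L}_i=\sum_j q_j \ell(f_j(x_i),y_i)$, can be illustrated with a minimal sketch (hypothetical function names, squared error as $\ell$; not the authors' implementation):

```python
import numpy as np

def moe_loss(expert_preds, gate_weights, target):
    """L_i = sum_j q_j * l(f_j(x_i), y_i) with squared error as l.

    expert_preds: (n_experts, dim) expert outputs f_j(x_i)
    gate_weights: (n_experts,) manager weights q_j (non-negative, sum to 1)
    target:       (dim,) ground truth y_i
    """
    per_expert = np.sum((expert_preds - target) ** 2, axis=1)
    return float(np.dot(gate_weights, per_expert))

# Toy example: two experts predicting a 1-D signal value
preds = np.array([[0.9], [0.1]])
q = np.array([0.8, 0.2])       # manager favors expert 0
y = np.array([1.0])
loss = moe_loss(preds, q, y)   # 0.8*(0.1)^2 + 0.2*(0.9)^2 = 0.17
```

With a one-hot $q$ this reduces to routing the instance to a single expert; soft weights let gradients flow to all experts. The NeRF obstacle described above is that per-point targets $y_i$ are unavailable, only per-ray supervision.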
Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks | Accept (spotlight) | Summary: This work introduces a VAE variant of GraphCast for global medium-range weather forecasting and a VAE variant of a UNet (formulated as a GNN) for limited area modeling over Scandinavia. For this, they adapt GraphCast to have a hierarchical structure similar to UNets, and then treat the coarsest hierarchical layer (the bottleneck) as a latent variable representing the mean of isotropic Gaussians. The ensemble predictions from the model are about as fast as a single deterministic prediction, achieved through batching. Their calibration can be good for some variables (e.g. global t2m at 10-day lead time has a spread/skill ratio of 0.99), while poorer for others (e.g. local wvint has a spread/skill ratio of 0.57 at 24h lead time).
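The spread/skill ratios quoted in the summary follow a standard ensemble-verification convention; a minimal sketch of one common definition (illustrative only — the helper name is hypothetical, and papers differ in details such as member-count corrections):

```python
import numpy as np

def spread_skill_ratio(ensemble, truth):
    """Ratio of mean ensemble spread (std across members) to the RMSE of
    the ensemble mean. Values near 1 suggest a well-calibrated ensemble.
    ensemble: (n_members, n_points), truth: (n_points,).
    """
    spread = np.sqrt(np.mean(np.var(ensemble, axis=0, ddof=1)))
    skill = np.sqrt(np.mean((ensemble.mean(axis=0) - truth) ** 2))
    return spread / skill

# Toy check: members and truth drawn from the same distribution around a
# shared signal -> ratio close to 1 (up to finite-member effects)
rng = np.random.default_rng(1)
mu = rng.normal(size=1000)               # predictable signal
truth = mu + rng.normal(size=1000)       # truth = signal + unpredictable part
ens = mu + rng.normal(size=(20, 1000))   # 20 exchangeable ensemble members
ratio = spread_skill_ratio(ens, truth)   # close to 1 for this calibrated toy
```

A ratio well below 1 (as for local wvint above) indicates an underdispersive, overconfident ensemble.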
Strengths: 1. The proposed VAE extension to GraphCast is significantly faster than diffusion-based approaches (like the GenCast model).
2. The work does not limit itself to just global weather forecasting, but also presents results for limited area modeling, which is the class of models used by many national weather services.
3. The paper is reasonably well written, keeping a good amount of detail in the main paper, and presenting many additional details in the appendix.
Weaknesses: Major points:
1. Questionable baselines: I am unsure if the chosen baselines are very strong, let me name a few reasons for this:
- Tab 1 presents performance for GraphCast, e.g. RMSE=387 for z500 at 5-day lead time. However, if I check the headline scores in WeatherBench 2 (https://sites.research.google/weatherbench/) for GraphCast, I see RMSE=274 for z500 at 5-day lead time, which is significantly lower (i.e. better) and would beat all models presented in this study.
- Both Tab 1 and Tab 2 do not include scores for the conventional weather models. I would expect Tab 1 to include IFS & IFS-ENS scores and Tab 2 to include MEPS scores.
- Since this work introduces a probabilistic weather model, I would expect a comparison with other recent works on probabilistic weather models, like the ones cited in this paper (e.g. GenCast).
- Graph-FM has almost 6x the parameters compared to GraphCast* (Tab 5) - which quite possibly could be the major reason for its improved performance, and not the introduced architectural feature of hierarchical layers.
2. Overlooked connection to UNets: The Graph-FM that was introduced for the LAM setting looks to me as equivalent to a UNet:
- The input data comes on a regular grid with 236 x 268 pixels, which is subsequently downsampled using 3x3 windows. Processing at each depth level is done with a locally connected layer (in other words: a local convolutional filter). A semantically simpler description of such a model would be a UNet with 3x3 pooling (learned in this case) and 3x3 conv filters at each stage. Possibly, implementing it as a UNet could also be computationally advantageous, making use of the highly optimized kernels for 2D convolutions and pooling operations, instead of GNN layers that rely on scatter sums.
- UNets have been previously used for Weather forecasting: e.g. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018GL080704 & https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020MS002203
3. Proposed VAE implementation physically not meaningful? Your VAE is implemented with a latent variable at the coarsest level. This is supposed to capture epistemic uncertainty related to the forward model (and not due to the initial state). However, one may argue that for atmospheric models most model uncertainty comes from the subgrid-scale parametrizations and not from the coarse-scale representation of atmospheric dynamics. Hence, to me it seems far more intuitive to introduce the stochasticity at the finest level, representing the small scales. I assume you chose the hierarchical description mostly for computational reasons, but given the lack of physical basis, I would at least expect a more thorough investigation of potential errors introduced by this, e.g. is the ensemble variability too smooth?
4. Missing reference to previously published work? A workshop paper at last year's NeurIPS has introduced both the hierarchical GNN and the MEPS dataset (https://arxiv.org/abs/2309.17370), if I am not mistaken. I am not really sure about NeurIPS policy here, but even if this work is a direct extension of the previous work and the previous work is to be considered non-archival, I still believe you should at least cite the workshop paper.
Minor points:
1. GraphCast + SWAG: This is a baseline with poor performance, that is somewhat arbitrarily picked from many possible approaches to obtain ensemble predictions from neural networks. I see two options here: Either you keep it, but also introduce many other such baselines, to make clear that you did not cherrypick a particularly weak one. Other approaches that should not be prohibitively expensive to run could e.g. be MC-Dropout or Laplace Approximation. Or, you simply drop it, as is, it does not add much to the paper.
2. Introduction lacks motivation for LAM: This is an ML conference that you are submitting to. It would probably be good to briefly motivate why doing LAM is even necessary (i.e., why can't we just rely on global models instead)?
3. Extreme Weather evaluation / case study: One key reason for ensemble prediction is capturing the tails, i.e. the extremes. You state in Appendix A that this is out-of-scope for the work. I would argue you are making your life too easy here. Since the presented models are likely not useful unless they display decent performance also for extreme weather, it would be important to evaluate just that. It may be enough for this paper to e.g. study a single extreme event as a case study.
Technical Quality: 3
Clarity: 2
Questions for Authors: Why does the original GraphCast paper not report the visual artifacts that you found for Graph-EFM (ms)? Could it be that your models have simply not been trained sufficiently or that there is a bug in your implementation?
Why do you use a fixed variance of the latent variable for the global predictions? It would be interesting to see this ablation.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The section on limitations is somewhat short. I believe the key limitation of this work is in the evaluation, i.e. it is unclear to me after reading the work how well the models perform. E.g., are these models robust for longer rollouts or if applied at prediction time further away from training time?
Moreover, the work does transparently communicate a limitation of their multi-scale Graph-EFM: visual artifacts. But this is strange to me, as it could mean two things: a) the VAE formulation and multiscale edges simply don't work well together, or b) since GraphCast did not observe such issues, the presented Graph-EFM (ms) is simply not trained sufficiently or suffers from a buggy implementation. While a) would be an interesting finding, I believe b) cannot be ruled out given the results presented in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer PLfK for useful comments. See our response below:
1. Scores for GraphCast*
As we state clearly when introducing this baseline, this is a version of GraphCast trained on the same 1.5° dataset as the other models. It thus has different scores than the original GraphCast model evaluated in Weatherbench. As we emphasize in the paper, we focus here on a fair comparison of models operating at the same resolution and trained on the same data. In the paper we refer to appendix H for comparisons with other state-of-the-art models, trained on higher resolution data.
2. Comparison to conventional models
Results for IFS-ENS are included in appendix H. We realize that this might not be mentioned clearly in the main paper and will clarify. As the paper is mainly about probabilistic modeling, we find it more relevant to include IFS-ENS rather than IFS.
Since the limited area modeling experiment is a pure surrogate modeling task, with data coming from MEPS forecasts, it does not make sense to compare to the MEPS model itself.
3. Comparison to GenCast
See the global author rebuttal.
4. Graph-FM parameter count
When instantiating models of similar size we use the dimensionality $d_z$ of representation vectors as the main scaling factor for model size. The hierarchical architecture in Graph-FM is what enables more flexibility at the same value of $d_z$, resulting also in more parameters. Note that this flexibility does not introduce a significant increase in computational cost, which would be the case if one for example added 6 times as many layers to GraphCast.
5. U-Net connection
We disagree that this connection is overlooked and refer the reviewer to the related work section, where we clearly note this similarity. It should however be emphasized that our GNN model cannot be seen as *equivalent* to a U-Net. While some hierarchical structure is shared, the GNN layers cannot be reduced to simple convolutions, even in the LAM setting. For example, they feature edge representation updates that have no correspondence in convolutional layers.
We would also like to emphasize that we present a general framework that is applicable to different forecasting regions and graph constructions. The exact similarity to a U-net model will depend on these choices. In the specific LAM case from section 5.2 this connection is stronger (although the models are not equivalent), whereas in the global forecasting setting this connection is more conceptual.
6. Physical meaningfulness of VAE implementation
We would like to clarify that while the latent variables are associated with the top level of the graph, this does not mean that they only influence the coarsest spatial scales. As the GNN layers of the predictor are applied through the graph hierarchy the randomness in the latent variables can spread to all parts of the graph and introduce variations on different scales. However, explicitly constraining different latent variables to control variation at different spatial scales is an interesting idea for future work.
We have found that introducing the latent variables closer to the grid points in the model can lead to less spatially coherent forecasts. Graph-EFM (ms) can act as an ablation of this, as the latent variables are there directly associated with the nodes of the single level multiscale graph. We found Graph-EFM (ms) to be harder to train and produce worse forecasts.
7. Reference to workshop paper
By the NeurIPS policy, workshop papers not published in proceedings are not to be counted as publications. Therefore we did not explicitly cite the mentioned workshop paper. However, if it is deemed advisable to do so we would not mind including a reference to it. Perhaps the AC can help clarify the situation.
8. GraphCast* + SWAG baseline
This baseline was not picked arbitrarily, but as described it was inspired by the usage of SWAG in Graubner et al. The point of this baseline is to include a comparison to multi-model ensembles, which we see can produce too little variability. While there are indeed other ways to achieve this, we wanted to focus on a method that has been proposed in the ML weather prediction literature, rather than choosing a method arbitrarily.
9. Motivation for LAM
We currently give some motivation for this in section 5.2, but we do agree that this can be expanded on already in the introduction of the paper. We will change this.
10. Extreme weather case study
We have now included such a case study for Hurricane Laura, that can be found in the global author rebuttal.
11. Fixed variance of latent variable for global predictions
There might be some misunderstanding here, as the variance of the latent variable (Eq. 4) is fixed for both the global and LAM models. However, the variance $\sigma^2_{\alpha, j}$ of the likelihood term (Eq. 6) is fixed in the global setting while output by the model in LAM. See Appendix C for more discussion about this.
12. Robustness for longer rollouts
Note that all models are unrolled for much fewer time steps during training than at inference time. In the global case for example, Graph-EFM is unrolled 40 steps for evaluation at 10 days lead time, but only trained by unrolling up to 12 steps. As we focus here on weather forecasting rather than climate modeling, the stability of unrolling the models for months or years is of limited interest in this work.
13. Visual artefacts in Graph-EFM (ms)
We would like to emphasize that when considering the same setting as in the GraphCast paper (global, deterministic modeling) we do not see any artefacts. The artefacts in Graph-EFM (ms) are mainly observed in the LAM setting (e.g. Figure 6.d) and for the global probabilistic model, none of which were considered in the GraphCast paper. Note however that the final global Graph-EFM (ms) model does not have these artefacts, as the artefacts mentioned as limitations only appear when training with a poor choice of $\lambda_{\text{CRPS}}$.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
thank you for your effort to respond to the points raised in my review.
After reading your rebuttal and the referenced sections in the paper again, I would like to ask you for some further clarifications.
I believe the merit of this work mainly consists in an experimental study of cleverly adjusting multiple known concepts from main-stream deep learning to probabilistic weather forecasting. Hence, it is most important to demonstrate two things: 1) that the new architectures are robustly advantageous compared to existing state-of-the-art and 2) that these advantages are relevant for practical use.
For computational reasons you choose to train and evaluate on ERA5 at 1.5° resolution, which is arguably not relevant for practical use, and also not the resolution at which current SOTA is trained. You try to mitigate this by re-implementing one of the SOTA methods (GraphCast) to run at 1.5°, and then show at that resolution your proposed changes lead to similar performance deterministically, but can create fast probabilistic ensembles with improved skill. The crux is, your experiments do not demonstrate that these advantages are robust to scaling.
Also, you argue you are not comparing to probabilistic SOTA (e.g. GenCast) because the code is not open source. But given that you reimplemented GraphCast, I suspect you should be able to reimplement GenCast as well. So while I understand your reason, I do not believe you can shy away so easily from comparing against SOTA probabilistic models (or also: initial condition perturbations, as other reviewers point out).
For the LAM models, thank you for clarifying that you evaluate against MEPS forecasts; I overlooked that. While this is of course valid for comparing the different neural networks, it does not give an indication of how good the emulation is. A weather service would arguably only use the emulator if they understood what they lose in terms of performance compared to the conventional model, and, more importantly, if the performance drop is not too large. For this it would be important to have an independent reference. For instance, the cited reference [37] for AROME-MetCoOp evaluated on 154 synoptic weather stations; at such stations, you could compare the performance of your neural network emulators against the original MEPS forecast, and also against the IFS-ENS. It would be important to show that your emulators at least outperform the IFS-ENS.
Regarding the difference to UNets, could you please elaborate on what you mean by "While some hierarchical structure is shared, the GNN layers can not be reduced to simple convolutions, even in the LAM setting. For example, these feature edge representation updates that have no correspondence in convolutional layers."?
Let me elaborate on why I believe they could be basically the same: On a regular grid, a message passing layer is a nonlinear transformation of local features (both edges and nodes), followed by a (possibly weighted) sum of multiple features per grid cell. If the graph is such that the weighted sum includes features that are e.g. in 2x2 quadratic neighborhoods, then this operation is essentially a kernel. In UNets we learn these kernels. For the nonlinear transformation of local features, you can use 1x1 Conv layers. For downsampling, you just use dilation, for upsampling, linear interpolation followed by 1x1 Conv.
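The reviewer's claimed equivalence on a regular grid can be checked numerically with a small sketch, covering only the simplified case of scalar node features, sum aggregation with one learned weight per 3x3 neighbor offset, and no edge representation updates (which, as the rebuttal notes, have no convolutional analogue):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 6, 6
x = rng.normal(size=(H, W))   # one scalar feature per grid node
w = rng.normal(size=(3, 3))   # one learned weight per neighbor offset

# "Message passing": each interior node sums weighted messages from its
# 3x3 neighborhood (identity message function, weighted-sum aggregation)
mp = np.zeros((H - 2, W - 2))
for i in range(1, H - 1):
    for j in range(1, W - 1):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                mp[i - 1, j - 1] += w[di + 1, dj + 1] * x[i + di, j + dj]

# The same operation as a "valid" 3x3 cross-correlation, i.e. a conv kernel
conv = np.zeros((H - 2, W - 2))
for i in range(H - 2):
    for j in range(W - 2):
        conv[i, j] = np.sum(w * x[i:i + 3, j:j + 3])

assert np.allclose(mp, conv)  # identical outputs on a regular grid
```

This supports the reviewer's point for the aggregation step; whether the full layers (with edge features and learned pooling) coincide is exactly what is being debated.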
There would of course be a fairly simple route to alleviate my concern here: actually training a UNet-based VAE on the MEPS data and compare the performance to the proposed Graph-EFM, but I do understand there is too little time left for the rebuttal to do this. By the way, I consider this point to only influence the assessment of methodological novelty in this paper, but if you can prove that your MEPS emulators are indeed practically useful, such methodological novelty might not even be necessary at all.
Regarding your clarification 6, you write "We would like to clarify that while the latent variables are associated with the top level of the graph, this does not mean that they only influence the coarsest spatial scales.", which is clear to me, but was not the point of my concern. Maybe my writing was not clear, so let me restate: because you only draw random variables at the top level, the model lacks fine-scale randomness beyond the top-level resolution. This does not mean that the top-level random variables cannot influence finer scales, but rather that this influence is deterministic. By the way, this touches upon the point of comparing to IC perturbations as a baseline: quite possibly, the uncertainty your model outputs is related to the initial conditions instead of the forward atmospheric model, so running Graph-FM with perturbed ICs and comparing with Graph-EFM with perturbed ICs could give insights here. But again, there is too little time left in the rebuttal.
Nonetheless, I will reconsider my rating after the rebuttal period.
Thanks!
---
Reply to Comment 1.1.1:
Comment: Thank you reviewer PLfK for your careful consideration of our response. We are happy to discuss and clarify our points further!
1. Regarding the method contribution
We find it important to point out that we are not just applying off-the-shelf methods to weather forecasting, performing a purely empirical study. As all deep learning papers we build upon established methods, but combine these in novel ways motivated by the application. In the paper we contribute details all the way from training objectives to the internal workings of GNN layers. Hence we think our contribution is also methodological, with a focus on making the method work for weather forecasting.
2. Regarding scaling to higher spatial resolutions
It is true that we limit the study to 1.5° resolution data for computational reasons. As we are sure the reviewer is aware, but we would like to clarify here for completeness, the computational requirements for training models on 0.25° data are substantial. Already now we use 8 high-end GPUs for a week in order to train one model. For comparison, the original training of GraphCast on 0.25° data took 4 weeks on 32 devices. By considering this and our training times for GraphCast* and Graph-EFM, we can get a rough estimate that training a 0.25° version of Graph-EFM would take more than 6 months using the same 8 GPUs.
In our opinion it is crucial that new ideas, models, and methods can be proposed and evaluated at scales manageable by academic researchers who do not have access to the same computational and engineering resources as a company like DeepMind. This allows for much faster advancements in machine learning methodology for the ML weather prediction area and enables for the best ideas to then be integrated and scaled up in high resolution models. Indeed, we believe that the evaluation that we present in the paper on a lower resolution is indicative of the results that one might expect on a higher resolution, and that these results are sufficient to show that the model that we propose is of interest to the research community.
3. Clarification regarding GenCast comparison
We would like to reiterate that for GraphCast there is code openly available, making it much more feasible to implement and retrain. While the GraphCast* model is implemented in our codebase, we still rely on the GraphCast code for parts of the global model. It is of course not impossible to reimplement and retrain GenCast, but doing this without access to the implementation or detailed documentation would risk getting important details wrong, leading to unfair comparisons.
4. Regarding the evaluation of LAM models
The perfect emulator of a system should be one that gives exactly the same output as the system. In this way it makes sense to evaluate a MEPS emulator by comparing to what the MEPS system outputs (the forecasts). However, we do agree with your point that this is not the full story here. As the MEPS system itself has some error w.r.t. the true weather, the error of the emulator should be considered in this context. The reported error of the emulator (compared to MEPS forecasts) can in this way be seen as an upper bound on how much worse its performance is compared to MEPS (on observations).
While we do not have data for synoptic stations readily available at the moment to run such an evaluation, one can use the values reported in [37] to put the errors into some context. Figure 9a in [37] shows the MAE of 2 m temperature for different months in 2014 and 2015. The corresponding MAE, when comparing Graph-EFM to the MEPS forecasts in the test set, is 0.61 (and similar for other models). Note that we have reported RMSEs rather than MAEs in the paper, but we have also evaluated the MAE values. An added error of 0.61 is not negligible, but it is comparable to the difference between IFS and MEPS over Norway in the figure. Now, if one considers trading off the superior MEPS forecasts for a model with an error similar to IFS over the region, but that runs in seconds and can create arbitrarily large ensembles, this has some clear usefulness. This comparison comes with a whole bag of caveats (especially since we are comparing values for different years), but the point is mainly to give some example of how the magnitude of emulation errors relates to model errors on observations.
(See continuation in answer below) | Summary: This paper introduces a new method for predicting weather using advanced deep learning models. The approach, called Graph-EFM, improves accuracy and better handles uncertainties in weather forecasts. It uses a 1.5° version of ERA5, making weather predictions more reliable and useful for real-world applications.
Strengths: The paper's strengths include the innovative use of Graph-EFM for accurate probabilistic weather forecasting, detailed experiments on large datasets, and clear presentation of methods. Graph-EFM significantly enhances uncertainty estimation and forecast reliability, adding value to both research and practical weather prediction applications.
Weaknesses: What happens if 0.25° ERA5 is used?
Technical Quality: 4
Clarity: 4
Questions for Authors: I would like to see some figures during extreme events, e.g. cyclones like Yaku.
Can the model deal with higher resolution data?
Or ERA6 when available?
Or more localised higher resolution data?
Do the results change a lot if it's only trained from 1980 onwards?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Not many, this is a fantastic piece of work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 568u for useful comments. See our response below:
1. I would like to see some figures during extreme events, e.g. cyclones like Yaku.
We have now included such a case study for Hurricane Laura, that can be found in the global author rebuttal.
2. About higher resolution data and ERA6
There is indeed nothing that technically prevents us from applying these methods to higher resolution data. Some minor adaptations have to be done to the exact graph structures used, but the overall framework is directly applicable. The method should scale well to even higher resolutions, although naturally with an increasing memory requirement. Our choice to focus on 1.5° data is mainly due to the computational needs.
Assuming that ERA6 will follow a similar format as ERA5, although higher resolution, there is no reason to believe our methods would not be applicable also to that dataset.
Regarding higher resolution localized data, note that the MEPS data is exactly an example of this. The 10 km spatial resolution of this data is far higher than the 1.5° ERA5 data for this area (spatial resolution $\approx$ 167 x 76 km) and even higher than 0.25° ERA5 (spatial resolution $\approx$ 28 x 13 km). Also for limited area modeling, we expect the method to be readily applicable to data at even higher resolutions.
3. Do the results change a lot if it's only trained from 1980 onwards?
While we have not had the time and resources to train such a model now, we would not expect substantial differences in the results from using only data from 1980 onwards. We opted for using as much data as possible, to make sure the model was trained on a set of weather scenarios as diverse as possible. However, in the GraphCast paper it was reported that including data from before 1980 had only minor impact on model performance [1]. Due to variations in climate one might even expect an improvement in performance when considering only more recent data, as the climate in 2020 would be more similar to that of 1980-2017 than that of 1959-1979. Using only more recent years would thus create a smaller shift in the data distribution between training and testing.
[1] R. Lam et al. Learning skillful medium-range global weather forecasting. Science, 2023. | Summary: The authors propose a graph-based ensemble forecasting model (Graph-EFM) to provide weather prediction within a hierarchical GNN framework. They use a hierarchical mesh graph to handle the challenges of capturing processes unfolding over different spatial scales and modeling the uncertainty in this chaotic system. Graph-EFM provides probabilistic weather forecasting with the benefit of capturing forecast uncertainty. The experimental results show the effectiveness and advantages of Graph-EFM compared to other deterministic models.
Strengths: The hierarchical mesh graph provides a reasonable idea to handle different spatial scales for weather forecasting, which could inspire other researchers to handle problems in different domains.
The spatial dependencies are considered and handled within GNN layers.
Using an ensemble-based model can capture the uncertainty of the weather system.
Weaknesses: In Figure 3, it seems like the selected ensemble members vary a lot; how close is your forecast to the ground truth? It would also help to explain a little of the underlying meaning of the measures in Tables 1 & 2 in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The authors could better explain the underlying meaning of the measures, RMSE, CRPS, and SpSkR, for the results of weather forecasting. Basically, I would like to ask the authors to show how close their Graph-EFM's forecast is to the ground truth weather.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer LjpL for useful comments. See our response below:
1. In Figure 3, it seems like the selected ensemble members vary a lot, and how close is your forecast to the ground truth?
Note that Figure 3 shows the forecasts for 10 days in the future. At such lead times there is indeed a lot of uncertainty in any forecast that can be produced. In Figure 3 we want to show that the ensemble forecast does indeed capture this variability by producing varying ensemble members that each still represent a realistic weather state. The closest forecast (in terms of RMSE) to the ground truth is instead achieved by the ensemble mean. But as expected, due to the large variability at 10 days lead time, this ensemble mean is blurry and not a realistic weather state. We can however still see that some of the large-scale features present in the ground truth are present also in this prediction.
2. The authors could better explain the underlying meaning of measures, RMSE, CRPS, and SpSkR, for the results of weather forecast.
We agree that the presentation of these metrics in the main part of the paper is quite short. Due to space restrictions we chose to move the full definitions and descriptions of these metrics to appendix D. We encourage anyone unfamiliar with these metrics in the context of weather forecasting to read also this appendix. To give a short, more high-level explanation of these metrics in the weather forecasting context:
**RMSE** measures the average deviation of the forecast from the ground truth data. For probabilistic models, which produce forecasts as samples from a distribution, we compute the RMSE using a forecast representing the mean of this distribution. This is because the RMSE is minimized by predicting the mean.
**CRPS** is a probabilistic metric, and measures how well the ground truth value is captured by the distribution specified by the model. In this case this corresponds to how well the ground truth weather is captured by the probability distribution over possible future weather states specified by the model. One useful interpretation of the CRPS is that it measures the difference between the Cumulative Distribution Function (CDF) of the model distribution and the CDF of a Dirac distribution centered at the ground truth value. One can compute a version of CRPS also for deterministic models, in which case the model distribution also becomes a Dirac distribution, but centered at the predicted value. The CRPS then reduces to the Mean Absolute Error (MAE), and this is indeed what we report for the deterministic models in our paper. CRPS is thus useful as a metric that can compare both deterministic and probabilistic forecasts. See [1] for more about CRPS.
**SpSkR** measures how calibrated the uncertainty of the model is. At SpSkR = 1 the ensemble forecasts are exchangeable with the ground truth, meaning that they exhibit similar levels of variability. This means that we can accurately use the ensemble spread as an indicator for forecast uncertainty. Each forecast will necessarily have some error, and a SpSkR close to 1 indicates that the model expresses the uncertainty about this error correctly.
All the metrics are computed for each variable and lead time separately, but spatially averaged for all grid points.
[1] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 2007.
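To make these descriptions more concrete, here is a minimal numerical sketch (our own illustration, not the exact estimators of Appendix D; the fair-CRPS form and the particular spread-skill definition used here are common choices and should be read as assumptions) computing all three metrics for a synthetic, well-calibrated ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: M ensemble members and a "ground truth", all drawn from
# the same distribution around a shared center, over G grid points.
M, G = 50, 200
center = rng.normal(size=G)
truth = center + rng.normal(size=G)
ensemble = center + rng.normal(size=(M, G))

# RMSE of the ensemble mean (RMSE is minimized by predicting the mean).
ens_mean = ensemble.mean(axis=0)
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))

# Fair (unbiased) ensemble CRPS estimator, averaged over grid points:
# CRPS = mean_i |x_i - y| - sum_{i,j} |x_i - x_j| / (2 M (M - 1))
term1 = np.mean(np.abs(ensemble - truth), axis=0)
term2 = np.abs(ensemble[:, None, :] - ensemble[None, :, :]).sum(axis=(0, 1))
crps = np.mean(term1 - term2 / (2 * M * (M - 1)))

# Spread-skill ratio: ensemble spread relative to the skill (RMSE) of the
# ensemble mean; a value close to 1 indicates a calibrated ensemble.
spread = np.sqrt(np.mean(ensemble.var(axis=0, ddof=1)))
spskr = spread / rmse

print(f"RMSE={rmse:.2f}  CRPS={crps:.2f}  SpSkR={spskr:.2f}")
```

For this calibrated toy ensemble the SpSkR comes out close to 1 and the CRPS is below the RMSE, matching the qualitative behavior described above.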
---
Rebuttal Comment 1.1:
Comment: Thanks for your input! | Summary: The paper proposes Graph-EFM, a method that combines a hierarchical multi-scale graph neural network with a variational objective for probabilistic weather forecasting. The method performs on par with Graphcast on deterministic metrics with the extra benefit of uncertainty estimation.
Strengths: - The paper is well-written and easy to follow.
- The paper tackles probabilistic weather forecasting, which is an important problem in the field.
- The proposed method is intuitive and makes sense. Overall, generative modeling is a potential direction for probabilistic weather forecasting. People have used GANs and diffusion, so a latent variable model is a natural addition to the literature.
- The performance looks promising, and it is more efficient than existing methods using diffusion.
Weaknesses: - The authors should replace Table 1 with a line graph figure instead, as it allows comparison across different variables and lead times.
- Please see my questions below.
Technical Quality: 3
Clarity: 2
Questions for Authors: - How important do you think the architecture is to the performance versus the objective function? The proposed architecture has an intuition similar to UNet, i.e., multi-scale features and the lowest layer can be used to parameterize the hidden variable.
- Diffusion models are considered the best family of models for generative modeling, surpassing GANs and latent variable models for other fields such as computer vision. What is the reason to believe latent variable models are the way to go for probabilistic weather forecasting?
- Why does the paper compare with Graphcast+SWAG but not the perturbed version of Graphcast and Gencast?
- How does the performance vary w.r.t. the number of ensemble samples? Given that sampling from a latent variable is fast, have the authors tried using more ensemble members?
- Is there an explanation why Graphcast is better than Graph-FM, but Graph-EFM is better than Graph-EFM (ms)?
- Why in LAM, the Graphcast architecture is doing better than the proposed architecture?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer i1dL for useful comments. See our response below:
1. The authors should replace Table 1 with a line graph figure instead, as it allows comparison across different variables and lead times.
Given the limited space in the main paper we did not find a way to fit line plots for all metrics, multiple variables and both datasets. We found that shrinking the line plots to make this feasible made them hard to read, so instead opted for Table 1 and keeping all line plots in the appendix. If we have the space in the camera ready version we would be happy to bring back some line plots also to the main paper.
2. How important do you think the architecture is to the performance versus the objective function?
Both of these contributions are indeed important. The objective function, including the CRPS fine-tuning, is important for learning a calibrated probabilistic model. The hierarchical architecture also fills an important role in Graph-EFM, spreading out the stochasticity from the latent random variable over the forecast region. Since Graph-EFM (ms) uses the same objective function, but not the hierarchical architecture, it can be viewed as an ablation of this part of the model. The poor performance of Graph-EFM (ms) confirms the importance of both of these components in order to learn a useful probabilistic model.
3. What is the reason to believe latent variable models are the way to go for probabilistic weather forecasting? (as opposed to diffusion models)
While we believe there is a place also for diffusion models in weather forecasting, one strong reason to explore latent variable models is the difference in sampling speed. Being able to quickly sample large ensembles is a desirable property of probabilistic weather forecasting models. As diffusion models require multiple forward passes through a neural network to produce one sample, this can be a slow process. This issue is aggravated by the common practice of producing forecasts auto-regressively, meaning that each forecast time step must be sampled sequentially. In Graph-EFM this only requires one pass through the network per time step. However, speeding up diffusion model sampling is a very active area of research. We view this as another useful research direction also for weather forecasting, complementary to our approach.
4. Why does the paper compare with Graphcast+SWAG but not the perturbed version of Graphcast and Gencast?
Regarding GenCast comparison see the global author rebuttal.
We assume here that the perturbed version of GraphCast refers to the one considered in the GenCast paper, where GraphCast is initialized from perturbed initial conditions. We generally consider ensemble forecasting methods that start from multiple (possibly perturbed) initial conditions to be outside of our scope, and thus do not include perturbed GraphCast as a baseline. As outlined in Section 3.1, we define our problem setting as modeling the distribution of future weather states conditioned on one specific set of initial conditions. The reason for this is that creating ensembles based only on multiple initial conditions typically limits the ensemble size or requires the use of ad-hoc perturbations without any guarantees of modeling the correct distribution.
5. How does the performance vary w.r.t. the number of ensemble samples?
See the “global” author rebuttal for an investigation regarding the impact of the ensemble size.
6. Why in LAM, the Graphcast architecture is doing better than the proposed architecture?
There is more nuance to comparing GraphCast* and Graph-FM than saying that one of them is strictly better in any of the experiments. Which of the two performs best depends on what variable and lead time is considered, as can be seen from the full results in Appendix H and I. However, it is true that for longer lead times Graph-FM overall performs worse than GraphCast* in LAM. This is likely related to the structure of the problem in the LAM setting, which features boundary forcing and different length scales than the global case. We would like to emphasize that we consider the probabilistic Graph-EFM model the main contribution and focus of the paper, and Graph-FM mainly a step in getting there. While further investigation into the possible failure modes of Graph-FM would be interesting, we prefer this not to be the focus of this work.
7. Is there an explanation why Graphcast is better than Graph-FM, but Graph-EFM is better than Graph-EFM (ms)?
As noted above, the difference between Graphcast* and Graph-FM has some nuance. Still, what should be remembered here is that the graph partly serves a different purpose in the deterministic and probabilistic models. While in the deterministic model it is used to spread information from the input and produce the best possible forecast for the next step, in the probabilistic models the graph has the key role of augmenting the sampled latent variables into a coherent realization of the weather state. In particular, as we associate the latent variable with the mesh nodes, the stochasticity enters in different ways when using the multiscale and hierarchical graphs. Since the multiscale graph does not offer any further dimensionality reduction than mapping from the grid to the mesh, it also means that the latent variable $Z^t$ will have a higher total dimensionality ($|\mathcal{V}_1| \times d_z$) in Graph-EFM (ms) than when using the hierarchical graph in Graph-EFM ($|\mathcal{V}_L| \times d_z$). For these reasons one should not assume that any performance differences between using the multiscale or hierarchical graphs in deterministic models straightforwardly translate to the probabilistic case.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions. I would still love to see a comparison of Graph-EFM with perturbed Graphcast. Regardless of the formulation of the problem, IC perturbations are always a common technique to achieve uncertainty estimation, and have been used in numerical methods, hybrid ML methods like NeuralGCM, and deep learning. Comparing with such a simple baseline will help shed light on the performance gain of Graph-EFM.
---
Reply to Comment 1.1.1:
Comment: We agree that this could be interesting to investigate in more detail. However, we believe that it should in that case also involve a version of Graph-EFM with initial condition perturbation, to understand the effect of different sources of randomness on the final uncertainty estimates and to ensure that we compare "apples and apples". Note that there is nothing that prevents us from combining the latent variable formulation of Graph-EFM with initial condition perturbation, but in our view these mechanisms conceptually correspond to different sources of uncertainty: uncertainty about the initial conditions, and uncertainty in the modelled dynamics even in the (hypothetical) case of perfectly known initial conditions. A version of Graph-EFM with initial condition perturbations should additionally be trained with the perturbations present in order to learn the correct distribution. We will consider a more in-depth investigation into this matter for future work. | Rebuttal 1:
Rebuttal: We thank all reviewers for valuable comments and questions that we are sure will improve the overall quality of our paper. We have responded to the points raised by each reviewer separately, but also include this general rebuttal with a few points that we think could be relevant to all.
### Extreme Weather Case Study
Multiple reviewers requested a case study on ensemble forecasting for extreme weather events. We agree that this would be a valuable addition to the paper, as this is one important motivation for the work.
We include here a case study for Hurricane Laura, which will be added to the paper. In August 2020, Laura developed in the Atlantic Ocean and reached hurricane strength in the Gulf of Mexico, eventually making landfall in Louisiana and causing major damage [1]. We study here forecasts for 2020-08-27T12 UTC, which is 6 hours after the hurricane hit land. All forecasts are for this exact time point, but initialized a varying number of days before. Here we run 50-member ensemble forecasts using Graph-EFM and deterministic forecasts using the GraphCast* and Graph-FM models. Figure 1 in the attached pdf shows 10 m wind speeds in ERA5 and the forecasts. For Graph-EFM we plot both forecasts from randomly sampled ensemble members and a cherry-picked best member that was deemed to most closely match ERA5. Note that the 1.5° resolution that we work with makes determining similarity or exact positions somewhat challenging.
We see in Figure 1 that at 7 days lead time there is great uncertainty, and the deterministic models do not show the hurricane at all. In the ensemble forecast from Graph-EFM, however, there already exist members indicating the possibility of a hurricane making landfall a week ahead (see for example the cherry-picked “Best member”). While the ensemble includes many possible scenarios, a total of 7 members show the development of a hurricane in the area. Having information about such possible scenarios a long time ahead allows for planning and readying disaster response efforts that might be needed. Note that discovering these scenarios is only possible through an ensemble forecast, as the deterministic models do not indicate such an event. At 5 days lead time all models indicate the development of a hurricane. While the deterministic models do a good job here, they place the landfall location slightly too far eastward. The ensemble members from Graph-EFM, however, show a range of different positions for the hurricane, indicating the uncertainty in the landfall location. At 3 days ahead, 42 out of 50 ensemble members show the hurricane making landfall, but still with some uncertainty about the exact location. At 1 day lead time all models give an accurate forecast of the position of the hurricane. Apart from position, it is also interesting to consider how the models capture the intensity in terms of wind speeds. At 1 day ahead the deterministic models somewhat underestimate the wind speed. The Graph-EFM ensemble shows a range of possible values, indicating the uncertainty in the exact wind intensity. Overall, this study exemplifies how a large ensemble forecast from a machine learning model can be used to discover possible extreme weather scenarios at long lead times and to uncover the uncertainties associated with them.
[1] Hurricane Laura 2020. National Weather Service. https://www.weather.gov/lch/2020Laura
### Impact of Ensemble Size on Results
As pointed out by reviewer i1dL, it would be interesting to know how the performance of the model, in terms of metric values, varies when sampling different numbers of ensemble members. To investigate this we ran the evaluation of Graph-EFM with 5-80 members for the global data and 5-100 for the limited area model.
Results for a selection of variables and metrics are shown in Figure 2 in the attached pdf. As expected, the RMSE of the ensemble mean decreases when sampling more members. However, already when sampling 20 or 25 members the results are fairly close to the full ensemble. For SpSkR the differences are even smaller. As the CRPS is a property of the distribution of the model forecast, its true value does not depend on the number of samples drawn. In practice we compute CRPS using an unbiased estimator, and the variance of this estimator decreases with ensemble size. When averaged spatially and over the whole test set, however, we do not see any difference in CRPS for different ensemble sizes. All these trends hold consistently for all variables in both the global and limited area datasets. We intend to add the results of this investigation to the paper.
Given that any improvements to metrics saturate, we would not expect the results to meaningfully change from sampling more than 80/100 members. It should however be noted that the motivation for sampling very large ensembles is mainly not to improve on metrics such as these. More important motivations for large ensembles include estimating probabilities of rare events or studying different possible scenarios of extreme weather.
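The rare-event use case can be illustrated with a minimal sketch (synthetic wind values, chosen to mirror the 7-of-50 hurricane members in the case study above; the 33 m/s hurricane-force threshold and all numbers here are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 50-member ensemble of peak 10 m wind speed over a region (m/s):
# 43 members stay below hurricane force, 7 exceed it, mirroring the case study.
members = np.concatenate([
    rng.uniform(10.0, 30.0, size=43),  # no hurricane in these members
    rng.uniform(34.0, 45.0, size=7),   # hurricane-force winds (>= 33 m/s)
])

# The ensemble-based probability of the event is simply the member fraction.
p_event = np.mean(members >= 33.0)
print(f"P(hurricane-force winds) = {p_event:.2f}")  # 7/50 = 0.14
```

With larger ensembles, such exceedance probabilities can be estimated for rarer events and with lower variance, which is one of the main motivations stated above.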
### Comparison to GenCast
The reason for not comparing against GenCast is the lack of any openly available code, pre-trained models or produced forecasts (at the time of writing). Such a comparison would require reimplementing and retraining a GenCast model from scratch, which we deem outside the scope of this work. Note that doing this for GraphCast was much more feasible, as there is code openly available and the model is more similar to ours.
Pdf: /pdf/164818a6ba792d92bbc1fda124d15f133330fa63.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing In-Context Learning Performance with just SVD-Based Weight Pruning: A Theoretical Perspective | Accept (poster) | Summary: The authors provide a theoretical perspective on the stability of in-context learning via implicit gradient descent trajectories. Ultimately, the analysis suggests that weight matrices with high condition numbers in the deeper layers can be pruned in order to achieve a model which performs better on ICL tasks.
Strengths: - In context learning is important, and something which has not been studied as deeply as other topics of ML due to the recent rise of transformers and ICL in general.
- The method intuitively makes sense and is something which can be conditionally tuned after training based on specific tasks if a validation set is available.
Weaknesses: - It would be good to define deep and shallow, as these are subjective terms depending on the reference frame.
- Figure 1 caption says: "We operate on the whole of MLP or ATTN." What does this mean?
- If as figure 1 states, you can clip 99.5% of the original weights, what happens if you just drop that layer entirely? Recent work has shown that the deeper layers can be completely dropped without much effect. [1]
- I cannot see much benefit gained from pruning part of the weights with SVD when it seems that in nearly all cases, the benefit can be had by dropping the layer entirely.
- Is the mask on L138 supposed to represent a causal mask? If so, I do not think the notation is correct, as the Identity matrix would only have $N$ binary values which is much less than is needed for a causal mask.
- How can equation 1 and 2 use the same mask?
- Example 1 appears to be incorrect:
- There is no parentheses around $W_{V_r}^k + \delta_V h_i^{k-1}$ in the first line.
- The triangle inequality seems to say that line 2 $\geq$ line 1
- Given the above, I do not see what conclusion can be drawn from this equation.
- Have I missed something here?
Technical Quality: 3
Clarity: 2
Questions for Authors: - I don't believe the clipping process was adequately explained. Once the SVD operation is done, do you clip starting from the largest singular value? or starting from the smallest?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough and informative review. Below we address the specific questions.
> **W1:** It would be good to define deep and shallow, as these are subjective terms depending on the reference frame.
> **A:** Indeed, there is no universally accepted definition of the terms "deep" and "shallow," as their interpretation can be subjective and dependent on the reference frame (e.g., model size). Intuitively, in this paper our definition is similar to that of work [1]: “shallow” layers refer to those closer to the input, while “deep” layers are closer to the output. In Figure 1 and Figure 2 (**Section 2**), "shallow" typically denotes the first few layers, and "deep" denotes the last few layers of the network.
>
>
> [1] Lean Wang et al. Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning. EMNLP 2023.
> **W2:** Figure 1 caption says: "We operate on the whole of MLP or ATTN." What does this mean?
> **A:** It refers to pruning all the weight matrices in these modules. A standard Transformer block consists of an attention (ATTN) layer and a feed-forward (MLP) layer. As mentioned in **Section 2**, the attention (ATTN) layers consist of key, query, value, and output matrices in both GPT-J-6B and LLAMA2-7B. The MLP layers in GPT-J-6B include input and output matrices, while in LLAMA2-7B, they are composed of up, gate, and down matrices.
>
> **W3:** If as figure 1 states, you can clip 99.5% of the original weights, what happens if you just drop that layer entirely? Recent work has shown that the deeper layers can be completely dropped without much effect. [1]
>
> - I cannot see much benefit gained from pruning part of the weights with SVD when it seems that the in nearly all cases, the benefit can be had by dropping the layer entirely.
> **A:** Thank you for your good question. To provide a thorough and accurate answer, could you please share the reference for [1] that you mentioned? Understanding the specifics of this recent work would help us address your inquiry more effectively.
>
> Regarding your question, I have two key points to consider:
>
> (i) Even though we can "clip 99.5% of the original weights," the remaining weights are still significant and play a crucial role. This is because the remaining 0.5% corresponds to the larger singular values after performing SVD, which are vital for maintaining the model's performance.
>
> (ii) As stated in **Theorem 1**, from the perspective of ICL (In-Context Learning) and gradient descent, each attention layer corresponds to an implicit gradient descent step (**L** layers with Implicit Gradient Descent Trajectories $[\Delta W_t]_1^L$/$[G_t]_t^L$).
>
> Therefore, if we just drop the entire layer, it will result in two main issues: (a) It will become challenging to adjust the weights for another downstream task. (b) The implicit gradient update will be reduced by one step, generally leading to a decline in model performance.
>
> Thank you again for your understanding.
> **W4:** Is the mask on L138 supposed to represent a causal mask? If so, I do not think the notation is correct, as the Identity matrix would only have $N$ binary values which is much less than is needed for a causal mask.
>
> **W5:** How can equation 1 and 2 use the same mask?
> **A:** Our notation here follows the related work [1], which explains: Note that the prompt is asymmetric since the label for $x_{N+1}$ is excluded from the input. To reflect this asymmetric structure, the mask matrix $M$ is included in the attention.
>
> More specifically, if you pay attention to the $(N+1)$-th item, L138 is supposed to represent a causal mask.
>
> (For $H\in \mathbb{R}^{(d_{out}+d_{in})\times(N+1)}$, $HM=(h_1,..,h_N,h_{N+1})M=(h_1,..,h_N,0)=H_s$)
>
> Besides, this mask method is used in GLM training [2]: Part A tokens can attend to each other, but cannot attend to any tokens in B. Part B tokens can attend to Part A and antecedents in B, but cannot attend to any subsequent tokens in B. So it is reasonable for equations 1 and 2 to use the same mask.
>
> [1] Kwangjun Ahn et al. Transformers learn to implement preconditioned gradient descent for in-context learning. NeurIPS 2023.
>
> [2] Zhengxiao Du et al. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. ACL 2022.
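As a quick numerical sanity check of this zeroing behavior (an illustrative sketch with arbitrary toy dimensions), the mask $M$ has an identity block on its first $N$ diagonal entries and a zero in the last, so right-multiplying keeps $h_1,..,h_N$ and zeroes the query column:

```python
import numpy as np

N, d = 4, 3  # N labeled examples, embedding dimension d (arbitrary here)

# Columns of H are the tokens h_1, ..., h_N, h_{N+1} (the query is last).
H = np.arange(1.0, d * (N + 1) + 1).reshape(d, N + 1)

# Mask M = [[I_N, 0], [0, 0]]: identity on the first N diagonal entries,
# zero in the last, as in Ahn et al.
M = np.eye(N + 1)
M[N, N] = 0.0

H_s = H @ M
assert np.array_equal(H_s[:, :N], H[:, :N])  # h_1..h_N are untouched
assert np.all(H_s[:, N] == 0.0)              # the query column is zeroed
```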
> **W6:** Example 1 appears to be incorrect:
>
> - There is no parentheses around $W^k_{V_r}+ \delta_Vh_i^{k−1}$ in the first line.
> - The triangle inequality seems to say that line 2 ≥ line 1
> **A:** Thank you for pointing out the typo (parentheses). We appreciate your attention to detail and have made the necessary corrections. However, this minor typo does not affect the final conclusion. A detailed analysis is as follows:
>
> Regarding the triangle inequality: it is true that $||\mathbf{a}||_2+||\mathbf{b}||_2\geq ||\mathbf{a}+\mathbf{b}||_2$ for any vectors $\mathbf{a},\mathbf{b}$ of the same dimension.
>
> **But in this Example, we did not use the triangle inequality**:
>
> $||W^k\_{V\_r}h\_i^{k−1}+\delta\_Vh\_i^{k−1}||\_2^2=||W^k\_{V\_r}h\_i^{k−1}||\_2^2+2(W^k\_{V\_r}h\_i^{k−1})^T(\delta\_Vh\_i^{k−1})+||\delta\_Vh\_i^{k−1}||\_2^2=||W^k\_{V\_r}h\_i^{k−1}||\_2^2+||\delta\_Vh\_i^{k−1}||\_2^2$.
>
> Note that $W^k_{V_r}=U_{:r}\Sigma_{:r}V_{:r}^T$ and $\delta_V=U_{r:}\Sigma_{r:}V_{r:}^T$, where the columns of $U_{:r}$ and $U_{r:}$ are **mutually orthogonal** (a property of the SVD).
>
> Therefore, $(W^k_{V_r}h_i^{k−1})^T(\delta_Vh_i^{k−1})=(h_i^{k−1})^TV_{:r}\Sigma_{:r}(U_{:r}^TU_{r:})\Sigma_{r:}V_{r:}^Th_i^{k−1}=0$, line 2 = line 1.
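The decomposition argument above is easy to check numerically. The following is a minimal sketch; dimensions, the seed, and variable names (`W_V`, `W_r`, `delta`) are ours and purely illustrative, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4

# Illustrative weight matrix and hidden state.
W_V = rng.standard_normal((d, d))
h = rng.standard_normal(d)

# Full SVD, split into the kept rank-r part and the clipped remainder.
U, S, Vt = np.linalg.svd(W_V)
W_r   = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]   # kept: largest r singular values
delta = U[:, r:] @ np.diag(S[r:]) @ Vt[r:, :]   # clipped remainder

# Columns of U are orthonormal, so the cross term vanishes ...
cross = (W_r @ h) @ (delta @ h)
print(abs(cross) < 1e-8)  # True

# ... and the squared norms decompose exactly (a Pythagorean identity),
# so no triangle inequality is needed.
lhs = np.linalg.norm(W_r @ h + delta @ h) ** 2
rhs = np.linalg.norm(W_r @ h) ** 2 + np.linalg.norm(delta @ h) ** 2
print(np.isclose(lhs, rhs))  # True
```

This mirrors the rebuttal's point: equality (not inequality) holds because the kept and clipped components live in orthogonal column spaces.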
> **Q1:** I don't believe the clipping process was adequately explained. Once the SVD operation is done, do you clip starting from the largest singular value? or starting from the smallest?
> **A:** As mentioned in **Section 2**, "The optimal rank-r approximation and SVD," we retain the components corresponding to the largest r singular values. Therefore, the clipping process begins with the smallest singular values.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the response. I apologize about the missing reference, it must have erroneously been excluded. The reference is:
[1] Gromov, A., Tirumala, K., Shapourian, H., Glorioso, P., & Roberts, D. A. (2024). The unreasonable ineffectiveness of the deeper layers. arXiv preprint arXiv:2403.17887.
I do not understand the following response above:
> (ii) As stated in Theorem 1, from the perspective of ICL (In-Context Learning) and gradient descent, each attention layer corresponds to an implicit gradient descent step (L layers with implicit gradient descent trajectories $[\Delta W_t]_1^L$/$[G_t]_1^L$).
>Therefore, if we just drop the entire layer, it will result in two main issues: (a) It will become challenging to adjust the weights for another downstream task. (b) The implicit gradient update will be reduced by one step, generally leading to a decline in model performance.
How would it be challenging to adjust the weights for another downstream task? Given that many models use stochastic depth, and the above reference suggests that dropping a significant portion of the layers has a minimal effect, it would seem that pruning 99.5% of the weights might be almost identical to just skipping them altogether.
for a), As the weights are merely skipped and not totally deleted from the model, I do not understand how this would affect other tasks directly, as they could still be used in the future if necessary.
for b) this is an assumption that has not been backed up by any data as far as I can tell.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: >Dear Reviewer CqwV,
>
>Thank you for your reply and for resharing the reference.
> Firstly, let's briefly analyze this reference and make some comparisons:
>
> **Abstract:** (you provided) We empirically study a simple **layer-pruning** strategy for popular families of open-weight pretrained LLMs, finding minimal **degradation** of performance on different question-answering benchmarks until after a large fraction (up to half) of the layers are removed. To prune these models, we identify the optimal block of layers to prune by considering similarity across layers; then, **to “heal” the damage, we perform a small amount of finetuning**......
>
> - **Cp1:** The pruning method in the reference you provided is '**layer-based** pruning', while ours is '**weight-based** pruning'.
> - **Cp2:** The pruning method in the reference you provided causes **degradation** in model performance, whereas ours leads to **improvement**.
> - **Cp3:** The pruning method in the reference you provided may require **fine-tuning to “heal” the damage**, while ours is **gradient-free**.
> Next, let's address your questions:
>
> **Q1:** How would it be challenging to adjust the weights for another downstream task?
>
> **Qa:** As the weights are merely skipped and not totally deleted from the model, I do not understand how this would affect other tasks directly, as they could still be used in the future if necessary
> As shown in **Cp1** and **Cp3**, compared to removing layers and fine-tuning, weight-only pruning is easier to recover from and adapt to another task. Moreover, requiring no gradient updates is a significant advantage of pruning that should not be overlooked.
> **Q2:** Given the fact that many models use stochastic depth, and the above reference suggests that dropping a significant portion of the layers has a minimal effect. It would seem that pruning 99.5% of the weights might be almost identical to just skipping them altogether.
>
> **Qb:** this is an assumption that has not been backed up by any data as far as I can tell.
> As shown in **Cp2**, although the above reference suggests that dropping a significant portion of the layers has a minimal effect, **the effect is negative (degradation & needs finetuning to heal the damage)**. In contrast, the pruning method under our theoretical framework **does not require fine-tuning and results in performance improvement**.
>
> Regarding your remark that 'It would seem that pruning 99.5% of the weights might be almost identical to just skipping them altogether':
> Typically, model dimensions number in the thousands (for instance, GPT-J's hidden dimension is 4096). **We sincerely ask: why would the remaining rank after pruning (i.e., $4096 \times 0.5\% > 20$) no longer be important?**
>
>Once again, we thank you for your time and comments. | Summary: This paper investigates the effect of singular value decomposition (SVD)-based weight pruning on the in-context learning (ICL) performance of large language models.
The Authors show that SVD-based pruning can enhance ICL performance, with deeper layers showing more stable improvements.
They provide theoretical analysis to explain these findings, presenting implicit gradient descent trajectories for ICL and deriving generalization bounds.
Based on their insights, they propose a simple algorithm for enhancing ICL inference in downstream tasks.
Strengths: - The Authors provide a theoretical analysis to explain their empirical findings, including the derivation of implicit gradient descent trajectories and generalization bounds for ICL.
- Furthermore, they propose a simple, derivative-free algorithm for enhancing ICL performance in downstream tasks, demonstrating the practical value of their theoretical insights.
Weaknesses: - The theoretical analysis primarily focuses on linear attention, which may not fully capture the complexities of standard Softmax attention used in most transformer models
- The proposed algorithm is derivative-free, but the search for optimal clipping rates may still be computationally expensive for very large models or datasets
- There is a substantial lack of comparison with other pruning methods: the study focuses on SVD-based pruning but doesn't compare it with other pruning techniques, which could provide context for the method's effectiveness
- Poor language, frequent typos, and grammatical errors are significant issues in this paper. This does not help readability, and would likely be a barrier to publication in its current form.
- An essential part of the paper, which is the discussion of related works is not part of the main text. Furthermore, this discussion is prone to criticism. For example, quoting the seminal paper by Frankle and Carbin as an example of low-rank properties of neural networks is clearly misleading. I think that this discussion should be an essential part of the main text, and should also be substantially revised in order to avoid conceptual confusions.
Technical Quality: 2
Clarity: 1
Questions for Authors: I suggest the Authors address 1) the concerns regarding a better framing of the work in the current literature. In particular, there is a growing body of evidence on the resilience of LLMs to pruning (see for example "The Unreasonable Ineffectiveness of the Deeper Layers", Gromov et al, https://arxiv.org/pdf/2403.17887, and references therein), and 2) the quality of the writing, by proofreading it together with a native speaker.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Limitations are discussed in the final section (4).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks so much for your time and insightful comments. Please find our point-by-point response below.
> **W1:** The theoretical analysis primarily focuses on linear attention, which may not fully capture the complexities of standard Softmax attention used in most transformer models
> **A:** Firstly, to facilitate qualitative analysis, our main text primarily focuses on the theoretical aspects of linear attention. This is our first step on this issue, and our goal is to provide insightful conclusions under simplified conditions.
>
> Secondly, a considerable portion of the works also consider from the linear attention to explore the ICL [1,2].
>
> However, we also discuss the standard Softmax attention setting in **Appendix B.2**. Specifically, it can be considered that the mapping function $\phi$, or rather the effect of the **Softmax** function, is to project the original features into a higher-dimensional space to capture more profound features. Subsequently, learning and meta-optimization are conducted under this **new feature space**.
>
> $\hat{h}\_{N+1}=h\_{N+1} + \Delta W\_{icl}^{'}\phi(W\_Q h\_{N+1}),\ \Delta W\_{icl}^{'}=\frac{1}{D^{'}} \left[\sum\_{i=1}^N (W\_VH\_s)\_i \otimes \phi(W\_KH\_s)\_i \right]$
>
> [1] Ekin Akyiurek et al. What learning algorithm is in-context learning? investigations with linear models. ICLR 2023.
>
> [2] Johannes von Oswald et al. Transformers learn in-context by gradient descent. ICML 2023.
> **W2:** The proposed algorithm is derivative-free, but the search for optimal clipping rates may still be computationally expensive for very large models or datasets
> **A:** (1) Our primary objective in this work is to provide a general theoretical framework that reveals the underlying mechanism behind the phenomenon that SVD-based weight pruning can enhance ICL performance. Based on this theory, a simple algorithm is provided to demonstrate the effectiveness of theoretical analysis in guiding experimental procedures.
>
> In **Theorem 2**, the **key insight** is that the expected generalization error can be managed by controlling the norm of $\Delta W_t/G_t$, indicating that the pruning method is not unique. Therefore, some computationally efficient pruning methods could be considered: (a) randomized SVD, (b) magnitude-based pruning [1], and so on.
>
> [1] Wen, W et al. Learning structured sparsity in deep neural networks. NeurIPS 2016.
> **W3:** There is a substantial lack of comparison with other pruning methods: the study focuses on SVD-based pruning but doesn't compare it with other pruning techniques, which could provide context for the method's effectiveness
> **A:** Thanks for your suggestion.
> Similar to our answer to **W2** above, we primarily focus on the theoretical analysis, and the simple algorithm is only used to validate the effectiveness of the theory, so we have not provided comparison methods. It is possible that better pruning methods exist.
>
> For instance, if we use magnitude-based pruning on a matrix $A$, then $||A||_F$ will decrease.
> **W4:** Poor language, frequent typos, and grammatical errors are significant issues in this paper. This does not help readability, and would likely be a barrier to publication in its current form.
> **A:** Thank you for your feedback. We sincerely apologize for the language issues, typos, and grammatical errors present in the paper. Rest assured, we are fully committed to addressing these concerns and will do everything possible to correct the issues before resubmission. Your patience and understanding are greatly appreciated, and we are dedicated to improving the paper to meet the required standards.
> **W5:** An essential part of the paper, which is the discussion of related works is not part of the main text. Furthermore, this discussion is prone to criticism. For example, quoting the seminal paper by Frankle and Carbin as an example of low-rank properties of neural networks is clearly misleading. I think that this discussion should be an essential part of the main text, and should also be substantially revised in order to avoid conceptual confusions.
> **A:** (1) Due to page limitations, we previously omitted the detailed discussion of related works in the main text, but we will include it in future versions. (2) We acknowledge your viewpoint and will make substantial revisions accordingly. Our findings show that pruning does not necessarily worsen performance (Note that performance depends not only on generalization but also on optimization). In some datasets, it may even improve results. This aligns with previous work, which we will discuss in detail in the Related Works section, such as [1]. In future research, we will study the effects of pruning different layers and the performance variations across different tasks.
>
> [1] Pratyusha Sharma et al. The truth is in there: Improving reasoning in language models with layer-selective rank reduction. ICLR 2024.
> **Q1:** I suggest the Authors to address 1) the concerns regarding a better framing of the work in the current literature. In particular, there is a large body of evidence that is growing on the resilience of LLMs to pruning (see for example "The Unreasonable Ineffectiveness of the Deeper Layers", Gromov et al, https://arxiv.org/pdf/2403.17887, and references therein), and 2) the quality of writings by proofreading it together with a native speaker
> **A:** Thanks. We will certainly revise the literature review (Gromov et al., and references therein) and our writing.
>
> However, please let us demonstrate the advantages of our theoretical analysis (pruning weight) compared to the example (remove layer) you provided.
> If we just remove the entire layer, it will result in two main issues: (a) It will become challenging to adjust the weights for another downstream task. (b) The implicit gradient update will be reduced by one step (**Theorem 1**), perhaps leading to a decline in model performance.
---
Rebuttal 2:
Title: Looking forward to your reply
Comment: > Dear Reviewer Nxqr,
>
> We sincerely appreciate your time and effort in reviewing our manuscript and providing valuable feedback.
>
> As the author-reviewer discussion phase nears completion, we wish to confirm whether our responses have effectively addressed your concerns. We provided detailed responses to your concerns a few days ago and hope they have adequately resolved any issues. If you require further clarification or have any additional concerns, please do not hesitate to contact us. We are more than willing to continue our communication with you.
>
> Best regards.
---
Rebuttal Comment 2.1:
Title: Reviewer reply
Comment: Dear Authors,
I apologize for being late in my reply.
I sincerely thank the Authors for their clarification. After going through the discussion with the other reviewers, and also taking into account the level of confidence of my review, I decided to raise my score.
---
Reply to Comment 2.1.1:
Title: Thank you and looking forward to your reconfirmation of the score
Comment: > Dear Reviewer Nxqr,
>
> Thank you sincerely for your time and valuable feedback. We are grateful for your willingness to raise the score following our discussion. However, we noticed that the revised score appears to be recorded as **1 point**. Could there have been an error in this update? We would appreciate your confirmation on this matter.
>
> Best regards. | Summary: This paper demonstrates that (1) SVD-based weight pruning can sometimes achieve better in-context learning performance, and (2) pruning weights in deeper layers often results in more stable outcomes compared to shallow layers. The authors explain their findings through theoretical analysis and propose an intuitive matrix condition number-based weight pruning algorithm to achieve both stable and improved performance.
Strengths: This work conducts an in-depth analysis to explain the "stability" of transformer weight pruning across different layers. The framework is interesting and validated through experiments. Moreover, the theoretical analysis can be applied to design new algorithms, such as Algorithm 1 in this paper.
Weaknesses: Despite adopting various simplifications (such as using a linear attention transformer without MLP and layer normalization, treating each in-context example as a single vector, implementing attention masks for query tokens, and using meta-gradients for in-context learning) in their theoretical analysis, the results are still limited. They only explain why SVD-based weight pruning can achieve "stable" performance, leaving the more intriguing question of why transformers can achieve "better" performance with pruning unclear. Additionally, even with detailed hyperparameter tuning, the effectiveness of Algorithm 1 remains uncertain. Further details are provided in the questions section.
Technical Quality: 3
Clarity: 2
Questions for Authors: [1] How should Theorem 2 be interpreted? It seems to provide only a weak upper bound on in-context learning stability. Can this also be applied to empirical risk, such as $L_{H_S}$ and $L_\mu$?
[2] Theorem 2 gives the upper bound for expected generalization error. If we fix N in the constructed transformer and reduce the number of in-context examples to $N'$ in the input sequence, then we can find that while the factor $R^2/N$ remains unchanged in Theorem 2, $\Delta W_t$ will change from $\sum_{i=1}^{N}\cdots$ to $\sum_{i=1}^{N'}\cdots$, where $N' < N$. Based on the analysis across different layers, could this mean that fewer context examples are more robust for SVD weight pruning?
[3] Note that in fig-3, large matrix condition numbers can exist in some modules of shallow layers, such as the attention key (K) in GPT-J-6B . What would be the effect of pruning only a single module in a shallow layer (e.g., the key projection matrix) rather than pruning the entire attention module (including Q, K, and V)?
[4] In C.5, it's noted that the optimal clipping rate is sometimes very small and varies across datasets. What would happen if we apply the same clipping rate (e.g., 0.95) as used in SST-2 to other datasets?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weaknesses and Questions. Besides, there seem to be many mistakes on pages 7 and 8, for example, the equation between lines 228 and 229, and the equations about the F-norm in lines 230-232. These inaccuracies cast doubt on the overall reliability of the paper's findings. If there are any misunderstandings on my part, please point them out, and I will reconsider my evaluation of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, and for your thoughtful comments and questions. They will certainly help improve the revised paper. Please see our response below.
> **A to Weaknesses:** Thanks to Reviewer Zh6a for raising the issue, which gives us the opportunity to clarify this matter.
>
> (1) Our goal is to provide insightful conclusions under simplified conditions. However, we also discuss the standard Softmax attention setting in **Appendix B.2** and feed-forward (MLP) layers in **Appendix B.3**.
>
> (2) The goal of this paper is to study ICL from a generalization perspective. According to the traditional statistical learning viewpoint, performance can be defined by the sum of optimization error and generalization error. This paper reveals that SVD decomposition is beneficial for generalization error, but does not analyze its impact on optimization. Given that theoretical analysis of ICL is rapidly developing, the impact of weight pruning on optimization has not yet been fully explored. We will consider this aspect in our future work.
> **A to Q1** (How should Theorem 2 be interpreted? Can this also be applied to empirical risk ?):
>
> Expected generalization error (**Theorem 2**) = population risk ($L\_{\mu}$) - empirical risk ($L\_{H\_s}$).
>
> More specifically, in **Theorem 2**, clipping weights controls the F-norm of the implicit gradients ($[\Delta W\_t]\_1^L/[G\_t]\_1^L$), which reduces the expected generalization error; meanwhile, we can evaluate the empirical risk ($L\_{H\_s}$) by observing the model's performance on the validation set. Once the generalization error is determined, it is at least possible to estimate the population risk ($L_{\mu}$). Thus, the most challenging aspect is addressing the generalization error. A direct theoretical analysis of the empirical risk is left for future work.
> **A to Q2** (Could this mean that fewer context examples are more robust for SVD weight pruning?):
>
>That's a very worthwhile question! Yes, fewer context examples can indeed lead to more robust results under SVD weight pruning.
>
> (1) Experimentally, in **Section 2.2**, we analyze the effect of different ICL shot numbers. As shown in Figure 2, with a decrease in the number of shots, the rate of performance collapse in the model slows down after a sharp reduction at the shallow layer.
>
> (2) Theoretically, $\Delta {W}(N)-\Delta {W}(N^{'}) = \left( \sum\_{i=N^{'}+1}^N{W}\_V{{h}\_i} \otimes {W}\_K{{h}\_i} \right) {W}\_Q$, so with more context examples the implicit gradient is expected to be more sensitive to SVD weight pruning.
>
> (3) The robustness mainly concerns the operation "SVD weight pruning", rather than the model's performance.
> **A to Q3** (What would be the effect of pruning only a single module?):
>
> Good question! In our theory, an example (pruning only K or V) is provided in **Appendix B.6**, demonstrating how weight pruning can affect the norm of $[\Delta W_t]_1^L$/$[G_t]_1^L$. Thus, it may confer advantages on the performance of Transformers in ICL inference. However, **Theorem 2** (Remark 5) also suggests that modifications to the shallow layers have a less steady impact.
>
> Additionally, please review the supplementary experimental results provided below. (We choose the key projection matrix and select layer 3 (large matrix condition number) of GPT-J-6B; other settings are the same as in **Section 2.1**.)
>
> | Task | Optimal $\xi$ | Accuracy/F1 (before - after) |
> | --------- | --------------------- | ------------------------ |
> | SST-2 | 0.0 | 0.7828 - 0.7828 |
> | RTE | 0.5 | 0.5413 - 0.5413 |
> | COPA | 0.995 | 0.53 - 0.54 |
> **A to Q4** (What would happen if we apply the same clipping rate to other datasets?):
>
> As we mentioned in **Section 3.3 and Section C.4**, the implicit gradients produced by Transformers in practical applications are noisy due to factors such as the extent of model pre-training and data characteristics (e.g., ICL shot number/task difficulty). Therefore, $[\Delta W_t]_1^L$/$[G_t]_1^L$ in **Theorem 1** exhibit varying levels of noise, causing the optimal clipping rate to vary among tasks, as it depends on the specific task and data. So if we apply the clipping rate (0.95) used for SST-2 to other datasets, the model performance can either improve or deteriorate. Additionally, it is possible to conduct a certain number of experiments to find a range of optimal clipping rates that is broadly applicable.
> **A to Limitations:**
>
> - For the equation between line 228 and 229.
>
> $||W^k\_{V\_r}h\_i^{k−1}+\delta\_Vh\_i^{k−1}||\_2^2=||W^k\_{V\_r}h\_i^{k−1}||\_2^2+2(W^k\_{V\_r}h\_i^{k−1})^T(\delta\_Vh\_i^{k−1})+||\delta\_Vh\_i^{k−1}||\_2^2=||W^k\_{V\_r}h\_i^{k−1}||\_2^2+||\delta\_Vh\_i^{k−1}||\_2^2$.
>
> Note that $W^k_{V_r}=U_{:r}\Sigma_{:r}V_{:r}^T$ and $\delta_V=U_{r:}\Sigma_{r:}V_{r:}^T$, where the columns of $U_{:r}$ and $U_{r:}$ are **mutually orthogonal** (a property of the SVD).
>
> Therefore, $(W^k_{V_r}h_i^{k−1})^T(\delta_Vh_i^{k−1})=(h_i^{k−1})^TV_{:r}\Sigma_{:r}(U_{:r}^TU_{r:})\Sigma_{r:}V_{r:}^Th_i^{k−1}=0$.
>
> - For the equations about the F-norm in lines 230-232: Thank you for pointing out the typo (line 231). We appreciate your attention to detail and will make the necessary corrections. However, this minor typo does not affect the final conclusion. A detailed analysis follows:
>
> For any vectors $\mathbf{a}$ and $\mathbf{b}$, $\text{rank}(\mathbf{a}\otimes\mathbf{b})=1$, so the matrix $\mathbf{a}\otimes\mathbf{b}$ has only **one** non-zero singular value.
>
> Combined with $||A||\_F=\sqrt{\sum\_{i}\sigma\_i^2(A)}$ and the non-negativity of singular values, we get
>
> $||\mathbf{a}\otimes\mathbf{b}||\_F = \sqrt{\sum\_{i}\sigma\_i^2(\mathbf{a}\otimes\mathbf{b})}=\max\_i[\sigma\_i(\mathbf{a}\otimes\mathbf{b})]$.
> Therefore, the unique non-zero singular value, and hence the F-norm, will decrease after SVD-based pruning is applied.
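The rank-1 F-norm identity above can be verified directly; this is an illustrative sketch (sizes and the seed are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(8)
b = rng.standard_normal(8)

M = np.outer(a, b)  # rank-1 matrix a ⊗ b
sv = np.linalg.svd(M, compute_uv=False)

# Only one non-zero singular value, and it equals the Frobenius norm.
print(np.isclose(sv[0], np.linalg.norm(M, 'fro')))               # True
print(np.allclose(sv[1:], 0.0))                                  # True
print(np.isclose(sv[0], np.linalg.norm(a) * np.linalg.norm(b)))  # True
```

The third check uses the standard fact that the single singular value of $\mathbf{a}\otimes\mathbf{b}$ is $||\mathbf{a}||_2\,||\mathbf{b}||_2$.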
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer Zh6a
Comment: I appreciate the author's efforts to address my concerns, and I apologize for misunderstandings on my part. Upon further reflection, I find the theoretical analyses both reasonable and insightful in explaining the experimental results. While I still consider the use of 'gradient quality derived from context' (i.e., $G_t$) somewhat unconventional, I believe the theoretical contributions of this work are enough to explain the main observations presented in the paper. After reevaluating this study, I have decided to adjust the score I previously assigned.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: > Dear Reviewer Zh6a,
>
> Thank you for your thoughtful and constructive feedback regarding our manuscript. We deeply appreciate the time you invested in reevaluating our theoretical analyses and for acknowledging their relevance and insight in explaining the experimental results. Your meticulous review has truly enhanced the value of our work. Additionally, regarding the use of 'gradient quality derived from context' (i.e., $G_t$), please allow us to provide some additional clarifications (We will add a detailed description of this in the next version):
>
> - (1) In **Theorem 1** (Remark 2), $G_t$ is only dependent on $W_{t−1}$ and $H_s$, this is consistent with gradient descent in terms of relevance (Conventionally, gradients in training are only related to the current parameters and the training samples).
>
> - (2) In real-world training scenarios, SGD computes gradient by selecting a small batch of samples per iteration. This approach approximates the true gradient, inherently introducing noise. Similarly, in In-Context Learning, a small subset of samples (context examples) is used to generate implicit gradient (i.e., $G_t$), which also results in the introduction of noise.
>
> Once again, we thank you for your time and the constructive comments that have greatly enriched our work. | Summary: This paper discusses the phenomenon that SVD-based weight pruning can increase the in-context learning abilities of transformer-based LLMs. The authors conduct a theoretical analysis by presenting the implicit gradient descent trajectories of ICL and providing generalization bounds via full implicit gradient descent trajectories. This paper also provides a simple yet effective algorithm that clips the LLM by SVD to enhance ICL inference.
Strengths: First, this paper has a clear writing and is easy to follow.
It provides a detailed theoretical analysis of why SVD-based weight pruning improves ICL performance by leveraging the implicit gradient descent trajectories. It also provides generalization bounds for ICL; from Theorem 2, it can be inferred that the noise level and the norm of the gradient contribute to the error bound, which offers theoretical insight into the SVD-based method.
The authors provide a simple algorithm to leverage the discovered phenomenon to improve the ICL performance of LLMs in a gradient-free way. The ratio between $\sigma_{max}$ and $\sigma_{min}$ is a good heuristic choice of condition number.
Weaknesses: 1. More details of the algorithm are not shared, e.g., the range/number of the clipping rate candidate set.
2. In experiments result of C.5, the optimal $\xi$ varies a lot across different tasks and different modules. However, this phenomenon is not touched in the theoretical part.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The matrix condition number is one option for the indicator. But could there be more options, such as computing the decay rate of the eigenvalues? Because when p=2, the condition number only leverages two values among all the eigenvalues.
2. Could authors provide further more clarification why optimal $\xi$ varies, and is there a way to explain this phenomenon under current theoretical framework provided in this paper?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable comments. Below we address specific questions.
> **W1:** More details of algorithms is not shared. e.g. the range / number of clipping rate candidates set.
> **A:** Firstly, the details of the algorithm can be reviewed in the **code** provided. Specifically, the set of clipping rate candidates is predefined. In our study, the clipping rate candidates are set as shown in **Figures 1 and 2**: [0, 0.1, 0.5, 0.75, 0.9, 0.95, 0.99, 0.995].
> Besides, we analyze the impact of different hyperparameters through comparative experiments (details in **Section 2** and **Section 3.4**), as detailed below.
>
> - Clipping rate $\xi$: We search for the optimal $\xi$ among the predefined clipping rate candidates.
>
> - Predefined module $\mathcal{O}$ : The module containing the target pruning weights, which can be chosen from
> ['k_proj', 'q_proj', 'v_proj', 'out_proj', 'fc_in', 'fc_out', 'all', 'mlp', 'attn']
>
> - Selected layer $l$: The layer containing the target pruning weights. For example, in **Section 2** we mainly focus on comparing the impact of weight pruning on the **first two and the last two layers** of the model.
>
> - ICL shot number: The number of demonstrations in ICL; we analyze the effect of different ICL shot numbers in **Section 2.2**.
---
> **W2:** In experiments result of C.5, the optimal $\xi$ varies a lot across different tasks and different modules. However, this phenomenon is not touched in the theoretical part.
> **A:** That's a great question! The impact of different tasks and various module prunings on performance is indeed significant.
>
> However, our aim here is to provide a general analytical framework, so we have not delineated the impact of different layers/modules on performance in detail.
>
> In our theory, the simplification of the MLP and the handling of the ATTN layer (mentioned in the main text) are as follows:
>
> - Feed-forward (MLP) layer: In **Appendix B.3**, it can be seen as a dimensional adaptation corresponding to the residual connection. Then we can get the implicit gradient descent with MLP: $\Delta W_{icl}^{''}=W_{MLP}\left( \sum_{i=1}^N W_V{h_i} \otimes W_K{h_i} \right) W_Q$
>
> - Attention (ATTN) layer:
>
> (i) In the main text, **Section 3.2**, our theoretical analysis mainly focuses on the linear attention setting.
>
> $\hat{h}\_{N+1} = h\_{N+1}+ \Delta W\_{icl} h\_{N+1}, \Delta W\_{icl}=\left( \sum\_{i=1}^N W_V{h_i} \otimes W\_K{h\_i} \right) W\_Q$
>
> (ii) In **Appendix B.2**, it can be considered that the mapping function $\phi$, or rather the effect of the **Softmax** function, projects the original features into a higher-dimensional space to capture more profound features. Subsequently, learning and meta-optimization are conducted in this **new feature space**.
>
> $\hat{h}\_{N+1}=h\_{N+1} + \Delta W_{icl}^{'}\phi(W\_Q h\_{N+1}), \Delta W_{icl}^{'}=\frac{1}{D^{'}} \left[\sum_{i=1}^N (W_VH_s)_i \otimes \phi(W_KH_s)_i \right]$
>
> Then, if one considers more than one layer, a more detailed analysis can be provided within the current framework to depict the interaction between the input features and these different modules.
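As a quick numerical sketch of the linear-attention identity above (the dimensions, random weights, and variable names below are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 5                          # hidden dim and demonstration count (illustrative)
W_Q, W_K, W_V = (rng.normal(size=(d, d)) for _ in range(3))
H = rng.normal(size=(N, d))          # demonstration hidden states h_1 .. h_N
h_query = rng.normal(size=d)         # h_{N+1}

# Implicit gradient: Delta_W_icl = (sum_i (W_V h_i) outer (W_K h_i)) W_Q
delta_W = sum(np.outer(W_V @ h, W_K @ h) for h in H) @ W_Q

# Residual plus implicit update: h_hat = h_{N+1} + Delta_W_icl h_{N+1}
h_hat = h_query + delta_W @ h_query

# Sanity check: identical to computing linear attention over the demos directly
attn = sum((W_V @ h) * float((W_K @ h) @ (W_Q @ h_query)) for h in H)
assert np.allclose(h_hat, h_query + attn)
```

The check confirms that the one-layer linear-attention output can be read as a residual connection plus one step of implicit gradient descent applied to the query token.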
---
> **Q1:** Matrix condition number is an option for the indicator. But could there be more options, such as computing the decreasing rate of eigenvalues? Because when p=2, the condition number only leverages two values among all the eigenvalues.
> **A:** Good question! Let's first review our theory in **Theorem 2**: Modulating the norm of $[G\_t]_1^L$ or $[\Delta W\_{t}]_1^L$ could enhance performance when utilizing ICL.
>
> Our primary objective in this work is to provide a general analytical framework. The simple algorithm is employed to demonstrate the effectiveness of the theoretical analysis in guiding experimental procedures. So, any indicator that can guide the control of norms is a potential option. For instance:
>
> (i) Considering the behavior of the first few and last few singular values.
>
> (ii) Assessing the intrinsic rank of the matrix can be valuable, as it offers another dimension of understanding beyond just the condition number. [1] [2]
>
> [1] Armen Aghajanyan et al. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. URL https://arxiv.org/abs/2012.13255.
>
> [2] Edward Hu et al. Lora: Low-rank adaptation of large language models. ICLR 2022.
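As a hedged illustration of these alternative indicators (our sketch, not the paper's algorithm), the condition number, the per-step spectral decay rate, and an intrinsic-rank proxy can all be read off the same singular value decomposition; the matrix size and the 95% energy threshold are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))                 # stand-in weight matrix
s = np.linalg.svd(W, compute_uv=False)        # singular values, descending

cond = s[0] / s[-1]                           # p=2 condition number: only the two extremes
decay = s[:-1] / s[1:]                        # decreasing rate between consecutive values
energy = np.cumsum(s**2) / np.sum(s**2)
intrinsic_rank = int(np.searchsorted(energy, 0.95) + 1)  # smallest rank with 95% energy
```

Unlike the condition number, `decay` and `intrinsic_rank` use the full spectrum, which is the reviewer's point about leveraging more than two values.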
---
> **Q2:** Could authors provide further more clarification why optimal $\xi$ varies, and is there a way to explain this phenomenon under current theoretical framework provided in this paper?
> **A:** (i) As we mentioned in **Section 3.3 and Section C.4**, the implicit gradients produced by Transformers in practical applications are noisy due to factors such as the extent of model pre-training and data characteristics (e.g., ICL shot number/task difficulty). Therefore, $[\Delta W_t]_1^L$/$[G_t]_1^L$ in **Theorem 1** carry varying noise. That is why the optimal $\xi$ varies.
>
> (ii) Besides, expected generalization error (**Theorem 2**) = population risk - empirical risk ($L_{\mu}-L_{H_s}$). We search for the optimal $\xi$ on the validation set and subsequently evaluate it on the test set (detailed in **Section 3.4**).
>
> On the one hand, in **Theorem 2**, clipping weights controls the F-norm of the implicit gradient ($[\Delta W\_t]\_{1}^{L}/[G\_t]\_{1}^{L}$), which reduces the expected generalization error. On the other hand, we can evaluate the empirical risk ($L_{H_s}$) by observing the model's performance on the validation set and estimate the population risk ($L_{\mu}$). (Note that different types of tasks affect $L_{H_s}$ differently, and different modules have varying impacts on the expected generalization error. See **A** to **W2**.)
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I appreciate that the authors' responses have successfully addressed my previous questions. I will hold my positive rating on this paper.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: > Dear Reviewer itEx,
>
> Thank you very much for your positive feedback. We truly appreciate your acknowledgment that our responses have successfully addressed your previous questions. Your continued support and positive rating of the paper are immensely encouraging. Thank you once again for your thoughtful review and valuable insights. | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for their detailed and constructive feedback! We are encouraged to see that reviewers find:
> - **Reviewer itEx**: It provides a detailed theoretical analysis on why SVD based weight pruning will improve ICL performance... It provides the theoretical insight of the SVD based method. The authors provide a simple algorithm to **leverage the discovered phenomenon** to improve ICL performance of LLM in a gradient-free way.
>
> - **Reviewer Zh6a**: This work conducts an in-depth analysis to explain the "stability" of transformer weight pruning across different layers. The framework is interesting and validated through experiments. Moreover, the **theoretical analysis can be applied to design new algorithms** like algorithm 1 in this paper.
>
> - **Reviewer Nxqr**: The authors provide a theoretical analysis to explain their empirical findings... Furthermore, they propose a simple, derivative-free algorithm for enhancing ICL performance in downstream tasks, **demonstrating the practical value of their theoretical insights**.
>
> - **Reviewer CqwV**: The method intuitively makes sense and is something which can be **conditionally tuned after training** based on specific tasks if a validation set is available.
We have addressed all the questions raised by the reviewers through detailed clarifications, providing separate responses to each reviewer. Additionally, we would like to address some common concerns in a consolidated global response.
> **(1) The main contribution of this work.**
>
> Our primary objective is to provide a general **theoretical framework** that reveals the underlying mechanism behind the phenomenon that SVD-based weight pruning can enhance ICL performance. Based on our theoretical insights, we can design new algorithms to enhance ICL performance. Consequently, we did not compare our approach with other pruning methods. Algorithm 1 is presented solely to illustrate how theoretical analysis can guide experimental procedures effectively.
>
> Specifically, in **Theorem 2**: Modulating the F-norm of $[G\_t]\_1^L$ or $[\Delta W\_{t}]\_1^L$ could enhance performance when utilizing ICL. So, there may be other potential **weight-based** pruning methods. For instance: magnitude-based pruning [1], which can directly control the $||A||\_F=\sqrt{\sum\_i\sum\_ja\_{ij}^2}$.
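A minimal sketch (our illustration, not the paper's Algorithm 1) of how magnitude-based pruning directly shrinks the Frobenius norm that Theorem 2 asks us to modulate; the matrix size and pruning fraction `xi` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(32, 32))

fro = np.sqrt((A**2).sum())                   # ||A||_F = sqrt(sum_ij a_ij^2)
assert np.isclose(fro, np.linalg.norm(A))

# Magnitude-based pruning: zero out the fraction xi of smallest-|a_ij| entries
xi = 0.3
thresh = np.quantile(np.abs(A), xi)
A_pruned = np.where(np.abs(A) >= thresh, A, 0.0)
assert np.linalg.norm(A_pruned) < fro         # zeroing entries can only reduce ||A||_F
```

Because every zeroed entry removes a non-negative term from $\sum_{i,j} a_{ij}^2$, any such weight-based pruning monotonically decreases the F-norm, consistent with the rebuttal's claim.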
>
> Of course, there are also some **layer-based** pruning methods, as mentioned in (**Reviewer Nxqr Q1**) and (**Reviewer CqwV W3**). We would like to clarify the drawbacks of the drop-layer method under our theoretical framework:
>
> As stated in **Theorem 1**, the L layers correspond to implicit gradient descent trajectories $[\Delta W_t]_1^L$/$[G_t]_1^L$. Consequently, dropping an entire layer leads to two main issues:
>
> (a) Adjusting the weights for other downstream tasks becomes more challenging.
>
> (b) The implicit gradient update is reduced by one step, which may result in a decline in model performance.
>
> [1] Wen, W et al. Learning structured sparsity in deep neural networks. NeurIPS 2016.
> **(2) The explanation for simplifications.**
>
> To facilitate qualitative analysis, our main text primarily focuses on the theoretical aspects of linear attention; a considerable portion of prior works also adopt the linear attention setting to explore ICL [2,3]. Our goal is to provide insightful conclusions under simplified conditions.
>
> However, we also discuss the standard Softmax attention setting and MLP in **Appendix B.2** and **Appendix B.3**:
>
> Specifically, it can be considered that the mapping function $\phi$, or rather the effect of the **Softmax** function, is to project the original features into a higher-dimensional space to capture more profound features. Subsequently, learning and meta-optimization are conducted under this **new feature space**.
>
> $\hat{h}\_{N+1}=h\_{N+1} + \Delta W\_{icl}^{'}\phi(W\_Q h\_{N+1}),\ \Delta W\_{icl}^{'}=\frac{1}{D^{'}} \left[\sum\_{i=1}^N (W\_VH\_s)\_i \otimes \phi(W\_KH\_s)\_i \right]$
>
> For the MLP layer, it can be seen as a dimensional adaptation to correspond to the residual connection. Then, we can get the implicit gradient descent with MLP: $\Delta W_{icl}^{''}=W_{MLP}\left( \sum_{i=1}^{N} W_V{h_i} \otimes W_K{h_i} \right) W_Q$
>
> [2] Ekin Akyürek et al. What learning algorithm is in-context learning? investigations with linear models. ICLR 2023.
>
> [3] Johannes von Oswald et al. Transformers learn in-context by gradient descent. ICML 2023.
> **(3) Why optimal clipping rate $\xi$ varies? How should Theorem 2 be interpreted?**
>
> (a) As we mentioned in **Section 3.3 and Section C.4** (which follows [4]), the implicit gradients produced by Transformers in practical applications are noisy due to factors such as the extent of model pre-training and data characteristics (e.g., ICL shot number/task difficulty). Therefore, $[\Delta W_t]_1^L$/$[G_t]_1^L$ in **Theorem 1** carry varying noise. **That is why** the optimal $\xi$ varies.
>
> (b) Expected generalization error (**Theorem 2**) = population risk ($L\_{\mu}$) - empirical risk ($L\_{H_s}$).
>
> More specifically, on the one hand, **Theorem 2** shows that clipping weights controls the F-norm of the implicit gradient $([\Delta W\_t]_1^L/[G\_t]\_1^L)$ which helps reduce the expected generalization error. On the other hand, we can evaluate the empirical risk $(L\_{H\_s})$ by assessing the model's performance on the validation set. If the generalization error is known, it is possible to estimate the population risk ($L\_{\mu}$). Therefore, the most challenging aspect is addressing the generalization error.
>
> [4] Shivam Garg et al. What can transformers learn in-context? a case study of simple function classes. NeurIPS 2022.
Thank you once again to all the reviewers for your patience and invaluable comments. We hope our responses have clarified your initial concerns and questions. We are happy to provide further clarifications if necessary. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms | Accept (poster) | Summary: This paper studies the problem of aligning vision models with human aesthetic standards in a retrieval system. There are three key parts in the proposed model including LLM rephrasing, re-ranking, and RL fine-tuning. Two novel benchmarks are also introduced to integrate aesthetic quality into evaluation metrics. Experimental results demonstrate the effectiveness of the proposed method and the benchmarks.
Strengths: 1. This paper addresses the aesthetic quality issue in image retrieval systems and introduces a reinforcement learning fine-tuning strategy that enables the retrieval model to directly retrieve images based on both semantics and aesthetic quality, eliminating the need for multi-stage filtering. This approach holds significant value.
2. The paper introduces two evaluation benchmarks, addressing the limitation of current image retrieval benchmarks that fail to evaluate aesthetic quality.
3. The experiments are comprehensive, validating the importance of each component in the proposed method.
Weaknesses: 1. The methodological process described in the article is somewhat cumbersome, with Figure 2 merely outlining key processes and concepts in a rudimentary manner, thereby increasing the difficulty for readers to comprehend.
2. The authors appear to conflate "no-reference image quality assessment" with "image aesthetic quality assessment." While these tasks are indeed closely related, they are distinct. MANIQA, for instance, should not be regarded as an aesthetic quality assessment model, and its paper does not evaluate the model's performance on aesthetic datasets.
3. There remain some details in the article that are inadequately explained. It is peculiar that in Appendix Table 7, the same stride seemingly yields a different number of images.
4. The manuscript contains typos. For example, the indicator function symbol in Equation 11 is clearly garbled.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. In line 62, it is said that the open-source IAA datasets cannot be used for aesthetic retrieval evaluation. Can you give a further explanation?
2. In the first step of data preprocessing, the authors use the concept of "topic". Can you give a further explanation?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper does not discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1: Cumbersome description of the method
**R:** Thanks for your suggestion. We will find a way to further simplify the description of the method. The purpose of Fig. 2 is to illustrate the consistency between our approach and aesthetic concepts, and we will add more specific details to Fig. 3, which illustrates the specific steps. At the same time, we will write clearer cross-references in the captions of Fig. 2 and Fig. 3 to avoid confusion for the reader.
## W2: Conflate "no-reference image quality assessment" with "image aesthetic quality assessment"
**R:** Thank you! We will modify the description to distinguish MANIQA from the aesthetic quality assessment models. We use this model because we experimented with several image-quality-related models, including the three in the paper, and found that MANIQA performs well.
## W3: Number of samples in Table 7
**R:** Thanks! There may be some misunderstanding on the concept of stride, which refers to the interval between samples in our re-ordering sequence. The number of samples (as described in line 158-169, Sec. 2.3) is determined in terms of $u$ and $v$. The number of sampled images is $uv$ and the equation $|\mathcal{D}_{po}| = uC_v^2+vC_u^2$ shows the number of comparison pairs.
For example, given a sequence of 100 images with $u$=$v$=5 and stride=2, we will sample $uv=25$ images with 2 as interval: [1,3,5,...,49]. If stride=3, it will be [1,4,7...,73]. The number of selected images is irrelevant to the stride.
$|D_{po}|$ is the number of valid comparison pairs. When $u$=$v$=5 and stride=2 and we sampled [1,3,5...,49], we first split it with semantic dimension to [[1,3,5,7,9], [11,13,...],...,[...,47,49]]. There are 5 sequences of 5 images. For each sequence, such as [1,3,5,7,9], it contains $C_5^2$ valid comparison pairs: [(1,3), (1,5), ..., (7,9)]. Thus it will contain $5C_5^2$ pairs for all 5 sequences. Similarly, when we split with aesthetic dimension and result in [[1,11,21,31,41], [3,13,…]…, […,39,49]], it will also contain $5C_5^2$ pairs. Thus in this case $|\mathcal{D}_{po}|=5C_5^2+5C_5^2=100$.
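The sampling and pair-counting rules above can be sketched as follows (the helper names are ours, not from the paper):

```python
from itertools import combinations
from math import comb

def sampled_indices(u, v, stride):
    """Sample u*v image indices from the re-ordered sequence at the given stride."""
    return [1 + i * stride for i in range(u * v)]

def comparison_pairs(u, v, stride):
    """|D_po| = u*C(v,2) + v*C(u,2): split the sample into u semantic rows of v
    images and v aesthetic columns of u images; each sequence yields all its pairs."""
    idx = sampled_indices(u, v, stride)
    rows = [idx[i * v:(i + 1) * v] for i in range(u)]   # semantic dimension
    cols = [idx[j::v] for j in range(v)]                # aesthetic dimension
    return [p for seq in rows + cols for p in combinations(seq, 2)]

# u = v = 5: the stride changes which images are sampled, not how many
assert sampled_indices(5, 5, 2)[-1] == 49     # [1, 3, ..., 49]
assert sampled_indices(5, 5, 3)[-1] == 73     # [1, 4, ..., 73]
assert len(comparison_pairs(5, 5, 2)) == 5 * comb(5, 2) + 5 * comb(5, 2) == 100
```

This reproduces the rebuttal's example: 25 sampled images regardless of stride, and $|\mathcal{D}_{po}| = 5C_5^2 + 5C_5^2 = 100$ valid comparison pairs.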
## W4: Typos
**R:** Thanks! We will fix the typos in the final version.
## Q1: Explanation on why IAA dataset cannot be used for aesthetic retrieval evaluation
**R:** A data item in IAA can be formulated as `(Img, Text, Score)`, for example, `(<cat img>, cat, 0.6)` and `(<dog img>, dog, 0.73)`. To be fair, retrieval must be compared with the same query; therefore, our required data should be in the following format: `(Text, [{img_1, score_1}, {img_2, score_2}...])`; for example, `(dog,[{<dog img1>, 0.76}, {<dog img2>, 0.54}…])`. An alternative way is to retrieve images based on IAA's queries, but we would still need human resources to label the comparisons. In addition, the queries' distribution does not match the distribution of the retrieval system's users. Thus, we decided to directly label the testing set.
## Q2: Explanation on "topic" in data preprocessing
**R:** We expect that the distribution of the test set resembles the distribution of queries that are commonly searched by target users. User queries often involve user privacy that most companies will protect and cannot be directly obtained. A common alternative is to count the topic distribution, which are tags like "financial scene" or "indoor decoration" for queries. We generate queries based on the distribution of these topics to ensure that our query distribution is consistent with that of target users.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' response. Most of my concerns are addressed. As my current rating is already the highest, I will maintain the original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments and your recognition of our work! We will add the explanation in the rebuttal to the revised version. | Summary: The paper looks into the alignment task for vision and language models within retrieval models where properties such as visual aesthetic comes to play. To achieve this, the paper collects some data to design a metric suitable for taking into account human aesthetic evaluation. And employs an RL-based technique to exploit the human opinion for better aligning the retrieved images with human aesthetic preferences.
Strengths: * It is well-written paper
* The concept of aligning vision with aesthetic preferences is interesting and useful in some applications.
* The experiments are well-designed and quite convincing.
* It is interesting that LLM rephrasing could improve the quality of results
Weaknesses: * The proposed metric could be elaborated better and maybe explained how the study ensured the metric is not designed under influence of the model.
Technical Quality: 3
Clarity: 3
Questions for Authors: * The work has been focused on visual aesthetics, given the LLM rephrasing results, it may be beneficial to look into the sophistication of language parts and how that could correlate with the aesthetic of the image. Could that motivate higher level of language sophistication is also correlated with higher visual aesthetics?
* Could you elaborate how the RL-based approach could scale?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not discuss any limitations with respect to the proposed method.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W: The design of the proposed metric
**R:** Thanks.
We proposed two metrics in the paper: HPIR weighted accuracy and win rate.
1. The construction of HPIR requires retrieving and filtering images according to the query, and then manually picking representative images to label. In order to exclude the influence of models during the retrieving and filtering process, as depicted in Sec. 3.1, we leverage multiple retrieval systems (including Google, Getty, Bing, etc.) and retrievers built upon various base models (CLIP, OpenCLIP, DataComp, etc.) with different sizes and training data, to make it as comprehensive as possible. In addition, we merged results from these sources together and manually picked representative pairs. The annotator does not know which model the data comes from, and this process decouples the data from the model's influence.
2. We use win rate to quantify the comparison between different systems, which is a fair metric for different models. Additionally, the retrieval database we use is confidential; we use human labelers to ensure that the results of GPT-4V are not seriously biased, and we use such indicators in the end-to-end evaluation at the system level, making it robust and reliable.
## Q1: The correlation to the sophistication of language
**R:** Yes, higher levels of language sophistication are also correlated with higher visual aesthetics. But "higher level of language sophistication" is also coupled or correlated with "aesthetic expectations and details" in our paper. As we found in Sec. 4.2, even using a longer query improves the results; higher language sophistication correlates with higher visual aesthetics due to the inductive bias within the pre-training data.
However, a nice system should be user-friendly, without the need to write complex languages, and reduce the budget that comes with calling GPT. With our RL-finetuned model, users can achieve similar results with simple language as with complex language.
## Q2: The scaling ability of RL-based approach
**R:** Similar to RLHF in LLM, the more essential factor for whether an RL algorithm can scale mainly comes from the feedback. As an entry point to research, our feedback comes from aesthetic models, and it has a limited scaling upper-bound. In real application, we can obtain human feedback by, for example, user click-through rate. Feedback like this will exhibit excellent scaling properties and contribute to the continuous improvement of model capabilities, but due to privacy and policy reasons, it is out of the scope of our work. | Summary: This work aims to align vision models with human aesthetic standards in a retrieval system. To do this, the authors propose a preference-based reinforcement learning method that fine-tunes the vision models to distill the knowledge from both LLMs reasoning and the aesthetic models to better align the vision models with human aesthetics. The authors further propose a novel dataset named HPIR to benchmark the alignment with human aesthetics.
Strengths: 1. The idea of aligning vision models with human aesthetics in retrieval is interesting. This work has potential applications in various real-life applications.
2. The authors’ motivation of utilizing the reasoning ability of large language models (LLMs) to rephrase the search query and extend the aesthetic expectations is insightful.
3. The paper is well-written and informative.
4. The proposed dataset HPIR can be used by fellow researchers in the related fields.
Weaknesses: I feel it can be further improved in the following ways.
1. For benchmarking human preferences, it might be better to record down the human variance in their annotations. I understand the authors used multiple annotations to ensure robustness, but since aesthetics is a subjective concept, human variance itself tells something.
2. Following point 1, I feel the work can be made more solid if it includes some human evaluation studies on the experimental results. For example, in Fig. 5, it does not seem so obvious to me on the respective enhancement with finetuning.
Without the above two points, I feel the paper has somewhat overclaimed the "alighing vision models with human aesthetics".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have the authors considered human variance in aesthetics perception?
2. Are the objective metrics enough for results evaluation? Have the authors considered using human studies to evaluate the results?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not indicate limitations in their paper, and mentioned that they will discuss it in future. I feel this paper has clear limitations such as the indication of human variance and the evaluation of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1 & Q1: Human variance
**R:** Thank you for the suggestion.
We provide a metric 'confidence' for representing the robustness of the label.
The confidence score means the degree of agreement among all annotators, rather than a value provided by the labelers. It is calculated through Equation 10.
This confidence score and variance have similar indicative meanings. For example, if a piece of data has the confidence score of 0.6, it means that 24 labelers hold the same choice and the other 6 labelers choose the opposite. Thus, $$Confidence=\frac{2\times24}{24+6}-1=0.6$$
We used this indicator and named it as confidence score because when both choices are supported by half of the labelers, $Confidence=0$, and when all labelers agree with a choice, $Confidence=1$.
The distributions of all confidence scores are shown in Fig. 15 of Appendix E.
As you mentioned, "aesthetics is a subjective concept," we used the confidence as a weight (see Equation 11) to ensure that our accuracy was only calculated on less controversial comparisons. This ensures the robustness of the experiment and excludes the influence of subjectivity.
Per your advice, we further calculate the variance for these labels. Let the positive choice be 1 and the negative choice be 0, and then the labeling results can be formulated as a sequence like [1,0,0,1,...]. It is easy to derive the following formula:
$$\text{variance} = \frac{ 2N_{pos} N_{neg}}{( N_{pos}+ N_{neg})^2}.$$
Here are the details of human variance of our labeled data, accompanied with confidence score:
| Confidence Score Range | Avg. Acc Confidence | Avg. Acc Variance | Avg. Aes Confidence | Avg. Aes Variance |
|:---:|:---:|:---:|:---:|:---:|
|$0.0 \leq c \leq 0.2$| 0.108 | 0.492 | 0.106 | 0.492 |
|$0.2 < c \leq 0.4$| 0.324 | 0.446 | 0.343 | 0.440 |
|$0.4 < c \leq 0.6$| 0.490 | 0.379 | 0.506 | 0.371 |
|$0.6 < c \leq 0.8$| 0.676 | 0.269 | 0.676 | 0.269 |
|$0.8 < c \leq 1.0$| 0.950 | 0.047 | 0.966 | 0.032 |
|All| 0.441 | 0.364 | 0.411 | 0.377 |
Last but not least, perhaps our statement of confidence is bothering you because we didn't explain its similarity to variance. We'll modify the description to make it more explicit.
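The confidence score and the variance defined above can be computed directly; the closed-form link variance = (1 − confidence²)/2 follows from the two definitions, and the 24-vs-6 split is the rebuttal's own example:

```python
def confidence(n_pos, n_neg):
    """Agreement degree: 0 for a 50/50 split, 1 for unanimity (Eq. 10 style)."""
    return 2 * max(n_pos, n_neg) / (n_pos + n_neg) - 1

def variance(n_pos, n_neg):
    """Pairwise-disagreement variance: 2*N_pos*N_neg / (N_pos + N_neg)^2 for 0/1 labels."""
    return 2 * n_pos * n_neg / (n_pos + n_neg) ** 2

c = confidence(24, 6)                 # 24 labelers pick one choice, 6 the other
assert abs(c - 0.6) < 1e-12
assert abs(variance(24, 6) - 0.32) < 1e-12
assert abs(variance(24, 6) - (1 - c**2) / 2) < 1e-12
```

The last assertion makes the rebuttal's point explicit: confidence and variance carry the same information about annotator disagreement, just on different scales.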
## W2 & Q2: Lack of human evaluation
**R:** Thank you for the suggestion.
In Table 2 on page 8 of the main paper, we have presented a user study (last two rows), where we let multiple human labelers judge the images retrieved from models w. and w/o. alignment (using the same queries). These labelers are expert search engine users.
Users judged which retrieved results from system A and B were better.
The first six rows of Table 2 are judged by GPT-4V.
Our description of this part does not link it to the phrase "user study", causing confusion to the readers, and we will revise the description of this section.
We supplement more user studies over the experiments in Table 2 here:
| System A | System B | A to B win & similar rate by users |
|:-----:|:-----:|:-----:|
| Ours-FT / Datacomp-15M | Ours-PT / Datacomp-15M | Acc: 65.3 / Aes: 74.7 |
| Ours-FT / Internal-8M | Ours-PT + Re-rank / Internal-8M | Acc: 66.9 / Aes: 71.4 |
| Ours-FT / Internal-8M | Bing Search / web | Acc: 49.1 / Aes: 56.6 |
| Ours-FT / Internal-8M | Getty Search / getty images | Acc: 63.3 / Aes: 61.3 |
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the responses. I think this paper has potential to be published given the minor revisions done. I would like to keep the current borderline accept rating.
---
Reply to Comment 1.1.1:
Comment: We appreciate your recognition of our work and thank you for your valuable comments. We commit to add the human variance and additional human evaluation in the revised version. | Summary: This paper aligns the vision models with human values by leveraging LLM for query rephrasing and introducing preference-based reinforcement learning. The paper also presents a novel dataset named HPIR to benchmark the alignment with human aesthetics.
Strengths: This paper introduces a novel approach to align visual models with human aesthetics, combining LLM rewriting to enhance query understanding and using preference-based reinforcement learning to fine-tune the model. The paper is comprehensive in experiments and introduces the HPIR dataset for benchmarking. The paper is well-structured and the methods are clearly explained. Key concepts are well defined and the use of diagrams helps to effectively illustrate the results. And this paper improves the aesthetic quality of results in image retrieval by aligning visual models with human preferences. The proposed method and dataset provide valuable ideas for future research in this area.
Weaknesses: [W1] This paper lacks a detailed user study to validate the actual effectiveness of the proposed method. Including a user study with different participants to evaluate the subjective improvement of aesthetic alignment could provide stronger evidence for the actual effectiveness of the method.
[W2] Placing the related work in Section 6 makes it difficult for readers to have a clear understanding of the problem domain and existing research results before reading the specific methods and experiments, which is not conducive to the coherence of the paper structure.
Technical Quality: 3
Clarity: 2
Questions for Authors: Computational Cost: Can you elaborate on the computational costs and resource requirements of the reinforcement learning-based fine-tuning process? Are there any optimizations that could reduce the computational burden?
User Study: Have you conducted any user studies to validate the improvements in aesthetic alignment from a user perspective? If not, do you plan to include such studies in future work?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Exploring possible negative social impacts, such as implications for privacy or how the technology might be misused in unintended ways Research could benefit from deeper analysis of how biases in training data affect model outputs beyond just aesthetics, particularly with respect to cultural and demographic diversity.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## W1 & Q: Lack of user study
**R:** In Table 2 on page 8 of the main paper, we have presented a user study (last two rows), where we let multiple human labelers judge the images retrieved from models w. and w/o. alignment (using the same queries). These labelers are expert search engine users.
Users judged which retrieved results from system A and B were better.
The first six rows of Table 2 are judged by GPT-4V.
Our description of this part does not link it to the phrase "user study", causing confusion to the readers, and we will revise the description of this section.
We supplement more user studies over the experiments in Table 2 here:
| System A | System B | A to B win & similar rate by users |
|:-----:|:-----:|:-----:|
| Ours-FT / Datacomp-15M | Ours-PT / Datacomp-15M | Acc: 65.3 / Aes: 74.7 |
| Ours-FT / Internal-8M | Ours-PT + Re-rank / Internal-8M | Acc: 66.9 / Aes: 71.4 |
| Ours-FT / Internal-8M | Bing Search / web | Acc: 49.1 / Aes: 56.6 |
| Ours-FT / Internal-8M | Getty Search / getty images | Acc: 63.3 / Aes: 61.3 |
## Q: Computational Cost
**R:** The RL-finetuning process only used 4 NVIDIA A100 GPUs for fewer than 5 hours. If using smaller batch size or techniques like gradient accumulation, it can be trained within an acceptable time even using other consumer-grade GPUs with more than 4GB of memory. This is a small burden compared to most of today's work, including our pre-training phase.
## W2: Related work
**R:** Thank you for your comments, and we will make adjustments to the related work so that readers can better understand the article. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Taming the Long Tail in Human Mobility Prediction | Accept (poster) | Summary: This paper addresses the challenge of predicting less frequently visited points-of-interest (POIs) in human mobility data, a problem known as the long-tail issue in spatial distribution. The authors introduce a new framework called Long-Tailed Adjusted POI Prediction (LoTNext), which includes two main components: long-tailed graph adjustment module and long-tailed loss adjustment module. Additionally, the framework employs an auxiliary prediction task to enhance the model's generalization and overall accuracy. The effectiveness of LoTNext is demonstrated through experiments on two real-world trajectory datasets, where it significantly outperforms existing state-of-the-art methods in human mobility prediction.
Strengths: 1. The code has been provided, which makes the reproducibility of this paper good.
2. The paper is generally well-written and easy to follow.
3. The proposed method is well-motivated.
Weaknesses: 1. The presentation quality of this paper can be further enhanced.
2. The authors are encouraged to conduct experiments on more datasets and provide more detailed analysis.
3. This paper can supplement more theoretical analysis to guarantee the proposed method's effectiveness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I am curious if the structure of preliminary model structure can bring significant influence to the final results.
2. According to my experiments, the detailed value settings of $\lambda_1$, $\lambda_2$, and $\lambda_3$ in Eq. 16 can lead to obvious variation in the ultimate model performance. However, hyperparameter sensitivity experiments on this point are missing. More discussion and empirical results regarding this are welcome.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 7hGF:
**Weaknesses:**
> Q1. The presentation quality of this paper can be further enhanced.
**A1:** Thank you for your suggestions. We will check and polish the entire paper to ensure the presentation is clearer.
> Q2. The authors are encouraged to conduct experiments on more datasets and provide more detailed analysis.
**A2:** We appreciate your comments. We plan to introduce more datasets in our future work to more comprehensively evaluate our model. In addition, to better evaluate our model's performance, we add two latest baselines on the existing dataset. Please refer to the **"global" response** (please see above) for detailed results.
> Q3. This paper can supplement more theoretical analysis to guarantee the proposed method's effectiveness.
**A3:** Thank you for bringing this problem to our attention. We want to discuss **the motivation behind our sample weight adjustment module design** from a theoretical perspective.
* For the metric Acc@1 we have: $\text{Acc} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(\hat{y}^i = y^i)$, where $\mathbf{1}(\cdot)$ is the indicator function. On long-tail datasets, using the arithmetic mean would result in **very small penalties for tail data with low accuracy**. For example, the average of 0.01 and 0.99 is 0.5.
* In contrast, the harmonic mean: $H = \frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}$, which yields a value of 0.02 for 0.01 and 0.99, demonstrating **higher sensitivity to small values**, which is beneficial for penalizing tail data. However, the harmonic mean is defined by **reciprocals**, which increases the **difficulty of model optimization**.
* Similarly, the geometric mean of 0.01 and 0.99 is 0.10, which is also very **sensitive to small values**. Its simple definition allows the model to **optimize effectively** based on it. Therefore, we finally choose the geometric mean to calculate the baseline magnitude.
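As a quick numerical check of the three means discussed above (a standalone illustration we provide here, not code from the paper):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def harmonic_mean(xs):
    # defined via reciprocals, hence highly sensitive to small values
    return len(xs) / sum(1.0 / x for x in xs)

def geometric_mean(xs):
    # n-th root of the product; also sensitive to small values, but
    # simpler to optimize through than the reciprocal-based harmonic mean
    return math.prod(xs) ** (1.0 / len(xs))

accs = [0.01, 0.99]  # tail vs. head accuracy from the example above
print(round(arithmetic_mean(accs), 2))  # 0.5
print(round(harmonic_mean(accs), 2))    # 0.02
print(round(geometric_mean(accs), 2))   # 0.1
```

The arithmetic mean barely registers the poor tail accuracy, while the harmonic and geometric means both collapse toward the small value.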
We hope these discussions can address your concerns.
**Questions:**
> Q4. I am curious whether the preliminary model structure can significantly influence the final results.
**A4:** Thanks for pointing out this potential problem. We removed the **Spatial Contextual Attention module (SC Att.)** and replaced the **Transformer** with a traditional **RNN** model to evaluate their impact on the overall model. The results are shown below.
Table Ⅰ. The different preliminary model performance comparison on Gowalla dataset.
|Method/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|With RNN|0.1529|0.3400|0.4210|0.2423|0.2511|0.2773|
|Without SC Att.|0.1596|0.3547|0.4376|0.2523|0.2619|0.2888|
|**LoTNext**|**0.1668**|**0.3605**|**0.4429**|**0.2591**|**0.2686**|**0.2953**|
Table Ⅱ. The different preliminary model performance comparison on Foursquare dataset.
|Method/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|With RNN|0.2532|0.5242|0.6044|0.3778|0.3997|0.4258|
|Without SC Att.|0.3117|0.6040|0.6805|0.4410|0.4740|0.4994|
|**LoTNext**|**0.3155**|**0.6059**|**0.6812**|**0.4469**|**0.4753**|**0.5001**|
* We can observe that replacing the Transformer module with an RNN significantly impacts the performance, especially on the Foursquare dataset. This highlights the powerful modeling capabilities of the Transformer.
* Even with an RNN backbone, the optimization provided by our graph adjustment module, loss adjustment module, and other components yields superior performance compared to other RNN-based models, such as STAN and Flashback. This demonstrates that our approach is both **generalizable and effective**.
* Removing the Spatial Contextual Attention layer has a substantial effect on the Acc@1 results, which is the most critical metric for accurately assessing the model's predictive ability.
* Therefore, **both modules are essential for the current model**.
> Q5. According to my experiments, the specific values of $\lambda_1$, $\lambda_2$, and $\lambda_3$ in Eq. 16 can lead to obvious variation in the ultimate model performance. However, hyperparameter sensitivity experiments on this point are missing. More discussions and empirical results regarding this are welcomed.
**A5:** Thanks for your inspiring questions. We **fixed the weights** of the three $\lambda$ values to test the model performance, and the results are as follows.
Table Ⅲ. The model performance comparison of the parameter $\lambda$ on Gowalla dataset.
|Weight/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|[0.5, 0.25, 0.25]|0.1641|0.3595|0.4408|0.2580|0.2676|0.2950|
|[0.25, 0.5, 0.25]|0.1668|0.3596|0.4424|0.2588|0.2680|0.2951|
|[0.25, 0.25, 0.5]|0.1653|0.3594|0.4421|0.2584|0.2680|0.2952|
|[0.33, 0.33, 0.33]|0.1580|0.3551|0.4386|0.2516|0.2614|0.2885|
|**Dynamic**|**0.1668**|**0.3605**|**0.4429**|**0.2591**|**0.2686**|**0.2953**|
Table Ⅳ. The model performance comparison of the parameter $\lambda$ on Foursquare dataset.
|Weight/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|[0.5, 0.25, 0.25]|0.3082|0.6037|0.6812|0.4419|0.4689|0.4942|
|[0.25, 0.5, 0.25]|0.3153|0.6059|0.6811|0.4469|0.4734|0.4983|
|[0.25, 0.25, 0.5]|0.3121|0.6055|0.6810|0.4452|0.4722|0.4974|
|[0.33, 0.33, 0.33]|0.3131|0.6053|0.6812|0.4453|0.4720|0.4970|
|**Dynamic**|**0.3155**|**0.6059**|**0.6812**|**0.4469**|**0.4753**|**0.5001**|
* The value of $\lambda$ does not significantly impact the performance.
* When $\lambda_2$ is largest, indicating an increased weight for our LTA loss, the performance is the best among the fixed settings, second only to the dynamic parameters.
* When the three $\lambda$ values are equal, the model's performance on the Gowalla dataset shows a significant decline. This indicates that the model requires different emphasis on different losses to achieve further improvement.
*If you believe our paper is qualified for NeurIPS, we would truly appreciate it if you could give us further support by increasing the score.*
---
Rebuttal Comment 1.1:
Title: Response to Author's Rebuttal
Comment: Thanks for your detailed response. Most of my concerns have been addressed. I will raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to respond to our rebuttal! We are happy to hear that your concerns have been addressed and that you find our paper worthy of acceptance! | Summary: The paper presents the Long-Tail Adjusted Next POI Prediction (LoTNext) framework to address the long-tail problem in next POI prediction. This problem refers to the uneven spatial and temporal distribution of POI visits, making it challenging for prediction models to predict less frequently visited POIs. LoTNext combines a Long-Tailed Graph Adjustment module to reduce the noise and impact of long-tailed nodes in the user-POI interaction graph and a Long-Tailed Loss Adjustment module to balance the loss between head and tail POIs. Additionally, an auxiliary prediction task is employed to enhance generalization and accuracy. The proposed method was evaluated on two real-world trajectory datasets, Gowalla and Foursquare, where it significantly outperformed existing methods.
Strengths: - LoTNext introduces a unique combination of graph adjustment and loss adjustment modules to tackle the long-tail problem, which is a significant contribution to the field of human mobility prediction.
- The framework is evaluated on two real-world datasets and compared with ten existing methods, demonstrating superior performance across multiple metrics.
- The paper provides a thorough explanation of the methodology, including the embedding generation, transformer encoder, spatial contextual attention layer, and the overall optimization process, making it reproducible and transparent.
Weaknesses: - The proposed model is complex and involves multiple components and adjustments, but it is not clear how computationally expensive it would be to make predictions in services and elsewhere.
- The model performed well on the dataset used, but it is unclear under what conditions the proposed method will perform well, such as visit intervals and frequency of visits.
Technical Quality: 3
Clarity: 3
Questions for Authors: How does the complexity of LoTNext affect its scalability and real-time performance in practical applications? Are there any simplifications or optimizations that can be applied without significantly compromising performance?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitation of their work, particularly the potential privacy risks associated with the extensive use of user trajectory data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer mZab:
**Weaknesses:**
> Q1. The proposed model is complex and involves multiple components and adjustments, but it is not clear how computationally expensive it would be to make predictions in services and elsewhere.
**A1:** Thank you for your insightful questions. Table 3 in the appendix lists the inference time for each deep learning model during the testing phase (running one training/testing instance, i.e., test time divided by batch size). For a more intuitive comparison of computational cost, we copied Table 3 as Table Ⅰ below. We selected three sequence-based and three graph-based baselines to demonstrate the computational efficiency of our approach. We ensured that all models were executed on the same RTX 3090 GPU.
1. Sequence-based baseline
* **DeepMove** is the fastest among the sequence-based methods, as it only considers calculating attention using historical trajectories.
* Compared to **DeepMove**, **LSTPM** further introduces a geographical relationship adjacency matrix to enrich the spatial context, making it slightly slower than **DeepMove**.
* **STAN** employs a dual-layer attention architecture, with one attention layer aggregating spatiotemporal correlations within user trajectories and the other selecting the most likely next POI based on weighted check-ins, resulting in the longest inference time for **STAN**.
2. Graph-based baseline
* Due to batch training, the graph-based methods generally run significantly faster than the sequence-based methods.
* **GETNext** introduces additional computational overhead due to the need for extra POI candidate probability reorganization based on transition attention during the final prediction stage.
* **SNPM** requires extra computation time due to the search for similar neighborhoods within the graph.
* **LoTNext** requires more time to run compared to **Graph-Flashback** because **LoTNext** includes graph denoising and an auxiliary temporal prediction task. Table 1 and 2 (from the original paper) demonstrate the effectiveness of our proposed modules, even at the cost of some computational time. Therefore, considering that **LoTNext** encompasses more processing steps and overall accuracy, the increase in inference time is still acceptable.
Table Ⅰ. Comparison of computational cost. Each method is benchmarked on the same NVIDIA GeForce RTX 3090 GPU.
|Method| Inference Time ($10^{-3}$ Seconds) |
|-|-|
|DeepMove (Sequence-based)|1.422|
|LSTPM (Sequence-based)|3.417|
|STAN (Sequence-based)|2887.809|
|GETNext (Graph-based)|3.824|
|Graph-Flashback (Graph-based)|0.0918|
|SNPM (Graph-based)|0.491|
|LoTNext (Graph-based)|0.257|
> Q2. The model performed well on the dataset used, but it is unclear under what conditions the proposed method will perform well, such as visit intervals and frequency of visits.
**A2:** Thanks for raising this potential concern. We further analyzed the visit intervals for all users in the two datasets. We first categorized the intervals into eight hourly ranges: [0, 1], (1, 3], (3, 6], (6, 12], (12, 24], (24, 48], (48, 72], and (72, ∞). The detailed results are shown in Figure III attached in the **"global" response** PDF file (please see above).
* Based on Figure III, approximately 30% of the check-in intervals in the Gowalla dataset are less than or equal to one hour, with half of the intervals being within 6 hours.
* In contrast, the Foursquare dataset has a more evenly distributed interval range, with half of the intervals within 12 hours, and around 20% of the intervals exceeding 72 hours.
* These **complex and uneven time distributions** highlight the importance of our time auxiliary prediction module.
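The hourly binning used in this analysis can be sketched with a small helper (the bin edges follow the ranges listed above; the function itself is an illustrative sketch we provide, not the paper's code):

```python
import bisect

# Upper edges (in hours) of the eight ranges:
# [0,1], (1,3], (3,6], (6,12], (12,24], (24,48], (48,72], (72, inf)
EDGES = [1, 3, 6, 12, 24, 48, 72]
LABELS = ["[0,1]", "(1,3]", "(3,6]", "(6,12]",
          "(12,24]", "(24,48]", "(48,72]", "(72,inf)"]

def interval_bucket(hours: float) -> str:
    """Map a check-in interval in hours to its range label.

    bisect_left returns the edge's own index when `hours` equals an
    edge, which makes each upper bound inclusive, matching (a, b]."""
    return LABELS[bisect.bisect_left(EDGES, hours)]

def bucket_shares(intervals):
    """Fraction of check-in intervals falling in each range."""
    counts = {label: 0 for label in LABELS}
    for h in intervals:
        counts[interval_bucket(h)] += 1
    n = len(intervals)
    return {label: c / n for label, c in counts.items()}
```

For example, `interval_bucket(0.5)` falls in `[0,1]` and `interval_bucket(100)` falls in `(72,inf)`.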
Regarding the frequency of visits, Figures 7(a) and 7(b) in the appendix show the relevant results.
* We can observe that the maximum occurrence frequency of POIs in the Foursquare dataset is ten times that of the Gowalla dataset, and the number of extreme values is also higher in the Foursquare dataset.
* Additionally, it is clear that the majority of POIs in both datasets have very low occurrence frequencies.
* These results intuitively reflect **the long-tail nature of both datasets**.
**Questions:**
> Q3. How does the complexity of LoTNext affect its scalability and real-time performance in practical applications? Are there any simplifications or optimizations that can be applied without significantly compromising performance?
**A3:** Thank you for the inspiring question. We acknowledge that using multiple deep learning methods can indeed affect the scalability and real-time performance of the model during actual deployment. However, our LoTNext model has adaptable simplification measures.
* We can use **LightGCN** instead of a traditional GCN to improve computation speed. The performance of **LightGCN** has been validated in many related works, ensuring that the performance of the graph learning module is not compromised.
* We can conduct **graph pre-training** to enhance training efficiency. A **well-pretrained graph representation** might eliminate the need for our graph adjustment module without affecting the overall model performance, thereby speeding up the training process.
* In addition, the loss adjustment module **does not involve overly complex calculations** and thus does not significantly increase computational overhead.
*If the above discussions address your concerns, we would greatly appreciate it if you could consider increasing the score.*
---
Rebuttal Comment 1.1:
Comment: I appreciate the feedback provided so far. Could you please share any additional thoughts or clarifications on the rebuttal? Your detailed review is important for the progress of this paper.
---
Rebuttal 2:
Comment: Dear Reviewer
Could you please read the rebuttal and engage in the discussion with the authors ASAP?
AC | Summary: This paper introduces the LoTNext framework, which is designed to improve the prediction of human mobility patterns, specifically addressing the challenge of long-tail distribution in POI visitations. The authors propose a novel approach that includes a Long-Tailed Graph Adjustment module and a Long-Tailed Loss Adjustment module, along with an auxiliary prediction task, to enhance the model's ability to predict less frequently visited POIs. The paper demonstrates the effectiveness of LoTNext through comprehensive experiments on two real-world datasets, showing significant improvements over existing state-of-the-art methods.
Strengths: I like the research gap proposed by this paper. This is a worthwhile issue to study.
Weaknesses: (1) The evaluation could be expanded to include a broader range of metrics to further validate the generalizability of the LoTNext framework.
(2) It's better to have more explainability related experiments.
(3) A more detailed literature review is needed (at least in the appendix) so that the novelty of the method could be better evaluated.
(4) The comparison methods used are somewhat outdated. Why didn't you use the latest methods, such as TPG (https://arxiv.org/abs/2304.04151) or LLM-Move (https://arxiv.org/pdf/2404.01855), for comparison?
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Why does cutting off the long tail in the user-POI graph for noise reduction, followed by using loss to reintroduce the long tail, theoretically improve long tail prediction?
(2) In Figure 4, how should we interpret "four least frequently occurring pois"? Does it refer to only the four POIs with the lowest frequencies? I guess many POIs only appear once.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer Et5T:
**Weaknesses:**
> Q1. The evaluation could be expanded to include a broader range of metrics to further validate the generalizability of the LoTNext framework.
**A1:** Thank you for your valuable suggestions. We have added Normalized Discounted Cumulative Gain (NDCG) as a new metric; the specific results are shown in the **"global" response** (please see above).
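Since next-POI prediction has a single ground-truth POI per step, NDCG@k reduces to a simple rank discount. A minimal sketch of the metric (our illustration, not the authors' evaluation code):

```python
import math

def ndcg_at_k(ranked_pois, true_poi, k):
    """NDCG@k with one relevant item: 1/log2(rank+1) if the true POI
    appears in the top-k predictions, else 0 (the ideal DCG is 1)."""
    top_k = ranked_pois[:k]
    if true_poi not in top_k:
        return 0.0
    rank = top_k.index(true_poi) + 1  # 1-based rank
    return 1.0 / math.log2(rank + 1)

# True POI ranked 2nd among the top-5 predictions
print(ndcg_at_k([7, 42, 3, 9, 1], 42, k=5))  # 1/log2(3), about 0.631
```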
> Q2. It's better to have more explainability related experiments.
**A2:** Thanks for your comments. To make our model prediction results more interpretable, we visualized the prediction results of LoTNext and Graph-Flashback on the Gowalla dataset, as shown in Figure II attached in the **"global" response** PDF file (please see above). In Figure II, the blue circles represent the user's historical trajectories, the green circles indicate the model's predicted top-20 POIs, and the red circle marks the actual next POI.
* From Figure II, we can observe that the predicted POIs are **more dispersed and cover a wider area** in LoTNext compared to Graph-Flashback. This indicates that LoTNext provides a broader prediction range within the user's historical trajectory, potentially capturing a wider variety of POIs.
* The LoTNext model's predictions **include the actual next POI** within its top-20 suggestions, demonstrating its higher accuracy. In contrast, the Graph-Flashback model's predictions are more concentrated in fewer locations, which suggests it might focus on more predictable, routine movements but could **miss out on less frequently visited POIs**.
* In summary, LoTNext not only provides a **better balance** between exploring different areas and accurately predicting the next POI but also successfully predicts the actual next POI, compared with Graph-Flashback.
> Q3. A more detailed literature review is needed (at least in the appendix) so that the novelty of the method could be better evaluated.
**A3:** Thanks for your helpful advice. We will include a more detailed literature review in the final version, covering topics such as LLM-based human mobility prediction and other long-tailed learning methods. In addition, we would like to conclude our novelty as below:
* Due to the **input** of the human mobility prediction task being **not individual samples** but **sequences**, many long-tailed learning methods from CV and recommendation field are **hard to deploy** in the human mobility prediction task.
* In the human mobility prediction community, the current focus is on improving model prediction accuracy from the perspective of graph optimization learning, completely **overlooking the long-tail feature of human check-in datasets**.
* **Our study is the first to propose a general framework for human mobility prediction under the long-tail problem**.
* **LLM-based human mobility prediction** is still in its early stages and holds significant potential. We believe that exploring and addressing the **long-tail problem based on the LLM framework** is a very interesting direction for future research.
> Q4. The comparison methods used are somewhat outdated. Why didn't you use the latest methods, such as TPG or LLM-Move, for comparison?
**A4:** Thank you for your detailed comments and valuable paper recommendations. **LLM-Move [6]** is based on large language models (LLMs) for human mobility prediction tasks. However, **LLM-Move [6]** requires category as additional semantic information, which is not available in our datasets.
Therefore, we added **AGRAN [3]**, suggested by reviewer 7ppu, and **TPG [5]** as our new baselines; please refer to the **"global" response** (please see above) for a detailed explanation of the experimental results.
**Questions:**
> Q5. Why does cutting off the long tail in the user-POI graph for noise reduction, followed by using loss to reintroduce the long tail, theoretically improve long tail prediction?
**A5:** Thank you for your questions. We do not reintroduce the long tail through the loss function; instead, we ensure that the model does not overly focus on head data.
* First, the long-tailed loss adjustment module increases the attention to tail data via logit score adjustment.
* Next, the sample weight adjustment module calculates the vector magnitude of all samples and compares them to the geometric mean magnitude to identify those samples that contribute significantly to the model.
* This approach balances the learning of features from both head and tail data.
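One common way to realize such a frequency-based logit score adjustment is logit-adjusted cross-entropy, where each logit is shifted by the log of the POI's empirical frequency. This is a generic sketch in the spirit of the description above; the paper's exact LTA loss may differ:

```python
import math

def logit_adjusted_nll(logits, target, poi_counts, tau=1.0):
    """Softmax cross-entropy on logits shifted by tau * log(class prior).

    Head POIs receive a larger (less negative) offset, so the model's
    raw logits must compensate, which raises the loss pressure on tail
    POIs relative to plain cross-entropy."""
    total = sum(poi_counts)
    adjusted = [z + tau * math.log(c / total)
                for z, c in zip(logits, poi_counts)]
    log_norm = math.log(sum(math.exp(a) for a in adjusted))
    return log_norm - adjusted[target]
```

With uniform POI counts the adjustment vanishes and this reduces to ordinary cross-entropy; with skewed counts, a tail-POI target incurs a strictly larger loss for the same logits.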
> Q6. In Figure 4, how should we interpret "four least frequently occurring pois"? Does it refer to only the four POIs with the lowest frequencies? I guess many POIs only appear once.
**A6:** Thank you for pointing out this confusing problem. We apologize for the unclear explanation here. Specifically, we divide the least frequently occurring POIs into four groups using the intervals [1,10), [10,20), [20,30), and [30,40). Then, we perform t-SNE visualization on these groups.
*If our responses could address your concerns, please kindly consider raising the score.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments. To ensure a thorough evaluation, I would appreciate it if you could review the rebuttal and provide any further feedback. Your input is greatly valued.
---
Rebuttal 2:
Comment: Dear reviewer,
Could you please read the rebuttal and engage in the discussion with the authors ASAP?
AC
---
Rebuttal Comment 2.1:
Comment: Thanks for your detailed response. Most of my concerns have been addressed. I will raise the score.
---
Reply to Comment 2.1.1:
Comment: Thank you for your detailed review and for acknowledging the responses. If you are convenient, you may update your score by editing your original review. I truly appreciate your time and consideration.
---
Rebuttal 3:
Title: Update Score Reminder
Comment: The discussion period will end within an hour, the comment shows you would **increase the score** (perhaps you just forgot to do it...), could you please **kindly do it**? | Summary: This study proposes the Long-Tail Adjusted Next Point-of-Interest Prediction (LoTNext) framework. By combining a Long-Tailed Graph Adjustment module and a Long-Tailed Loss Adjustment module, it reduces the impact of long-tailed nodes in the user-POI interaction graph and adjusts loss through logit score and sample weight adjustment strategies. Experimental results show that LoTNext outperforms several existing methods on two real-world datasets.
Strengths: 1. The structure and organization of this paper are well-designed, and the writing is clear and easy to comprehend.
2. This paper investigates the long-tail problem by proposing a general framework for next POI recommendation, filling the gap in addressing the long-tail issue in POI recommendation. This work is meaningful and valuable.
3. To enhance the readability of the paper, the authors provide detailed results analysis, parameter settings, and the motivation behind the design of each module in the appendix.
Weaknesses: 1. In the related work section, the authors review common methods for addressing the long-tail problem in recommendation systems. Since this paper focuses on addressing the long-tail problem, adding several baselines that tackle the long-tail issue in recommendation systems (e.g., [1]) would better demonstrate the effectiveness of the proposed method.
2. The novelty of this paper is not very strong. The long tail effect of check-in data, such as the POI frequency distributions, has been studied before.
3. Additional comparative analyses should be included to illustrate the shortcomings of baselines in handling the long-tail issue. For instance, comparing the proposed model's performance with all baselines (not just Graph-Flashback) on long-tail POIs would better demonstrate its effectiveness in addressing the long-tail problem.
4. The experimental results are not convincing enough, as the compared methods are not the SOTA method. More recent baselines should be compared (e.g., [2-4]).
[1] Meta graph learning for long-tail recommendation, SIGKDD, 2023.
[2] EEDN: Enhanced Encoder-Decoder Network with Local and Global Context Learning for POI Recommendation, SIGIR-23
[3] Adaptive Graph Representation Learning for Next POI Recommendation, SIGIR-23
[4] Spatio-Temporal Hypergraph Learning for Next POI Recommendation, SIGIR-23
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How do the authors balance the performance between head POIs and long-tail POIs? In other words, how do they enhance the performance of long-tail POIs without compromising the performance of head POIs? As shown in Figure 3(c), the proportion of predicted long-tail POIs is relatively high. Does this affect the prediction of head POIs?
2. In the experimental section, it would be beneficial to show the performance of the proposed method and the baselines on both head POIs and long tail POIs to enhance the persuasiveness of the conclusions.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors have already pointed out a limitation of this method: LoTNext relies on extensive user trajectory data, which, if deployed by certain institutions or companies, may pose a potential risk of privacy breaches, potentially leading to negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer 7ppu:
**Weaknesses:**
> Q1. In the related work section, the authors review common methods for addressing the long-tail problem in recommendation systems. Since this paper focuses on addressing the long-tail problem, adding several baselines that tackle the long-tail issue in recommendation systems (e.g., [1]) would better demonstrate the effectiveness of the proposed method.
**A1:** Thank you for your suggestions and valuable references. Compared with MGL [1], our task has several differences:
1. MGL [1] primarily addresses the click problem under long-tail problem, i.e., whether or not to interact with an item, without considering the temporal aspect. In contrast, our task in human mobility prediction inherently involves spatial-temporal features, such as check-in times and POI geographic locations.
2. The input of MGL is the user's feedback on item ratings, clicks, etc., without the temporal features. However, our model input is a trajectory sequence with spatial-temporal features, hence MGL is difficult to deploy under our datasets.
Due to time and computation resource constraints, we add the AGRAN [3] model as our new baseline. Additionally, we also include the TPG [5] as a new baseline suggested by reviewer Et5T. The specific results are shown in **"global" response** (please see above).
> Q2. The novelty of this paper is not very strong. The long tail effect of check-in data, such as the POI frequency distributions, has been studied before.
**A2:** Thanks for your insightful comments. Unlike previous work, our paper has several novel points:
1. We are the **first work** to propose a solution specifically for the **long-tail problem in the human mobility prediction field**.
2. Our approach includes a graph adjustment module, a loss adjustment module, and a time prediction auxiliary module. Through these novel modules, we significantly enhance the model's performance in predicting long-tail data (see Table 1 and Figure 3).
3. Previous studies have only treated check-in frequency as a data feature of POIs, which is fundamentally different from our approach.
> Q3. Additional comparative analyses should be included to illustrate the shortcomings of baselines in handling the long-tail issue. For instance, comparing the proposed model's performance with all baselines (not just Graph-Flashback) on long-tail POIs would better demonstrate its effectiveness in addressing the long-tail problem.
**A3:** Thanks for your detailed comments. Due to time constraints, we apologize for not being able to compare the performance of all baselines on tail data. Given our inclusion of the graph adjustment module, we focused on graph-based baselines. As shown in Table 1, Graph-Flashback and SNPM are the best-performing graph-based baselines. Following your suggestion, we included SNPM for comparison. The detailed results are shown in Figure I attached in the **"global" response** PDF file (please see above).
* From Figure I, we can find on both the Gowalla and Foursquare datasets, LoTNext outperforms other baselines in predicting tail data, particularly evident in the MRR metric.
* Moreover, we do not compromise the prediction of head data, as LoTNext also shows superior performance compared to the other two baselines in head data prediction.
> Q4. The experimental results are not convincing enough, as the compared methods are not the SOTA method. More recent baselines should be compared (e.g., [2-4]).
**A4:** Thank you for your suggestions. Please refer to the answer of Q1 in detail.
**Questions:**
> Q5. How do the authors balance the performance between head POIs and long-tail POIs? In other words, how do they enhance the performance of long-tail POIs without compromising the performance of head POIs? As shown in Figure 3(c), the proportion of predicted long-tail POIs is relatively high. Does this affect the prediction of head POIs?
**A5:** Thank you for your valuable questions. In our loss adjustment module, the logit score adjustment module prevents the model from overly focusing on head data, while the sample weight adjustment module balances head and tail data.
* First, the logit score adjustment module enhances the weights of tail samples by POI visit frequency.
* Next, the sample weight adjustment module calculates the vector magnitude of each sample and, by comparing it to a baseline magnitude, penalizes samples that are less beneficial for the model's predictions.
* In summary, regardless of whether the data is head or tail, if the model determines that a sample contributes significantly overall, we increase its weight in the loss to ensure the model treats head and tail samples equitably.
* In addition, as you mentioned, according to Figure 3(c), LoTNext predicts a higher proportion of tail samples. However, this does not affect the model's overall performance. Figures 3(a) and (b) and Figures Ⅰ(a), (b), (c), and (d) in the **"global" response** PDF file (see above) demonstrate that our LoTNext outperforms other baselines in predicting both head and tail data.
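The magnitude-based weighting described above can be sketched as follows (illustrative only; the exponent `alpha` and the exact weighting rule are our assumptions, not the paper's formulation):

```python
import math

def magnitude_weights(sample_vectors, alpha=1.0):
    """Weight each sample by its vector magnitude relative to the
    geometric mean of all magnitudes (the baseline): samples whose
    magnitude exceeds the baseline are up-weighted, those below it
    are down-weighted, regardless of head/tail membership."""
    mags = [math.sqrt(sum(x * x for x in v)) for v in sample_vectors]
    baseline = math.prod(mags) ** (1.0 / len(mags))  # geometric mean
    return [(m / baseline) ** alpha for m in mags]
```

By construction the weights multiply to 1 when `alpha=1`, so the adjustment rebalances attention across samples rather than rescaling the overall loss.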
> Q6. In the experimental section, it would be beneficial to show the performance of the proposed method and the baselines on both head POIs and long tail POIs to enhance the persuasiveness of the conclusions.
**A6:** Thank you for your suggestions. Figure 3(a) and (b) include the relevant experiments. Additionally, in response to your comments on Q3 and Q5, we have incorporated other baselines for comparison. Please refer to our responses to Q3 and Q5 for detailed information.
*If you are satisfied with our responses, we would be grateful if you could kindly check the positive remarks of other reviewers and consider raising the score.*
---
Rebuttal Comment 1.1:
Comment: Thank you for your initial feedback. If you have a moment, I would greatly appreciate any further thoughts on the points addressed in my rebuttal. Your insights are crucial for improving the quality of the work.
---
Rebuttal Comment 1.2:
Title: Rebuttal has been checked.
Comment: Thanks for the authors' rebuttal. I have checked the responses and other reviewers' reviews. Some concerns have been addressed. Hence, I would increase my scores.
---
Reply to Comment 1.2.1:
Comment: Thank you for taking the time to review the rebuttal and the other reviewers' comments. I appreciate your willingness to update your evaluation based on our provided rebuttal. | Rebuttal 1:
Rebuttal: # Response to All Reviewers:
We thank the reviewers for the very valuable, detailed, and constructive feedback on our work. We especially appreciate the positive comments:
* work is meaningful and valuable, worthwhile issue to study (Reviewer #7ppu & #Et5T)
* filling the gap in addressing the long-tail issue (Reviewer #7ppu & #Et5T)
* well-written, easy to comprehend and thorough explanation of the methodology (Reviewer #7ppu & #mZab & #JhGF)
* motivation explanation behind the design of each module (Reviewer #7ppu & #JhGF)
* detailed results analysis and superior performance across multiple metrics (Reviewer #7ppu & #mZab)
* significant contribution to the field of human mobility prediction (Reviewer #mZab)
* the reproducibility of this paper is good (Reviewer #mZab & #JhGF)
Due to the word limit, we consolidate similar questions and answer them together in response to your valuable comments. Thank you!
> Q1 Need to add more baselines, metrics and datasets. (Reviewer #7ppu & #Et5T)
**A1:**
Based on the feedback from reviewers #7ppu and #Et5T, we have summarized six highly valuable and relevant papers, each with different focuses:
1. **MGL [1]** focuses on addressing the long-tail problem in the recommendation domain. The input of MGL is the **user's feedback** on item ratings, clicks, etc., without temporal features. However, our model input is a **trajectory sequence** with spatial-temporal features, hence MGL is difficult to deploy on our datasets.
2. **EEDN [2]** proposes an enhanced network to tackle implicit feedback and cold-start issues in POI recommendation by leveraging latent interactions between users and POIs. However, its **input trajectories lack specific temporal factors**, making this work more akin to the sequential recommendation and slightly different from our task definition.
3. **AGRAN [3]** proposes an adaptive graph representation method, which explores the utilization of graph structure learning to replace static graphs.
4. **STHGCN [4]** proposes to capture higher-order information, including user trajectories and the collaborative relations among trajectories, by hypergraph. However, the construction of the hypergraph requires **category semantic information**, which is not available in our dataset.
5. **TPG [5]** proposes a framework that integrates temporal prompts and geography-aware strategies, overcoming encoding limitations of traditional methods.
6. **LLM-Move [6]** is based on large language models (LLMs) for human mobility prediction tasks. However, LLM-Move requires additional **category semantic information** as input, which is not applicable in our dataset.
In summary, we add AGRAN [3] and TPG [5] as new baselines to further validate the performance of our model. Additionally, we add Normalized Discounted Cumulative Gain (NDCG) as a new metric, as suggested by reviewer #Et5T (since the results of NDCG@1 and Acc@1 are the same, we omit NDCG@1 here). For better comparison, we excerpt only the best-performing baselines from the original paper. The detailed results are as follows:
Table Ⅰ. The performance comparison on Gowalla dataset.
|Method/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|Graph-Flashback|0.1495|0.3399|0.4242|0.2401|0.2497|0.2766|
|SNPM|0.1593|0.3514|0.4346|0.2505|0.2600|0.2872|
|AGRAN (new)|0.1005|0.2456|0.3154|0.1731|0.1764|0.1990|
|TPG (new)|0.1400|0.3071|0.3611|0.1948|0.2059|0.2374|
|**LoTNext**|**0.1668**|**0.3605**|**0.4429**|**0.2591**|**0.2686**|**0.2953**|
Table Ⅱ. The performance comparison on Foursquare dataset.
|Method/Metric|Acc@1|Acc@5|Acc@10|MRR|NDCG@5|NDCG@10|
|-|-|-|-|-|-|-|
|Graph-Flashback|0.2786|0.5733|0.6501|0.4109|0.4411|0.4661|
|SNPM|0.2899|0.5967|0.6763|0.4278|0.4480|0.4757|
|AGRAN (new)|0.1575|0.3736|0.4676|0.2600|0.2703|0.3008|
|TPG (new)|0.2321|0.4631|0.5493|0.3775|0.3891|0.4106|
|**LoTNext**|**0.3155**|**0.6059**|**0.6812**|**0.4469**|**0.4753**|**0.5001**|
* **AGRAN [3]** performs worse than the graph-based baselines such as **Graph-Flashback** and **SNPM**. However, the datasets deployed in **AGRAN [3]** are Foursquare-Singapore and Gowalla-Nevada, which are **city-level**, making adaptive graph learning more suitable there.
* Our datasets are **global-level**, as shown in the heatmap in Figure 6 in the appendix, and are significantly larger than city-level datasets. Therefore, it is challenging to adaptively build and learn embeddings from graphs on global-level datasets without relying on prior knowledge. These factors contribute to the lower performance of **AGRAN [3]** on our global human check-in datasets.
* **TPG [5]** performs better than **AGRAN [3]** and is very close to **STAN**. However, it requires knowing the exact time of the next location visit during prediction. **LoTNext** does not need future timestamps as prompts and performs better.
* **TPG [5]** explicitly models the geography of longitude and latitude, which poses a potential risk of user privacy leakage. **LoTNext** calculates spatial contextual attention based on geographic distance differences, which has a lower risk of user privacy leakage.
**References**
[1] Wei, C., et al. Meta graph learning for long-tail recommendation. KDD 2023.
[2] Wang, X., et al. Eedn: Enhanced encoder-decoder network with local and global context learning for poi recommendation. SIGIR 2023.
[3] Wang, Z., et al. Adaptive Graph Representation Learning for Next POI Recommendation. SIGIR 2023.
[4] Yan, X., et al. Spatio-temporal hypergraph learning for next POI recommendation. SIGIR 2023.
[5] Luo, Y., et al. Timestamps as Prompts for Geography-Aware Location Recommendation. CIKM 2023.
[6] Feng, S., et al. Where to move next: Zero-shot generalization of llms for next poi recommendation. IEEE CAI 2024.
Pdf: /pdf/9fde4c267f57931510a9a694c260d01f696d8989.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Iterative Methods via Locally Evolving Set Process | Accept (poster) | Summary: This paper studies local algorithms for graph clustering, an important problem in the field of graph data analysis. In particular, it considers the task of computing Personalized PageRank (PPR) vectors for a given graph. In this problem the algorithm is given a graph in the form of its adjacency and degree matrices, and the goal is to approximate the Personalized PageRank vector for a given starting vertex and dampening factor $\alpha$ up to precision $\epsilon$ without accessing the entire graph. The classical algorithm of Andersen, Chung and Lang runs in time $O(1/\alpha \epsilon)$, which is independent of the graph size. The central question posed by subsequent works is whether the dependence on $\alpha$ can be improved to $1/\sqrt{\alpha}$. The main contribution of the paper is a new algorithmic framework based on the locally evolving set process. Under this framework the authors are able to implement existing algorithms such as Andersen et al.'s APPR algorithm, as well as a localized implementation of standard gradient descent. They also develop localized versions of the Chebyshev and heavy-ball methods that do achieve the $1/\sqrt{\alpha}$ dependence for some fixed constant value of $\epsilon$. Finally, they show that on several large-scale graphs, their new localized Chebyshev and heavy-ball methods outperform APPR and related methods empirically.
Strengths: The main strengths of this paper are to develop a new algorithmic framework that can not only encompass existing algorithms but lead to the development of better ones that overcome previously known limitations for designing local graph clustering algorithms. They also back their theoretical analysis with the practical implementation of their method which is also shown to be superior to previous algorithms.
Weaknesses: One weakness is that the quadratic improvement in the dependence on the parameter $\alpha$, obtained by the local implementations of the Chebyshev and heavy-ball methods, holds only for a particular value of $\epsilon$ and not for all.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the core reason for not obtaining a convergence result for accelerated methods for all $\epsilon>0$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Authors have addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper carefully. We appreciate the positive perspective on our work. Your concern on the range of $\epsilon$ can be effectively addressed as follows:
**Q1.** What is the core reason for not obtaining convergence results for accelerated methods for all $\epsilon > 0$?
**A:** To clarify, our results cover all cases of $0 \leq \epsilon \leq 1/d_s$, including Theorems 3.3, 3.5, and 4.2. When $\epsilon > 1/d_s$, the algorithm terminates before taking a single step, i.e., $T = 0$. In other words, we assume that the precision parameter satisfies $\epsilon \leq 1/d_s$ in our theorems because this ensures all local solvers run at least one local iteration, so that $T \geq 1$. The theorems also hold trivially when $\epsilon > 1/d_s$, since no local iteration is performed. We thank the reviewer for pointing out that this was unclear in the manuscript and will revise it.
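To make the role of this threshold concrete, here is a rough sketch of a classic ACL-style (lazy-walk) APPR push loop; this is one common variant for context, not necessarily the exact local solver analyzed in the paper. A node $u$ is pushed only while $r(u) \geq \epsilon \cdot \deg(u)$, and since $r$ starts as the indicator of the seed $s$, the very first push happens iff $\epsilon \leq 1/d_s$:

```python
from collections import defaultdict, deque

def appr_push(adj, s, alpha, eps):
    """Lazy-walk APPR push (Andersen-Chung-Lang style sketch).
    adj: dict node -> list of neighbors.
    Maintains estimate p and residual r; every push moves alpha * r[u]
    of mass to p[u], so total mass in p + r is conserved."""
    p = defaultdict(float)
    r = defaultdict(float)
    r[s] = 1.0
    # First push happens only if r[s] = 1 >= eps * deg(s), i.e. eps <= 1/d_s.
    queue = deque([s]) if r[s] >= eps * len(adj[s]) else deque()
    while queue:
        u = queue.popleft()
        ru, du = r[u], len(adj[u])
        if ru < eps * du:            # stale queue entry
            continue
        p[u] += alpha * ru
        r[u] = (1 - alpha) * ru / 2  # lazy self-loop keeps half
        if r[u] >= eps * du:
            queue.append(u)
        share = (1 - alpha) * ru / (2 * du)
        for v in adj[u]:
            old = r[v]
            r[v] += share
            if old < eps * len(adj[v]) <= r[v]:
                queue.append(v)
    return p, r
```

On termination every residual satisfies $r(u) < \epsilon \cdot \deg(u)$, and with $\epsilon > 1/d_s$ the loop body never executes, matching the $T = 0$ case above.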
We are happy to have further discussions if needed!
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thanks to the authors for answering the question. My score remains the same. | Summary: This paper uses the evolving set process to give a local PageRank algorithm whose dependence on $\alpha$ (the reset probability) is $1/\sqrt{\alpha}$.
It proposes accelerated local iterative methods with coefficients given by Chebyshev iteration. The convergence of this algorithm in both the graph-theoretic and general sparse linear system settings is analyzed in detail. The relations between this method and other local iterative algorithms are also discussed in detail.
The method was implemented and tested on a range of graphs, mostly coming from social networks. This includes two large ones with edges in the billions. On moderate ranges of $\alpha$ (reset probability), the experiments show significant speedups (factor of about 3) and convergence improvements (factor of 10) on most graphs.
Strengths: Local algorithms are widely used in graph analytics. The question studied is natural and has been posed before.
The method is theoretically well-founded, and has significant technical depth.
The experiments are thorough and well documented, and clearly demonstrate the advantages of this method in multiple parameter regimes.
Weaknesses: The gains only kick in at a relatively large number of steps: it's not clear to me that these are the parameter regimes in which local algorithms actually get used.
Ideally for the empirical works I'd also like to see comparisons of downstream tasks and effects on overall accuracies (e.g. F-1 score), but the paper itself has already covered a lot of ground.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is the dependence on $\epsilon$ optimal? That is, have methods with $\sqrt{\epsilon}$ (or even $\log(1/\epsilon)$) dependences been ruled out?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: yes, limitations have been addressed, and are entirely theoretical w.r.t. some graph parameter regimes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper carefully. Your positive perspective on our work is inspiring. We also believe our work is novel and that some new, interesting problems are worth exploring. Your main concerns and our responses are as follows:
---
**Q1.** The gains only kick in at a relatively large number of steps: it's unclear that these are the parameter regimes in which local algorithms get used.
**A:** Thank you for this excellent point. We assume you mean what range of $\epsilon$ is useful in practice. Let's take typical examples to see when $\epsilon$ is small enough to be useful in practice. For the local clustering algorithm in Fountoulakis's paper (see [1]), $\epsilon = 10^{-5}$ was used on graphs whose number of nodes is in the range $[2\times 10^6, 3\times 10^6]$. This is around $\epsilon \approx 1/n$ (roughly corresponding to the largest speedup shown in Figure 4, where the dashed vertical lines are $\epsilon = 1/n$). For learning GNN models, PPRs are used to train the PPRGo model (see Bojchevski's work in [2]); the parameter used is $\epsilon = 10^{-4}$ on graphs whose number of nodes is in the range $[1.87\times 10^4, 1.05 \times 10^7]$. This means $\epsilon \leq 1/n$ in all graphs. We found similar parameter settings in graph embedding and online graph node classification. The effective range of $\epsilon$ is $\epsilon \leq 10^{-4} / n$, where the speedup is significant, as presented in Fig. 4. This clearly indicates that our design demonstrates significant speedup, especially around $\epsilon = 1/n$.
If you mean the number of local iterations needed, then it is true that the number of local iterations will be slightly larger than that of standard solvers. The key point is the combination of the number of iterations and the local volumes: taken together, the runtime of local solvers is much smaller than that of standard solvers. Whether there is an optimal tradeoff between the number of local iterations and the local volumes is an interesting problem.
---
**Q2.** Ideally, for the empirical works, I'd also like to see comparisons of downstream tasks and effects on overall accuracies (e.g., F-1 score), but the paper itself has already covered a lot of ground.
**A:** Yes, downstream tasks such as training good GNN models with decent F1 scores are typical applications to which our proposed methods could apply. Any promising results there would make our paper even stronger. However, due to space limits, we will consider these tasks in future work, and we look forward to such downstream applications.
---
**Q3.** Is the dependence on $\epsilon$ or $\alpha$ optimal?
**A:** Let us recall that the runtime bound is optimal for the standard Chebyshev method. Specifically, the *first-order optimal methods (such as Chebyshev) for linear systems* have the runtime bound $\mathcal O (m/\sqrt{\alpha} \cdot \log 1/\epsilon)$. This runtime bound is optimal for finding a solution of a sparse linear system defined on a chain graph (see Theorem 3.15 of Bubeck's *Convex Optimization: Algorithms and Complexity* for more details). We will discuss both parameters $\alpha$ and $\epsilon$ in the revision.
---
**References**
- [1] Fountoulakis, K., Roosta-Khorasani, F., Shun, J., Cheng, X., \& Mahoney, M. W. (2019). Variational perspective on local graph clustering. Mathematical Programming, 174, 553-573.
- [2] Bojchevski, Aleksandar, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. "Scaling graph neural networks with approximate PageRank." In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining, pp. 2464-2473. 2020.
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thank you for the detailed comments / clarifications.
It's good to know the $\epsilon$ regime being considered here. However, my worry is that in such parameter regimes, non-local methods with some dependence on $n$ might start winning. This is, of course, beyond the scope of this work, or even this discussion. So I'll keep my review/score unchanged. | Summary: This paper considers approximate Personalized PageRank. Classical results for this problem have a runtime that is linear in $1/\alpha\epsilon$, where $\alpha$ is the damping factor and $\epsilon$ is the error parameter. The authors show that APPR is simply a local variant of Gauss-Seidel Successive Overrelaxation (GSSOR). Using this connection, the authors derive new runtime bounds for APPR and also propose a new algorithm based on gradient descent. The execution times for both of these are, in the worst case, identical to the previous bounds. However, they are more sensitive to the state of execution of the algorithms (they depend on the active nodes) and seem to mirror the actual performance of these algorithms. Also, under certain assumptions, they improve the worst-case execution time.
Strengths: The paper addresses an important problem, provides deeper insights into an existing algorithm, provides a new algorithm, and also reanalyzes the algorithm in a more fine-grained way. All of this is done via the connection to GSSOR, which seems to be new.
I find the result quite interesting. However, I am not very familiar with recent work on personalized page rank. For this reason, I recommend accepting but with a low confidence.
Weaknesses: NA
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you explain how weak/strong are the assumptions that you make to achieve the improvement for the local Chebyshev method? I couldn't quite gauge the usefulness of this result.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time and effort to review our paper carefully. Your positive perspective on our work is inspiring. The main concern on the assumption we made is addressed as follows:
**Q1.** Comments on the assumptions we made in analyzing the local Chebyshev method.
**A:** (Ignore this answer if you read the general response section). Let us recall that our Theorem C.6 (Standard CH) provides a runtime bound for the standard Chebyshev method. The total number of operations is bounded by
$$
\mathcal{T}_{\mathrm{CH}} \leq m \cdot \left\lceil\frac{1+\sqrt{\alpha}}{2 \sqrt{\alpha}} \ln \frac{2}{\epsilon}\right\rceil,
$$
where $m$ is the number of edges. Our key lemma, Lemma 4.1 (Line 246), captures the evolving process of the local Chebyshev method. By considering the geometric mean reduction on $\beta_t$ and using Lemma 4.1, LocCH has the following runtime bound:
$$
\mathcal{T} _{\text{LocCH}} \leq \overline{\operatorname{vol}}({\mathcal S } _T) \cdot \left\lceil\frac{1+\sqrt{\alpha}}{(2-c) \sqrt{\alpha}} \ln \frac{2 y_T}{\epsilon} \right\rceil
$$
where $\overline{\operatorname{vol}} ({\mathcal S } _T)$ is the averaged volume and $y_T$ is defined by $y _ {t+1}-2 y _t+y _{t-1} / ( (1+\beta _{t-1}) (1+\beta _t ) )=0$. Ideally, when $\epsilon \rightarrow 0$, it roughly indicates the local solver is getting closer to the global one. This leads to $y_T \rightarrow 1$, $\overline{\operatorname{vol}} ({\mathcal S } _T) \rightarrow m$, and $c \rightarrow 0$, where $c$ is the parameter defined via the geometric mean of $\beta_t$. However, verifying how strong or weak this assumption is can be difficult. The main reason is that the convergence of the second-order difference equation is itself complicated, which makes the analysis harder. We are investigating and exploring alternative directions now. A more reasonable approach might be to consider a typical example on a chain graph, where one can obtain a much simpler formulation for $y_t$. We will develop a refined analysis for specific graph types in future work.
We are happy to have further discussions if needed. | null | null | Rebuttal 1:
Rebuttal: **General Responses**
We thank all reviewers for their time and effort in carefully reading our paper. We are very happy that you like our work. Some general concerns are worth discussing as follows:
---
**Q1.** Comments on the assumption of the local Chebyshev (LocCH) method.
**A:** Let us recall that our Theorem C.6 (Standard CH) provides a runtime bound for the standard Chebyshev method. The total number of operations is bounded by
$$
\mathcal{T}_{\mathrm{CH}} \leq m \cdot \left\lceil\frac{1+\sqrt{\alpha}}{2 \sqrt{\alpha}} \ln \frac{2}{\epsilon}\right\rceil,
$$
where $m$ is the number of edges. Our key lemma, Lemma 4.1 (Line 246), captures the evolving process of the local Chebyshev method. By considering the geometric mean reduction on $\beta_t$ and using Lemma 4.1, LocCH has the following runtime bound:
$$
\mathcal{T} _{\text{LocCH}} \leq \overline{\operatorname{vol}}({\mathcal S } _T) \cdot \left\lceil\frac{1+\sqrt{\alpha}}{(2-c) \sqrt{\alpha}} \ln \frac{2 y_T}{\epsilon} \right\rceil
$$
where $\overline{\operatorname{vol}} ({\mathcal S } _T)$ is the averaged volume and $y_T$ is defined by $y _ {t+1}-2 y _t+y _{t-1} / ( (1+\beta _{t-1}) (1+\beta _t ) )=0$. Ideally, when $\epsilon \rightarrow 0$, it roughly indicates the local solver is getting closer to the global one. This leads to $y_T \rightarrow 1$, $\overline{\operatorname{vol}} ({\mathcal S } _T) \rightarrow m$, and $c \rightarrow 0$, where $c$ is the parameter defined via the geometric mean of $\beta_t$. However, verifying how strong or weak this assumption is can be difficult. The main reason is that the convergence of the second-order difference equation is itself complicated, which makes the analysis harder. We are investigating and exploring alternative directions now. A more reasonable approach might be to consider a typical example on a chain graph, where one can obtain a much simpler formulation for $y_t$. We will develop a refined analysis for specific graph types in future work.
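For intuition, the iteration-count factor $\lceil\frac{1+\sqrt{\alpha}}{2\sqrt{\alpha}}\ln\frac{2}{\epsilon}\rceil$ in the standard-CH bound above can be evaluated numerically, making the $1/\sqrt{\alpha}$ scaling visible (an illustrative sketch only, not part of our analysis):

```python
import math

def cheby_iters(alpha, eps):
    """Iteration-count factor in the standard Chebyshev bound
    T_CH <= m * ceil((1 + sqrt(alpha)) / (2 * sqrt(alpha)) * ln(2 / eps))."""
    sa = math.sqrt(alpha)
    return math.ceil((1 + sa) / (2 * sa) * math.log(2 / eps))
```

Shrinking $\alpha$ by a factor of 100 increases the iteration count by roughly a factor of 10, reflecting the $1/\sqrt{\alpha}$ dependence.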
---
**Q2.** Comments on the parameter regimes in which local algorithms get used.
**A:** The range of $\epsilon$ that is useful in real-world applications depends on the specific task. However, as far as we know, the speedup over standard solvers applies to a wide range of downstream tasks. Let's look at typical examples to see when $\epsilon$ is small enough to be useful in practice. For the local clustering algorithm in Fountoulakis's paper (see [1]), $\epsilon = 10^{-5}$ was used on graphs whose number of nodes is in the range $[2\times 10^6, 3\times 10^6]$. This is around $\epsilon \approx 1/n$ (roughly corresponding to the largest speedup shown in Figure 4, where the dashed vertical lines are $\epsilon = 1/n$). For learning GNN models, PPRs are used to train the PPRGo model (see Bojchevski's work in [2]); the parameter used is $\epsilon = 10^{-4}$ on graphs whose number of nodes is in the range $[1.87\times 10^4, 1.05 \times 10^7]$. This means $\epsilon \leq 1/n$ in all graphs. We found similar parameter settings in graph embedding and online graph node classification.
Overall, the effective range of $\epsilon$ is $\epsilon \leq 10^{-4} / n$, where the speedup is significant, as presented in Fig. 4. This should cover most interesting downstream tasks where PPR is a crucial tool.
---
**Q3.** Is the dependence on $\epsilon$ optimal?
**A:** Let us recall that the runtime bound is optimal for the standard Chebyshev method. Specifically, the *first-order optimal methods (such as Chebyshev) for linear systems* have the runtime bound $\mathcal{O}(m/\sqrt{\alpha} \cdot \log 1/\epsilon)$. This runtime bound is optimal for finding a solution of a sparse linear system defined on a chain graph (see Theorem 3.15 of Bubeck's work in [3] for more details).
We believe this bound is also optimal among the **local first-order methods** for local solvers. However, it remains interesting to see whether a matching lower bound can be identified; one may find that it matches the bound conjectured in Line 295 of our manuscript.
---
**References**
- [1] Fountoulakis, K., Roosta-Khorasani, F., Shun, J., Cheng, X., & Mahoney, M. W. (2019). Variational perspective on local graph clustering. Mathematical Programming, 174, 553-573.
- [2] Bojchevski, Aleksandar, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. "Scaling graph neural networks with approximate pagerank." In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining, pp. 2464-2473. 2020.
- [3] Sébastien Bubeck. Convex Optimization: Algorithms and Complexity. https://arxiv.org/pdf/1405.4980. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Theoretical Perspective for Speculative Decoding Algorithm | Accept (poster) | Summary: This paper presents a theoretical study on speculative decoding, an efficient inference method for large autoregressive models. It highlights practical implications, proposing a Pareto-optimal solution for the rejection-distribution bias tradeoff.
Strengths: - The authors provide a robust theoretical foundation, illustrating the practical implications of speculative decoding, such as the improvement of rejection accuracy, which cannot be achieved by simply changing the acceptance probability.
- The study explores the trade-offs between inference cost and quality degradation, supported by an optimization model. This analysis is valuable for practical applications.
Weaknesses: - The main figure does not clearly communicate the core concept of speculative decoding. It might lead readers to believe that speculative decoding primarily addresses hallucination, which is not its main advantage.
- The experimental results are not distinctly highlighted, and the authors do not explain how these results support their theoretical analysis. While the theoretical contributions are significant, the paper would benefit from more extensive empirical validation.
Technical Quality: 3
Clarity: 2
Questions for Authors: NA
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The results may not guarantee optimality in practical situations because real-world circumstances are more complex and varied than those considered in the theoretical analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the positive judgement and the great questions. Here are the detailed responses.
***Q1:*** The main figure does not clearly communicate the core concept of speculative decoding. It might lead readers to believe that speculative decoding primarily addresses hallucination, which is not its main advantage.
**Response:** Thank you very much; this is a very good suggestion! The purpose of Figure 1 is to show that, instead of decoding directly from the large model, speculative decoding applies the small model to decode and uses the large model as a verifier. We agree that without careful explanation the current figure might be confusing. We believe this is an easy fix and will modify it in the final revision.
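The draft-and-verify idea in Figure 1 follows the standard accept/resample rule of speculative sampling, which can be sketched for a single token as follows (a minimal, well-known illustration of the unbiased verification step, not our experimental code):

```python
import numpy as np

def speculative_step(p, q, rng):
    """One verification step: the small model proposes x ~ q, and the
    large model (distribution p) accepts it with prob min(1, p[x]/q[x]);
    on rejection, resample from the residual norm(max(p - q, 0)).
    This rule leaves the output exactly distributed as p (unbiasedness)."""
    x = rng.choice(len(q), p=q)
    if rng.random() < min(1.0, p[x] / q[x]):
        return x                      # draft token accepted by the verifier
    residual = np.maximum(p - q, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p), p=residual))
```

The closer the draft distribution $q$ is to the target $p$, the higher the acceptance rate, which is what makes speculative decoding fast without changing the output distribution.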
***Q2:*** The experimental results are not distinctly highlighted, and the authors do not explain how these results support their theoretical analysis. While the theoretical contributions are significant, the paper would benefit from more extensive empirical validation.
**Response:** We thank the reviewer for judging our theoretical contribution as significant. Regarding how the experimental results support our theory, our experiment in Section 4.2 shows that, for different levels of $\epsilon$ (over-acceptance), the optimal solution derived from Theorem 4 (```Decoding-OPT```) consistently outperforms other suboptimal solutions (```Decoding-UNO```) under the WinRate metric measured by ```RM-Mistral-7B``` and GPT-4. We will highlight this in the final version. Besides, regarding more extensive empirical validation, we conducted two extra experiments as explicitly asked by reviewer Ceup and reviewer GGsr. Here we summarize them for you.
1. In addition to the experiment in Section 4.2, where we validate the optimality of Theorem 4, we also compare the optimal algorithm (```Decoding-OPT```) with vanilla speculative decoding. Since standard SD preserves the quality of the large model, the WinRate of our biased algorithm ```Decoding-OPT``` decreases as the over-acceptance parameter $\epsilon$ grows. On the other hand, the average runtime over 200 prompts for SD is much higher than that of ```Decoding-OPT```, as shown in the third line of the table. This empirically validates that ```Decoding-OPT``` provides a tradeoff between quality and efficiency against SD. For more detail please see our response to reviewer Ceup ***Q3.***
| WinRate | $\epsilon=0.1$ | $\epsilon=0.4$ | $\epsilon=0.8$ |
| --- | --- | --- | --- |
| Decoding-OPT | 48% | 42% | 35.5% |
| SD | 52% | 58% | 64.5% |
| Inference Acceleration rate: Time(SD)/ Time(Decoding-OPT) | 1.54 | 2.97 | 6.32 |
2. Empirically, we also compare Batch SD with vanilla SD and compute the WinRate (measured by GPT-4) over 200 prompts from the Alpaca-Farm-Eval dataset with 500 responses per prompt, as well as the inference acceleration. The table shows that the output quality of the batch algorithm is nearly the same as that of vanilla SD, but the decoding is faster. However, when the batch size is very large (e.g., 10), the decoding speed can slow down (1.392 < 1.416), which might be due to batch processing overhead, suggesting that using a large batch is not ideal in real-world scenarios. For more detail please see our response to reviewer GGsr ***Q1.***
| WinRate | Batch = 2 | Batch = 4 | Batch = 8 | Batch = 10 |
| --- | --- | --- | --- | --- |
| Batch SD | 48.5% | 49.5% | 49% | 49.5% |
| Speculative Decoding | 51.5% | 50.5% | 51% | 50.5% |
| Inference Acceleration rate: Time(SD)/ Time(Batch SD) | 1.223 | 1.357 | 1.416 | 1.392 |
---
Rebuttal 2:
Comment: Thank you for your response. I would like to maintain my score.
---
Rebuttal Comment 2.1:
Title: Reply to Reviewer qBZ2
Comment: Dear reviewer qBZ2,
Thank you for checking our rebuttal. Please let us know if you have any final questions.
Best, Authors | Summary: The paper presents a theoretical perspective on speculative sampling. Through Theorems 1 and 2, the authors demonstrate that the sampling method employed by speculative sampling is optimal and unbiased. Subsequently, Theorem 3 introduces a multi-candidate approach to enhance the acceptance rate of speculative sampling.
Strengths: The writing is very clear, with takeaways provided under each theorem to explain the theory.
Theorems 1 and 2 are crucial for speculative sampling. In paper [23], the authors showed that speculative sampling is unbiased but did not prove its efficiency compared to other rejection sampling methods. The proof provided here is very important.
Weaknesses: The experiments are not sufficient. I would like to see improvements in batch speculative sampling in real-world scenarios.
I am curious if batch speculative sampling can be combined with tree-style methods, e.g., [1] CaPE and [2] Medusa?
[1] Du, C., Jiang, J., Yuanchen, X., Wu, J., Yu, S., Li, Y., ... & You, Y. GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative Decoding. In Forty-first International Conference on Machine Learning.
[2] Cai, T., Li, Y., Geng, Z., Peng, H., Lee, J. D., Chen, D., & Dao, T. (2024). Medusa: Simple LLM inference acceleration framework with multiple decoding heads. In Forty-first International Conference on Machine Learning.
Technical Quality: 3
Clarity: 4
Questions for Authors: see weakness
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: na
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the positive comments and for the precise understanding of the theoretical contribution of our paper! Here are the detailed responses.
***Q1:*** I would like to see improvements in batch speculative sampling in real-world scenarios.
Response: Thank you for the question. First, we want to mention that the main purpose of the batch speculative sampling study is to understand its theoretical efficiency improvement. The key term we derived, the "Batch Improvement" in Theorem 3, is the net gain compared to vanilla speculative decoding. When implementing batch Algorithm 4 via the Markov chain simulation, the plot in Figure 2 (right) shows the improvement in the rejection rate as the number of batches $M$ grows. In addition, as the reviewer requested, we add experiments for batch speculative LLM decoding. Concretely, the speculative decoding of existing Hugging Face LLMs does not support batches (i.e., the function ```_speculative_sampling``` in ```transformers/generation/utils.py```), so we modify the ```_speculative_sampling``` on page 30 by augmenting the related quantities such as ```candidate_input_ids``` with one extra dimension that contains batches. We then implement batch Algorithm 4 in the same function and also modify the ```assisted_decoding``` function in ```transformers/generation/utils.py```. We compare Batch SD with vanilla SD and compute the WinRate (measured by GPT-4) over 200 prompts from the Alpaca-Farm-Eval dataset with 500 responses per prompt, as well as the inference acceleration. The draft model is still ```pythia-70m``` and the target model is still ```pythia-2.8b```. The table shows that the output quality of the batch algorithm is nearly the same as that of vanilla SD, but the decoding is faster. However, when the batch size is very large (e.g., 10), the decoding speed can slow down (1.392 < 1.416), which might be due to batch processing overhead, suggesting that using a large batch is not ideal in real-world scenarios. Running the experiments in the table took 70+ A100 GPU hours.
| WinRate | Batch = 2 | Batch = 4 | Batch = 8 | Batch = 10 |
| --- | --- | --- | --- | --- |
| Batch SD | 48.5% | 49.5% | 49% | 49.5% |
| Speculative Decoding | 51.5% | 50.5% | 51% | 50.5% |
| Inference Acceleration rate: Time(SD)/ Time(Batch SD) | 1.223 | 1.357 | 1.416 | 1.392 |
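For intuition, here is a minimal single-token sketch of batch verification in the style of recursive rejection over multiple draft samples. The function and variable names are ours, and this simplified scheme differs in details from both the paper's Algorithm 4 and our modified ```_speculative_sampling```; it only illustrates the draft/target accept-reject mechanics (draft distribution `p`, target `q`):

```python
import numpy as np

def residual(q, p):
    """Normalized residual max(q - p, 0); if p already covers q, fall back to q."""
    r = np.maximum(q - p, 0.0)
    s = r.sum()
    return r / s if s > 0 else q

def batch_verify(p, q, drafts, rng):
    """Verify M i.i.d. draft tokens one by one.

    p, q: draft/target next-token distributions (1-D arrays summing to 1).
    drafts: token ids sampled i.i.d. from p.
    Accept a draft x with prob min(1, r(x)/p(x)) against the current
    residual r; if all drafts are rejected, sample from the final residual."""
    r = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    for x in drafts:
        if rng.random() < min(1.0, r[x] / p[x]):
            return int(x)
        r = residual(r, p)  # remove the mass already "tried" by the draft
    return int(rng.choice(len(r), p=r))
```

With `p == q`, the first draft is always accepted (acceptance probability 1), matching the intuition that a perfect draft model incurs no rejections.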
***Q2:*** I am curious if batch speculative sampling can be combined with tree-style methods, e.g., [1] CaPE and [2] Medusa?
Response: Thank you for the in-depth question! This is an interesting aspect, since the current Batch SD (Algorithm 4) is mostly suited to theoretical study and might not be optimal for practical application. For instance, the batch rejection scheme in Algorithm 4 simply iterates over the next batch when the previous batches are rejected; it does not distinguish the unique characteristics of the batch candidate tokens. In this sense, CAPE [1], which uses uncertainty metrics such as confidence scores to adaptively choose the next token, could further improve the chance of a token being accepted. Besides, the batch Algorithm 4 represents a simple parallel tree structure (Figure 3 Left); modifying it with tailor-designed tree-based methods, such as the tree-based attention in Medusa [2], is very likely to make batch decoding more efficient. We promise to add discussions of [1, 2] in the final revision and leave making this combination work as future work.
---
Rebuttal Comment 1.1:
Comment: Since this method cannot be used in real settings like tree-spec methods, I still maintain my initial score.
---
Rebuttal 2:
Title: Reply to Reviewer GGsr
Comment: Dear reviewer GGsr,
Thank you for checking our rebuttal. To be academically rigorous, we agree with the reviewer that our proposed batch Algorithm 4 differs from existing tree-spec methods such as SpecInfer [Miao et al.], CAPE, and Medusa: our setup (Algorithm 4) is designed for a theoretical understanding (the batch improvement in Theorem 3 and Figure 2 Right) of the efficiency of speculative decoding, which is the main focus of the paper, and such theory is lacking for the existing batch tree-spec algorithms. We promise to discuss this carefully in the revision and will mention the limitation that it remains unclear how to combine batch Algorithm 4 with other tree-spec algorithms, as you suggested. Thank you again for the great point.
Best, Authors | Summary: The authors aim to develop a theoretical understanding of speculative decoding. The authors assume that, given a large and a small model participating in speculative decoding, the computational cost of the small model is negligible. Under this assumption, they characterize the expected rejection rate of speculative decoding and show that this bound depends on the total variation distance between the generations of the small and large models. Next, the authors show that speculative decoding gives optimal rejection bounds within the class of all rejection-based methods. Motivated by recent works, they also analyze batch speculative decoding, where a rejection occurs only if all M candidate tokens of a given sample are rejected. Finally, given an acceptance probability, the authors derive an optimal solution to the total variation loss between the distribution of the large model and the one produced by the decoding procedure. This objective changes linearly with the rejection probability, which provides insight into selecting the rejection threshold as per requirement. The presented theoretical results are backed up with appropriate experiments validating them.
Strengths: 1) The theoretical analysis presented by the authors provides several interesting insights into the inference efficiency of speculative decoding.
2) All the results are backed up with simulation experiments, which strengthen the results presented in the paper.
Weaknesses: 1) It is not completely clear why the assumption of negligible compute for the small model is not a strong one. Since the small model needs to generate tokens autoregressively, even though a single pass is cheap compared to the larger model, if the context length is high, i.e., several autoregressive passes are made, the compute of the small model might not be negligible. It would be great if the authors could provide some empirical evidence to justify this assumption.
2) It would have been great if the authors had provided evidence using real-world models in support of their theory. Although this is not a major weakness, the authors should consider it for the camera-ready version.
Technical Quality: 3
Clarity: 3
Questions for Authors: I request the authors to kindly address the questions in the weaknesses section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the positive judgement and the detailed feedback. Here are the detailed responses.
***Q1:*** It is not completely clear why the assumption of negligible compute for the small model is not a strong one. Since the small model needs to generate tokens autoregressively, even though a single pass is cheap compared to the larger model, if the context length is high, i.e., several autoregressive passes are made, the compute of the small model might not be negligible.
**Response:** Thank you for the question about our assumption. First, we fully agree that, in practice, every draft model costs some time. However, the main goal of this paper is to understand the theoretical/probabilistic properties of speculative decoding, and the cost-negligible assumption is an abstraction for theoretical cleanliness. Our assumption states only that the cost is negligible compared to the large model, not that it is zero. These two are fundamentally different: cost-negligible means there is some cost, theoretically abstracted in the notation O(1) (see the second part of Assumption 1), relative to a costly large model. This is a known concept summarized in [Leviathan et al., Jie Ou et al.]. Empirically, it is also very reasonable. For instance, when measuring the inference time over 200 responses of Alpaca-Farm-Eval for ```pythia-70m``` and ```pythia-12b``` separately, the ratio ```Time(pythia-12b)/Time(pythia-70m)``` $\approx$ 12.8 for T=128 tokens, and for T=1028 tokens the ratio $\approx$ 18.3 is even larger (this is not surprising, since the decoding time of an autoregressive model is super-linear in its decoding length, so the large model becomes even slower relative to the small one). In this case, the decoding time of ```pythia-70m``` is not zero, but it is smaller than that of ```pythia-12b``` by an order of magnitude, and therefore cost-negligible.
***Q2:*** It would have been great if the authors provided evidence using real world models in support of their theory. Although, this is not a major weakness, but authors should consider it in the camera ready version.
**Response:** Thank you for the kind question. We believe the models used in experiment 4.2 (```pythia-70m``` and ```pythia-12b```) are real-world models, and experiment 4.2 shows that, for different levels of $\epsilon$ (over-acceptance), the optimal solution derived from Theorem 4 (```Decoding-OPT```) consistently outperforms suboptimal solutions (```Decoding-UNO```) under the WinRate metric measured by ```RM-Mistral-7B``` and GPT-4. We will highlight this in the final version. Besides, we conducted two extra experiments explicitly requested by reviewers Ceup and GGsr, which we summarize here.
1. In addition to experiment 4.2, where we validate the optimality of Theorem 4, we also compare the optimal algorithm (```Decoding-OPT```) with vanilla speculative decoding. Since standard SD preserves the quality of the large model, the WinRate of our biased algorithm ```Decoding-OPT``` decreases as the over-acceptance parameter $\epsilon$ grows. On the other hand, the average runtime over 200 prompts for SD is much higher than for Decoding-OPT, as shown in the third line of the table. This empirically validates that Decoding-OPT provides a tradeoff between quality and efficiency against SD. For more detail, please see our response to reviewer Ceup, ***Q3***.
| WinRate | $\epsilon=0.1$ | $\epsilon=0.4$ | $\epsilon=0.8$ |
| --- | --- | --- | --- |
| Decoding-OPT | 48% | 42% | 35.5% |
| SD | 52% | 58% | 64.5% |
| Inference Acceleration rate: Time(SD)/ Time(Decoding-OPT) | 1.54 | 2.97 | 6.32 |
2. Empirically, we also compare Batch SD with vanilla SD and compute the WinRate (measured by GPT-4) over 200 prompts from the Alpaca-Farm-Eval dataset with 500 responses per prompt, along with the inference acceleration. The table shows that the quality of the batch algorithm is nearly the same as vanilla SD, but the decoding is faster. However, when the batch size is very large (e.g., 10), decoding can slow down (1.392 < 1.416), which might be due to batch processing cost, suggesting that a very large batch is not ideal in real-world scenarios. For more detail, please see our response to reviewer GGsr, ***Q1***.
| WinRate | Batch = 2 | Batch = 4 | Batch = 8 | Batch = 10 |
| --- | --- | --- | --- | --- |
| Batch SD | 48.5% | 49.5% | 49% | 49.5% |
| Speculative Decoding | 51.5% | 50.5% | 51% | 50.5% |
| Inference Acceleration rate: Time(SD)/ Time(Batch SD) | 1.223 | 1.357 | 1.416 | 1.392 |
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their rebuttal. I acknowledge that I have read their reply. This increases my confidence in this work. | Summary: This paper provides a detailed analysis of speculative decoding and batch speculative decoding. The conclusions of the paper are: (1) speculative decoding is unbiased, and the paper derives its expected rejection rate; (2) speculative decoding has the lowest rejection rate among all unbiased algorithms in the family defined in Algorithm 2; (3) batch speculative decoding has a lower rejection rate than speculative decoding; (4) the paper analyzes the trade-off between efficiency and effectiveness for the family of algorithms defined in Algorithm 2.
Strengths: 1. The paper provides comprehensive theoretical analysis.
2. The findings in Theorem 4 and 5 are interesting.
3. The paper is easy to understand.
Weaknesses: 1. Although the paper provides a lot of theoretical analysis, I find only Theorems 4 and 5 somewhat interesting. Theorem 1 is already derived in the original speculative decoding paper. For Theorem 2, although speculative decoding is proven to be optimal within the family of algorithms defined in Algorithm 2, I don't think many existing algorithms can be formulated in Algorithm 2. In fact, is there any algorithm that belongs to Algorithm 2, is unbiased, and is not speculative decoding? For Theorem 3, the finding that batch speculative decoding has a lower rejection rate than vanilla speculative decoding is not surprising.
2. Although Theorem 4 and Theorem 5 are interesting, it only solves half of the problem: given b, what should P be. It would be better if the authors could also discuss the design of b.
3. I think the paper can also be improved if the authors could summarize a new speculative algorithm from Theorem 4 and 5 and running experiments to compare with vanilla speculative decoding.
Technical Quality: 2
Clarity: 3
Questions for Authors: see weakness above
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: see weakness above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing detailed feedback. We have read your comments carefully and below are our detailed responses.
**Q1:** ---- Although ... Theorem 1 is already derived in the original speculative decoding paper. For Theorem 2, is there any algorithm that belongs to Algorithm 2, is unbiased, and is not speculative decoding? For Theorem 3, ... is not surprising. ----
**Response:** Thanks, but we respectfully disagree that only Theorems 4 and 5 are interesting; we believe Theorems 1-3 have their own new merits for understanding the theoretical properties of speculative decoding, which we detail below.
**On Thm 1:** The key distinction between our Thm 1 and [Leviathan et al.] is that their derivation assumes i.i.d. generation of LLMs (see "If we make the simplifying assumption that the beta's are i.i.d." under their Definition 3.1), which almost never holds in practice, since the distribution of the next token depends on the past generations and is therefore not i.i.d. From the theoretical aspect, our metric $\frac{1}{T}\sum_{n=1}^{T} E_{x_{1:n-1}\sim q}[TV(p_n,q_n)(\cdot|x_{1:n-1})]$ captures the sequential dependence via conditioning on $x_{1:n-1}$. In this sense, our result is a precise theoretical measurement for sequences with dependence between tokens, whereas the result in [Leviathan et al.] only holds for a single token or a fully i.i.d. sequence, which is not practical (also see our Remark 2).
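Per token, the identity behind this metric (the rejection probability of speculative decoding equals the TV distance between the draft and target next-token distributions) can be checked directly. The following is our own illustrative sketch, with draft distribution p and target q for a single token:

```python
import numpy as np

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * float(np.abs(p - q).sum())

def rejection_prob(p, q):
    """Exact single-token rejection probability of speculative decoding:
    a draft x ~ p is accepted with probability min(1, q(x)/p(x)),
    so the acceptance probability is sum_x min(p(x), q(x))."""
    return 1.0 - float(np.minimum(p, q).sum())
```

The identity follows from $1 - \sum_x \min(p(x), q(x)) = \sum_x \max(p(x) - q(x), 0) = TV(p, q)$, and holds for any pair of distributions on the vocabulary.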
**On Thm 2:** The significance of our optimality result is that it tells practitioners that no improvement can be made over speculative decoding if no other information can be leveraged. This is theoretical guidance that can save time for practitioners who want to improve the efficiency of speculative decoding. In addition, there are many algorithms that belong to Algorithm 2 and are unbiased (i.e., in $\mathcal{F}$) but are not speculative decoding. This is summarized in our **Corollary 1** in the appendix: for any $b_n$ satisfying $b_n\leq \min\\{1,\frac{q_n}{p_n}\\}$ for all $n\in[T]$, we can find an instance of Algorithm 2 that is unbiased. This provides exactly the intuition for why speculative decoding is optimal: for an instance of Algorithm 2 to be unbiased, its acceptance probability $b_n$ can be no larger than that of speculative decoding, $\min\\{1,\frac{q_n}{p_n}\\}$. We were not able to include this in the main paper due to space constraints, and we will add a comment in the final revision.
**On Thm 3:** We agree with the reviewer's intuition that the batch algorithm can reduce the rejection rate; however, such a reduction had only been observed empirically, and a precise theoretical understanding of how much it improves given the structure of $p_n, q_n$ remained unknown. For instance, the nice work SpecInfer [Miao et al.] only proves unbiasedness, with no guarantee on decoding efficiency. To this end, we design a different parallel tree structure (Line 213) for the batch setting. To derive the Batch Improvement term in Theorem 3, we design the novel intermediate quantity $f(x_{1:n})$ in D.1 with a recursive computation for solving it, and the numerical simulation validates our theory. The significance of our batch theory is that there is a scaling law even for a huge number of batches: the rejection rate does not go to zero even as the number of batches goes to infinity (see Figure 2 Right and D.2 in the appendix). We believe all of those insights are new.
**Q2:** ---- Although Theorem 4 and Theorem 5 are interesting, it only solves half of the problem: given b, what should P be. It would be better if the authors could also discuss the design of b. ----
**Response:** Thanks, this is a good question. The design of $b$ is more heuristic: with a higher $b$ (acceptance probability), there are fewer rejections and decoding is faster, but quality suffers according to Theorem 4. It is a matter of how much response quality one is willing to sacrifice for better efficiency. In our simulations and experiments in 4.2, we specify $b(x)=\min \\{1, \frac{q(x)+\epsilon}{p(x)}\\}$ for $\epsilon$ ranging from 0 to 1. We will add more discussion in the final revision.
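As a concrete single-token illustration of this choice (our own sketch, not the released code), the overall acceptance probability $E_{x\sim p}[b(x)]$ under $b(x)=\min\\{1, \frac{q(x)+\epsilon}{p(x)}\\}$ increases monotonically with $\epsilon$, recovering the unbiased speculative decoding rate at $\epsilon = 0$:

```python
import numpy as np

def accept_prob(p, q, eps):
    """Overall acceptance probability E_{x~p}[b(x)] for the biased
    acceptance rule b(x) = min(1, (q(x) + eps) / p(x)).

    p: draft next-token distribution; q: target distribution.
    At eps = 0 this equals sum_x min(p(x), q(x)), the unbiased SD rate."""
    b = np.minimum(1.0, (q + eps) / p)
    return float((p * b).sum())
```

For example, with `p = [0.6, 0.3, 0.1]` and `q = [0.4, 0.4, 0.2]`, the acceptance probability at `eps = 0` is `0.8` and rises toward 1 as `eps` grows, which is exactly the speed-versus-quality knob discussed above.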
**Q3:** ----if the authors could summarize a new speculative algorithm from Theorem 4 and 5 and running experiments to compare with vanilla speculative decoding.----
**Response:** Thanks. Indeed, the new algorithm that summarizes Theorems 4 & 5 is exactly Algorithm 2. Concretely, with the choice $b(x)=\min \\{1, \frac{q(x)+\epsilon}{p(x)}\\}$, we specify the $\mathcal{P}_t$ in Algorithm 2 to be the optimal one, $\mathcal{P}_t^\star$, which is defined in Line 756 in the appendix (the code implementation is on page 30, starting from ```if mode_ ==1```). We compare our optimal algorithm ```Decoding-OPT``` with a suboptimal algorithm ```Decoding-UNO```, showing a higher WinRate across different $\epsilon$ values in Section 4.2. In addition, as the reviewer requested, we conduct the extra experiment below to compare ```Decoding-OPT``` with vanilla SD. The draft model is still ```pythia-70m``` and the target model is still ```pythia-2.8b```. The following table reports the WinRate measured by GPT-4 over 200 prompts from the ```Alpaca-Farm-Eval``` dataset with 500 responses per prompt. Since standard SD preserves the quality of the large model, the WinRate of our biased algorithm ```Decoding-OPT``` decreases as the over-acceptance parameter $\epsilon$ grows. On the other hand, the average runtime over 200 prompts for SD is much higher than for ```Decoding-OPT```, as shown in the third line of the table. This empirically validates that ```Decoding-OPT``` provides a tradeoff between quality and efficiency against SD. Running the experiments in the table took 50+ A100 GPU hours.
| WinRate | $\epsilon=0.1$ | $\epsilon=0.4$ | $\epsilon=0.8$ |
| --- | --- | --- | --- |
| Decoding-OPT | 48% | 42% | 35.5% |
| SD | 52% | 58% | 64.5% |
| **Inference Acceleration rate:** Time(SD)/ Time(Decoding-OPT) | 1.54 | 2.97 | 6.32 |
---
Rebuttal Comment 1.1:
Comment: I appreciate the author’s efforts in providing the rebuttal. After reading the rebuttal and other reviewer’s opinions, I decided to raise my score.
---
Reply to Comment 1.1.1:
Title: Reply to reviewer Ceup
Comment: Dear reviewer Ceup,
Thank you for spending time reading our rebuttal. We authors are still available here to answer your questions in the next two days in case you have any last-minute questions.
Best, Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs | Accept (poster) | Summary: The authors introduced an innovative attention-based neural operator and evaluated it against various baselines. They employed masked pretraining and finetuning techniques, comparing the model's performance to multiple benchmarks. Their study included interesting problems such as fluid-structure interactions. The authors showed that their approach is effective for few-shot learning in their experimental evaluations.
Strengths: - The paper presents a new neural operator architecture based on attention mechanisms. This architecture demonstrates superior performance compared to tested baseline models on the NS and NS+EW benchmarks, highlighting its potential advancements in solving PDE-related problems.
- To the best of my knowledge, the authors are the first ones to use masked training for PDE learning effectively
- The mathematical formulation of the proposed model is well-articulated in the paper. This clarity helps readers understand the underlying principles of the model's operation.
- The study addresses a compelling and very relevant multiphysics problem involving fluid-structure interactions.
- The authors demonstrated through empirical evidence that their approach is effective for few-shot finetuning in various scenarios.
Weaknesses: - The study in Table 1 demonstrates the model’s performance on two specific PDEs: Navier-Stokes for fluid flow and a coupled Navier-Stokes with elastodynamics equations for fluid-solid interactions. While these cases provide some insight into the model's capabilities, they are not sufficient to generalize the model's applicability to a broader range of multiphysics problems.
- For the NS dataset with Reynolds number Re=400, the model trained from scratch with only 25 samples matches the performance of the pretrained model. In the case of NS+EW benchmark, when the Reynolds number increases to 4000, even with just 5 samples, both the finetuned and scratch-trained models exhibit similar testing errors. This suggests that pretraining may not provide significant advantages in many cases.
- The use of the L2 loss metric to evaluate model performance is problematic because it aggregates outputs of different physical meanings, such as pressure p, velocity u, and displacement d, into a single loss value. This can obscure individual variable contributions and lead to misleading conclusions about model accuracy.
- The absence of prediction visualizations diminishes the interpretability of the L2 loss values. Visualizing predictions could provide more intuitive insights into model performance and clarify discrepancies in the loss metric.
- The study in the Table 1 does not include a Fourier Neural Operator. Including such a benchmark is crucial to fairly evaluate CoDA-NO’s performance against an FNO model of similar size.
- The FNO model used for comparison in Table 2 has 1.9 billion parameters, vastly outnumbering the CoDA-NO's 11 million parameters. This overparameterization likely affects the model's performance due to the relatively small training set sizes and makes the comparison with CoDA-NO’s performance misleading. Smaller FNO models could provide a more realistic performance benchmark. The claim that CoDA-NO’s better performance compared to a much larger FNO model demonstrates parameter efficiency is misleading. Parameter efficiency should be evaluated with models of comparable sizes, and overparameterized models may not reflect typical scenarios.
- CoDA-NO has significantly higher inference times compared to other baseline models.
These points collectively highlight the need for more comprehensive experiments, appropriate metrics, realistic model comparisons, and practical considerations like inference time to fully evaluate the model's capabilities.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the reasoning behind applying the attention in the channel space? Do the models scale better? Do they have higher expressive power?
- Why is the 1.9B FNO model used for comparison?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors explained the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer values our work and recognizes its novelty and the mathematical formulation of the proposed model.
```
Q1. Evaluation on additional PDE datasets
```
In addition to the coupled fluid-solid interaction problem, we also provide experiments on different PDE systems (please see Table 2 and Appendix Sec E, Table 11-12), where we test the proposed CoDA-NO architecture on shallow water equation, diffusion equation, and Navier Stokes equation.
In addition, we also include the results of our study on the Rayleigh-Benard convection system. The CoDA-NO is pre-trained in a supervised way on the Navier–Stokes (NS) equations. It is then finetuned on limited few-shot data ($N \in \{5,10,25\}$) of Rayleigh-Benard convection. We show the results in the following table, where we report the $L2$ loss. We see that, in the extremely low-data regime, pre-trained CoDA-NO performs better than FNO and than CoDA-NO trained from scratch.
| Model | Pre-training dataset | N=5 | N=10 | N=25 |
| -------- | -------- | -------- |----- |----- |
| FNO | - | 0.439 |0.249 |0.130 |
| CoDA-NO | - | 0.315 |0.272 |0.203 |
| CoDA-NO | NS | 0.2537 |0.2223 |0.179 |
Please note that these results are preliminary. As per reviewer qRRk's suggestion, we are conducting a comprehensive study on the Rayleigh-Benard convection system.
```
Q2. Performance gain from pre-training.
```
In most cases, pre-trained CoDA-NO demonstrates a significant performance improvement over CoDA-NO trained from scratch. While we acknowledge that in some instances the difference is not substantial, it is consistently better. Additionally, pre-trained models generally offer benefits such as reusability and faster convergence with fewer epochs. The lack of a significant performance gain in a few settings does not diminish the importance and potential advantages of the architecture and pre-training scheme. Further, our self-supervised pretraining method is also beneficial when the cost of generating training data with numerical solvers is very high.
```
Q3. Per variable loss
```
Previous studies have shown that aggregated loss demonstrates model accuracy across multiple variables [1, 2, 3], which helps give a comprehensive overview of performance. For instance, the results for the compressible NS equations (velocity and pressure fields) are presented in an aggregated manner [1, 2, 3].
We also provide the per-variable performance of each of the variables (velocity $v_x, v_y$, pressure $p$, displacement $d_x,d_y$) on the combined Navier-Stokes and Elastic wave equation for $Re=4000$.
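For clarity on the metric convention, here is a minimal sketch of per-variable versus aggregated relative $L2$ error. This is our own illustrative code; the exact normalization in our evaluation scripts may differ:

```python
import numpy as np

def rel_l2(pred, true):
    """Relative L2 error over the flattened field."""
    return float(np.linalg.norm(pred - true) / np.linalg.norm(true))

def losses(pred, true):
    """Per-variable and aggregated relative L2 errors.

    pred, true: (n_vars, n_grid) arrays, one row per physical variable
    (e.g. u_x, u_y, p, d_x, d_y). The aggregate here stacks all
    variables into one vector, which is one common convention."""
    per_var = [rel_l2(pred[i], true[i]) for i in range(true.shape[0])]
    return per_var, rel_l2(pred, true)
```

The aggregate can hide imbalances between variables (e.g. a small pressure error masking a large displacement error), which is why the per-variable tables below are reported separately.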
### Table of Horizontal velocity ($u_x$)
| Model | Pretrain Dataset | N=5 | N=25 | N=100 |
|-------|:----------------:|:-----: |:-----:|:-----:|
| GINO | - | 0.416 | 0.145 | 0.105 |
| DeepO | - | 0.513 | 0.597 | 0.189 |
| GNN | - | 0.056 | 0.186 | 0.119 |
| ViT | - | 0.393 | 0.710 | 0.437 |
| U-Net | - | 2.936 | 0.227 | 0.174 |
| CoDA-NO | - | 0.078 | 0.048 | 0.023 |
| CoDA-NO | NS | 0.061 | 0.034 | 0.016 |
| CoDA-NO |NS+EW | 0.047 | 0.020 | 0.010 |
### Table of Vertical velocity ($u_y$)
| Models | Pretrain Dataset | N=5 | N=25 | N=100 |
|---------|:---------------------:|:-----:|:-----:|:------:|
| GINO | - | 0.503 | 0.403 | 0.111 |
| DeepO | - | 1.006 | 0.865 | 0.688 |
| GNN | - | 0.462 | 0.270 | 0.169 |
| ViT | - | 1.334 | 0.726 | 0.4254 |
| U-Net | - | 0.967 | 0.417 | 0.269 |
| CoDA-NO | - | 0.240 | 0.153 | 0.0714 |
| CoDA-NO | NS | 0.233 | 0.132 | 0.067 |
| CoDA-NO | NS+EW | 0.208 | 0.120 | 0.060 |
### Table of Pressure ($p$)
| Models | Pre-train dataset | N =5 | N =25 | N =100 |
|---------|:-----------------:|:-----:|:------:|:------:|
| GINO | - | 0.478 | 0.152 | 0.110 |
| DeepO | - | 1.435 | 0.337 | 0.229 |
| GNN | - | 0.209 | 0.149 | 0.110 |
| ViT | - | 0.560 | 0.257 | 0.148 |
| U-Net | - | 0.581 | 0.337 | 0.229 |
| CoDA-NO | - | 0.174 | 0.2251 | 0.077 |
| CoDA-NO | NS | 0.526 | 0.1326 | 0.064 |
| CoDA-NO | NS+EW | 0.398 | 0.097 | 0.0591 |
### Table of Horizontal displacement($d_x$)
| Models | Pre-training Dataset | N=5 | N=25 | N=100 |
|---------|:--------------------:|:-----:|:-----:|:-----:|
| GINO | - | 1.326 | 0.353 | 0.117 |
| DeepO | - | 1.467 | 0.354 | 0.216 |
| GNN | - | 0.654 | 0.466 | 0.240 |
| ViT | - | 0.769 | 0.474 | 0.279 |
| U-Net | - | 1.083 | 0.730 | 0.365 |
| CoDA-NO | - | 0.582 | 0.670 | 0.139 |
| CoDA-NO | NS | 0.631 | 0.372 | 0.156 |
| CoDA-NO | NS+EW | 0.531 | 0.350 | 0.147 |
---
Rebuttal 2:
Title: Rebuttal
Comment: ### Table of Vertical displacement ($d_y$)
| Models | Pre-training dataset | N=5 | N=25 | N=100 |
|---------|:--------------------:|:-----:|:-----:|:-----:|
| GINO | - | 0.841 | 0.238 | 0.102 |
| DeepO | - | 0.760 | 0.171 | 0.050 |
| GNN | - | 0.561 | 0.322 | 0.162 |
| ViT | - | 1.223 | 0.335 | 0.068 |
| U-Net | - | 6.060 | 0.297 | 0.265 |
| CoDA-NO | - | 0.576 | 0.331 | 0.061 |
| CoDA-NO | NS | 0.381 | 0.138 | 0.085 |
| CoDA-NO | NS+EW | 0.355 | 0.129 | 0.065 |
[1] Dpot: Auto-regressive denoising operator transformer for large-scale PDE pre-training.
[2] Gnot: A general neural operator transformer for operator learning.
[3] Pdebench: An extensive benchmark for scientific machine learning.
```
Q4. Visualization of output
```
We provide visualization in the attached PDF (Figure 1). We will include more visualizations in the paper.
```
Q5. FNO as a baseline for fluid-solid interaction problem
```
As the domain's discretization is irregular, we do not directly apply FNO, since computationally feasible implementations of such models support only uniform grids (through the use of the FFT). However, for the shallow water equation, whose discretization is uniform, we use a spherical variant of FNO as our baseline.
```
Q6. Parameterization of FNO
```
For the experiments on PDEBench (Table 2, Tables 11-12), we use a different setup from the fluid-solid interaction one. In this case, we follow the same setup as DPOT: the model is first pretrained with self-supervised learning and then fine-tuned with a supervised loss, and in our setting the data is evenly split between the two stages. Due to this relatively large dataset for both pretraining and fine-tuning, the risk of overfitting is low. Nevertheless, we have done an extensive ablation over the number of parameters of FNO, where the model is first trained in a self-supervised manner followed by supervised training. We report the $L_2$ loss for each dataset. We can see that the FNO with 1.9B parameters does not overfit the data but rather performs better. We hypothesize that this is due to the self-supervised pre-training done before supervised training and the size of the training set.
### Table for shallow water equation
| Model | # parameter | $L2$ |
|----------|:-----------:|:-----------------:|
| CoDA-NO | 11M | 0.0407 |
| FNO | 1.9B | 0.0463 |
| FNO | 485M | 0.0424 |
| FNO | 120M | 0.0410 |
| FNO | 11M | 0.0491 |
| FNO | 1M | 0.2355 |
### Table for diffusion-reaction
| Model | # Parameter | $L_2$ |
|----------|:-----------:|:------:|
| CoDA-NO | 11M | 0.0081 |
| FNO | 1.9B | 0.0141 |
| FNO | 485M | 0.0145 |
| FNO | 120M | 0.0153 |
| FNO | 11M | 0.0268 |
| FNO | 1M | 0.2085 |
```
Q7. Higher inference time
```
The objective of the proposed CoDA-NO is to provide a suitable architecture for foundation models for solving PDEs and multiphysics problems. In this study, we demonstrated the extensive adaptability and advantages of CoDA-NO and its pre-training mechanism. Similar to other large foundation models (such as [1]), CoDA-NO has a higher inference time than smaller models, but it is still significantly faster than numerical solvers. Furthermore, the computation time can be mitigated through careful implementation, parallel computation, and engineering effort, which we plan to address in future work.
---
Rebuttal 3:
Title: Rebuttal
Comment: ```
Q8. The rationale for attention in channel space
```
Modeling physical phenomena involves two sets of variables: spatio-temporal locations and physical variables such as temperature, pressure, etc. **Having attention in the channel space enables us to model the relationships between physical variables that interact with each other**. Given the well-established nature of the Transformer architecture and its ability to handle variable-length input, it was a natural choice for our approach. We have redesigned key components of modern transformer layers, including positional encoding, self-attention, and normalization layers, to develop codomain attention in function space.
In many applications, physics problems are modeled using coupled PDEs, and generating data is often costly. Therefore, we need foundation models capable of tackling problems of a similar nature but with a different number of variables or with changes in the variables. **Attention in the channel space lets the model pre-train simultaneously over a wide range of PDEs with different variables without changing the architecture**. For example, fluid in the subsurface is often described by 12-14 variables that obey the laws of thermodynamics and fluid dynamics, Darcy's law, and chemical interactions. Therefore, learning from similar physics, each with a different variable count representing the particular physics, would help train the final subsurface model. (Please see the Problem Statement (Sec 4) and lines 167-177.)
There are also other applications for CoDA-NO where there are different sets of physical variables with varying counts across datasets. For example, in climate modeling, there are multiple datasets developed by different countries (please refer to the CMIP6 global collaboration for climate data), each using slightly different PDEs with different variable counts. Attention enables learning the relationships among such variables, represented as different channels.
Moreover, a recent work [1] has shown the **universal approximation property** for attention-based operators in function space, i.e., that CoDA-NO can approximate any continuous operator. Although it is not straightforward to compare different neural operators on the basis of expressivity in a general sense, our empirical studies suggest that it generalizes and performs well on a variety of tasks without any significant change to the hyper-parameters.
[1] Continuum Attention for Neural Operators
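As a minimal illustration of attention over the channel (variable) axis, here is a simplified numpy sketch of the idea, not the actual CoDA-NO implementation (all names, shapes, and the score normalization are our own assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def codomain_attention(tokens, wq, wk, wv):
    """Attention over the variable (channel) axis.

    tokens: (n_vars, n_points, d), one token per physical variable,
    each a discretized function on n_points mesh points. The score
    between two variables is an inner product between whole functions
    (a reduction over all mesh points), so the same weights apply
    regardless of how many variables are present.
    """
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = np.einsum('ipd,jpd->ij', q, k) / q[0].size
    attn = softmax(scores, axis=-1)           # (n_vars, n_vars)
    return np.einsum('ij,jpd->ipd', attn, v)
```

Because the attention weights are computed pairwise between variables, the same layer accepts 3 variables or 14 without any architectural change, which is the flexibility discussed above.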
```
Q9. Reason for large parameterization of FNO
```
We already addressed this in detail in a previous comment. For Table 2, the models are pre-trained and subsequently fine-tuned on the large PDEbench dataset. As FNO has a relatively simple architecture, we utilize a parameter-rich architecture to benefit from the pretraining and enable it to learn rich representations for subsequent prediction tasks.
---
Rebuttal Comment 3.1:
Comment: Thank you for conducting the additional experiments and providing further information. I really appreciate it.
However, I remain unconvinced that CoDA-NO is suitable for a broader range of multiphysics problems. Even in the new experiment, the FNO model trained from scratch outperforms the finetuned model with just 10 samples, which is a remarkably small number. Again, the experiments you conducted suggest to me that pretraining CoDA-NO may not provide significant advantages in many cases.
If a model trained from scratch with only 10 samples can outperform CoDA-NO, it raises concerns about the complexity of the problems being considered (since models trained from scratch with so few samples are generally not expected to be accurate), as well as the ability of CoDA-NO to generalize to unseen problems.
---
Reply to Comment 3.1.1:
Title: Reply to the Reviewer's Comment-1
Comment: We thank the reviewer for the response.
## On the performance of the newly reported Rayleigh-Bénard (RB) convection system
We believe there is a misunderstanding in interpreting the results of the RB system: with 10 samples, pre-trained CoDA-NO is better than FNO and not worse. Overall, **pretrained CoDA-NO outperforms FNO by an average of 20% on few-shot learning tasks** in these new results.
We want to emphasize that **our goal isn't just for CoDA-NO to outperform FNO on a fixed dataset but to show CoDA-NO's ability to adapt to multi-physics scenarios**. Unlike FNO, which is limited to a fixed set of physics, CoDA-NO can extend from single physics (such as fluid flow) to multi-physics (fluid flow + temperature for the RB system) simply by adding new channels for additional physical variables, without any change to the backbone architecture.
However, as we have already mentioned, this is a preliminary study. In particular, we could not yet simulate and train on sufficiently large datasets and tune our hyperparameters appropriately during the short discussion period. Please let us explain our process.
For the new experiments, we pre-trained CoDA-NO on a compressible Navier-Stokes (NS) dataset using only one configuration with a Mach number of 0.1, $\eta = 0.01$, and $\zeta = 0.01$ (not to be confused with the NS data used in the fluid-solid interaction reported in Table 1 of the paper). Additionally, we conducted supervised training on a Rayleigh-Bénard convection (RB) system with a Rayleigh number of 2500, applying an initial perturbation to the temperature field to initiate convection. We pre-train with **only 6000 examples at a reduced resolution of $64 \times 64$** due to time constraints.
To our knowledge, there is no well-established public dataset for Rayleigh-Bénard convection systems; in particular, established benchmarks, such as PDEbench, do not include this system.
This required us to generate the data ourselves, and with the limited time available, we faced challenges in simulating the Rayleigh-Bénard convection system due to its computational complexity. The process requires careful choices of parameters like **temperature difference, thermal expansion, fluid height, viscosity, geometry, and initial temperature perturbations**. While we were able to generate an initial dataset, we are still generating RB and NS datasets with different parameters to obtain a better understanding of the pre-training capabilities of CoDA-NO.
In summary, the rebuttal period was too short for thorough data generation and pre-training with a large model, even though we worked continuously. Our initial results, shown in the previous response, had higher errors compared to those reported in the paper on fluid-solid interaction systems. This is why we called these results preliminary. The reason we reported those numbers was to inform the reviewers that we are dedicated to adding this study to the paper, as it will strengthen the message of this work. However, due to many challenges on the solvers' end and training overhead, we are worried this study will not be done in the next few days. | Summary: This paper presents a new operator learning method for solving multiphysics PDEs. The attention scheme is designed on channel space to capture multiple physical variables, which is called co-domain. Moreover, positional encoding and normalization layers are considered. Such a strategy enables self-supervised pretraining of PDE systems. The experiments have shown the effectiveness of the proposed method.
Strengths: - The proposed idea is interesting. It enables the generalization to coupled physical systems, which is of interest to the scientific machine learning (SciML) community. Also, self-supervised pretraining is one emerging tool in SciML and will gain a lot of attention in the future.
- This paper provides the experiments on a Navier-Stokes equation and its coupled version with the elastic wave equation.
- This paper is well-organized and well-written. The details are easy to follow.
Weaknesses: - This paper only considers one coupled system, i.e., NS and NS+EW. It may not validate the general applicability of the proposed method. The motivation of using this case should be enhanced. Also, considering some other PDE systems might strengthen the paper, such as the Rayleigh-Benard convection system. It is also a coupled system with NS + temperature.
- The motivation for the combination of positional encoding, self-attention, and normalization layers seems to be better clarified. Although those parts are modular (claimed in Line 68), the connections between each other are also important.
- In Appendix B.1, it would be good to include more details of self-supervised pretraining, such as masked ratio.
- The evaluation metrics might not be sufficient. This paper only considers L2 errors. There are many papers considering relative L2 error [1]. For the turbulence data, researchers also care about the infinity norm a lot. It would be better to add more evaluation metrics in this paper.
**References:**
[1] Hao, Zhongkai, et al. "Dpot: Auto-regressive denoising operator transformer for large-scale pde pre-training." arXiv preprint arXiv:2403.03542 (2024).
[2] Ren, Pu, et al. "Superbench: A super-resolution benchmark dataset for scientific machine learning." arXiv preprint arXiv:2306.14070 (2023).
Technical Quality: 2
Clarity: 3
Questions for Authors: - This paper considers the generalization to different Reynolds numbers. Is it possible to generalize to different physical parameters of elastic wave equations, such as the object size or the solid density \rho^s?
- Lines 100-101, this paper claims that it considers diverse physical systems in terms of input functions, geometries, and Reynolds numbers. I would say it’s just different PDE scenarios within one PDE type (NS and NS+EW). It seems unrelated to the diversity of PDE systems, such as reaction-diffusion, convection systems, etc.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Please see my concerns in **Weaknesses** and **Questions**.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer values our work and recognizes the fact that this work is of interest to the scientific machine learning (SciML) community, presents a strategy that enables self-supervised pre-training of PDE systems, and states the importance of our experiments that have shown the effectiveness of the proposed method.
```
Q1. Additional Coupled PDE
```
We discuss the difficulty of modeling the fluid-solid interaction problem and justify our problem design in Appendix B1. Given the complex geometry and turbulent flow, the problem can be considered a challenging benchmark problem.
Additionally, we provide experiments on diverse PDEs (Navier–Stokes equations, Elastic wave equations, Shallow water equations, and Diffusion reaction; please see Table 2 and Appendix Sec E, Tables 11-12) and demonstrate the architecture's performance and flexibility.
As suggested, we also provide results on the Rayleigh-Bénard convection system. CoDA-NO is pre-trained in a supervised fashion on the Navier–Stokes (NS) equations and subsequently fine-tuned on limited few-shot data ($N \in \{5,10,25\}$) of the Rayleigh-Bénard convection system. We show the results in the following table, where we report the L2 loss. We see that, in the low-data regime, pre-trained CoDA-NO performs better than both FNO and CoDA-NO trained from scratch. Please note that these results are preliminary and yet to be finalized. Per the reviewer's suggestion, we are running a comprehensive study on the Rayleigh-Bénard convection system.
| Model | Pre-training dataset | N=5 | N=10 | N=25 |
| -------- | -------- | -------- |----- |----- |
| FNO | - | 0.439 |0.249 |0.130 |
| CoDA-NO | - | 0.315 |0.272 |0.203 |
| CoDA-NO | NS | 0.2537 |0.2223 |0.179 |
```
Q2. Motivation for the components of the CoDA-NO (The motivation for the combination of positional encoding, self-attention, and normalization layers seems to be better clarified. Although those parts are modular (claimed in Line 68), the connections between each other are also important.)
```
We outline our motivation in the problem statement (Section 4) and explain the need for Neural Operators that can handle PDEs with varying numbers of variables (lines 167-177). Given the well-established nature of the Transformer architecture and its property to handle variable length input, it has been a natural choice for our approach. We have redesigned key components of modern transformer layers, including positional encoding, self-attention, and normalization layers, to develop codomain attention in function space. We will further emphasize this in the main text.
```
Q3. Details of self-supervised pre-training, such as masked ratio
```
We provide the masking ratio and some additional details in Appendix F. For every training sample, one of the following two masking choices is selected with equal probability:
1. 50\% of the mesh points of 60\% of variables are masked.
2. 30\% of the variables are masked out completely.
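A minimal sketch of this masking procedure (the ratios follow the description above; everything else, such as shapes and sampling details, is our own assumption):

```python
import numpy as np

def mask_sample(u, rng):
    """Return a boolean mask (True = masked) over u of shape
    (n_vars, n_points), picking one of the two masking choices
    described above with equal probability. Illustrative sketch
    of the quoted ratios, not the actual training code."""
    n_vars, n_points = u.shape
    mask = np.zeros(u.shape, dtype=bool)
    if rng.random() < 0.5:
        # Choice 1: mask 50% of the mesh points of 60% of the variables.
        hit = rng.choice(n_vars, size=max(1, round(0.6 * n_vars)), replace=False)
        for v in hit:
            pts = rng.choice(n_points, size=n_points // 2, replace=False)
            mask[v, pts] = True
    else:
        # Choice 2: mask 30% of the variables completely.
        hit = rng.choice(n_vars, size=max(1, round(0.3 * n_vars)), replace=False)
        mask[hit, :] = True
    return mask
```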
```
Q4. Additional Evaluation Metrics
```
L2 is the generalization of RMSE to function spaces and is considered a natural evaluation metric. In this work, we also provide an energy spectrum, representing the correct statistical distributions of the fluids, to go beyond distance-based evaluation metrics (please see Appendix D.4). L2 norms, together with the energy spectrum, provide considerable evidence establishing the proposed model's superiority over the baseline. In addition to these two metrics, we also include L1 and relative L2 in our list of metrics. That said, as the reviewer knows, any evaluation metric comes with its own advantages and limitations. We hope that the current list suffices to establish clear results on the superior performance of the proposed method.
### Table: L1 and relative L2 Error
Results on the fluid-solid interaction dataset combining Navier-Stokes and Elastic wave equation **(NS-EW dataset)**.
| Models | Pre-training Dataset | # Train = 5 (L1/Rel-L2)| # Train=25 (L1/Rel-L2)| # Train=100 (L1/Rel-L2)|
|--------|----------------------|------------------------|-----------------------|------------------------|
| GINO | | 0.185/0.296 | 0.151/0.221 | 0.160/0.219 |
| DeepO | | 0.453/0.687 | 0.266/0.431 | 0.184/0.325 |
| GNN | | 0.083/0.130 | 0.056/0.082 | 0.059/0.082 |
| ViT | | 0.202/0.366 | 0.156/0.276 | 0.076/0.124 |
| U-net | | 0.793/1.186 | 0.284/0.463 | 0.174/0.291 |
| Ours | | 0.092/0.164 | 0.046/0.092 | 0.032/0.058 |
| Ours | NS | 0.074/0.141 | 0.032/0.072 | 0.030/0.059 |
| Ours | NS-EW | 0.066/0.128 | 0.040/0.077 | 0.033/0.057 |
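For clarity, the metrics in the tables can be computed as in the following small sketch (the exact normalization used in the paper may differ):

```python
import numpy as np

def errors(pred, true):
    """L1 error, (discretized) L2 error, and relative L2 error between
    two discretized functions. Averaging over mesh points makes the
    L2 value the discrete analogue of RMSE."""
    l1 = np.mean(np.abs(pred - true))
    l2 = np.sqrt(np.mean((pred - true) ** 2))
    rel_l2 = np.linalg.norm(pred - true) / np.linalg.norm(true)
    return l1, l2, rel_l2
```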
---
Rebuttal 2:
Title: Rebuttal
Comment: Results on fluid motion dataset governed by Navier-Stokes equation **(NS Dataset)**.
| Models | Pre-training Dataset | # Train = 5 (L1/Rel-L2)| # Train=25 (L1/Rel-L2)| # Train=100 (L1/Rel-L2)|
|--------|----------------------|------------------------|-----------------------|------------------------|
| GINO | | 0.236/0.365 | 0.133/0.199 | 0.106/0.155 |
| DeepO | | 0.441/0.695 | 0.395/0.561 | 0.235/0.337 |
| GNN | | 0.141/0.187 | 0.074/0.096 | 0.049/0.071 |
| ViT | | 0.279/0.431 | 0.158/0.238 | 0.137/0.188 |
| U-net | | 2.001/ 3.508 | 0.683/1.178 | 0.298/0.422 |
| Ours | | 0.246/0.355 | 0.083/0.141 | 0.033/0.074 |
| Ours | NS | 0.080/0.148 | 0.040/0.081 | 0.024/0.0607 |
| Ours | NS-EW | 0.075/0.143 | 0.041/0.069 | 0.022/0.057 |
```
Q5. Generalization of different physical parameters
```
We also consider generalization to the Reynolds number and to the inlet boundary conditions (lines 291-292). The Reynolds number is the most important factor characterizing the dynamics of the PDE, which is reflected in existing benchmark datasets that often contain simulations at different Reynolds numbers; hence we followed that convention.
It is possible to generalize to different solid densities $\rho^s$ and object sizes. However, to obtain good results in the few-shot learning experiments, the pretraining data should contain PDEs with different $\rho^s$ and object sizes. We agree that this is a very important topic of study deserving special treatment, and developing the right datasets for such studies is on the horizon for the ML community.
```
Q6. Application to diverse PDE system
```
We also provide experiments on different PDE systems (please see Table 2 and Appendix Section E, Tables 11-12), where we test the proposed CoDA-NO architecture on the shallow water equation, the diffusion equation, and the Navier–Stokes equation. We also agree that expanding to many more PDE systems is desirable, and for that, the field is in the process of building diverse PDE benchmark datasets.
---
Rebuttal Comment 2.1:
Comment: Thanks for the rebuttal. The additional experiments have resolved my concerns. I will raise my score. | Summary: This paper introduces Codomain Attention Neural Operator, which tokenizes function along the channel dimension. It allows to learn representations of different PDE systems within a single model. The authors shows that finetuning a pretrained CoDA-NO on different physics yields good accuracy.
Strengths: - I like the problem setting and the idea of the algorithm.
- The experimental task is quite interesting.
- The results are convincing, and the fact that the model can generalize to higher Reynold numbers seen during training is promising.
- I liked the use of GNO for handling non-uniform grids.
- The code seems solid.
Weaknesses: - To me, the main weakness of the paper is that the presentation lacks clarity. I don't see the point of doing 3 pages of mathematics in function space if, in practice, everything is done in discrete space. I think this blurs the message of the paper and it is difficult for the reader to understand what is the relevant information for understanding the actual CoDA-NO algorithm. In my opinion, these mathematics are not essential to the algorithm and could be put in appendix. I can always express a neural network architecture in function space, but since in practice we are working on discretized space, it is never done in experimental deep learning papers. Moreover, no discussion on how to go from infinite-dimensional space to discretized space is given by the authors.
This space could be used to have the actual detailed architecture. I may have missed the point on the usefulness of these sections and am willing to understand the point of view of the authors regarding this.
- I don't fully understand the CoDA-NO algorithm and I think a Figure showing the whole architecture would have clarified this.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Why do we need a VSPE per physical variable? Positional encoding are usually used when there is some sort of order between the tokens?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Overall, I think my main obstacle to provide a better score is the fact that the paper is not very clear due to the introduction of a lot of mathematics not needed to understand the algorithm in practice. These mathematics do not bring any theoretical insight of the algorithm. They are just expressing the architecture in function space. I think this space could have been used to be clearer to explain what is the architecture or the philosophy of the work. Usually, in papers where I see such mathematics, a study of the sample complexity is provided (I know it is almost impossible to do for neural networks).
I am willing to discuss these points with the authors and to modify my score accordingly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer values our work and recognizes that this work presented a method for learning representations of different PDE systems within a single model.
```
Q1. Formulation of the Model in the Function Space and Clarity of the Paper
```
We politely disagree with the reviewer. Indeed, in practice, we typically work with discretized versions of functional data. However, this does not imply that we only deal with a "fixed" finite-dimensional space for which neural networks are designed. In practice, the discretizations often vary from one sample to another (i.e., the location and number of evaluation points). Many operations that work well for "fixed" finite-dimensional data do not behave consistently across resolutions. For example, regular CNNs (or GNNs with nearest-neighbor graph) collapse to pointwise operations when the resolution is refined (see [1]) and is thus not well suited for data that comes at different resolutions.
Such limitations of neural networks motivated the paradigm of neural operators as a generalization of neural networks to data originating from functions [2]. Designing architecture directly in the function space allows us to work across different discretizations seamlessly. It also allows us to draw parallels between various approaches to operator learning and ensure specific properties of the learning algorithms, such as the universal approximation property demonstrated in [3] for transformer architectures in function spaces. The mathematical foundation is often helpful in conveying the core ideas of the architecture design in operator learning. For example, here is reviewer ZkA5’s statement about our mathematical foundation: “The mathematical formulation of the proposed model is well-articulated in the paper. This clarity helps readers understand the underlying principles of the model's operation.”
To better explain the architecture, following the suggestion, we provide a detailed figure of the whole architecture (see Figure 2 in the rebuttal pdf).
[1] Neural Operators with Localized Integral and Differential Kernels
[2] Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs
[3] Continuum Attention for Neural Operators.
```
Q2. From infinite-dimensional space to discretized space
```
Our implementations are based on previous works on Neural Operators [1,2,3]. These studies discuss how operations on function spaces are approximated on discretized data, e.g., using a Riemann-sum approximation of the integral operator, which scales appropriately with the resolution. We will clarify this concept in the paper.
[1] Fourier Neural Operator for Parametric Partial Differential Equations (see Section 4).
[2] Neural Operator: Graph Kernel Network for Partial Differential Equations (see Section 3).
[3] Neural Operator: Learning Maps Between Function Spaces With Applications to PDEs.
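As a concrete example of such a discretization, here is a minimal sketch of a Riemann-sum approximation of a kernel integral operator on a uniform 1-D grid (our own illustrative code, not taken from the cited works):

```python
import numpy as np

def kernel_integral(u, x, kappa):
    """(K u)(x_i) ~ sum_j kappa(x_i, x_j) u(x_j) * dx: a Riemann-sum
    approximation of a kernel integral operator on a uniform 1-D grid x,
    the kind of discretization discussed in the cited works (sketch)."""
    dx = x[1] - x[0]                       # uniform grid spacing
    K = kappa(x[:, None], x[None, :])      # (n, n) kernel matrix
    return K @ u * dx
```

Because the sum is scaled by `dx`, refining the grid changes the output only up to discretization error, which is the "scales appropriately with the resolution" property mentioned above.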
```
Q3. Figure showing the whole architecture.
```
Thanks for the suggestion. We added a detailed figure (Figure 2) in the rebuttal pdf.
```
Q4. Use of VSPE per physical variable
```
In this work, we treat each of the input function's variables (codomains) as a token for the Codomain attention. The model needs a way to identify each variable (e.g., temperature, velocity, pressure). In LLM, it is done through position ID. In CoDANO, we propose the Variable-Specific Positional Encoding (VSPE), which is learned separately for each variable and then concatenated with the corresponding variables. It allows the model to differentiate between the various PDE variables and their interactions. The following table demonstrates the benefit of using VSPE for fine-tuning on $5$ examples for $\mu = 5.0$ (from Appendix D1, Table 3). Please refer to Appendix D1, Table 3 for a detailed analysis.
| VSPE | Pretrain Dataset | NS | NS+EW |
|--------------|------------------|-------|-------|
| x | NS | 0.049 | 0.079 |
| x | NS EW | 0.045 | 0.057 |
| $\checkmark$ | NS | 0.025 | 0.071 |
| $\checkmark$ | NS EW | 0.024 | 0.040 |
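To make the VSPE idea concrete, here is a small illustrative sketch (variable names, encoding dimensions, and the concatenation details are our assumptions, not the exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# One learned encoding per physical variable (hypothetical names).
d_enc = 4
vspe = {name: rng.normal(size=d_enc) for name in ("u_x", "u_y", "p")}

def attach_vspe(sample):
    """sample: dict mapping variable name -> (n_points,) discretized
    function. Concatenating each variable with its broadcast encoding
    yields (n_points, 1 + d_enc) tokens that the codomain attention
    can tell apart, even when the variable set changes."""
    tokens = []
    for name, values in sample.items():
        enc = np.broadcast_to(vspe[name], (values.shape[0], d_enc))
        tokens.append(np.concatenate([values[:, None], enc], axis=1))
    return np.stack(tokens)               # (n_vars, n_points, 1 + d_enc)
```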
```
Q5. On the clarity and organization of the paper
```
We have addressed this in a previous comment, noting that developing architecture directly in the function spaces is crucial, conveying the philosophy of the work. We also agree with the reviewer that this should not overshadow the practical implementation aspect of the work. Our philosophy and goals are outlined in the introduction (lines 28-41) and in the Problem Statement (see Sec 4).
We provide additional detailed figures (see the attached PDF), and we move some background details to the Appendix, as suggested by the reviewer.
To this end, we would appreciate it if the reviewer reconsidered our explanation and problem setup of operator learning in this paper. The field of neural operators is still in its early stages, and similar to the early stages of neural networks, explaining the problem setup and detail of the architecture may still be considered crucial and insightful.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal.
Q 1-2. Thank you for the thorough justification of why all these mathematics are in the main paper. I think I get the point, and I strongly suggest to write a high-level explanation motivating their introduction. Indeed, in such an experimental paper, one may ask themselves the same questions as mine, and be discouraged of reading it if it is not motivated enough.
Q3. Thank you for the figure. I think this is a bit better now, but I really want to insist on the fact that clarity about the architecture is key in these complex experimental papers, and not making it clear would discourage people wanting to enter the field.
Q4. Thank you for this clarification.
Q5. I hope the reorganization by the authors could make the method easier to understand.
Overall I think this is a good paper tackling a new problem. The paper is not very clear, but the authors seem to be willing to arrange the paper to make it more accessible. I will increase my score to 6, and discuss with the other reviewers and the AC to have their opinion.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and valuable suggestions. We are delighted to see that the reviewer has appreciated our work.
Response to Q1-2: We will enhance the high-level explanation in the introduction.
Response to Q3: In addition to adding the extended figure, we will incorporate pseudocode in the revised text to convey the procedure clearly. We believe including the extended figure and pseudocode will effectively communicate our idea.
Response to Q5: As indicated in the rebuttal, we will relocate some background information to the appendix and concentrate more on the motivation and high-level procedure in the main text. | Summary: The authors propose CoDA-NO, a neural operator architecture that captures interactions across different physical variables of coupled PDE systems. The method involves a generalization of the transformer architecture, including self-attention, positional encodings, and normalization, to function spaces. On two novel datasets for fluid-structure interaction and fluid dynamics, the authors show that their method achieves state-of-the-art performance.
Strengths: - The paper investigates an interesting problem of how to appropriately capture interactions across different physical variables, that allows for generalization to new codomains.
- As far as I am aware, the generalization of the Transformer architecture to function spaces is novel.
- The experimental results, especially the generalization capabilities (from fluid dynamics to fluid-solid interactions) are impressive.
- Ablation studies on the proposed architectural changes are thorough.
Weaknesses: Overall, the experiments seem quite compelling. However, it could be illuminating to provide a graphical visualization of the data from Table 1, regarding efficiency of fine-tuning and robustness to out-of-distribution inputs: see questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It seems that the performance of the models across the board continue to improve with increase few-shot fine-tuning samples beyond N=100. What does the scaling look like for the proposed model and where does performance saturate?
- Similarly, the model is evaluated on the in-distribution Re=400 and the out-of-distribution Re=4000 settings, for which the performance of the model is comparable. What does the scaling look like as the task becomes further out-of-distribution (e.g. decreasing velocity)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate that the reviewer values our work and recognizes the main contributions: the novel generalization of the transformer architecture, including self-attention, positional encodings, and normalization, to function space, along with the introduction of two new challenging datasets. We further note that, in this work, we base our study not only on the two mentioned PDEs but also on the shallow water and diffusion equations. Per the reviewer's request, we are running experiments on the Rayleigh-Bénard convection system, with initial results stated in the reply to reviewer qRRk.
```
Q1 Visualization of the Data
```
We thank the reviewer for the visualization suggestion. We provide the visualization of data in the Appendix, Sec B Dataset Description.
```
Q2 Performance scaling with data (It seems that the performance of the models across the board continues to improve with an increase in few-shot fine-tuning samples beyond N=100. What does the scaling look like for the proposed model, and where does performance saturate?).
```
We conduct experiments with N=250, 500, 1000 for NS+EW with Re=400. We observe that performance continues to improve when training on more data. Therefore, finding the saturation point requires the generation of a much larger dataset, which in turn requires further solver development and compute allocation. The precise scaling is problem-dependent.
| Model | Pre-training | N = 100 | N=250 | N=500 | N=1000 |
| -------- | -------- | -------- |----------|----------|-----------|
| CoDA-NO | NS | 0.005 | 0.003 | 0.003 | 0.001 |
| CoDA-NO | NS+EW | 0.003 | 0.003 | 0.002 | 0.001 |
```
Q3 Similarly, the model is evaluated on the in-distribution Re=400 and the out-of-distribution Re=4000 settings, for which the performance of the model is comparable. What does the scaling look like as the task becomes further out-of-distribution (e.g. decreasing velocity)?
```
For Re=4000, we observe that the performance of the pre-trained models is worse than for Re=400. We hypothesize that the reason is twofold:
1. fluid motion at Re=4000 is highly turbulent, and
2. the data is out-of-distribution.
Consistent with other studies in machine learning, CoDA-NO performs worse on out-of-distribution data and requires further data for finetuning. However, our evidence suggests that its performance remains better than training from scratch.
Also, finding the accurate scaling with respect to OOD, i.e., what degree of out-of-distribution shift the model can handle, requires an extensive problem-dependent study.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed response and additional experimental results! I will leave my score as is. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments. We appreciate the reviewers recognizing that this work presents a novel neural operator architecture that "captures interactions across different physical variables of coupled PDE systems" and that "the method involves a generalization of the transformer architecture, including self-attention, positional encodings, and normalization, to function spaces." It has been recognized by the reviewers that this study is of interest to the scientific machine learning (SciML) community.
This paper also introduces two challenging datasets based on NS and NS+EW, which are coupled PDEs that resemble problems in weather forecasting and climate modeling.
Based on the reviewers' comments, we have the impression that they may have interpreted our study as covering only the two mentioned datasets. We would like to state that we study CoDA-NO on four PDEs: NS, NS+EW, SWE, and the diffusion equation. Accordingly, we edited the paper's presentation to ensure that the explanations and structure make clear that we study more than two PDEs.
In addition, per reviewer qRRk's suggestion, we have started to incorporate another PDE system, i.e., the Rayleigh-Bénard convection system. Our first runs show positive results, presented below, with an **average improvement of 20%** compared to the Fourier neural operator (see the response to reviewer qRRk). We will add the finalized comprehensive study to the main draft.
Pdf: /pdf/b11252fcd1f0c4cc9c576854e34ef206372d6110.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural Collapse Inspired Feature Alignment for Out-of-Distribution Generalization | Accept (poster) | Summary: This paper addresses the problem of spurious correlations caused by environments from where data are collected.
The proposed method applies a mask to input data to separate spurious and semantic features.
The masked input data are fed into a local model specialized to each environment.
Each local model is trained to induce neural collapse for OOD generalization.
Strengths: - S1: Making use of neural collapse for OOD generalization is interesting.
Weaknesses: - W1: Comparison with not only OOD generalization methods but also spurious correlation (sometimes called bias or shortcut) methods is necessary. Methods that can automatically detect and split spurious and semantic features have been developed [a-e].
- W2: Types of spurious features that the proposed method can handle need to be clarified. Can the proposed method handle spurious features in superposition, e.g., objects and textures?
- W3: The rationale behind the proposed method needs to be clarified. For instance, it is unclear why the method adds the noise to the mask when learning it.
- W4: Deeper analyses in the experiments would make the paper more interesting. For example,
- Whether the neural collapse is achieved by the proposed method should be confirmed in the experiment.
- Visualizing learned masks would produce more valuable insights.
- W5: What is described in the introduction and what is done in the proposed method seems to be different. Although L42 states that `we propose to compute the Frobenius norm (F-norm) of the difference between the feature prototypes and the standard simplex ETF`, the F-norm does not appear in the proposed method.
- W6: Writing and formatting can be improved. There are many inconsistent spellings. For example,
- Is "variable features" in L153 the same as spurious features?
- The meaning of "interaction" in L189, 192, and so on is unclear. Maybe "training?"
- Such inconsistent spellings occur from Section 4.
[a] Tiwari, Rishabh, and Pradeep Shenoy. "Overcoming simplicity bias in deep networks using a feature sieve." ICML2023.
[b] Bahng, Hyojin, et al. "Learning de-biased representations with biased representations." ICML2020.
[c] Yang, Wanqian, et al. "Chroma-vae: Mitigating shortcut learning with generative classifiers." NeurIPS2022.
[d] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." ICML2021.
[e] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." NeurIPS2020.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Q1: Why do the accuracies of IRM (w/ env) in Tables 1 and 2 differ?
- Q2: When environment labels are not available, how are spurious and semantic features learned? Is there a possibility that the two types of features are learned conversely?
- Q3: How do we determine the number of local models when the total number of the environments is unknown?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Discussed in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ****Respond to W1:**** Thanks for the comments. We would like to explain as follows:
The OOD methods encompass techniques for addressing spurious correlations. Our comparison is comprehensive, covering both approaches that handle spurious correlations, such as IRM, VREx, and GroupDRO, and other OOD methods from domain generalization (DG), including MLDG, MMD, and CDANN. Additionally, we have compared our work with the methods proposed by the reviewer. Our setting is distinct in that [a-e] address bias issues, while we focus on spurious correlations and follow [1] in benchmarking our approach against state-of-the-art baselines.
**Comparison with [a]:** [a] utilizes an auxiliary network to identify and eliminate spurious features in the feature space early in training. In contrast, our method leverages the phenomenon of neural collapse to achieve feature alignment, thereby eliminating spurious features.
**Comparison with [b]:** [b] utilizes HSIC to achieve independence and learns invariant representations on biased data. In contrast, our method employs a two-stage interactive process, utilizing neural collapse to ultimately achieve learning of invariant features.
**Comparison with [c]:** [c] achieves invariant representation learning during network training by partitioning the hidden layer space into shortcut and complementary subspaces, guided by loss functions. In contrast, our method automatically partitions across different environments, leveraging the properties of class-invariant features to align class-specific features in various environments.
**Comparison with [d]:** [d] also follows a two-stage process: the first stage utilizes ERM for training, and the second stage handles instances where ERM identification errors occur. In contrast, our approach first partitions environments. Then, we employ neural collapse to align class-specific features.
**Comparison with [e]:** [e] involves constructing two networks: one amplifies bias information using GCE, representing variable information, while the other learns debiased information, representing semantic information. In contrast, we employ a two-stage interactive approach to learn invariant networks.
[1] Map: Towards balanced generalization of iid and ood through model-agnostic adapters, 2023.
****Respond to W2:**** Thanks for your questions. We would like to explain as follows: we currently follow existing work, where neither the baseline models in the literature nor the existing benchmarks incorporate spurious features in superposition (e.g., stacked objects and textures).
****Respond to W3:**** Thanks for the comments. We would like to clarify it as follows:
We follow the approach in [1,2], where we introduce some noise during the initialization of masks to enhance their randomness. This enables the masks to better capture invariant features.
[1] Invariant Representation Learning for Multimedia Recommendation
[2] Kernelized Heterogeneous Risk Minimization
****Respond to W4:**** Thanks for the comments. We would like to clarify it as follows:
In the introduction, we employ the F-norm to assess the feature conditions resulting from different methods. According to the theory of neural collapse, after network training, features collapse onto a simplex. To visually illustrate this phenomenon, we use the F-norm to measure the discrepancy between the features post-training and a standard simplex, as depicted in our first figure.
In the main text, we utilize Eq. 5 to align the post-training features to the standard simplex using a push-pull mechanism. Across different environments, invariant features are inherently more similar; thus, they consistently align to the same ETF.
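To make this measurement concrete, below is a minimal sketch (illustrative only, not our actual code; the helper names are our own) of comparing class feature prototypes to the standard simplex ETF via the Frobenius norm:

```python
import numpy as np

# Illustrative sketch (not the paper's code): quantify how far class feature
# prototypes are from the standard simplex ETF using the Frobenius norm.
def simplex_etf(K):
    # Standard K-class simplex ETF (up to rotation): unit-norm columns with
    # pairwise inner products of -1/(K-1).
    return np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

def etf_gap(prototypes):
    # prototypes: (K, K) matrix of (centered, normalized) class-mean features.
    G = prototypes @ prototypes.T              # Gram matrix of the prototypes
    M = simplex_etf(prototypes.shape[0])
    return np.linalg.norm(G - M @ M.T)         # F-norm gap to the ideal ETF

# A perfectly collapsed configuration has zero gap; unaligned prototypes do not.
assert np.isclose(etf_gap(simplex_etf(4)), 0.0)
assert etf_gap(np.eye(4)) > 0.0
```

A smaller gap corresponds to feature prototypes closer to the standard ETF, matching the usage of the F-norm in our first figure.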
****Respond to W5:**** We will incorporate significant improvements in the subsequent revisions of the paper.
****Respond to Q1:**** Thanks for the very constructive comments. We would like to clarify it as follows:
In Table 1, we directly apply the IRM algorithm to restrict parameter variations, thereby achieving invariant learning without utilizing a masking mechanism. In Table 2, to demonstrate that our approach with masking can also yield improved results, we substitute the process of learning masks with the IRM loss, as shown below.
$$\mathcal{L}\_{mask}=\mathbb{E}\_{e\in\mathcal{E}}\mathcal{L}^{e}+\alpha\\|\mathrm{Var}\_{e\in\mathcal{E}}(\nabla\_{\Theta^{mask}}\mathcal{L}^{e})\odot\mu\\|^{2}+\lambda\\|\mathbf{m}\\|^{2},$$
where the $\mathcal{L}^{e}$ is the classification loss, the second term is the constraint across environments, and the third term is a regularization term. $\mathcal{L}^e$ is the average loss value inside the environment $e$.
Consequently, the accuracies differ between the two tables.
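For concreteness, the structure of this loss can be sketched as follows (an illustrative toy computation with assumed array shapes, not our training code):

```python
import numpy as np

# Toy sketch of the IRM-style mask loss above (not our implementation):
# average per-environment loss, plus the variance of mask-parameter
# gradients across environments (weighted by mu), plus an L2 mask penalty.
def mask_loss(env_losses, env_grads, mu, m, alpha=1.0, lam=0.01):
    """env_losses: (E,) mean classification loss per environment.
    env_grads: (E, D) gradient of each env loss w.r.t. mask parameters.
    mu: (D,) weighting vector; m: (D,) mask parameters."""
    avg_loss = np.mean(env_losses)
    grad_var = np.var(env_grads, axis=0)        # Var over environments, per dim
    penalty = np.sum((grad_var * mu) ** 2)      # || Var(grad) ⊙ mu ||^2
    reg = np.sum(m ** 2)                        # || m ||^2
    return avg_loss + alpha * penalty + lam * reg
```

When gradients agree across environments, the variance term vanishes and only the classification loss and regularizer remain, which is the intended cross-environment constraint.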
****Respond to Q2:**** Thanks for the comment. We would like to clarify it as follows:
In our paper, we have clarified that when the environment is unknown, we first randomly partition the data into environments. Subsequently, through iterative interaction between environment partitioning as described in Algorithm 1 and mask learning, we achieve convergence in both environment partitioning and mask learning processes. We learn an invariant mask to distinguish between spurious and semantic features in the input data. The learning phase of the invariant mask minimizes the loss function defined in Eq. 5. When this process converges, we assert that the features align well with the standard simplex post-training. Importantly, since we minimize the loss on the masked features, which represent semantic features, there is no issue of conflicting learning between two types of features.
****Respond to Q3:**** Thanks for the comments. We would like to clarify it as follows:
When the number of environments is unknown, we treat it as a hyperparameter in accordance with our previous method. By adjusting the size of this hyperparameter, we aim to achieve optimal out-of-distribution (OOD) generalization performance. In our paper, we also conducted ablation experiments to investigate scenarios where the number of environments is unknown.
---
Rebuttal 2:
Title: The Additional Experiments according to the reviewer's suggestion
Comment: We compared the papers provided by the reviewer. Due to the unavailability of code for papers [a, c, e], we evaluated the methods from [b, d] on ColoredCOCO and COCOPlaces. We also compared our approach with Rubi [f] and LearnedMixin [g]. Our method demonstrates superior performance compared to these approaches.
Methods |ColoredCOCO| COCOPlaces|
| -------- | ------------------ | ----------------- |
**Ours** |**63.9±0.5** |**43.7±0.7**
Rebias [b] |56.0±0.2 |39.2±0.2
Jtt [d] |55.3±0.3 |38.0±0.4
RUBi [f] |53.8±0.5 |32.7±0.7
LearnedMixin [g] |52.0±1.4 |30.2±0.5
[a] Overcoming simplicity bias in deep networks using a feature sieve. ICML2023.
[b] Learning de-biased representations with biased representations. ICML2020.
[c] Chroma-vae: Mitigating shortcut learning with generative classifiers. NeurIPS2022.
[d] Just train twice: Improving group robustness without training group information. ICML2021.
[e] Learning from failure: De-biasing classifier from biased classifier. NeurIPS2020.
[f] Rubi: Reducing unimodal biases for visual question answering. Neurips2019.
[g] Don’t take the easy way out: Ensemble based methods for avoiding known dataset biases. EMNLP2019.
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal and additional experiment.
For Q2, my question is whether the mask learns to remove the spurious features from input data.
For instance, in ColoredCOCO, is there any possibility that the mask learns to remove the semantic features, i.e., the objects, and the predictive model learns classification using background colors?
Without prior knowledge such as environment labels, the model and mask cannot distinguish which is the semantic feature.
---
Reply to Comment 2.1.1:
Title: Thank you for the reviewer's further discussion.
Comment: Thank you for the reviewer's further discussion. Below, we provide a detailed analysis of how our method achieves environment partitioning and semantic feature mask optimization through a **two-stage interaction process when environment labels are unknown.** We also explain how the mask is utilized to learn invariant semantic information.
We initialize $m\odot x$ as the semantic or invariant features of $x$. Consequently, $(1-m) \odot x$ represents the spurious or variable features of $x$. Our method iteratively learns and interacts in two stages, gradually achieving environment partitioning and distinguishing between semantic and spurious information.
**Step 1: Environment Partitioning Process:** Since the distinction between different environments arises from variations in spurious features, we use these spurious features to partition environments. Based on the current sample $x$, we obtain spurious feature $\Psi(x) = (1-m) \odot x$, then randomly assign environment labels to the sample. We use the environment partitioning model to make predictions for the samples and reassign environment labels by maximizing the predicted values, as illustrated in Equation 4.
$$e(x) = \arg\max_{e \in \mathcal{E}} \Gamma^{(e)}(x,\Psi \mid \omega_e).$$
**Step 2: Mask Optimization and Learning Stage:** For datasets $D^e_{tr}$ under different environments, we apply the invariant feature mask $m$ to obtain the semantic features for each environment. These semantic features $\Phi^e(x)$ are then used for model training across environments. Since the categories are consistent across environments, their semantic features are similar. Therefore, we employ neural collapse methods for feature alignment, aiming to align semantic features from different environments onto a common Simplex Equiangular Tight Frame (ETF), as shown in Equation 5,
$$\mathcal{L}\_{mask}=-\log\frac{n\_{e,y}^\gamma\exp(\beta\cdot\mathbf{v}\_y^T\mathbf{f})}{\sum\_{k}n\_{e,k}^\gamma\exp(\beta\cdot\mathbf{v}\_k^T\mathbf{f})},\quad k\in K,\ e \in \mathcal{E}.$$
This process results in the updated invariant mask and the newly defined environment partitions after one complete iteration.
After performing multiple rounds of **Step1: environment partitioning** and **Step2: mask optimization**, if the data within the newly defined environments does not significantly differ from that in the environments defined in the previous iteration, we consider the environment partitioning to have converged. At this stage, we use the data from these environments to train the invariant network, aiming to achieve out-of-distribution (OOD) generalization. The final results reflect the effectiveness of our method, demonstrating the successful separation of semantic and spurious features using semantic masks.
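The two-stage interaction can be summarized in a toy Python sketch (an illustration of our reading only; `assign_env` and `update_mask` are simplified stand-ins for Eq. 4 and Eq. 5, not our implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def assign_env(X, m, n_envs):
    # Step 1 stand-in: partition by spurious features (1 - m) * x. Here a
    # crude scoring rule replaces the learned environment models of Eq. 4.
    spurious = (1 - m) * X
    scores = spurious.mean(axis=1, keepdims=True) * np.arange(1, n_envs + 1)
    return np.argmax(scores, axis=1)

def update_mask(X, envs, m, lr=0.1):
    # Step 2 stand-in: nudge the mask toward features whose per-environment
    # means agree (a crude proxy for aligning semantic features to a shared ETF).
    means = np.stack([X[envs == e].mean(axis=0) for e in np.unique(envs)])
    agreement = -means.var(axis=0)   # low cross-env variance -> invariant
    return np.clip(m + lr * (agreement - agreement.mean()), 0, 1)

X = rng.normal(size=(64, 8))         # toy inputs
m = np.full(8, 0.5)                  # initialized mask
for _ in range(5):                   # iterate until partitioning stabilizes
    envs = assign_env(X, m, n_envs=2)
    m = update_mask(X, envs, m)
```

The real method replaces both stand-ins with trained networks; the point of the sketch is only the alternation between environment assignment and mask refinement until the partitioning stops changing.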
We look forward to your feedback if you have any further questions!
---
Rebuttal 3:
Title: Many thanks for raising the score and we also provide further analysis to address the reviewer's concerns.
Comment: We sincerely appreciate the reviewer for raising the score. Regarding the concerns you mentioned, we would like to offer further clarification. Although we are currently unable to use CAM visualization to demonstrate our mask performance, we plan to include a mask visualization in the final version to provide a more intuitive presentation. Below, we present more granular experimental results and detailed analysis to clarify the mask learning process for the two types of features.
> **Experiments**
We designed an experiment using our proposed method for a two-stage interactive learning process to obtain partitioned environments and invariant masks $m$. To verify that $m$ can effectively distinguish between spurious and invariant (or semantic) features, we conducted additional experiments. Specifically, we used the feature information from $m \odot x$ and $(1-m) \odot x$ to predict during the test phase and compared their performance. The results show that using $m \odot x$ significantly improves the model's out-of-distribution (OOD) generalization performance compared to using $(1-m) \odot x$. This demonstrates that our proposed mask accurately captures both invariant (or semantic) and spurious features.
Methods|ColoredMNIST|ColoredCOCO|COCOPlaces|
|---|---|---|---|
**Ours (m)**|**66.9±2.4**|**56.9±1.1**|**36.7±0.9**
Ours (1-m)|21.8±0.9|16.3±1.8|8.4±0.6
> **Analysis**
We initialize the masks randomly, so $m \odot x$ may not initially reflect the input's semantic features accurately. To obtain a semantically representative mask $m$, we align features of different categories from various environments to a standard simplex equiangular tight frame (ETF), leveraging the property of neural collapse (features from different classes form a canonical simplex ETF upon balanced training). We address the reviewer's concerns using two metrics from our paper.
- **The variation in environment partitioning:** It is significant at the beginning because the initial mask $m$ does not accurately capture the invariant features of the input. Consequently, $1 - m$ also fails to effectively represent spurious features, leading to suboptimal environment partitioning. However, as training progresses with the neural collapse algorithm, the semantic mask $m$ improves, leading to better separation of semantic and spurious features. Ultimately, environment partitioning stabilizes, showing minimal changes once it is complete. The table below shows the changes in environment partitioning between the new and old environments over multiple interactions of our method on the ColoredCOCO dataset. It indicates that the variation in environments decreases over iterations, suggesting that $1 - m$ increasingly captures spurious information for better environment partitioning.
| Number of interactions | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\Delta N$ | 3200 | 2011 | 1099 | 696 | 442 | 509 | 405 | 325 | 280 | 235 | 174 | 75 | 23 |
where $\Delta N = N^{(e)}\_{new} - N^{(e)}\_{old}$ represents the number of samples whose environment assignment changed between the previous iteration (i.e., the old environments) and this iteration (i.e., the new environments). As the number of iterations increases, the change in assignments gradually converges to 0, indicating that the environment partitioning has converged.
- **The loss incurred from using the neural collapse method for alignment in the models in different environments:** Initially, the loss is high because $m$ is initialized. However, as the number of interactions increases, the loss gradually decreases to a low level. At this point, $m$ effectively captures the semantic features across different environments. The table below shows the initial loss at different stages of interaction using our method in ColoredCOCO dataset. It reveals a successive decrease in loss, indicating that $m$ increasingly captures the invariant (or semantic) features of the input more effectively.
Number of interactions |0|1|2| 3 |4 |5|6|7|8|9|10|
|---|---|---|---|---|---|---|---|---|---|---|---|
$\mathcal{L}\_{mask}$|10.69|5.48|2.25 | 1.69 | 1.55| 1.47 | 1.21| 1.02 | 0.82|0.65| 0.50
where $\mathcal{L}\_{mask}=-\log\frac{n\_{e,y}^\gamma\exp(\beta\cdot\mathbf{v}\_y^T\mathbf{f})}{\sum\_{k}n\_{e,k}^\gamma\exp(\beta\cdot\mathbf{v}\_k^T\mathbf{f})}, k\in K, e \in \mathcal{E}$.
We sincerely hope the reviewer will reevaluate our paper based on the additional experiments and explanations provided above. We look forward to your feedback if you have any further questions!
---
Rebuttal Comment 3.1:
Title: A kind request for your feedback about the summary of our clarifications.
Comment: Many thanks for taking time on reviewing our paper. We would like to provide a summary of the key issues raised by the reviewer. We addressed the reviewer's questions through both **rigorous analysis** and **comprehensive experiments**.
1. **Experiments:** To verify whether $m$ can effectively distinguish between spurious and invariant features, we trained the invariant network using $m \odot x$ and $(1-m)\odot x$, and compared their performance during the final testing phase.
Methods|ColoredMNIST|ColoredCOCO|COCOPlaces|
|---|---|---|---|
**Ours (m)**|**66.9±2.4**|**56.9±1.1**|**36.7±0.9**
Ours (1-m)|21.8±0.9|16.3±1.8|8.4±0.6
2. **Analysis:** We address the concern using two metrics from our paper.
- **The variation in environment partitioning:**
Num |0|1|2| 3 |4 |5|6|7|8|9|10|
|---|---|---|---|---|---|---|---|---|---|---|---|
ColoredCOCO|3200|2011|1099 | 696 | 442| 509 | 405| 325 | 280|235| 174
- **The loss incurred from using the neural collapse method for alignment in the models in different environments:**
Step |0|1|2| 3 |4 |5|6|7|8|9|10|
|---|---|---|---|---|---|---|---|---|---|---|---|
ColoredCOCO|10.69|5.48|2.25 | 1.69 | 1.55| 1.47 | 1.21| 1.02 | 0.82|0.65| 0.50
We sincerely hope the reviewer will re-evaluate our paper based on the additional experiments and explanations provided above. We look forward to your feedback if you have any further questions! | Summary: The paper leverages the neural collapse inspired ETF behavior to simulate different environments in datasets, and uses it for OOD classification.
Strengths: The paper uses a phenomenon that's apparent in the standard setting, for a task that varies from the standard setting. It uses intuitive notions to tackle the task of OOD classification. The paper experiments are generally convincing.
Weaknesses: The paper seems generally consistent and well merited. The experiments are a bit lacking, but are convincing.
Technical Quality: 3
Clarity: 2
Questions for Authors: The following papers seem missing from the neural collapse literature that may be helpful:
1. https://arxiv.org/abs/2112.15121
2. https://proceedings.neurips.cc/paper_files/paper/2023/hash/b63ad8c24354b0e5bcb7aea16490beab-Abstract-Conference.html
3. https://openreview.net/pdf?id=162TqkUNPO
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Respond to Q1:** Thank you for the reviewer's suggestions. We will include citations to these papers in a subsequent version.
---
Rebuttal 2:
Title: We would like to supplement detailed discussion on neural collapse literature and more convincing experiments.
Comment: We sincerely appreciate the reviewer's positive feedback on our manuscript. Due to the time constraints of the rebuttal period, training on datasets such as NICO, which require extensive computational resources, took longer than anticipated; we are therefore only now able to present all the supplementary experimental results. **We sincerely hope that the reviewer will re-evaluate our work and decide the final rating based on the following discussions.**
> **The following papers seem missing from the neural collapse literature that may be helpful.**
**We thank the reviewer for providing three important works from the neural collapse literature, and would like to incorporate the following discussion in our final version.**
- [1] employs the neural collapse algorithm in **transfer learning to facilitate few-shot learning**, enabling the training of a linear classifier on top of the learned penultimate layer and achieving good results. In contrast, our approach utilizes the neural collapse phenomenon in the context of imbalanced categories across different environments to **learn invariant masks and perform environment partitioning,** ultimately enhancing OOD generalization.
- [2] primarily leverages the phenomenon of neural collapse to guide **semantic clustering of categories in self-supervised learning.** This is achieved by using regularization to promote clustering within the same category and to separate clusters of different categories. In contrast, our method aligns features of the same category across different environments to learn invariant feature masks. These masks are then used to enable the network to better learn invariant features, thereby enhancing OOD generalization.
- [3] represents a highly significant contribution that can greatly enhance the quality of our work. It primarily focuses on using the neural collapse phenomenon to introduce a novel method for **measuring the generalization bounds of neural networks.** This method, being empirically non-vacuous and largely independent of depth, offers superior generalization bounds compared to traditional measurement approaches. In future versions, our method can draw on the approach presented in this paper to conduct a theoretical analysis of the generalization bounds.
**We also investigate recent advances in neural collapse algorithm for other applications as below.**
- [4] investigates the use of the neural collapse phenomenon within a **continual learning** setting. The scenario involves introducing new classes in a few-shot manner after the model has been trained on a large dataset, leading to an imbalance between old and new classes and resulting in severe catastrophic forgetting. By employing the neural collapse method, the model can align old class features and separate new class features, thereby preserving knowledge of old classes while learning new class knowledge effectively.
- [5] addresses the problem of sample imbalance in **large models** using the neural collapse method. By leveraging the property of neural collapse where features between classes exhibit a standard ETF distribution after balanced training, the method guides the collapse of features across different categories. This approach enhances the performance of large models in the context of sample imbalance.
> **The experiments are a bit lacking, but are convincing.**
**To make the experiment section in our paper more convincing, we have added extensive experiment as below.**
- **We conduct additional ablation experiments to explore the effect of mask position (i.e., different neural network layers) on OOD performance, applying masking to the output features of different layers in the network.**
Methods|ColoredCOCO|COCOPlaces|ColoredMNIST
|-|-|-|-|
Output Layer (suggested)|**63.9 ± 0.5**|43.7 ± 0.7|**58.7 ± 2.8**
Layer 0|62.3 ± 0.6|41.2 ± 0.5|54.2 ± 1.5
Layer 1|63.3 ± 0.4|**44.5 ± 0.3**|53.6 ± 1.7
Layer 2|61.2 ± 0.2|40.2 ± 0.1|50.6 ± 0.3
Layer 3|63.2 ± 0.3|43.2 ± 0.6|51.2 ± 0.9
We found that placing the mask in the output layer (i.e., the layer preceding the linear classifier) gives the best results in general, which is also consistent with the findings of neural collapse.
- **In comparison with the two-stage HRM method, we found that our approach stably outperforms HRM.**
Methods|ColoredMNIST|ColoredCOCO|COCOPlaces|NICO|
|-|-|-|-|-|
Ours|**66.9 ± 2.4**|**56.9 ± 1.1**|**36.5 ± 0.8**|**85.9 ± 0.2**
HRM [6]|61.3 ± 0.6|52.7 ± 0.7|33.4 ± 0.6|79.6 ± 0.9
- **We also compared our method with more competing baselines and found that our method outperforms them.**
Methods |ColoredCOCO| COCOPlaces| NICO
|-|-|-|-|
**Ours** |**63.9 ± 0.5** |**43.7 ± 0.7** | **85.4 ± 0.6**
Rebias [7] |56.0 ± 0.2 |39.2 ± 0.2 | 78.4 ± 1.7
Jtt [8] |55.3 ± 0.3 |38.0 ± 0.4| 75.5 ± 0.3
RUBi [9] |53.8 ± 0.5 |32.7 ± 0.7 | 73.4 ± 0.2
LearnedMixin [10] |52.0 ± 1.4 |30.2 ± 0.5 | 73.7 ± 0.1
We will definitely put the above additional experimental results into our revised manuscript -- thank you so much!
---
Rebuttal 3:
Title: Please kindly find our supplemental references as follows.
Comment: [1] "On the role of neural collapse in transfer learning." arXiv, 2021.
[2] "Reverse engineering self-supervised learning." NeurIPS, 2023.
[3] "Comparative generalization bounds for deep neural networks." TMLR, 2023.
[4] "Neural collapse inspired feature-classifier alignment for few-shot class incremental learning." ICLR, 2023.
[5] "Bridging the gap: neural collapse inspired prompt tuning for generalization under class imbalance." KDD, 2024.
[6] "Heterogeneous risk minimization." ICML, 2021.
[7] "Learning de-biased representations with biased representations." ICML, 2020.
[8] "Just train twice: Improving group robustness without training group information." ICML, 2021.
[9] "Rubi: Reducing unimodal biases for visual question answering." NeurIPS, 2019.
[10] "Don't take the easy way out: Ensemble based methods for avoiding known dataset biases." EMNLP, 2019.
---
Rebuttal Comment 3.1:
Comment: Dear reviewer F1YK,
Since the discussion period will end in a few hours, we will be online waiting for your feedback on our rebuttal, which we believe has fully addressed your concerns.
We would highly appreciate it if you could take into account our response when updating the rating and having discussions with AC and other reviewers.
Authors of # 10491 | Summary: The spurious correlation between image background features and their labels is a significant research problem, and existing research suffers from the difficulty of decoupling such features. In this paper, the authors propose a new approach to the spurious correlation problem that alternately performs environment partitioning and semantic mask learning from the perspective of neural collapse. Extensive experiments are conducted on four datasets, and the results show that the proposed method significantly improves out-of-distribution performance.
Strengths: This paper explores an important and widespread problem in real-world applications with solid and extensive experiments. The writing is clear and the narrative is easy to follow, facilitating an understanding of the spurious correlations problem. The use of neural collapse is particularly innovative.
Weaknesses: W1: In lines 48-50, it is mentioned that IRM-based methods learn similar representations from different environments, indicating a lack of proper alignment. Could you provide a corresponding experiment to demonstrate this phenomenon?
W2: In Figure 3, the explanation of the middle module that uses logits to judge the environment is unclear. Could you please clarify the structure of the local models, the number of local models used, and the specific meaning of the logit values?
W3: Could you explain the differences between masks based on pixel-level and feature-level approaches? If using feature-level masks, what is the impact of different network-layer features on model performance?
This work addresses an important and interesting question by introducing neural collapse from an invariant perspective, which I believe can provide valuable insights to the community. However, my main concern is that the same mask is used to learn both invariant and variable feature information. What are the advantages of the mask learning mechanism proposed in this paper compared to HRM's [1] mask mechanism?
[1] Heterogeneous Risk Minimization
Technical Quality: 3
Clarity: 3
Questions for Authors: For more information see Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors have adequately described the limitations in their submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ****Respond to W1:**** Thank you for pointing out the issue. We indeed omitted the comparisons in our manuscript. We have actually demonstrated this in Figure 1, where we used the F-norm to measure the degree of alignment. A smaller F-norm indicates that, after training, the feature prototypes are closer to the standard ETF.
****Respond to W2:**** Thank you for the reviewer's reminder. We sincerely apologize for the misunderstanding caused by our negligence. A precise description will be provided in future revisions.
As shown in Figure 3, our method represents the variable features of the input as $(1-m) \odot x$. Using $(1-m)$, we obtain the variable features (equivalent to the environmental information) from the previously partitioned data. The neural networks corresponding to different environments are then trained on these variable features. After training, the networks for the different environments predict their respective logits; for each input's true label, the index with the maximum value determines the input's environment. The structure of the local model is the same as that of the local model later used for alignment with the neural collapse algorithm.
****Respond to W3:**** Thanks for the valuable comments. We would like to explain as follows:
The primary difference between the two methods is the location of the masking. Pixel-level masking is applied to the input image, offering stronger interpretability. Feature-level masking, on the other hand, applies masking to the output features of the neural network, achieving better results and reducing the dimensionality of the masking. To address the reviewers' concerns, we performed experiments by applying masking to the output features of different layers in the network.
Methods |ColoredCOCO| COCOPlaces|
| -------- | ------------------ | ----------------- |
Ours |63.9±0.5 |43.7±0.7
Ours layer0 |62.3±0.6 |41.2±0.5
Ours layer1 |64.3±0.3 |44.5±0.3
Ours layer2 |61.2±0.2 |40.2±0.1
Ours layer3 |63.2±0.3 |43.2±0.6
****Respond to W4:**** Thanks for the valuable comments. We would like to explain as follows: Our method aligns invariant features across different environments by leveraging the phenomenon of neural collapse. For the same category, invariant features should be similar; thus, we use the same ETF for alignment, facilitating mask learning. Although the HRM algorithm also employs a two-stage approach, it partitions environments through clustering and then learns the mask through regularization. We conducted comparative experiments with the HRM algorithm to evaluate our method.
Methods |ColoredMNIST | ColoredCOCO| COCOPlaces|
| --- | ---|--- | --- |
Ours |66.9±2.4|56.9±1.1 |36.5±0.8
HRM |61.3±0.6 | 52.7±0.7 |33.4±0.6
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Thank you for your rebuttal!
I want to thank the authors for their tremendous efforts during the rebuttal process. The additional comparative experiments have significantly improved the quality of the paper. The authors conducted relevant experiments to demonstrate the impact of mask positioning, thoroughly analyzed the differences between the proposed method and the HRM approach, and provided extensive experimental evidence to validate the effectiveness of their method, addressing my concerns. Based on these points, I have decided to raise my score to 7. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Batched Energy-Entropy acquisition for Bayesian Optimization | Accept (poster) | Summary: This paper introduces a new acquisition function, BEEBO, for batched BO. BEEBO constructs a (negative) free-energy-like acquisition function, enabling gradient-based optimization, tight exploration-exploitation control, and risk-averse BO under heteroskedastic noise. It aims to improve on existing parallel acquisition functions in the following ways:
1. uses a hyper-parameter T to directly balance exploration and exploitation by separating these two parts clearly;
2. keeps the behavior predictable by scaling E and I with batch size;
3. enables gradient-based optimization by retaining closed-form expressions for GPs.
The paper presents several experimental comparisons demonstrating BEEBO's effectiveness.
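The three claimed properties can be sketched concretely for a GP surrogate. The following is an illustrative toy implementation, not the paper's actual code: the RBF kernel, the mean-based energy term, and the standard information-gain form are all assumptions made for this sketch.

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel between the rows of A and B (assumed kernel).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X_train, y_train, X, noise=1e-4):
    # Standard GP regression posterior mean and covariance at the points X.
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_train, X)
    sol = np.linalg.solve(K, Ks)
    return sol.T @ y_train, rbf(X, X) - Ks.T @ sol

def free_energy_acq(X_batch, X_train, y_train, T=1.0, noise=1e-4):
    # Energy term E: mean predicted value of the batch, multiplied by the
    # batch size Q so that both terms scale linearly with Q (property 2).
    Q = len(X_batch)
    mu, cov = gp_posterior(X_train, y_train, X_batch, noise)
    energy = Q * mu.mean()
    # Entropy term I: closed-form GP information gain of observing the batch
    # (property 3); the hyperparameter T trades the two terms off (property 1).
    info = 0.5 * np.linalg.slogdet(np.eye(Q) + cov / noise)[1]
    return energy + T * info
```

Because both terms are closed-form in the GP posterior, the whole expression is differentiable in `X_batch` and can be maximized by gradient ascent.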
Strengths: 1. This paper shows an enlightening acquisition function method inspired by statistical physics.
2. The experimental results show the effectiveness of BEEBO on problems without noise or with heteroskedastic noise.
Weaknesses: 1. The idea is straightforward: simply combine two common components—entropy reduction and the weighted sum of values. In my opinion, the novelty is not very strong.
2. The article doesn't discuss the situation when BEEBO is used with other surrogate models. It seems that if BEEBO is not used with GP, it loses the advantages of closed-form expressions and gradient descent optimization.
3. In the control problems shown in Figure A2, the performance of meanBEEBO and maxBEEBO is not outstanding; in particular, both variants are surpassed by KB on the robot pushing problem.
4. Although BEEBO performs well on many synthetic test problems, its versatility and effectiveness require more experimental validation in specific applications.
5. In the experiments in the main text, the authors only showed the comparison with q-UCB. It would be better to show comparisons with other batched baselines and provide a thorough analysis. The comparison with q-UCB shows the advantage in the balance between exploration and exploitation, but other advantages emphasized in the paper, such as the benefits of gradient descent optimization and the tight control of the explore-exploit trade-off, are not fully demonstrated. I suggest that the authors re-organize the paper to move the comparison with other baselines to the main paper.
6. The theoretical analysis is not deep. No regret bound is analyzed.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How to set the parameter T properly? Do you have some instructions?
- In line 158, why does I(x) also scale linearly with batch size?
- Equation A5 is not shown.
- What are the advantages of multiplying batch size Q in E(X)?
- How does the behavior of BEEBO change with batch size?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions. We respond to them individually below, and are looking forward to further discussion.
> 1. The idea is straightforward [..]
We are convinced that BEEBO is a novel acquisition function. In section B.2, we extensively compare BEEBO to GIBBON, which is, to our knowledge, the most similar existing work. We show how BEEBO's entropy term can have advantages over GIBBON's, and empirically show that the closed-form summarization term works better for large batches than GIBBON's lower bound approximation.
Moreover, BEEBO’s closed-form softmax-weighted sum expectation is novel. It is of course not our intent to claim that we have discovered entropy reduction. However, we are not aware of any existing work that proposed an acquisition function of this form for batch BO.
>2 . [...] other surrogate models.
Indeed, BEEBO was designed to work primarily with GPs, as closed-form expressions and multivariate normal posteriors enable efficient computation. We have edited the abstract to already mention the GP focus there. In the outlook section, we now provide a discussion of how BEEBO could be generalized:
>Beyond GPs, BEEBO could be generalized to work with any probabilistic model. However, GPs are unique in that $H_{\textup{aug}}$ is available in closed form and can be used to compute $I(\mathbf{x})$ analytically, without solving the integral over $\mathbf{y}$ in Eq. 4. Other models may require approximations and sampling-based approaches for computing the information gain.
>3. In control problems [...] the performances [...] are not outstanding [...]
We agree - on these problems, KB turns out to be very strong, not just in comparison to BEEBO but to all baselines, including TuRBO, which was originally found to be state of the art for these problems. We think that this is a consequence of using LogEI. Hence, we also mentioned in the results section of the main text that KB using LogEI performs very competitively for large batches. We have now updated this statement to explicitly point to the control problems as well.
>4 . Although BEEBO performs well on many synthetic test problems [..] more experimental validation in specific applications.
We are happy to include additional experiments. Are there any specific ones that we should consider? Overall, we are convinced that our empirical section is comprehensive and well aligned with common practices in BO research, such as e.g. Wilson et al, NeurIPS 2017 [24], Ament et al, NeurIPS 2023 [57], Gonzalez et al. AISTATS 2016 [37].
> 5. [...] comparisons with other batched baselines [...]
Thank you for the recommendation. We had tested multiple layouts for the results and ultimately decided that, given the number of experiments and methods, including the baselines in the main text tables makes them too hard to read and interpret. We have thus maintained the focus on q-UCB and discuss additional baselines in the text, ensuring the supplement is referred to in numerous places.
We believe that both Figure 1, as well as the 33 batch regret experiments in Table 3, fully demonstrate the controllability of the trade-off in BEEBO and highlight the key problem with q-UCB not having tight control with large batch sizes.
> 6. The theoretical analysis is not deep. No regret bound is analyzed.
We provide an extended analysis of the BEEBO algorithm in the supplementary section B, tying BEEBO to the rich theoretical framework of existing methodology, such as UCB, DPP and local penalization.
A theoretical analysis focusing on regret bounds, along the approach outlined for B-UCB’s bounds in [34] would indeed be very interesting, and we believe this deserves its own paper.
Given the provided theoretical linkage to existing methodology, we consider the manuscript to be quite comprehensive, focusing on empirical demonstration of BEEBO's efficiency. With 33 experiments conducted, three different batch sizes (q=5, 10, 100), eight different acquisition strategies, several of them operated at different scales (GIBBON default & scaled, q-UCB & BEEBO at different T), and 10 replicates each (corresponding to more than 12,800 recorded BO trajectories in total), we consider our benchmarking approach thorough. We believe that also including an analysis of regret bounds in the manuscript would only further dilute the focus of the paper for the reader, as already indicated by reviewer mXVT.
> How to set the parameter T properly
This is indeed very important - we do not want the controllability (which we consider an advantageous feature in general) to become a nuisance. As outlined in our main response, we now feature the relation of T to UCB more prominently.
> In line 158, why does I(x) also scale linearly with batch size?
The linear scaling of $I(x)$ is a consequence of I being a log determinant (or more precisely, the difference of two log determinants). Considering the log determinant as the sum of log eigenvalues, we see that the scaling is in fact linear, as we add an eigenvalue for each increase in Q.
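This argument can be checked numerically. Taking the information gain in its standard GP form, $I(X) = \frac{1}{2}\log\det(I + K/\sigma^2)$ (assumed here for illustration; not necessarily the paper's exact expression), each well-separated batch point adds roughly one constant log-eigenvalue:

```python
import numpy as np

def info_gain(X, ls=0.3, noise=0.01):
    # I(X) = 0.5 * logdet(I + K/noise) is a sum of log-eigenvalues, so each
    # additional point contributes roughly one more constant term when the
    # points are far apart relative to the kernel lengthscale.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ls**2)
    return 0.5 * np.linalg.slogdet(np.eye(len(X)) + K / noise)[1]

# Batches of well-separated points with Q = 5, 10, 20: the gain is ~linear in Q.
gains = [info_gain(np.arange(q, dtype=float)[:, None]) for q in (5, 10, 20)]
```

Doubling Q doubles the gain almost exactly in this regime; the linearity only degrades once points crowd each other relative to the lengthscale.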
> The Equation A5 is not shown.
We removed this trailing newline.
>What are the advantages of multiplying batch size Q in E(X)?
To maintain a Q-independent T parameter, we wish for both $I(x)$ and $E(x)$ to scale equally. As we define $E(x)$ to be the mean or the softmax-weighted sum, this expression does not scale with Q itself, and we therefore multiply it.
>How does the behavior of BEBBO change with batch size?
As just outlined, the explore-exploit parametrization of BEEBO is invariant. That being said, it is correct that there will always be an effect from Q. Specifically, at large Q compared to the search domain, one might expect to see the exploration (=repulsion) term grow larger, as points will compete to occupy the same regions of the domain. We believe that our additional low-Q experiments in the updated manuscript support this, showing that at low Q higher explore rates are beneficial for BEEBO as well as for q-UCB.
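The repulsion effect described above is easy to reproduce with the closed-form GP information gain (a toy check with an assumed RBF kernel and lengthscale, not the paper's setup): a batch that crowds one region of the domain gains less information than one that spreads out, because near-duplicate points share eigenvalues.

```python
import numpy as np

def info_gain(X, ls=0.05, noise=0.01):
    # Closed-form GP information gain; highly correlated (crowded) points
    # contribute overlapping eigenvalues, reducing the total gain.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / ls**2)
    return 0.5 * np.linalg.slogdet(np.eye(len(X)) + K / noise)[1]

Q = 10
spread = np.linspace(0.0, 1.0, Q)[:, None]       # batch filling the domain
clustered = 0.5 + 0.01 * np.arange(Q)[:, None]   # batch competing for one region
```

The entropy term therefore pushes large batches apart, which is the repulsion behavior that grows in importance as Q becomes large relative to the search domain.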
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I keep my evaluation. | Summary: Proposing a new acquisition function inspired by statistical physics, which allows explicit control of exploration-exploitation trade-offs in a batch BO setting.
Strengths: Drawing inspiration from statistical physics is a promising direction, as it naturally aligns with Bayesian approaches.
Weaknesses: **Major Points**
1. **Lack of Unique Selling Point:**
The method does not appear to solve any unique cases that other methods cannot. While the related work section outlines many similar approaches, this work only compares itself to q-UCB. Without a theoretical study, such as convergence rate analysis, there is insufficient motivation to adopt another heuristic approach like this. To demonstrate efficacy, a comprehensive empirical study with extensive comparisons to existing works is necessary, given the unclear advantages.
2. **Review of Claimed Selling Points:**
- **Not Based on MC Integration Like BoTorch:**
While this is true, it is unclear if it is beneficial. MC approaches are approximations but have convergence guarantees (refer to the BoTorch paper's appendix). This work lacks such guarantees.
- **Tight Control of Exploration-Exploitation Trade-off:**
The proposed method is not the only solution. UCB theory can bound the feasible region by [max LCB(x), UCB(x)] with 1 - $\delta$ probability (e.g., see [1]). This region can be controlled by $\beta$ hyperparameters, corresponding to $\delta$. Constrained batch BO within this region would yield similar results with theoretical guarantees.
- **Heteroskedastic BO:**
There are no comparisons with existing methods. UCB variants can address this problem. Vanilla UCB theory does not differentiate between epistemic and aleatoric uncertainty; therefore, UCB with a heteroscedastic GP can serve the same purpose. For example, training a heteroscedastic GP on the observed dataset and replacing the inner GP on the noise variance with a normal isotropic likelihood variance (pure epistemic uncertainty) would yield similar results to risk-averse batch BO, without needing to change the modelling of the acquisition function for hetero-/homoscedastic cases as in this work.
3. **Setting k in Practice:**
Setting $k$ is not an easy task for users. In UCB, $\beta$ presents a similar challenge, but there are theoretical guidelines and algorithms to compute it (e.g., [2]).
4. **Constrained Uncertainty Sampling:**
This work can be understood as a variant of constrained uncertainty sampling. As [3] explains, variance reduction can be written similarly to the entropy proposed in this paper (see section 2.2). It also shows that the variance-only approach is inferior to UCB both theoretically and empirically. The batch setting may lead to model misspecification, particularly when hallucination (aka fantasization) is applied. The concerns and approach are notably similar to ([49] in your citation number), making a comparison with their method unavoidable.
5. **Data Augmentation Procedure:**
The explanation is unclear. Is it fantasization (aka hallucination) or simply observed points? How does this differ from Eq.(3) in [3]?
- [1] Fengxue Zhang, Jialin Song, James C Bowden, Alexan- der Ladd, Yisong Yue, Thomas Desautels, and Yuxin Chen. Learning regions of interest for bayesian optimiza- tion with adaptive level-set estimation. In International Conference on Machine Learning, pages 41579–41595. PMLR, 2023.
- [2] Kihyuk Hong, Yuhang Li, and Ambuj Tewari. An optimization-based algorithm for non-stationary kernel bandits without prior knowledge. In International Conference on Artificial Intelligence and Statistics, pages 3048–3085. PMLR, 2023.
- [3] Srinivas, N., Krause, A., Kakade, S., & Seeger, M. (2010). Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design. In Proceedings of the 27th International Conference on Machine Learning (pp. 1015-1022).
Technical Quality: 2
Clarity: 2
Questions for Authors: Questions are written in the above weakness section.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations are discussed in the discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments. We have answered them individually below, and are happy to discuss further.
>**Lack of Unique Selling Point**
We would like to point out that, as mentioned in line 275, supplementary section D.1 provides a comprehensive benchmark beyond q-UCB, including GIBBON, TS, KB, q-EI, and now, as suggested by reviewer x4GE, also TuRBO.
Moreover, we provide an extensive theoretical analysis of how BEEBO compares to other strategies in supplementary section B, highlighting how BEEBO can be understood in light of existing works and improve upon them, tying our method to theory such as DPPs.
BEEBO does in fact address issues that other methods face: q-UCB lacks controllability of acquisition behavior in practice, as quantified in Table 3, and GIBBON is known to scale poorly to larger batch sizes, whereas BEEBO scales well.
>**Not Based on MC Integration**
We believe that Figure 1 and Table 3 highlight a serious practical problem with large batch q-UCB in BoTorch. Moreover, the MC q-EI also failed to outperform BEEBO in our extended benchmark.
As BEEBO is an analytical expression without sampling, we are unsure what kind of convergence guarantees are meant by the comment. We optimize BEEBO using gradient descent, and while it is, as most acquisition functions, multimodal, it can be optimized efficiently using multiple restarts, as is common in BO practice (and done in BoTorch).
>**Tight Control of Trade-off**
Thank you for the reference. BALLET uses local regions to shrink high-dimensional search spaces, similar to TuRBO. However, unlike TuRBO, the paper is exclusively about single-point BO. It is not directly obvious to us how batched acquisition is meant to be done using their acquisition function (Eq. 7) without further generalizing BALLET. The sampling-based acquisitions (Eq. 8,9) may permit batch mode, but no mention of this is made in the paper.
We would like to note that the primary focus of our work is batch BO, rather than high-dimensional BO. We now include TuRBO as an exemplary method for trust region approaches.
We think it would be inadequate to benchmark single-point works that make no mention of batched BO in their own experiments. It is beyond the scope of our paper to generalize existing single-point works to batch mode. If there is a batched BALLET that we are not aware of, we would be happy to add it to our benchmark.
We have modified our claims to explicitly state “Tight Control of the explore-exploit trade-off *in batch mode*”.
>**Heteroskedastic BO**
We fully agree that existing (single-point) methods also address heteroskedasticity. Hence we compare to Makarova et al.’s RAHBO [11] in supplement section B. We are happy to include any other relevant UCB variants that handle batch BO. A key limitation of existing work is that to our knowledge, e.g. RAHBO has not been generalized to batches. The core focus of our work is batch, with heteroskedasticity as a secondary consideration.
As you mention, UCB does not differentiate between epistemic and aleatoric uncertainty. Thus we are unfamiliar with the concept that we can get risk-averse BO from a heteroskedastic GP without modifying UCB as done in e.g. Makarova et al. Could you provide a reference?
>**Setting k**
It is true that setting parameters in BO can be challenging. However, in our work, we considered UCB-like controllability a desirable feature. In supplementary section B.1, we show how BEEBO's $T$ and UCB's $\beta$ (or $\kappa$, as used in this work to avoid confusion with the softmax $\beta$) are related. We always use $T'$, a parameter scaled for UCB equivalence, to allow users to follow existing guidance and to allow fair experimental comparison.
As outlined in the main response, we have edited section 3.1 to feature this more prominently.
>**Constrained Uncertainty Sampling**
[3] indeed also discusses the information gain. However, we understand that it does not use it for batch BO.
We are unsure how the “variance-only approach” comment is meant to be understood in light of our work. We would also expect only using the variance of a point to be a poor strategy for BO - however, we do no such thing.
Could you provide a reference regarding misspecification, and why batch BO in general causes more misspecification vs. single-point? We understand that [49] considers misspecification of GPs irrespective of the BO strategy, with no causal relation to batch BO.
Note that BEEBO does not use iterative fantasization - we immediately discard fantasized GPs after computing H_aug. The model itself remains unchanged for further gradient descent iterations, and no hallucinated observations are used.
We attempted including SOBER [49]. Following the quickstart guide in the official repository, and plugging that into the experimental loop that we used for all other experiments, we encountered problems such as linalg errors, invalid gaussians, and OOM errors on some test problems. We have thus decided to exclude SOBER, rather than reporting results that may not be valid due to technical limitations in the available code. We are open to add a statement on the exclusion due to these problems in the manuscript.
> **Data Augmentation Procedure**
The terms in BEEBO and [3] do not differ - they both refer to the information gain of a GP (and call it so). We avoided using the term fantasization, as it is often understood as fantasizing (and using) $y$ values at $x$, as done in e.g. the KB baseline. As also pointed out in [3], when using the entropy (and not the posterior mean), $y$ is not used, so no hallucinated $y$ values affect the result.
As we introduce the concept already in Eq. 4, the generic form that marginalizes over $y$, we use the term augmentation, as in that case it would not be called fantasization.
We have edited Alg. 1 to use the term “fantasize” instead of "augment" to make clear that we use fantasization to compute $C_{aug}$ on the implementation level.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, and I acknowledge your effort. Since I had not initially reviewed the appendix, my concern about weak evaluation was misplaced. However, I believe the main paper should be self-contained to convey key points effectively. While it is great to include many experimental results, they should be accompanied by clear explanations of "why". While theory is the easiest way to answer this, experimental investigation beyond final convergence comparisons is also crucial.
I would like to clarify a few questions, as it seems the authors may not have fully read the related papers. Given the limited time, I understand that reviewing all related literature can be challenging. Therefore, I believe another round of conference review would be beneficial, allowing more time to refine the work. Thus, I still recommend rejection but have raised my score to 4 in recognition of the extensive experimental efforts.
**MC Acquisition Function (AF):** To decide the next query point, the AF must be maximised, which involves an "inner" optimisation loop that is non-convex. The MC AF leverages sub-modularity and has been proven to converge to the true global maximum of the AF in each round. The closed-form AF requires heuristics, such as multi-start local optimizers like L-BFGS-B. In this context, I do not believe a closed-form expression is necessarily better than MC methods, as they offer different benefits.
**Tight Control:** This serves as a counter-example that "UCB theory" can achieve tight control through $\beta$. BALLET is an example of a method that restricts exploration via confidence bounds. This is not about suggesting another baseline comparison; rather, it is about preferring simpler, well-established approaches. When a provably converging method easily comes up, why introduce a new heuristic?
**Heteroscedastic BO:** While I do not have a specific reference, this idea is fairly intuitive and the paper should exist somewhere. It serves as another counter-example.
Good luck with the revision!
---
Reply to Comment 1.1.1:
Comment: Thank you for taking the time to consider our rebuttal! We appreciate the effort to work together to improve the clarity of our work.
**MC AF optimization**:
We agree that MC is a powerful paradigm and that different strategies provide different benefits. Optimizing MC AFs uses sample average approximation to optimize towards the true maximizer $\alpha^* $ . As you also mentioned in your comments, Appendix D.3 in the BoTorch paper provides more theoretical background to that approach. As one would expect for MC approximations, there is a critical dependency on the number of samples: The results of theorems 1 and 2 for approximating $\alpha^{*}$ both hold in the **limit $i \to \infty$, an infinite number of samples**.
In practice, one will naturally face a situation with a rather limited number of samples, as also indicated in various BoTorch tutorials (e.g. 256 samples in the $q$-NEI tutorial), so the true AF is by definition *approximated*.
As mentioned in D.3, this will therefore leave us with an **integration error**, that becomes more critical in large $q$ as the AF is of dimension $\mathbb{R}^{q}$. Our closed-form expression does by design not suffer from an integration error.
Note that BoTorch also uses multi-start L-BFGS-B for MC AFs to mitigate the fact that the optimization approximates the true AF (BoTorch paper section 4.1).
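The dependence on the sample count can be illustrated with a toy MC batch acquisition estimator (the batch posterior and incumbent value here are hypothetical, and this is not BoTorch's implementation): the spread of the estimate across repeated evaluations, i.e. the residual integration error at finite sample count, shrinks only as $1/\sqrt{n}$.

```python
import numpy as np

def mc_qei(mu, cov, best, n, rng):
    # Monte Carlo q-EI: E[ (max_i f_i - best)_+ ] with f ~ N(mu, cov),
    # estimated from n joint posterior samples (no closed form for q > 1).
    f = rng.multivariate_normal(mu, cov, size=n)
    return np.maximum(f.max(axis=1) - best, 0.0).mean()

rng = np.random.default_rng(0)
mu = np.zeros(3)              # hypothetical batch of q = 3 points
cov = 0.5 * np.eye(3) + 0.5   # correlated posterior covariance (assumed)

def estimator_spread(n, repeats=50):
    # Standard deviation of the MC estimate across repeats: the residual
    # integration error left at a finite number of samples.
    return np.std([mc_qei(mu, cov, 0.5, n, rng) for _ in range(repeats)])
```

Going from 64 to 4096 samples reduces the spread by roughly a factor of eight, consistent with the $1/\sqrt{n}$ rate; a closed-form acquisition has no such error by construction.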
**Tight control**:
Thank you for clarifying how the BALLET comment was intended. Again, our argument is not that UCB *theory* does not provide control (we are convinced it does), but that the practical limitations of BoTorch's MC methods at large $q$ outlined above do not deliver that control in practice, as we see empirically.
**Heteroscedastic BO**:
We would really appreciate a reference, as it is not intuitive to us. Our understanding of heteroskedastic noise and UCB is rooted in the analysis of Makarova et al. 2021, which deals with heteroskedastic GPs, and shows how UCB in fact needs to be modified to deliver risk-averse behaviour. | Summary: The paper introduces a new approach to batch Bayesian optimization that explicitly trades off between exploration and exploitation via energy and entropy terms. The method is able to efficiently generate large batches for evaluation that outperform comparable methods for Bayesian optimization.
Strengths: The method is novel and well-motivated.
I found the analysis in Appendix B to be especially strong, in which the proposed method BEEBO is compared to other methods for batch Bayesian optimization. This analysis shows the originality of the method and gives strong context for it.
The paper is clearly written and easy to read. The appendix was especially useful and had many valuable parts.
The analysis of heteroskedastic noise was great to see and shows a useful and understudied setting where the method is especially valuable.
Weaknesses: I think the method is interesting and will be of value to the field. However, I do not find that the experimental evaluation of the method provides full support for the claims of the paper.
**Issue 1: Evaluation limited to large-batch setting**
The major issue is that the paper claims the method is for general batch Bayesian optimization problems, without any qualifiers that I can see. The experiments all use q=100, which is a large batch size. Smaller batch sizes are often of interest too, e.g. batch sizes of 5, 10, and 50 in the GIBBON paper. The setting q=100 used here is also used in the TURBO paper (Eriksson et al. 2019) where it is described as a "large batch size."
Given the experiment results in the paper, I don't know if this method will perform well for small- or medium-sized batches. Thus, either the experiments need to be expanded to include experiments with batch sizes such as 5 and 10, or the framing of the paper needs to be adjusted to emphasize that the method is specifically for large-batch Bayesian optimization, not general batch BO problems.
This issue also relates to the choice of baselines. The experiments only explore large-batch settings, where GIBBON (as the paper notes) is known to perform poorly. If the paper wants to claim that it performs better than GIBBON in general, then it needs to make that comparison on batch sizes of 5 and 10. If the paper wants to claim superiority only on large-batch settings, then that's fine to only use q=100, but then it needs to compare to state-of-the-art for large batch. Thompson sampling is a popular method for large-batch settings which is included as a baseline, but to my knowledge the state-of-the-art for large-batch BO is TURBO (Eriksson et al. 2019). In fact the q=100 and the robot pushing and rover trajectory problems are all exactly as in the TURBO paper, so its inclusion as a baseline is pretty obvious and, I think, necessary.
**Issue 2: Lack of statistical significance**
The results of the experiments do not appear to be statistically significant. The main results given in the main text are tables, and these tables do not have confidence intervals. The only place where uncertainty in the results is shown is in the appendix figures, and there the confidence intervals appear to overlap in most cases. This is due to the use of only 5 replicates. I appreciate that these experiments are costly to run since they are using 1000 iterations; nevertheless, the lack of statistical significance in most of the results provides weak support for the claim that BEEBO is actually better, vs. what we're seeing just being noise in the experiments. The paper needs some form of statistical analysis to convince the reader that what we're seeing is not just noise in the experiments. The best way to do this would be to include confidence intervals in tables 2 and 3, and then increase the number of replicates as necessary to achieve statistical significance in the differences in means. I do not feel it appropriate to highlight the method as being "best value" when it is possible that the confidence interval for that value contains the values of the other methods.
Technical Quality: 3
Clarity: 2
Questions for Authors: The issues raised above can be addressed by running more experiments (batch sizes 5 and 10; TURBO; more replicates to get reasonable CIs for the results tables), but this will probably require more experiments than can be run in the rebuttal period. Do we expect the method to work well for Q=5 and Q=10?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging comments! Having a compelling experimental evaluation is of key importance to us, and we have performed the requested experiments to further demonstrate BEEBO's performance.
> **Issue 1: Evaluation limited to large-batch setting**
Thank you for the helpful suggestions! We have now ensured that
- TuRBO is also discussed in the related works section.
- TuRBO is included in the experiments.
- Experiments on batch sizes 5 and 10 were added.
Please see the PDF of the global response for the results of the additional experiments.
Indeed, it was our focus to improve performance in the “large-batch” setting, as this is where we found $q$-UCB to underperform, and GIBBON to not be applicable. We have now clarified this in multiple places in the paper to be more precise.
> **Issue 2: Lack of statistical significance**
We appreciate that we need to be more rigorous with our reporting of results. We have now doubled the number of replicates and performed statistical testing on the differences in means. We can confirm that on aggregate over all experiments, BEEBO’s performance gain over the baselines is statistically significant. We have updated all results tables accordingly, and include the results of the statistical testing in the appendix. Please see the global response for combined p-values.
(For the time being, performance of GIBBON will still be based on nine replicates on five Ackley test problems, as one replicate can take multiple days. This should not affect the findings, as we already achieved statistical significance, but we will update the manuscript once the last runs finish for consistency.)
We agree that the best way to present results would in principle be to include the confidence intervals in Tables 2 and 3. However, we found that this makes the tables very dense and extremely hard to read. As a compromise, we added the confidence intervals to the extended tables in the supplementary material (and the rebuttal PDF), and mention this in the captions of Tables 2 and 3.
>The issues raised above can be addressed by running more experiments [..] Do we expect the method to work well for Q=5 and Q=10?
We really appreciate that the reviewer is mindful of computational requirements - thankfully, we managed to run all critical experiments in the rebuttal period, as outlined above. We find that also at Q=5 and Q=10, BEEBO is competitive with TuRBO and GIBBON. Especially at Q=10, q-UCB shows strong performance. Overall, both BEEBO and q-UCB benefit from higher exploration rates at low Q.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional analyses. It is good to see that BEEBO continues to perform well in the small-to-medium batch setting, and that it performs well relative to TuRBO. It is interesting that the sensitivity to T' appears to be higher for q=5 and q=10, a phenomenon that probably merits investigation.
As progress was made on my two main concerns, I will raise my score. Although other reviews have raised good points that I hope the authors will consider further. | Summary: This work introduces a batched acquisition function that balances exploration and exploitation by using a weighted sum of mutual information and expected value, with the weights defining the trade-off. The discussion links the proposed algorithm to UCB and asserts that it naturally addresses heteroskedastic noise.
Strengths: 1. The proposed acquisition function and its optimization and approximation methods are straightforward and practical.
2. The paper provides extensive empirical results to illustrate the proposed algorithm's efficiency.
Weaknesses: 1. The introduced parameter controlling the trade-off lacks interpretation as in previous methods.
2. The completeness of the related work discussion is concerning. This is potentially because the summary lacks a high-level characterization of the algorithm's design, and the focus of the paper is, to some extent, scattered.
Technical Quality: 2
Clarity: 3
Questions for Authors: One concrete example of the second aforementioned weakness is the criticism of MC approximation in high-dimensional applications. Recent advancements in applying MCMC address this issue in a principled manner and might be of interest.
***Reference:***
Yi, Zeji, Yunyue Wei, Chu Xin Cheng, Kaibo He, and Yanan Sui. "Improving sample efficiency of high dimensional Bayesian optimization with MCMC." arXiv preprint arXiv:2401.02650 (2024).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Discussed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments! We have incorporated the feedback in the updated manuscript, and look forward to further discussion.
> The introduced parameter controlling the trade-off lacks interpretation as in previous methods.
We fully agree that an entropy quantity is in principle harder to intuitively interpret than a univariate variance, as used in (single-point) UCB. However, we note that when working in batch mode, the $q$-UCB parameter also becomes harder to interpret, as illustrated in Figure 1. In supplementary section B.1, we provide a derivation showing how BEEBO’s $T$ parameter can be set to match the interpretation of the (single-point) UCB parameter.
In the experiments, as well as in our available implementation, we always make use of this derivation, operating with the UCB-matched $T'$ rather than a raw $T$.
We have reworked main text section 3.1 to more prominently explain the relation of $T$ to UCB.
> The completeness of the related work discussion is concerning. This is potentially because the summary lacks a high-level characterization of the algorithm's design, and the focus of the paper is, to some extent, scattered.
Thank you for pointing out the recent Yi et al. paper on MCMC for BO, which we have now included in the related works section. Could you provide further guidance as to what you would find missing from the related works section?
Given the space constraints and the focus of our paper, our intent was to offer a concise summary of batch BO methodology, rather than delving further into MC methods. As suggested by another reviewer, we have now also added TuRBO to the related works section as well as the experiments.
The related works section now ends in
>Eriksson et al. demonstrate that overexploration also can be problematic in higher dimensions, and alleviate this using local trust regions in TuRBO. Maintaining such regions with high precision discretization can be memory-expensive, as indicated by Yi et al., who suggest using MCMC-BO with adaptive local optimization to address this by transitioning a set of candidate points towards more promising positions.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses to my concerns, especially regarding the interpretability and the related works. Concerning the other potentially related works, I believe my fellow reviewers have proposed sufficient concrete examples. My additional comments on the algorithm's possible advantage over existing theoretically sound algorithms align with the authors' recent response. The applicability of the acquisition optimizer is a known problem. Therefore, it is still meaningful to introduce algorithms like BEEBO. I hope the authors can address the remaining concerns of other reviewers. At the same time, I maintain my original evaluation.
***Reference:***
In section B by Shahriari et al. 2016: "Unfortunately, these auxiliary optimization techniques can be problematic for several reasons." "First, in practice, it is difficult to assess whether the auxiliary optimizer has found the global maximizer of the acquisition function."
- Shahriari, Bobak, et al. "Taking the human out of the loop: A review of Bayesian optimization." Proceedings of the IEEE 104.1 (2016): 148-175. | Rebuttal 1:
Rebuttal: We thank all reviewers for their helpful feedback! We are excited to hear that they find BEEBO
- *Helpful and practical* (mXVT)
- *Novel and with strong context* (x4Ge)
- *Effective with heteroskedastic noise* (CMG8, x4Ge)
- Has a *promising statistical physics motivation* (4oKm, CMG8)
We have responded to each reviewer separately. In summary, we would like to highlight the following key improvements of the manuscript:
- **TuRBO was added** to the benchmark (x4Ge) and the related works (mXVT), as an exemplary method of batched trust-region BO methods such as BALLET (4oKm)
- The **number of replicates** was **doubled** to demonstrate **statistical significance** (x4Ge)
- Benchmark experiments at **batch sizes q=5 and q=10** were added (x4Ge)
- The scaling of the T parameter for **UCB-like interpretability** is now highlighted more prominently (mXVT, CMG8)
As also pointed out by many reviewers, we consider it crucial that BO methods are benchmarked thoroughly against existing methods. This is why in our experimental section, we have now recorded more than **12,800 batch BO trajectories** in total, across 33 problems, multiple acquisition strategies, batch sizes and replicates. We believe that this is adequate empirical evidence of BEEBO's performance, which is complemented by Appendix B, which connects BEEBO to the theoretical background of existing work.
Please see the attached PDF for:
- an updated Table A1 including TuRBO, based on 10 replicates
- the q=5 benchmark
- the q=10 benchmark
As the PDF is limited to one page, we include the p-values for the meanBEEBO results in the table here. These were computed using a paired one-sided t-test for each test problem over the 10 replicates, followed by aggregation using Fisher's method.
In the updated manuscript, we include this table in extended form, showing the p-value of each t-test and the aggregation, for meanBEEBO as well as maxBEEBO. Please let us know if you wish to see the individual p-values that the results below are based upon; we will provide them as comments as needed (the full table would be 33x21 cells and poorly suited to the available markdown formatting).
| meanBEEBO | Method | Fisher p-value |
|-----|---------|-----------------|
| $T'$ = 0.05 | $q$-UCB | 2E-31 |
| | $q$-EI | 1E-13 |
| | TS | 2E-30 |
| | KB | 6E-06 |
| | GIBBON | 8E-58 |
| | GIBBON (scaled) | 5E-47 |
| | TuRBO | 4E-29 |
| $T'$ = 0.5 | $q$-UCB | 6E-28 |
| | $q$-EI | 1E-18 |
| | TS | 2E-34 |
| | KB | 6E-10 |
| | GIBBON | 1E-78 |
| | GIBBON (scaled) | 2E-58 |
| | TuRBO | 1E-44 |
| $T'$ = 5.0 | $q$-UCB | 1E-28 |
| | $q$-EI | 1E-03 |
| | TS | 4E-20 |
| | KB | 2E-06 |
| | GIBBON | 9E-56 |
| | GIBBON (scaled) | 8E-36 |
| | TuRBO | 2E-46 |
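The aggregation procedure described above (a paired one-sided t-test per test problem, with the per-problem p-values combined via Fisher's method) can be sketched as follows; the data here are synthetic stand-ins, not the actual benchmark results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-replicate outcomes for two methods on 3 test problems,
# 10 replicates each (stand-ins for actual BO trajectory results).
beebo = [rng.normal(1.0, 0.1, 10) for _ in range(3)]
baseline = [rng.normal(0.8, 0.1, 10) for _ in range(3)]

# Paired one-sided t-test per problem: H1 is that BEEBO's mean is higher.
per_problem_p = [
    stats.ttest_rel(b, c, alternative="greater").pvalue
    for b, c in zip(beebo, baseline)
]

# Fisher's method aggregates the per-problem p-values into one statistic.
stat, combined_p = stats.combine_pvalues(per_problem_p, method="fisher")
print(combined_p)
```

With a clear difference between the two methods, the combined p-value becomes very small, matching the magnitudes reported in the table.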
As for the $T$ parameter scaling, section 3.1 "BEEBO with Gaussian processes" in the updated manuscript ends in
>Using the kernel's learned amplitude $A$, we can relate BEEBO's $T$ parameter to the $\kappa$ of UCB. This allows us to configure BEEBO using a scaled temperature $T'$ that ensures both methods have equal gradients at iso-surfaces, enabling the user to follow existing guidance and intuition from UCB to control the trade-off. A derivation is provided in section B.1
We believe this will make it easier for the reader to find the relevant derivation, as opposed to the previous version where this was only mentioned in the experimental section.
We are looking forward to further discussion.
Pdf: /pdf/19326f5bcfc9a718f41a7237156a9c8c0e5d0c1c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Space-Time Continuous PDE Forecasting using Equivariant Neural Fields | Accept (poster) | Summary: The paper presents a novel framework for solving Partial Differential Equations (PDEs) by leveraging the power of Equivariant Neural Fields (ENFs). The authors propose a space-time continuous approach utilizing the symmetry of the PDEs, which is crucial for improving generalization and data-efficiency. The framework is tested on various geometries and PDEs, showing its effectiveness in handling complex dynamics.
Strengths: 1. **Data efficiency**: By designing a system that preserves the symmetries of PDEs, the proposed framework enhances the model's ability to generalize from limited data.
2. **Novel initialization method**: The use of meta-learning to structure the latent space of the ENF simplifies the learning process and leads to better performance than autodecoding.
Weaknesses: 1. **Error Accumulation:** The usage of ODESolver might pose a challenge with error accumulation over time, particularly for dynamics occurring beyond the training horizon, which could affect the model's long-term predictive accuracy. So it would be helpful if the model is tested in a longer timespan.
2. **Lack of Comparative Analysis:** While the paper compares its approach to a baseline method, a more comprehensive comparison with existing state-of-the-art methods in PDE solving would strengthen the paper's claims, such as Geo-FNO[1], GNOT[2], Transolver[3].
[1] Li, Z., Huang, D. Z., Liu, B., & Anandkumar, A. (2023). Fourier neural operator with learned deformations for pdes on general geometries. *Journal of Machine Learning Research*, *24*(388), 1-26.
[2] Hao, Z., Wang, Z., Su, H., Ying, C., Dong, Y., Liu, S., ... & Zhu, J. (2023, July). Gnot: A general neural operator transformer for operator learning. In *International Conference on Machine Learning* (pp. 12556-12569). PMLR.
[3] Wu, H., Luo, H., Wang, H., Wang, J., & Long, M. (2024). Transolver: A fast transformer solver for pdes on general geometries. *arXiv preprint arXiv:2402.02366*.
Technical Quality: 3
Clarity: 4
Questions for Authors: Using ODESolver often incurs higher computational costs. Did the authors implement any acceleration methods to speed up the integration process?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. More experiments should be conducted to prove the model's efficiency in longer timespan.
2. More baselines should be listed in the paper so that model's efficiency can be thoroughly tested and confirmed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough assessment, and for the valuable criticism of our work. We address each of the reviewers' concerns in detail here, and hope to continue the discussion if any details remain unclear.
**Error accumulation** The reviewer is concerned about error accumulation in long-term rollouts as a result of requiring a neural ODE forward propagation. We agree, and as such provide an additional experiment with a more in-depth analysis of long-term rollouts. We extend the dataset of Navier-Stokes solutions we use in the paper to 80 timesteps, and provide test-set results in Figs 1, 2, 3 of the PDF. In Fig. 1 we provide average test MSE per timestep for our model in comparison with FNO and DINo on the setting of having a fully observed initial condition $\nu_0$. Here, we can see that FNO clearly outperforms both NeF-based neural solvers in terms of long-term error accumulation - in agreement with the results shown in our paper in Tab. 1 - although our proposed method retains relatively low MSE up to 20-30 steps after the train horizon (marked in green), where DINo starts error accumulation inside the training time horizon. In Fig. 2 we show results where the initial condition $\nu_0$ for each test trajectory is only sparsely observed, i.e. where only 50% of the initial state is given as input to the model. Here, FNO quickly deteriorates in performance and accumulates error at a higher rate than the NeF-based framework we propose in this work. To us these results indicate that (1) our equivariant NeF-based continuous space-time solving framework provides better generalisation and less error accumulation than its non-equivariant counterpart and (2) retains this relatively limited rate of long-term error accumulation in settings with sparsely observed initial conditions.
**Improving comparative analysis** The reviewer indicates that they would like to see experimental comparison with a wider range of baseline methods. We provide results for a baseline proposed by the reviewer; Transolver, which has shown very promising results for PDE modelling on general geometries.
We took the Transolver model in PyTorch from their GitHub repository and used the same hyperparameter settings (resulting in a 7.1M-parameter model), modifying the training objective to be identical to the autoregressive one we use in our experiments (i.e. mapping from frame to frame instead of taking 10 frames as input). Tab. 5 shows the results for the Transolver model on Navier-Stokes in 2D after training on a single A100 to convergence over 700 epochs in approximately 49 hours. Notably, Transolver achieves 1.80E-02 and 1.85E-02 train and test MSE respectively within the training horizon, and 4.85E-01 and 4.90E-01 train and test MSE outside the training horizon. During training on the Navier-Stokes task we observed noticeable instabilities, with the method producing high-frequency artefacts in rollouts in this autoregressive setting. For sparsely subsampled initial conditions, performance deteriorates further, highlighting the benefit of NeF-based continuous PDE solving.
For experiments on internally-heated convection, due to the large size of the input frames, we were required to scale down the Transolver model from 256 to 64 hidden units in order to fit it on our A100 GPU, resulting in a 1.2M-parameter model, somewhat comparable in size to the model we use in our experiments (889K). We train the model for 2000 epochs on an A100 GPU, taking approximately 30 hours. Results in Tab. 4 show the strong performance of this model within the training time horizon, achieving 4.13E-04 test MSE where our framework achieves 5.99E-04, indicating that it is indeed a strong baseline for solving PDEs over complicated geometries. However, we also note that outside of the training horizon, error accumulates more quickly for the Transolver model, indicating it has somewhat overfit the training-horizon dynamics. Here, Transolver achieves 2.09E-02 test MSE, where our framework achieves 8.21E-03. We hypothesise that, because the equivariance constraints placed on our model reflect inductive biases about this PDE, it extrapolates more reliably beyond the train horizon and over the validation set.
Furthermore, we provide results in Tab. 4 for sparsely observed initial conditions. These results clearly show the advantage of NeF-based continuous solvers in this sparsely observed setting, both DINo and our framework significantly outperform Transolver, which is unable to provide accurate solutions either within the training horizon or outside of it.
In the camera-ready version, we will further include the results for the Transolver baseline in our other experiments.
**Computational cost of the ODE solver** The reviewer raises concerns about the computational efficiency of our method. We did not implement any specific acceleration methods for solving the forward ODE in the latent space. The main reason we did not feel the need to is that the neural ODE solver operates on a drastically compressed representation of the PDE state; e.g. in the Navier-Stokes experiment we operate on a set of 4 latents with a context vector living in $\mathbb{R}^{16}$. Since the neural ODE solver operates on such a small representation, its overhead is relatively limited. In order to provide more insight into the computational requirements of our method, we provide training and inference runtimes and memory usage of our method compared with the different baselines shown in our paper in Tab. 1, 2 of the PDF. We will add these details to the appendix of our manuscript.
We also provide a direction for how to significantly reduce the memory overhead of our decoder by approximating the integral listed in appx C. Eq. 10 in our response to R. hu7A. We will be exploring this in future work.
---
Rebuttal Comment 1.1:
Comment: It would be better if some results were shown in tabular format. Most of my questions are answered and I will keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their invested time. Tabularized results for the added experiments referred to in our rebuttal text can be found in the PDF attached to the general rebuttal. For the experiments with per-step MSE over 80 unroll steps we felt the error accumulation was better visualized in a graph, however we would be glad to provide those results also in tabular form in the appendix.
If any questions remain after our rebuttal, we would be happy to discuss them further! | Summary: This work proposes a space-time continuous method for solving PDEs that respects the inherent symmetries of the PDE via equivariance constraints. Building upon prior work which (a) fits a conditional neural field to output latent vectors and (b) evolves the latent state through time via a Neural ODE, the contribution of this work is to additionally enforce equivariance constraints in the latent space itself. Secondly, the work employs meta-learning to obtain the initial latent representation, which improves the structure of the latent space representation and accelerates the inference time. The authors show improved performance of the method for linear and nonlinear PDEs on complex geometries such as the 2d torus, 3d sphere and 3d ball.
Strengths: - The proposed method significantly reduces overfitting compared to non-equivariant baselines.
- The method shows good stability at 3–5x the length of the training regime (even though the error accumulates slowly).
Weaknesses: - The computational cost of the method v/s baselines is not shown. Relatedly, do the DINo baselines have similar parameter counts as your method?
Technical Quality: 3
Clarity: 3
Questions for Authors: - In figure 2, why is the bottom right image not the rotated version of the solution field on the top right?
- Complex geometries can be a strong use-case for symmetry-preserving PDEs. However, complex geometries can often have non-trivial boundary conditions as well. Is there any way the method can be extended to handle non-equivariant boundary conditions?
- Due to the global attention mechanism in the ENF there could be scalability concerns. Could you comment on how the method could be applied to larger scale problems in this case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their assessment of our work, and appreciate their recognition of the benefits of incorporating the symmetry constraints in PDE solving. We elaborate on the raised concerns below.
**Computational cost**. We provide a comparison of the computational cost of our method to all other baselines, as well as their parameter counts, in Tables 1 and 2 of the provided PDF. These results show that the training time per epoch is low in comparison to the best-performing models (Transolver and GFNO), although a bit higher than the less performant models (FNO and DINo). However, we want to stress that in terms of inference time, our method is the most time-efficient, mostly attributable to the fact that we are using a combination of meta-learning with latent-space ODE solving. For new trajectories, initial states only need to be fitted for 3 SGD steps compared to DINo's 300-500 SGD steps of plain auto-decoding, which provides a clear speed-up during inference.
**Application to non-equivariant boundary conditions** The reviewer asks if/how our method could be extended to non-equivariant boundary conditions. As R. nkRS, i6aX ask the same question, we provide a joint answer in our response to R. nkRS.
**Scalability and model complexity** The reviewer raises valid concerns regarding the computational cost and scalability of our approach, especially in the context of solving complex/ high-dimensional PDEs. To better assess the computational cost of our approach and contrast it with existing methods, we added information about parameter count, runtimes and GPU memory usage during training and inference (see Tab. 2 of the PDF) for the experiments on Navier-Stokes. These results show that our method excels in inference compared to the baseline methods, although it is not as memory efficient as FNO[1] or DINo [2]. Although we feel it is outside the scope of our current work, we do agree with the reviewer that aiming for improved scalability should be a focus of future research into this approach, as this would enable application to higher-resolution / higher-dimensional problem settings. We outline an idea for future research into ENF-based PDE solving here, which revolves around a more efficient implementation of the cross-attention operation.
Much of the computational cost of our method is attributable to the calculation of the queries, keys, and values $q_{i,j}, k_i, v_{i,j} \in \mathbb{R}^\text{hidden}$ between a point $x_j$ and each of the latents $z_i$ in a latent state $Z$. To sample solution values with a set of $16$ latents for a $64\times64$ output grid, for example, this requires the calculation of query and key values for each of $16 \times 64^2$ combinations of latents and points. Note however that, within the ENF architecture's main operation (described in the manuscript in appx C. Eq 10.), we enforce a latent $( p_i, \mathbf{a_{i}} )$ to be local in the domain of the PDE by weighting the attention value calculated between a latent $i$ and a point $x_j$ with a Gaussian window over the relative distance between this point $x_j$ and the latent's pose $p_i$ on the domain of the PDE. For latents far removed from a sampled point $x_j$, this Gaussian window forces the attention coefficient from $j$ to $i$ to be negligibly small, in turn nullifying the contribution of such latents $i$ to the output $f_\theta(x_j, Z)$. For an approximation of Eq 10, it could thus be economical to forego calculating these attention coefficients altogether, by first applying an efficient knn algorithm to sort the relative distances from each coordinate $x_j$ to each latent $z_i$, and only calculating query, key, and value vectors for the nearest $N$ latents. Preliminary results on image regression tasks show that the performance loss is negligible, and this method allows for sampling from much larger numbers of latents, making it an interesting possible future direction for scaling the proposed framework to much larger and more complex PDEs.
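A minimal numerical sketch of this nearest-latent approximation follows; all names are hypothetical, and for brevity the Gaussian window itself stands in for the full query-key attention score, so this is an illustration of the locality idea rather than the actual ENF operation:

```python
import numpy as np

def local_cross_attention(x, latent_pos, latent_ctx, sigma=0.5, top_n=2):
    """Approximate Gaussian-windowed cross-attention by attending only to
    the top_n nearest latents per query point (hypothetical sketch)."""
    # Pairwise distances between query points and latent poses.
    d = np.linalg.norm(x[:, None, :] - latent_pos[None, :, :], axis=-1)
    # Indices of the top_n nearest latents for each point.
    nearest = np.argsort(d, axis=1)[:, :top_n]
    out = np.zeros((x.shape[0], latent_ctx.shape[1]))
    for j in range(x.shape[0]):
        idx = nearest[j]
        # The Gaussian window on relative distance acts as the attention weight;
        # far-away latents are never touched, saving the query/key computation.
        w = np.exp(-d[j, idx] ** 2 / (2 * sigma ** 2))
        w = w / w.sum()
        out[j] = w @ latent_ctx[idx]
    return out
```

Points end up dominated by the context vectors of their nearest latents, mirroring the locality that the Gaussian window enforces in the full computation.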
**In figure 2, why is the bottom right image not the rotated version of the solution field on the top right?** Indeed the function on the bottom right is a rotated version of the solution, but for this particular solution that is quite hard to tell. We will change this figure to be more easily legible in the camera ready version.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the detailed responses to my questions. I will keep my current score unchanged. | Summary: The paper attempts to learn the dynamics of certain PDEs from time series data using implicit neural representations, while encoding symmetry information of the domain. In fact, constructing a neural model that is aware of Euclidean transformations is the primary focus of this paper. To this end, the authors design two equivariant neural networks. Given an initial condition, a latent state is obtained by meta-learning. This latent state is then integrated in time by a neural ODE (first network) to obtain a final latent state. The second network then takes this final latent state as an input, and maps any given point coordinate (in the domain) to the solution at the final time. Examples are presented on periodic domains in $ \mathbb{R}^2 $, 2-torus, 2-sphere and the 3D-ball. The paper builds on a 2022 ICLR paper [1] which attempts the same, but without any symmetry assumption.
[1] Yuan Yin, Matthieu Kirchmeyer, Jean-Yves Franceschi, Alain Rakotomamonjy, and Patrick Gallinari. Continuous pde dynamics forecasting with implicit neural representations. 2022.
Strengths: The paper is well written.
PDEs are often posed on domains that have symmetric properties. This is in addition to the fact that the operators appearing in the PDE have their own symmetry / directional properties. While learning from data, most of the existing methods attempting to learn PDE dynamics ignore the symmetry information. Therefore, this is a welcome idea.
The method exhibits impressive extrapolation results.
Weaknesses: Only infinite domain (or periodic boundary conditions) are considered.
In the examples, the transformation groups are chosen by carefully considering the nature of the domain and the operators appearing in the equations. But in a real application, this information, especially the operator information, is not known a priori.
Extrapolation results are shown where results outside the training time horizon are predicted. The problem with such predictions is that they look good until they do not, and there is no logical or analytical bound on the time horizon over which the extrapolation is supposed to work. The time horizon is always chosen so as to exhibit the effectiveness of the method, but no analysis is presented in that regard. Therefore such extrapolation results, even though impressive in some respects, do not add to either the understanding or the applicability of this method in a new application.
Memory and execution (training) times are not compared (only the training times of the proposed method are included). Error comparisons are made with other methods. Sometimes this method outperforms the other methods (e.g., Table 2), but in some cases, it is marginally better than the others (e.g., Table 3, 4). Providing memory and training times would make these comparisons more well rounded.
(Minor) Line 276: should be $ \nabla \cdot u = 0 $.
Technical Quality: 2
Clarity: 3
Questions for Authors: How will this method work with other boundary conditions?
In the examples, the invariance groups are chosen according to the equation at hand. But how does one choose the invariance groups when the underlying operators and functions (RHS) are unknown?
The extrapolation results are compared with ground truth data, and it is seen that the accuracy deteriorates as the inference horizon goes farther from the training horizon. In a new application, how to determine a limit for accurate extrapolation, i.e., the temporal horizon where extrapolation is always successful? Does this limit exist?
The forward problem triggers an ODE solve. What is the typical DOFs associated with this ODE solve?
What is the boundary condition applied on the heat equation example?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their very thorough treatment of our manuscript, and appreciate the recognition of the importance of encoding symmetry information in neural PDE solvers. We also acknowledge the reviewer's constructive feedback and address their concerns in detail below.
**How will this method work with other boundary conditions?** In the experiments, we consider both domains like the flat torus (square with periodic boundaries) or the 2-sphere, where no boundary conditions are needed because they are enforced by construction of the bi-invariant, and domains such as the 3-sphere where we use explicit boundary conditions, in particular, zero radial velocity and fixed temperature gradient at the boundary of the sphere. The method proved to work well in both cases.
Other boundary conditions may be incorporated likewise, through appropriate construction of a bi-invariant that respects these boundary conditions. From given boundary conditions it is immediate to verify if they are preserved by a given group action - we show examples of this in the manuscript Sec. 3.1 with Eqs 7, 8. As a further example, for instance for many cylinder flow problems there would exist vertical flip symmetries (the $Z_2$ symmetry group), which could be incorporated by choice of appropriate bi-invariant.
Reviewers R. i6aX, R. NkrS and R. hu7A all ask about the usability in settings when the PDE has no underlying symmetries or when the symmetries are unknown - as such we give an answer jointly. Please see our response to R. NkrS. - under _Equivariance constraints in real-world applications_.
**Limits for successful extrapolation** The reviewer touches upon a very valid drawback of deep-learning based surrogates, namely the fact that to a large extent these methods do not provide any guarantees on extrapolation error bounds as they are purely learned from data with neural networks, and so this would require being able to analytically determine generalisation bounds. We feel this would be a very valuable future research direction.
In order to better empirically assess error accumulation of our method for long-term extrapolation, we provide experimental results for unrolling of Navier-Stokes for 80 instead of 50 timesteps (see our response to R. zC2e for details). We apply our model both in a setting with fully observed initial conditions and sparsely observed (50%) initial conditions, and provide plots of accumulated MSE along these trajectories in Figs 1, 2, 3. We compare against DINo and FNO, and show that we consistently improve over DINo in terms of long-term extrapolation. FNO achieves better extrapolation limits due to reduced error accumulation in the fully observed setting, but very rapidly deteriorates even within the train horizon with sparsely observed initial conditions, whereas our proposed approach loses very little in terms of extrapolation performance in this sparse setting.
Although we can not provide guarantees on extrapolation limits, results in all experiments show that train and test extrapolation MSEs are relatively consistent. As such, a possible way to obtain empirical extrapolation limits for a new dataset would be to analyse train extrapolation MSE and determine empirical soft bounds for test extrapolation limits by assessing the window in which train extrapolation MSE remains bounded.
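One way to operationalize this suggestion is a simple heuristic that reports how far beyond the training horizon the per-step train MSE stays within a tolerance of the worst in-horizon value. This is an illustrative sketch under assumed names, not a procedure from the paper:

```python
import numpy as np

def extrapolation_window(per_step_train_mse, train_horizon, factor=2.0):
    """Return how many steps beyond the training horizon the per-step MSE
    stays within `factor` times the worst in-horizon MSE (hypothetical
    heuristic for an empirical soft bound)."""
    mse = np.asarray(per_step_train_mse)
    bound = factor * mse[:train_horizon].max()
    extra = 0
    for v in mse[train_horizon:]:
        if v > bound:
            break
        extra += 1
    return extra
```

For a trajectory whose error grows slowly, the returned window is large; a sharp blow-up right after the horizon yields a window of zero, flagging that extrapolation should not be trusted.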
**Memory and execution times** We understand that information on the execution / inference times and memory complexity would allow for a better comparison of our approach with baselines. We provide this information in Table 1, 2, which will be included in the appendix of our manuscript! Notably, although our framework has increased GPU memory consumption during training and inference (compared to DINo), we show a marked reduction in inference time attributable to the use of meta-learning to obtain the latents for the initial state.
**What are the typical DOF associated with the ODE solve?** The neural ODE solver operates on the latent space of the ENF, as such, the number of DOF of this ODE is equal to the total size of a single set of latents. We vary this as a hyperparameter over the different datasets:
- Navier-Stokes: 4 latents, each with a 2-D pose $p_i$ and a 16-D context vector $\mathbf{c}_i$, 72 DOF.
- Planar diffusion: 4 latents, each with a 2-D pose $p_i$ and a 16-D context vector $\mathbf{c}_i$, 72 DOF.
- Spherical diffusion: 18 latents, each with a 2-D pose $p_i$ and a 4-D context vector $\mathbf{c}_i$, 108 DOF.
- Shallow-water: 8 latents, each with a 2-D pose $p_i$ and a 32-D context vector $\mathbf{c}_i$, 272 DOF.
- Internally heated convection: 25 latents, each with a 3-D pose $p_i$ and a 32-D context vector $\mathbf{c}_i$, 875 DOF.
We typically obtain these settings through running a small hyperparameter search over a range of different settings. We will add these numbers to the appendix for the camera-ready version.
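The DOF counts listed above follow directly from the shape of the latent set, since the neural ODE state is just the concatenation of every latent's pose and context vector:

```python
def latent_dof(n_latents, pose_dim, context_dim):
    # Each latent contributes one pose and one context vector to the ODE state.
    return n_latents * (pose_dim + context_dim)

# Configurations as reported above: (n_latents, pose_dim, context_dim).
configs = {
    "Navier-Stokes": (4, 2, 16),
    "Planar diffusion": (4, 2, 16),
    "Spherical diffusion": (18, 2, 4),
    "Shallow-water": (8, 2, 32),
    "Internally heated convection": (25, 3, 32),
}
for name, (n, p, c) in configs.items():
    print(name, latent_dof(n, p, c))
```

This reproduces the 72, 72, 108, 272, and 875 DOF figures quoted in the list.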
**What is the boundary condition applied in the heat equation example** We use Dirichlet boundary conditions with value 0.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for answering my questions in detail. I am going to keep my original score unchanged. | Summary: The work proposes a novel framework combining equivariant neural fields and neural ODEs, providing a continuous space-time solution for PDEs while respecting associated equivariance constraints. The author uses PDE-specific bi-invariant attributes for equivariant neural fields and a meta-learning approach for learning the initial latent state. The proposed method achieves better performance on the chosen PDE problems.
Strengths: 1. The work addresses an important and complex issue of equivariance in the context of solving partial differential equations (PDEs). The proposed architecture is not only space-time continuous but also respects the equivariance constraint. This characteristic makes it particularly valuable and effective for various types of scientific research and applications.
2. The proposed method is well-motivated and clearly explained in the paper.
Weaknesses: I have found the empirical study to be the weak point of the work. In order to argue the effectiveness of the proposed solution over existing approaches, the authors need to consider established large-scale benchmarks such as PDEBench and CFDBench, especially with irregular domains (domains with holes or solid objects as hindrances).
I also find that the choice of baselines is not extensive. For example, SFNO [a] is used for the shallow water equation. Also, baselines like [b,c] are not considered.
Also, as the proposed solution is claimed to be time continuous, zero-shot super-resolution along the time domain should be demonstrated (analogous to Table 3).
a. Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere
b. GNOT: A General Neural Operator Transformer for Operator Learning
c. Geometry-Informed Neural Operator for Large-Scale 3D PDEs
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the training and inference time of the proposed method compared to existing methods like FNOs and Deeponets?
2. What is the Reynolds number of the Navier-Stokes equation problem?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer kxDM for their effort in reviewing our work. We’re glad to see that the reviewer agrees on the value of adding equivariance constraints to neural PDE solving.
**Improving comparative analysis** The reviewer raised concerns about comparison with a wider set of baselines. To this end, we include a general-purpose transformer PDE-solving architecture (Transolver [1]) in the list of baselines, applying it both to 2D Navier-Stokes and 3D internally heated convection, since this model has shown SOTA results on PDE solving over different geometries.
We adapt the Transolver model in PyTorch from the official GitHub repository, and use the same hyperparameter settings as in the original paper (resulting in a 7.1M-parameter model), modifying the training objective to be identical to the autoregressive one we use in our experiments (i.e. mapping from frame to frame). Tab. 5 shows the results for the Transolver model on Navier-Stokes in 2D. Notably, Transolver achieves 1.80E-02 and 1.85E-02 train and test MSE respectively within the training horizon, and 4.85E-01 and 4.90E-01 train and test MSE outside the training horizon. During training on the Navier-Stokes task we observed noticeable instabilities, with the method producing high-frequency artefacts in rollouts in this autoregressive setting. For sparsely subsampled initial conditions, performance deteriorates further, highlighting the benefit of NeF-based continuous PDE solving.
For the experiments on internally heated convection, due to the large size of the input frames, we had to scale down the Transolver model to fit it on our A100 GPU, from 256 to 64 hidden units, resulting in a 1.2M-parameter model, roughly comparable in size to the model we use in our experiments (889K parameters). We train the model for 2000 epochs on an A100 GPU, taking approximately 30 hours. Results in Tab. 4 show the performance of this model within the training time horizon: it achieves 4.13E-04 test MSE where our framework achieves 5.99E-04, indicating that it is indeed a strong baseline for solving PDEs over complicated geometries. However, we also note that outside of the training horizon, error accumulates more quickly for the Transolver model, indicating it has somewhat overfit the training-horizon dynamics. Here, Transolver achieves 2.09E-02 test MSE, where our framework achieves 8.21E-03. We hypothesise that, because the equivariance constraints placed on our model reflect inductive biases about this PDE, it extrapolates more reliably beyond the training horizon.
We also provide results in Tab. 4 for sparsely observed initial conditions. These results clearly show the advantage of NeF-based continuous solvers in this sparsely observed setting, both DINo and our framework significantly outperform Transolver, which is unable to provide accurate solutions either within the training horizon or outside of it.
**Application to irregular domain** Secondly, the reviewer raised the concern that the method is only tested on regular domains and asked whether we could extend the empirical study to irregular domains as well. We apply our model to the recommended CFDBench dataset [2], which contains solutions to initial conditions for computational fluid dynamics problems with varying boundary conditions, fluid properties and domain shapes - and applied our method on three different problems (cavity, dam and cylinder). We fit these problems with translational bi-invariants (which results in similar translation equivariance constraints to CNN-based neural PDE solvers) - keeping the same model architecture we used in all our experiments, with 25 latents and context vectors $\mathbf{c_i} \in \mathbb{R}^{16}$ and without any specific finetuning. The results can be seen in Tab. 6 in the PDF and show that our approach also handles these more complicated and varied geometries and boundary conditions well - even without the presence of the global symmetries. We hypothesise that the weight-sharing that the equivariance constraints result in might have a regularising effect beneficial to PDE solving; PDEs are built from differential operators that are themselves generally equivariant.
**Continuous-time solving** We provide results for a zero-shot temporal superresolution experiment to show the continuous-time property of our solving framework. We generate a dataset of Navier-Stokes solutions at a higher time-resolution, with timestep size $d\tau=0.25$. We train our model on temporally subsampled frames resulting in a training time resolution of $d\tau=1.0$. We then evaluate the model on step-sizes $d\tau=0.5, d\tau=0.25$. As shown in Tab. 3 of the attached PDF, both inside and outside the train horizon the higher sampling resolution introduces very little error, even with $4\times$ as many unrolling steps, showcasing reliable continuous-time performance of the proposed approach.
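As an illustration of why a latent-ODE solver supports this protocol, the sketch below rolls out a stand-in linear vector field (our own toy example, not the learned dynamics) with a fixed-step RK4 integrator at the training step size and at a 4x finer one over the same horizon; both approximate the same trajectory, the finer rollout simply returning more intermediate states:

```python
import numpy as np

def rk4_rollout(f, z0, t_end, dt):
    """Fixed-step RK4 rollout of dz/dt = f(z) from t=0 to t_end."""
    z = np.asarray(z0, dtype=float)
    traj = [z]
    for _ in range(int(round(t_end / dt))):
        k1 = f(z)
        k2 = f(z + 0.5 * dt * k1)
        k3 = f(z + 0.5 * dt * k2)
        k4 = f(z + dt * k3)
        z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(z)
    return np.stack(traj)

def decay(z):
    return -z  # stand-in for the learned latent vector field

coarse = rk4_rollout(decay, [1.0], t_end=4.0, dt=1.0)   # training resolution
fine = rk4_rollout(decay, [1.0], t_end=4.0, dt=0.25)    # 4x temporal super-resolution
```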
**Experimental details / computational requirements** The reviewer asked which Reynolds number was used to generate the Navier-Stokes data. In our experiments we use viscosity $\nu = 10^{-3}$, which in our setup results in a Reynolds number of $\frac{1}{\nu} \sim 1000$.
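Under the standard non-dimensionalisation with unit characteristic velocity and length (the convention assumed here), this relation reduces to a one-liner:

```python
def reynolds_number(viscosity: float, velocity: float = 1.0, length: float = 1.0) -> float:
    """Re = U * L / nu; with unit velocity and length this reduces to 1 / nu."""
    return velocity * length / viscosity

# nu = 1e-3 gives Re ~ 1000, as used for the Navier-Stokes data.
re = reynolds_number(1e-3)
```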
We provide training and inference times for our method compared to the baseline models in Tab. 2 of the PDF. We note that although memory-wise our method has somewhat significant requirements (due to memory-intensive meta-learning), during inference our method outperforms the baselines.
[1] Wu, H., Luo, H., Wang, H., Wang, J., & Long, M. (2024). Transolver: A fast transformer solver for pdes on general geometries. arXiv preprint arXiv:2402.02366.
[2] Luo, Y., Chen, Y., & Zhang, Z. (2023). CFDBench: A comprehensive benchmark for machine learning methods in fluid dynamics. arXiv preprint arXiv:2310.05963.
---
Rebuttal Comment 1.1:
Comment: Thanks for the additional experiments. For the experiments on the irregular domain (Table 6, rebuttal pdf), how have you used FNO? FNO is generally only used for regular grids.
---
Reply to Comment 1.1.1:
Comment: To use FNO in the sparsely observed setting, we randomly sample a mask that is applied to the initial condition as dropout; e.g., 50% of the values of the initial condition are set to zero.
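A minimal sketch of this masking, assuming a NumPy array for the initial condition (array and function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_initial_condition(u0, keep_fraction=0.5):
    """Zero out a random subset of grid values, so the FNO still receives a
    full regular grid but only ~keep_fraction of the values are observed."""
    mask = rng.random(u0.shape) < keep_fraction
    return u0 * mask

u0 = np.ones((64, 64))
u0_sparse = mask_initial_condition(u0)  # roughly half of the values zeroed
```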
Thanks for your question! We'll make sure this is clear from the manuscript itself. If any concerns remain, please let us know! | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough investigation of our work, and for investing the time to write out valuable criticism. We’re happy to see that reviewers regard our equivariant space-time continuous PDE solving method as valuable and effective for scientific research applications. Additionally, we appreciate the recognition of our efforts to address significant limitations of existing NeF-based solvers, and are happy to see that the reviewers found the work well-motivated and clearly explained.
Though the reviewers do not indicate any critical flaws, they raise - and agree on - a number of concerns, primarily aimed at (1) strengthening the motivation of our method for a wider family of PDEs and boundary conditions (Rev. NkrS, kxDM, hu7A) and (2) expanding the experimental validation of our method by including additional baselines and datasets (Rev. hu7A, zC2e, kxDM, i6aX). We address the most pressing of these concerns in this general response, referring to the attached single-page pdf, and provide additional detailed comments to each reviewer individually.
### **Questions regarding motivation/usability for non-symmetric PDEs / PDEs with unknown symmetries**
R. NkrS and R. i6aX point out that in our analysis “boundary conditions are symmetric and the chosen PDEs exhibit certain symmetries. In real-world applications, these assumptions might not always hold, potentially limiting the applicability of the proposed method” and that “the invariance groups are chosen according to the equation at hand. But how does one choose the invariance groups when the underlying operators and functions (RHS) are unknown”. This is a valid point; however, we highlight that our method *can* be employed with relaxed or without equivariance constraints, both through use of a non-symmetric “bi-invariant” (see Appx. D, $\mathbf{a}^\emptyset_{i,m}$) and through a procedure we outline in our response to R. NkrS under **Equivariance constraints in real-world applications**. This means the method retains its utility in scenarios where the symmetries are not (fully) known or are deemed irrelevant.
We argue that the attention-based conditional neural field trained with meta-learning can enhance forecasting accuracy due to its ability to capture local dependencies effectively, also in these settings. To underscore this, we provide additional experimental results on CFDBench (see response to R. kxDM), which doesn’t exhibit symmetric boundary conditions or global symmetries and contains varying geometries with hindrances and varying boundary conditions - achieving performance competitive to the baselines set by the authors.
On the other hand, in science and engineering it is not rare that known symmetries are used to develop models even when the complete details of the system are not fully known (e.g., in atmospheric and oceanic sciences rotational symmetry is often assumed). For these reasons, we posit that an equivariant model for spatio-temporally continuous PDE dynamics forecasting constitutes a valuable contribution, also in real-world settings.
### **Improving comparative analysis (R. hu7A, zC2e, kxDM, i6aX)**
**Comparison with additional baselines** Rev. zC2e and kxDM point out that in a number of experiments we compare only with Yin et al. (2023), and argue for including additional baseline results for neural PDE solvers that can similarly handle the varying geometries we’re operating on, as this would strengthen the paper’s claims. To this end, we implemented an additional baseline proposed by the reviewers, which we detail below.
We adopt the Transolver model (Wu et al., 2024) and train it on the Navier-Stokes and internally heated convection tasks. A detailed description of the experimental setup can be found in our response to Rev. kxDM; results are listed in Tabs. 4 and 5. Transolver proved unstable in the autoregressive Navier-Stokes experiment, obtaining performance comparable to DINo due to high-frequency artefacts arising on rollout, and so it underperforms FNO and our method. Moreover, it deteriorates heavily with sparsely observed initial conditions. Transolver does achieve competitive error in the internally heated convection experiment in the fully observed setting, i.e. when 100% of the initial state is observed, indicating that it is a strong baseline in settings with complicated geometry and boundary conditions. However, the Transolver model deteriorates considerably in performance with subsampled initial conditions in both experiments, indicating that it is not a model suitable for space-time continuous solving on complex geometries. In these settings, the proposed model architecture outperforms Transolver. We feel these results further strengthen the case for our proposed model architecture, and we thank the reviewers for this suggestion. In the camera-ready version, we will further include results for this baseline in our other experiments.
**Computational cost / parameter count** (R. i6aX, hu7A) We provide the parameter counts of our model versus each of the baselines, as well as their respective runtime and GPU memory usage during training and inference, in Tables 1 and 2 of the PDF. We note that although our method is not the most memory efficient (meta-learning requires a significant computational overhead), it is the fastest in inference settings when unrolling an unseen state. This is attributable to (1) the fact that we’re using a latent-space solver which operates on a drastically compressed representation of the PDE state, and (2) the use of meta-learning over auto-decoding, which means we only require 3 gradient-descent steps to obtain the initial state, compared to the 300-500 typically used in auto-decoding (Yin et al., 2023).
This information, as well as all extra provided experiments, will be included in the appendix of the manuscript. We again thank the reviewers for their diligence, and hope to discuss further if any questions or concerns remain.
Pdf: /pdf/77dc7d743129fd4f7757b0345208ccece5ccf68b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces a novel framework that leverages Equivariant Neural Fields (ENFs) to solve Partial Differential Equations (PDEs). By preserving geometric information in the latent space, the proposed method respects the known symmetries of the PDE, enhancing generalization and data efficiency. The framework demonstrates improved performance in various challenging geometries, validated through experiments against other neural PDE forecasting methods.
Strengths: The paper presents an innovative approach by integrating equivariant neural fields, which respect the symmetries of PDEs, thereby enhancing model performance.
The methodology addresses significant limitations of existing NeF-based PDE solvers, particularly in generalization to unseen spatial and temporal locations and geometric transformations.
Extensive experimental validation across various geometries (e.g., plane, torus, sphere) demonstrates the robustness of the proposed framework over existing methods.
Weaknesses: The framework's performance decreases when extrapolating beyond the training horizon for complex PDEs.
While the approach shows competitive performance, the computational complexity due to the global attention operator in the ENF backbone can be high.
Error accumulation in long-term predictions could be mitigated with increased model capacity, but this comes at the cost of computational resources.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The framework assumes that the boundary conditions are symmetric and that the PDEs exhibit certain symmetries. In real-world applications, these assumptions might not always hold, potentially limiting the applicability of the proposed method to PDEs with different or more complex boundary conditions.
The use of a global attention operator in the Equivariant Neural Field (ENF) backbone increases the computational complexity. This can lead to high computational costs, especially when scaling the model for larger datasets or more complex PDEs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough assessment of our work, and we’re happy to see that the reviewer deems our approach innovative and appreciates the experimental validation we provide. We thank the reviewer for highlighting a number of important considerations with regards to our proposed approach, and discuss limitations mentioned by the reviewer in detail here. We would love to hear your thoughts.
**Equivariance constraints in real-world applications** The reviewer mentions that our method assumes symmetric boundary conditions and/or symmetric PDE formulations, and remarks that in real-world applications, such information might not always be known - limiting applicability of our approach in these cases.
*Note: Since R. i6aX, hu7A also ask after usability with more complex boundary conditions and applicability when PDE symmetries are unknown, we provide a joint response here.*
In general, not knowing the boundary conditions in advance when solving a PDE would result in an ill-posed problem. Boundary conditions help ensure the uniqueness and stability of the solution. From given boundary conditions it is immediate to verify if they are preserved by a given group action - we show examples of this in the manuscript Sec. 3.1 with Eqs 7, 8.
The concern raised about not always knowing the symmetry in advance is a very valid point, but it must be placed within the broader context of inductive biases in deep learning. First, our method extends existing approaches by introducing equivariance - an inductive bias that has been shown to be beneficial in many DL-based scientific research applications [3, 4] - to improve forecasting accuracy and consistency for PDEs with known symmetries. Furthermore, our proposed attention architecture with Gaussian windows results in local latent variables that can be beneficial regardless of whether or not symmetries are present. Second, it is often the case - also in real-world applications - that some a priori knowledge about the system is available. It is reasonable to assume, for example, that a spherical domain, such as in the shallow water system, is equivariant with respect to rotations around the axis of rotation. By contrast, full SO(3) equivariance in real-world climate data defined over the globe can be ruled out a priori by noting that Coriolis (fictitious) forces, due to the rotation of the sphere, would break this symmetry.
Note that our method can indeed be applied when symmetries are unknown or inexact, as we show with an additional experiment applied on CFDBench - as suggested by R. kxDM and shown in Tab. 6. We hypothesise that often the differential operators that define PDEs themselves are symmetric (e.g. derivatives, Laplacians), so introducing these same symmetries into the building blocks of the neural solver itself might still be beneficial to reduce the problem complexity, but an extensive investigation would be needed to properly verify this intuition. Another possibility would be to allow for soft-equivariance constraints to be imposed on the modelled solutions. A simple approach to this end - gleaned from the application of GNNs used in latent force field discovery in particle physics [7] - would be to attach one or more symmetry-breaking dataset-wide shared “reference frame nodes” to the latent ODE graph, which could encode any symmetry-breaking biases that the data contains, to e.g. account for absolute positioning of hindrances or objects within the domain.
Recent works in equivariant deep learning have explored symmetry-discovery / soft symmetry constraints in model design, highlighting another interesting research direction. E.g. [5] initialise their model as being fully equivariant, but allow relaxation of this constraint through optimization, learning an interpolation factor between a set of equivariant and non-equivariant kernels. [6] instead proposes to learn generators for symmetry groups from data directly, overcoming the need for explicit specification of the specific equivariance constraints for novel PDEs - and provides a very extensive list of examples of symmetries common to a variety of PDEs (moreover noting that “simplifying systems of PDEs using their symmetries and consequent coordinate transformations was, in fact, the primary reason Sophus Lie discovered Lie groups”). Though it falls outside of the scope of our current work, similar adaptations might help the proposed framework transfer effectively to settings with inexact or unknown symmetries.
**Scalability and model complexity** The reviewer remarks that the global attention operation used in the decoder may limit the scalability of our method. Since our latents are localised, we can approximate the attention operator by restricting the number of latents that are attended to based on their relative distance to the sampled coordinate. We provide further detail in our response to R. hu7A.
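A minimal sketch of such distance-restricted attention (our own illustration, not the actual ENF decoder; the quadratic logits stand in for learned attention scores):

```python
import numpy as np

def local_attention_weights(query_xy, latent_poses, radius):
    """Softmax weights over only the latents within `radius` of the query
    coordinate; far-away latents receive exactly zero weight."""
    d = np.linalg.norm(latent_poses - query_xy, axis=-1)
    logits = -d**2                    # stand-in for learned attention logits
    logits[d > radius] = -np.inf      # restrict attention by relative distance
    w = np.exp(logits - logits.max())
    return w / w.sum()

poses = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
w = local_attention_weights(np.array([0.0, 0.0]), poses, radius=2.0)
# The latent at (10, 10) is outside the radius and contributes nothing.
```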
[1] Li, Z., et al. (2021). Fourier Neural Operator for Parametric Partial Differential Equations. In ICLR.
[2] Yin, Y., et al. (2023). Continuous PDE Dynamics Forecasting with Implicit Neural Representations. In The Eleventh International Conference on Learning Representations (ICLR).
[3] Helwig, J., et al (2023). Group equivariant fourier neural operators for partial differential equations. arXiv preprint arXiv:2306.05697.
[4] Bogatskiy, A., et al. (2022). Symmetry group equivariant architectures for physics. arXiv preprint arXiv:2203.06153.
[5] Wang, R., et al. (2022, June). Approximately equivariant networks for imperfectly symmetric dynamics. In International Conference on Machine Learning (pp. 23078-23091). PMLR.
[6] Gabel, A, et al. (2024). Data-driven Lie point symmetry detection for continuous dynamical systems. Machine Learning: Science and Technology, 5(1), 015037.
[7] Kofinas, M, et al. (2024). Latent field discovery in interacting dynamical systems with neural fields. Advances in Neural Information Processing Systems, 36.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I will keep my original score. | null | null | null | null | null | null |
CRAYM: Neural Field Optimization via Camera RAY Matching | Accept (poster) | Summary: The manuscript #3263 entitled "CRAYM: Neural Field Optimization via Camera RAY Matching" proposes a novel uncalibrated NeRF strategy based on prior keypoints matching across images. Specifically, the authors propose two novelties to improve the quality of the reconstruction and the pose estimation of the cameras: 1) Enriched ray features using surrounding rays sampled around the keypoints and 2) a ray matching index which can be used to re-weight the color regression part, leading to better robustness to occlusions.
The proposed technique has been evaluated across various standard datasets and against meaningful NeRF-like algorithms.
Strengths: - The idea of separating key rays and auxiliary rays is interesting and meaningful.
- Numerous and conclusive results.
- Assessment on a large number of datasets.
- Good ablation study underlying the benefit of each novelty.
Weaknesses: - The robustness of the approach against outlier matches is not evaluated. Introducing artificial outliers (wrongly matched keypoints) into the dataset to assess how well the technique can handle mismatches would be of some interest.
*Question*: Would the matched rays consistency and the epipolar geometry compensate for that? Or would the training diverge?
- As stated in the literature review of this manuscript, other approaches taking advantage of the epipolar geometry and prior matching have already been designed in this context. I have difficulty understanding what is significantly different with this work apart from the sampling of additional surrounding rays and the matching ray index used to weight color prediction using image pairs. These two novelties seem rather incremental, but they nonetheless lead to strongly improved results.
*Question*: I assume that the other keypoints-based approaches are not "self-calibrated". Is the proposed technique the first "keypoint-based" calibration-free NeRF? If it is not the case, it would be meaningful to compare against such techniques too.
- Adding surrounding rays around a key ray appears to be quite effective; however, the sampling of auxiliary rays is not well described in the paper.
*Question*: How are the rays sampled?
- The initialization of the pose lacks details.
*Question*: What is the effect of the pose initialization on the result?
- The intrinsic parameters of the camera could additionally be optimized.
*Question*: Just out of curiosity, have you conducted such an experiment?
- In equation (4), it seems that the proposed solution considers only pairs of images.
*Question*: How are those pairs selected?
The proposed approach is inspired by existing techniques integrating matched keypoints (like using the epipolar loss) and other techniques, such as NeuS.
- The loss function contains many regularization factors.
*Question*: Is the final loss hard to balance?
Overall, the paper is interesting and proposes a few contributions that seem to lead to strongly improved results. Moreover, the approach has been evaluated on various standard datasets and against representative methods. However, this novel approach remains relatively incremental, and many points remain to be clarified regarding the robustness of the technique. For all the above-mentioned reasons, I would like to issue a rather mixed opinion regarding the acceptance of this work for this conference.
Technical Quality: 3
Clarity: 4
Questions for Authors: The questions are integrated in the Weaknesses part.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The limitations of the paper may not have been entirely investigated, specifically in terms of robustness. For instance, the influence of the initial pose and outliers has not been demonstrated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and suggestions, especially the series of detailed questions, which we will all answer below. We hope the answers will address your concerns.
Q: The robustness of the approach against outlier matches is not evaluated. Introducing artificial outliers (wrongly matched key points) into the dataset to assess how well the technique can handle mismatches would be of some interest. Would the matched rays' consistency and the epipolar geometry compensate for that? Or would the training diverge?
A: Yes, our Matched Ray Coherency formulation is precisely designed to compensate for wrongly-matched rays. The use of auxiliary rays serves to improve the robustness of ray matching. In practice, erroneous keypoint/ray matches do happen and they probably do not need to be "artificially introduced." For example, when the noise level increases, wrongly-matched rays will tend to happen more frequently. To this end, our robustness experiments against different noise levels (see Table 2 in the paper) can be regarded also as a robustness test on our method's ability to handle wrongly-matched rays.
Q: As stated in the literature review of this manuscript, other approaches taking advantage of the epipolar geometry and prior matching have already been designed in this context. I have difficulty understanding what is significantly different with this work apart from the sampling of additional surrounding rays and the matching ray index used to weight color prediction using image pairs. These two novelties seem rather incremental, but they nonetheless lead to strongly improved results. _Question_: I assume that the other keypoints-based approaches are not "self-calibrated". Is the proposed technique the first "keypoint-based" calibration-free NeRF? If it is not the case, it would be meaningful to compare against such techniques too.
A: Yes, CRAYM is the first "keypoint-based" calibration-free method, as far as we are aware. Other approaches usually compute the reprojection distance of the matched pixels, while we compute the distance between the reprojected pixel and the epipolar line as the epipolar loss, thereby boosting the convergence of the camera poses.
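For illustration, given a fundamental matrix F and matched homogeneous pixels x and x', the epipolar line in the second image is l = Fx, and the distance from x' to that line is |x'ᵀl| normalised by the line's first two coefficients. A sketch of this computation (our own, not the paper's implementation):

```python
import numpy as np

def epipolar_line_distance(F, x, x_prime):
    """Distance from x_prime to the epipolar line l = F @ x.
    x and x_prime are homogeneous pixel coordinates (u, v, 1)."""
    l = F @ x
    return abs(x_prime @ l) / np.hypot(l[0], l[1])

# Toy fundamental matrix for a pure sideways translation: epipolar lines are
# horizontal, so the distance is simply the vertical offset of the match.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
on_line = epipolar_line_distance(F, np.array([3.0, 5.0, 1.0]), np.array([7.0, 5.0, 1.0]))
off_line = epipolar_line_distance(F, np.array([3.0, 5.0, 1.0]), np.array([7.0, 7.0, 1.0]))
```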
Q: Adding surrounding rays around a key ray appears to be quite effective; however, the sampling of auxiliary rays is not well described in the paper. _Question_: How are the rays sampled?
A: Currently, we randomly sample n auxiliary rays from the k nearest rays of each key ray. More details on the sampling of auxiliary rays will be given in the revision.
Q: The initialization of the pose lacks details. _Question_: What is the effect of the pose initialization on the result?
A: As stated in the paper, we follow L2G-NeRF to perturb the ground-truth camera poses with additive noise as the initial poses. Table 2 shows the results of using different noise levels on the initial poses. In general, well-initialized poses produce robust and fast convergence.
Q: The intrinsic parameters of the camera could additionally be optimized. _Question_: Just out of curiosity, have you conducted such an experiment?
A: All the evaluations assume known camera intrinsics, as most of the other methods do. We have performed experiments that additionally optimize the intrinsics, and they succeed in learning the intrinsics with some more iterations.
Q: In equation (4), it seems that the proposed solution considers only pairs of images. _Question_: How are those pairs selected?
A: We do not select image pairs for training. Instead, we randomly select a ray from the ray set generated with all the training images and their poses. If a matched ray is found for it, the matched ray pair is optimized. Otherwise, the selected ray is optimized on its own.
Q: The loss function contains many regularization factors. _Question_: Is the final loss hard to balance?
A: Empirically, we found that these items in the loss function promote one another rather than weakening/antagonizing one another and we do not need to tune them for an optimal balance in producing our results.
---
Rebuttal Comment 1.1:
Comment: I would like to sincerely thank the authors for their responses, which have clarified many of my questions. However, in light of the other reviews, I agree that this paper appears to be rather incremental. For this reason, I would like to maintain my original rating of BA.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Bu9t,
Thank you for your thorough reviews and thoughtful evaluation of our paper. We greatly appreciate your positive rating. We will continue to refine our manuscript in light of your feedback and hope to meet your expectations in the final version. | Summary: This paper presents a new technique called camera ray matching, which is integrated into the joint optimization of camera poses and a neural field. The method utilizes an uncalibrated set of images as input, incorporating photometric and geometric constraints through key points and key rays matching, with the aim of enhancing the quality of novel view rendering and 3D surface reconstruction. The approach comprises two simple modules and is implemented using grid-based representation (iNGP). Photometric experiments were exclusively compared with MLP-based methods, specifically NeRF-like, on the synthetic dataset of the vanilla NeRF, while geometric experiments demonstrate some positive results. The authors provide additional results in the appendix.
Strengths: This work is a positive extension to the field of neural reconstruction (like NeRF and SDF) under the setting of images captured with noisy poses. Authors make efforts to simultaneously solve the problems involving the camera pose, detailed renderings, and accurate surface reconstruction. Experiments show good results.
Weaknesses: The writing takes too many factors into account simultaneously, which leaves the paper without a clear theme or a clearly stated academic or technical problem to be addressed. The work appears to build incrementally upon previous research and offers limited novelty. The so-called epipolar loss and point-alignment loss are essentially based on Bundle Adjustment (BA) with keypoint matching, which has previously been applied to the optimization of neural reconstruction in works such as SCNeRF, BARF, L2G, and Level2sfm. The proposed two modules do not bring significant innovation. It is also confusing that this work is implemented using a grid-based representation (i.e., iNGP), while the compared methods use MLP-based representations, which does not allow for a precise and fair comparison. I suggest that the authors refer to ZipNeRF for guidance on how to formulate research problems and conduct appropriate comparisons.
Technical Quality: 2
Clarity: 2
Questions for Authors: Based on the mentioned shortcomings, I have several questions:
1. What is the main research problem addressed in this paper? It appears that the paper intends to address three problems simultaneously: camera pose, fine-detailed rendering, and accurate surface reconstruction. However, if the goal is to solve the camera pose problem, it would be beneficial to present more results related to the accurate regression of camera poses before discussing high-fidelity results. Could you provide additional results about the accurate regression of camera poses?
2. If the primary focus is on the issue of high-fidelity rendering under noisy camera poses, the experimental results do not offer strong conviction. It would be beneficial to showcase more experimental results on different types of datasets, similar to how BARF, SPARF, and L2G-NeRF were compared on the LLFF dataset, with an emphasis on trajectory of pose optimization and high-fidelity renderings.
3. Initialization of camera poses is a sensitive issue in joint optimization. Have you attempted to change the initial camera poses or initialize poses using COLMAP to test the robustness of pose regression?
4. This work is implemented using grid-based representation, while the compared methods are implemented using MLP-based representation, leading to an imprecise and unfair comparison. I am curious about the total number of iterations each method was trained for. BARF and L2G-NeRF trained all models for 200K iterations, and SPARF trained models for 100K iterations. It seems likely that this paper trained for significantly fewer iterations (perhaps 10k iterations) due to the inherent faster convergence of the grid-based representation. It would be fairer to re-implement this paper using MLP-based representation and train for a duration comparable to other methods, then make comparisons. Additional experiments are needed to support these claims.
5. The comparison under sparse-view conditions lacks precision and distinction. SPARF evaluated on the DTU dataset with only 3 views, while you used 48 images in your setting. Although you have reported results for sparse input (3 views) on the LEGO data, this may not be sufficiently convincing.
6. Since Neus and PET-NeuS have compared the surface reconstructions on the DTU dataset, I would appreciate the inclusion of visual results. Additional ablation studies on 3D surface reconstruction using your modules on the DTU dataset would enhance the paper.
7. Considering the integration of multi-resolution hash encodings into a neural surface representation, I recommend that the authors compare with Neus2. Neus2 has compared their work with Instant-NGP and Neus, showcasing fast training and detailed surface reconstruction. If the goal is to demonstrate the superiority of 3D surface reconstruction, it is most appropriate to consider Neus2 in the comparison.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have identified a limitation where the meshes extracted from the constructed SDFs may still contain messy inner structures over invisible areas. I recommend that the authors explore the possibility of finding the SDFs of surface points instead of the SDFs of each sampled point along rays. This would involve assessing whether the depth accumulated by all points along the rays is more accurate than the output surface generated by all sampled points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the careful review and insightful questions. The reviewer's various suggestions are right on and we hope the rebuttal will alleviate the concerns raised.
Q: Additional results on accurate regression of camera poses?
A: Please refer to Table 9 in the supplemental material. It shows *exactly* the quality of pose regression by our method. The results show that CRAYM yields better pose estimation than other methods. If requested and space permitting, we can move the relevant materials to Section 4 of the main paper.
Q: It would be beneficial to showcase more experimental results on different types of datasets with an emphasis on trajectory of pose optimization and high-fidelity renderings.
A: This request coincides with what Reviewer GtQM asked above. Hence, we copy the results on the LLFF dataset below. The trajectories of LEGO from NeRF-Synthetic (Blender) are shown in the uploaded PDF file. Other trajectories will be provided in the revised paper.
Table 1: Mean scores over three metrics.
| | BARF | L2G-NeRF | BAA-NGP | CRAYM (ours) |
| ---------------- | ------- | ------------ | ------------ | ----------------- |
| PSNR Mean | 24.23 | 24.54 | 22.52 | **24.76** |
| SSIM Mean | 0.73 | 0.75 | 0.68 | **0.76** |
| LPIPS Mean | 0.23 | 0.20 | 0.22 | **0.18** |
Table 2: PSNR scores over various models.
| PSNR | BARF | L2G-NeRF | BAA-NGP | CRAYM |
| -------- | ----- | -------- | --------- | --------- |
| Fern | 23.88 | 24.57 | 19.37 | **24.83** |
| Flower | 24.29 | 24.90 | **25.16** | 25.04 |
| Fortress | 29.06 | 29.27 | 29.24 | **29.39** |
| Horns | 23.29 | 23.12 | 19.71 | **23.30** |
| Leaves | 18.91 | 19.02 | **19.96** | 19.57 |
| Orchids | 19.46 | 19.71 | 12.45 | **19.81** |
| Room | 32.05 | 32.25 | 29.72 | **32.44** |
| T-rex | 22.92 | 23.49 | **24.56** | 23.68 |
| Mean | 24.23 | 24.54 | 22.52 | **24.76** |
Table 3: SSIM scores over various models.
| SSIM | BARF | L2G-NeRF | BAA-NGP | CRAYM |
| -------- | ---- | -------- | -------- | -------- |
| Fern | 0.71 | 0.75 | 0.50 | **0.79** |
| Flower | 0.71 | 0.74 | **0.81** | 0.76 |
| Fortress | 0.82 | 0.84 | 0.83 | **0.85** |
| Horns | 0.74 | 0.74 | 0.72 | **0.75** |
| Leaves | 0.55 | 0.56 | **0.68** | 0.60 |
| Orchids | 0.57 | **0.61** | 0.14 | 0.59 |
| Room | 0.94 | **0.95** | 0.90 | 0.91 |
| T-rex | 0.78 | 0.80 | **0.86** | 0.83 |
| Mean | 0.73 | 0.75 | 0.68 | **0.76** |
Table 4: LPIPS scores over various models.
| LPIPS | BARF | L2G-NeRF | BAA-NGP | CRAYM |
| -------- | ---- | -------- | -------- | -------- |
| Fern | 0.31 | 0.26 | 0.38 | **0.25** |
| Flower | 0.20 | 0.17 | **0.10** | 0.13 |
| Fortress | 0.13 | **0.11** | 0.14 | 0.12 |
| Horns | 0.29 | 0.26 | **0.24** | **0.24** |
| Leaves | 0.35 | 0.33 | **0.23** | 0.29 |
| Orchids | 0.29 | 0.25 | 0.42 | **0.23** |
| Room | 0.10 | **0.08** | 0.12 | 0.09 |
| T-rex | 0.20 | 0.16 | 0.11 | **0.10** |
| Mean | 0.23 | 0.20 | 0.22 | **0.18** |
Q: This work is implemented using grid-based representation, while the compared methods are implemented using MLP-based representation, leading to an imprecise and unfair comparison.
A: This is a fair point when considering the *original* implementations. However, when making comparisons to CRAYM, we did implement BARF with *grid-based* representations as the baseline, and the comparison results are shown in the paper and also in the four tables above. The gap between the BARF baseline (27.30) and CRAYM (31.60) shows that the improvement comes mainly from exploiting the matched rays in our proposed formulation. The results of another grid-based representation method, namely NeuS2, are also provided in the table at the end of this rebuttal.
Q: The comparison under sparse-view conditions lacks precision and distinction. SPARF was evaluated on the DTU dataset with only 3 views, while you used 48 images in your setting. Although you have reported results for sparse input (3 views) on the LEGO data, it may not be sufficiently convincing.
A: The overall synthesis quality with sparse views is demonstrated with the whole evaluation image set. We randomly select three images from the evaluation set as the final results. The comparison with SPARF is shown below, with our method outperforming:
| | PSNR | SSIM | LPIPS | CD |
| ----- | --------- | -------- | -------- | --------- |
| SPARF | 15.91 | 0.69 | 0.40 | 1.270 |
| CRAYM | **16.08** | **0.70** | **0.41** | **0.094** |
Q: I recommend that the authors compare with Neus2.
A: Good idea. Please first note that in Table 4 of the submitted paper, we applied a progressive feature mask to the multi-resolution hash encoding integrated into a neural surface representation as our baseline, which combines BARF and NeuS2. The results of NeuS2, the BARF baseline, L2G-NeRF, and CRAYM are shown in the table below, with our method coming out on top. Although NeuS2 produces better reconstructions than L2G-NeRF, L2G-NeRF outperforms NeuS2 in the quality of the rendered views.
| | PSNR | SSIM | LPIPS | CD |
| -------- | --------- | -------- | -------- | --------- |
| NeuS2 | 26.83 | 0.86 | 0.17 | 0.075 |
| BARF Baseline | 27.30 | 0.91 | 0.10 | 0.063 |
| L2G-NeRF | 27.71 | 0.91 | 0.06 | 0.115 |
| CRAYM | **31.60** | **0.96** | **0.03** | **0.039** |
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 67PC,
Thank you for your comprehensive reviews and constructive suggestions. We have evaluated our method on the LLFF dataset and compared it with NeuS2. The quality of pose regression by our method is presented in Table 9 of the supplemental material. We hope the rebuttal will help address your concerns, and we are eager to resolve any further issues to ensure our submission meets the high standards of the conference.
---
Rebuttal 2:
Comment: Thanks for the great efforts of the authors! After carefully reading the responses both from the authors and other reviewers, most of my concerns have been addressed. I tend to improve my rating to BA now and suggest the authors add more experimental details/results and discussions in the revision.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer 67PC,
Thank you very much for recognizing our work and for your positive rating. We appreciate your feedback and will continue to refine our manuscript. In the revision, we will include optimized trajectories, a discussion on grid-based methods, and additional experimental details and results.
Thanks again,
The authors | Summary: This work suggests a novel neural representation and training scheme that jointly solves for the scene representation and the multi-view camera localization. It is done using several new ideas that generalize existing NeRF based methods.
The representation itself is a combination of a geometry-network, which predicts a signed-distance-function (SDF) and a feature vector, that are fed into the texture-network that predicts the usual color and density values.
The main novelty is that the optimization is done over matching rays, obtained from matching keypoints using a pretrained network. The standard photometric loss function is extended to incorporate an epipolar loss (which constrains the camera positions) and a point-alignment loss that ensures the rays intersect at the predicted depth estimates along them. Another strong addition is the use of 'auxiliary' rays around each matched pair of rays, from which features are fused to produce a more robust representation that can aid the optimization under errors in matching and camera poses.
Extensive experiments demonstrate the importance of each component and the strong performance of their combination.
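For readers wanting a concrete picture of the two matched-ray constraints mentioned in the summary, here is a hypothetical NumPy sketch (the notation and function names are ours, not the paper's actual formulation): the point-alignment term penalizes the distance between the 3D points predicted at the two rays' depths, and the epipolar term penalizes violations of the constraint x2ᵀ E x1 = 0 with essential matrix E = [t]ₓ R.

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]_x, so that skew(t) @ v == np.cross(t, v).
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def point_alignment_loss(o1, d1, t1, o2, d2, t2):
    # Matched rays should meet at their predicted depths: penalize the
    # squared distance between the two predicted 3D surface points.
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return float(np.sum((p1 - p2) ** 2))

def epipolar_loss(x1, x2, R, t):
    # x1, x2: matched keypoints in homogeneous normalized image coordinates.
    # Given the relative pose (R, t) between the two cameras, a correct
    # match satisfies x2^T E x1 = 0 with E = [t]_x @ R.
    E = skew(t) @ R
    return float((x2 @ E @ x1) ** 2)
```

A perfect match under consistent poses drives both terms to zero; in a joint optimization, gradients of these residuals would flow back into the camera parameters and the predicted depths.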
Strengths: * The paper presents an extension of the NeRF framework, based on several novel and interesting additions that are framed in a single pipeline. The experimental results show that these contributions work well together and yield new state-of-the-art results, across the board.
* One promising idea, in my view, is the joint optimization of both geometry and texture networks, which clearly complement each other and help obtain stronger and more accurate constraints on the scene understanding (as opposed to most NeRF pipelines that focus on image reconstruction and are less accurate for 3D reconstruction).
* The other strong idea is the joint optimization of matching rays, once again imposing consistency constraints (on both camera and surface locations) that were not previously exploited to such an extent in prior work.
* The paper is well written and the contributions are very clearly highlighted, while the understanding of the conventional parts is left for the reader (which is mostly fine).
Weaknesses: * Reproducibility - I believe that many details are missing (including from the appendix) for one to be able to implement the proposed method. For example:
* What are the settings of the preprocessing SuperPoint and SuperGlue matching? What is the typical match density?
* How are the auxiliary rays sampled? How many and under which distribution?
* What is the function g in Eq 2 that fuses the key and auxiliary features?
* What are the balance weights in the final loss (Eq. 7)?
* How are poses initialized?
* Complexity - There is no discussion whatsoever of the impact of the suggested changes on memory and runtime complexity, both at training and at inference.
* Qualitative results are relatively limited.
* Synthesized images are all very small, so it is difficult to appreciate the fidelity.
* No depth images are shown
* No examples of key and auxiliary point matches are shown (over entire images)
Technical Quality: 4
Clarity: 4
Questions for Authors: * Is the training done only on matching key points? If so, what happens if correct matching keypoints do not 'cover' the space adequately?
* How sensitive is the method to the quality and density of the 2D matches? It reportedly worked well on sparse image sets, which is somewhat surprising.
* What are the runtimes (training and inference) compared to some baseline methods?
* How different is the use of auxiliary rays compared to sampling from conical frustums as in Mip-NeRF?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your care and insights in the reviews and the encouraging remarks!
Q: Reproducibility
A: All the implementation details mentioned along with other useful information will be added to the supplemental material in the revision. The source code and any data used will surely be released upon paper publication.
Q: Complexity
A: When training CRAYM, each iteration consumes around twice the amount of resources vs. other low-resource alternatives, since our method optimizes two matched rays together in each iteration. However, CRAYM converges much faster than these alternatives (10-20k iterations vs. 200k iterations) during training. During inference, the views are synthesized from the optimized field with the same rendering process and each ray is rendered separately, so the inference efficiency of CRAYM is the same as the other methods.
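A quick back-of-the-envelope check on the trade-off described in this answer, using the iteration counts quoted above (the 2x per-iteration factor and the unit baseline cost are assumptions taken from this reply, not measured numbers):

```python
# Hypothetical relative training compute: CRAYM pays ~2x per iteration
# but converges in 10-20k iterations vs. ~200k for the alternatives.
craym_cost = 2.0 * 20_000      # upper end of the quoted 10-20k range
baseline_cost = 1.0 * 200_000  # e.g., an alternative at 200k iterations
speedup = baseline_cost / craym_cost
print(speedup)  # → 5.0
```

So even at twice the per-iteration cost and the 20k-iteration upper end, total training compute would be roughly 5x lower under these assumptions.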
Q: Qualitative results, synthesized images, depth images, example key, and auxiliary point.
A: We will provide these results in the revised paper as space would allow. All other requested additions will be incorporated into the supplementary material.
Q: Is the training done only on matching key points?
A: No. Matched rays are optimized together, whereas rays without matched companions are optimized separately. Without using any matched rays, CRAYM will be downgraded to the baseline method.
Q: How sensitive is the method to the quality and density of the 2D matches?
A: In our Matched Ray Coherency formulation, we explicitly account for potentially erroneous (i.e., low-quality) 2D matches by using the matchability between two rays as a weight to either accentuate or discount the color consistency constraint; see line 66 in the paper. In terms of sensitivity with respect to the density of the 2D matches, in our experiments, we have observed that even with sparse input views and sparsely distributed matched rays, CRAYM can still notably improve the optimization convergence. We would be happy to provide such a sensitivity analysis in the revision. | Summary: This paper introduces Camera Ray Matching for optimizing camera poses and neural fields from multi-view images. The optimized feature volume supports novel view synthesis and 3D geometry reconstruction by probing camera rays, which carry both geometric and photometric information. CRAYM claims to improve efficiency and accuracy by focusing on keypoints and integrating multi-view consistencies, enhancing both geometric reconstruction and photorealistic rendering. The method shows results in NVS and geometry reconstruction compared to baseline methods.
Strengths: - The paper is well-structured and easy to follow.
Weaknesses: - Experiments were only conducted on NeRF-synthetic datasets and not on LLFF datasets.
- Comparison is made with older baseline methods (e.g., SPARF, BARF, L2G) which are more than 2 years old. It’s recommended to include more recent methods such as NoPe-NeRF and BAA-NGP.
- It is suggested that the authors perform Neural Image Alignment to enhance the evaluation.
Technical Quality: 2
Clarity: 2
Questions for Authors: Figure 5 and Table 3 appear to have a significant overlap in the information they present.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: YES
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestions in the review and hope the rebuttal will help address the concerns raised.
Q: Experiments were only conducted on NeRF-synthetic datasets and not on LLFF datasets. Also add Neural Image Alignment to enhance the evaluation.
A: We have evaluated our method on the LLFF dataset and the results are listed in the four tables below. The first table shows the mean scores for PSNR, SSIM, and LPIPS, comparing our method (CRAYM) to three baselines including BAA-NGP, a recent method suggested by the reviewer. Our method outperforms all the baselines over all metrics. The remaining three tables show performance numbers for each of the metrics on various models.
Evaluations on the Neural Image Alignment task will be added in the revision.
Table 1: Mean scores over three metrics.
| | BARF | L2G-NeRF | BAA-NGP | CRAYM (ours) |
| ---------------- | ------- | ------------ | ------------ | ----------------- |
| PSNR Mean | 24.23 | 24.54 | 22.52 | **24.76** |
| SSIM Mean | 0.73 | 0.75 | 0.68 | **0.76** |
| LPIPS Mean | 0.23 | 0.20 | 0.22 | **0.18** |
Table 2: PSNR scores over various models.
| PSNR | BARF | L2G-NeRF | BAA-NGP | CRAYM (ours) |
| ---------- | ------- | ------------ | ------------- | ----------------- |
| Fern | 23.88 | 24.57 | 19.37 | **24.83** |
| Flower | 24.29 | 24.90 | **25.16** | 25.04 |
| Fortress | 29.06 | 29.27 | 29.24 | **29.39** |
| Horns | 23.29 | 23.12 | 19.71 | **23.30** |
| Leaves | 18.91 | 19.02 | **19.96** | 19.57 |
| Orchids | 19.46 | 19.71 | 12.45 | **19.81** |
| Room | 32.05 | 32.25 | 29.72 | **32.44** |
| T-rex | 22.92 | 23.49 | **24.56** | 23.68 |
| Mean | 24.23 | 24.54 | 22.52 | **24.76** |
Table 3: SSIM scores over various models.
| SSIM | BARF | L2G-NeRF | BAA-NGP | CRAYM (ours) |
| ---------- | ------- | ------------ | ------------- | ----------------- |
| Fern | 0.71 | 0.75 | 0.50 | **0.79** |
| Flower | 0.71 | 0.74 | **0.81** | 0.76 |
| Fortress | 0.82 | 0.84 | 0.83 | **0.85** |
| Horns | 0.74 | 0.74 | 0.72 | **0.75** |
| Leaves | 0.55 | 0.56 | **0.68** | 0.60 |
| Orchids | 0.57 | **0.61** | 0.14 | 0.59 |
| Room | 0.94 | **0.95** | 0.90 | 0.91 |
| T-rex | 0.78 | 0.80 | **0.86** | 0.83 |
| Mean | 0.73 | 0.75 | 0.68 | **0.76** |
Table 4: LPIPS scores over various models.
| LPIPS | BARF | L2G-NeRF | BAA-NGP | CRAYM (ours) |
| ---------- | ------- | ------------ | ------------- | ----------------- |
| Fern | 0.31 | 0.26 | 0.38 | **0.25** |
| Flower | 0.20 | 0.17 | **0.10** | 0.13 |
| Fortress | 0.13 | **0.11** | 0.14 | 0.12 |
| Horns | 0.29 | 0.26 | **0.24** | **0.24** |
| Leaves | 0.35 | 0.33 | **0.23** | 0.29 |
| Orchids | 0.29 | 0.25 | 0.42 | **0.23** |
| Room | 0.10 | **0.08** | 0.12 | 0.09 |
| T-rex | 0.20 | 0.16 | 0.11 | **0.10** |
| Mean | 0.23 | 0.20 | 0.22 | **0.18** |
Q: Include more recent methods such as NoPe-NeRF and BAA-NGP.
A: The comparisons with BAA-NGP were included in the tables above, and as shown, our method outperforms all the baselines including BAA-NGP, over all three metrics PSNR, SSIM, and LPIPS, on the LLFF dataset.
As for NoPe-NeRF, we had tested the code released by the authors of that work. However, since NoPe-NeRF heavily relies on depth estimation results to adjust the camera poses, it cannot effectively generalize to other datasets, as in our setting.
Q: Figure 5 and Table 3.
A: Thanks! We will reduce the overlap in the revision.
---
Rebuttal Comment 1.1:
Comment: The author addresses most of my concerns. Please add a discussion about NoPe-NeRF, even if not making a direct comparison. I am raising the score to BA.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer GtQM,
We are pleased to learn that the majority of your concerns have been addressed. Thank you for your thoughtful and constructive feedback on our submission. We will include a discussion about NoPe-NeRF in the revised version. We are committed to addressing any additional concerns you may have to ensure our paper meets the high standards of the conference. | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for their comprehensive reviews of our paper. The insightful questions and various suggestions for additional experiments and clarifications will surely strengthen this work.
Here, let us first start with some quick remarks on common reviewer comments. The individual rebuttals will fully address all the reviewer questions and concerns, with additional experiment results provided as requested.
Two reviewers raised questions about the robustness/sensitivity of our method against wrongly-matched keypoints/rays. To this, let us reiterate that our Matched Ray Coherence formulation is designed to tackle this exactly; see lines 62-70 in the main paper. The robustness tests against different noise levels, shown in Table 2 of the main paper, can be regarded also as a test on how well our method handles potentially-mismatched keypoints/rays, since higher-level noise tends to introduce more mismatches.
Two reviewers requested experiments and comparisons on the LLFF dataset, as well as comparisons to more recent methods such as NoPe-NeRF, BAA-NGP, and NeuS2. To this end, we conducted these experiments and reported results in several new tables. The key conclusion is that our method, CRAYM, outperforms all the baselines and over all three metrics (PSNR, SSIM, and LPIPS). Please refer to the details in the individual rebuttals.
Other requested experiments can either be found in the supplementary material (e.g., on camera pose regression) or reported in the individual rebuttal (e.g., on sparse views). Again, our method comes on top.
All in all, we hope that the rebuttal will alleviate the concerns that the reviewers raised. We are happy to include as many new experimental results as space would allow in the revision. Code and any data used will be released upon paper publication.
-- The authors.
Pdf: /pdf/b330372ce1d81a3cc2d1b8e3215f046b8716e993.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sharpness-diversity tradeoff: improving flat ensembles with SharpBalance | Accept (poster) | Summary: This paper introduces a training approach for ensemble learning, called SharpBalance, that balances sharpness and diversity within ensembles. The paper shows theoretically that SharpBalance achieves a better sharpness-diversity trade-off.
Strengths: 1. Ensemble learning is an important research direction.
2. Understanding of sharpness and diversity within deep ensembles is important for the study of generalization to both in-distribution and out-of-distribution data.
3. The paper is technically sound.
Weaknesses: 1. Since SharpBalance focuses "on a diverse subset of the sharpest training data samples", it may not apply in small datasets where available data is already sparse.
2. Empirical improvement over existing methods is marginal.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why does SharpBalance seem more effective on corrupted data?
2. Do models in an ensemble converge on the same local minima?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
We conducted an additional experiment to verify the effectiveness of the proposed method on small datasets, with results shown in Table 9 of the rebuttal PDF. The small datasets were generated by randomly subsampling the training set with ratios of 0.3 and 0.5. The experiments used a three-member ResNet18 ensemble on CIFAR10. The results demonstrate that SharpBalance maintains its performance advantage on small datasets compared to the two baseline methods. The hyperparameter search and setup are consistent with Appendix D.3 of the submitted paper.
## Weakness 2
We clarify that SharpBalance demonstrates more pronounced empirical improvements as the number of ensemble models increases. In Figure 16 of the rebuttal PDF, we present additional results showing the impact of increasing the number of models in the ensemble. The accuracy difference between SharpBalance and the baseline methods becomes more significant, especially on corrupted data. Specifically, SharpBalance outperforms the baseline by up to 1.30% when ensembling 5 models on CIFAR100-C.
## Question 1
Ensembles trained with SharpBalance achieve greater diversity among individual models while maintaining their individual predictive power. The enhanced diversity allows the ensemble model to perform well under distribution shifts in the noisy dataset. This is because, in a diverse ensemble, each individual model may capture more distinct features of the data distribution and effectively mitigate the impact of corrupted data [1-3]. Also, research in [4] suggests that the high diversity of the features learned by the model promotes transferability to OOD data. On the other hand, the improved sharpness-diversity tradeoff also reduces the sharpness of the overall ensemble model, which reduces the impact of overfitting. On corrupted datasets, overfitting is more harmful to the ensemble generalization performance due to the presence of noisy features, which makes SharpBalance more effective over baseline methods.
## Question 2
Rigorously defining "local minimum" in light of mode connectivity [5] can be tricky. Here we only provide an intuitive answer using commonly acknowledged statements in this field, which may be imprecise under certain conditions. In general, models in an ensemble are unlikely to converge to the same local minimum. Studies in [6] show that when models are randomly initialized, their training trajectories tend to explore diverse modes (minima) in the loss landscape and, as a result, do not converge to the same local minimum. In particular, ensemble members trained by SharpBalance are unlikely to converge to the same local minimum either. This is implied by the increased diversity of the individual models' outputs, since models residing in the same local minimum are more likely to output similar logits, resulting in low diversity among ensemble members. Note that this explanation rests on the common intuition that a loss landscape indeed contains many minima; in the overparameterized case, where all local minima may be connected through low-loss paths [5], this statement has to be taken with a grain of salt.
**Reference**
[1] Abe et al. Pathologies of Predictive Diversity in Deep Ensembles.
[2] Stickland et al. Diverse Ensembles Improve Calibration.
[3] Kumar et al. Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift.
[4] Nayman et al. Diverse Imagenet Models Transfer Better.
[5] Garipov et al. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs.
[6] Fort et al. Deep Ensembles: A Loss Landscape Perspective.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response, which clarified some points.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: We sincerely thank the reviewer for acknowledging our response. | Summary: This paper investigates the sharpness and diversity within deep ensembles. Specifically, it identifies the trade-off phenomenon between sharpness and diversity with both theoretical and empirical evidence. Additionally, it proposes a method called SharpBalance, which trains individuals using selective 'sharp' subsets. Conducted experiments have demonstrated the effectiveness of the proposed SharpBalance when applied to deep ensembles.
Strengths: There are several strengths in this paper:
- The exploration of sharpness and diversity in deep ensembles is both interesting and novel.
- Sufficient theoretical and empirical evidence has been provided for validation.
- The proposed method is simple, effective, and accompanied by code for verification.
Weaknesses: However, I still have the following concerns:
- The evaluation seems a bit weak. The authors should consider comparing with more ensemble baselines.
- What is the scale of $D_{SAM}^i$ and how does it change during training? Providing some details on this would help in understanding the proposed method.
- Refer to Line 166: How do the authors train individuals with the full datasets? Are these individuals trained with different initializations?
- (Optional) As described, the model's generalization is not merely correlated with sharpness, which aligns with some recent advanced SAM variants. Thus, integrating these advanced variants [1][2] with SharpBal would be more beneficial for studying the trade-off between sharpness and diversity.
References:
[1] Random Sharpness-Aware Minimization. In NeurIPS 2022.
[2] Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization. In CVPR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the **Weaknesses**.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have provided **Limitations** section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1
In addition to the main experiments, we compared SharpBalance with ensemble baselines, including those in the appendix and new experiments in the rebuttal PDF.
1. **Ensemble with models trained with different hyperparameters.** In Appendix F.4, we compared with the "SAM+" baseline, which forms an ensemble using three models trained with different SAM perturbation ratios (0.05, 0.1, and 0.2).
2. **Ensemble of moving averages (EoA) method [1].** In Appendix F.4, we also compared SharpBalance with EoA, a strong baseline that uses an efficient model averaging protocol.
3. **Ensemble with other strong SAM optimizer GSAM [2].** In Table 7 of the rebuttal PDF, we presented new experiments with the "Deep Ensemble + GSAM" baseline, which replaces the SAM optimizer with GSAM.
The results show that SharpBalance outperforms all these baseline methods.
## Weakness 2
The scale of $D_{SAM}^i$ is determined by the hyperparameter $k$. For each model, we choose the subset of data samples with the highest "per-data-sample sharpness", in accordance with the definition provided in Section 4.3 of the submitted paper. The subset $D_{SAM}^i$ is then formed by taking the union of all such subsets except the $i$-th. For instance, in the case of a three-member ensemble trained on CIFAR10, we set $k$ to 50, and the scale of $D_{SAM}^i$ is about 70% of the training set. $D_{SAM}^i$ is determined at a specific training step and remains constant for the $i$-th model until the end of training, implying that its scale remains unchanged.
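The "union of all such subsets except the $i$-th" construction described above can be sketched in a few lines. This is an illustrative sketch with hypothetical names (the paper's actual implementation may differ), assuming each model's high-sharpness samples are given as Python index sets:

```python
def sam_subset(sharp_sets, i):
    """Build D_SAM^i: the union of every model's high-sharpness
    index set except the i-th model's own set."""
    out = set()
    for j, indices in enumerate(sharp_sets):
        if j != i:
            out |= indices  # set union over the other N-1 models
    return out
```

For a three-member ensemble, `sam_subset(sharp_sets, 0)` would return the union of models 1 and 2's high-sharpness indices, which is then fixed for the remainder of training.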
## Weakness 3
We train each individual model with different random initializations and different data orderings, controlled by distinct random seeds.
**Reference**
[1] Arpit et al. Ensemble of averages: Improving model selection and boosting performance in domain generalization.
[2] Zhuang et al. Surrogate gap minimization improves sharpness-aware training.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal to all reviewers, and I agree with Reviewer b225 that the current empirical improvements are marginal. However, I believe this paper still offers valuable insights on sharpness and diversity within deep ensembles. Therefore, I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: We sincerely thank the reviewer for their comments and for acknowledging our insights and contributions. We will ensure that our paper is updated to include the clarifications in the rebuttal. | Summary: The paper proposes SharpBalance, a method that investigates the relationship between sharpness and diversity in deep ensembles.
Strengths: - SharpBalance looks quite effective for the out-of-distribution setting. The goal of balancing sharpness and diversity within ensembles is an important idea.
- Great theoretical analysis
Weaknesses: - Are the authors aware of the paper "Diversity-Aware Agnostic Ensemble of Sharpness Minimizers" [1]? Its idea is quite similar to that of the proposed paper: it also aims to investigate the relation between sharpness and diversity in ensemble learning. I suggest the authors discuss the main differences between the two.
[1] Anh Bui, Vy Vo, Tung Pham, Dinh Phung and Trung Le, Diversity-Aware Agnostic Ensemble of Sharpness Minimizers, arXiv:2403.13204.
- Regarding the baselines, the authors only compare SharpBalance with SAM. Nevertheless, newer and stronger baselines like GSAM [2] and OBF [3] should also be benchmarked, since they are the current state of the art.
[2] Zhuang, J., Gong, B., Yuan, L., Cui, Y., Adam, H., Dvornek, N., Tatikonda, S., Duncan, J., and Liu, T. Surrogate gap minimization improves sharpness-aware training. arXiv preprint arXiv:2203.08065, 2022.
[3] Vani, A; Tung, F; Oliveira G; Sharifi H. Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics, International Conference on Machine Learning (ICML) 2024.
- Another point to improve is the datasets. I strongly suggest the authors benchmark on at least a couple of large-scale datasets. Options are ImageNet-V1 [4] for training and ImageNet-Real [5] and ImageNet-V2 [6] for testing, ImageNet-R [7] for an out-of-distribution robustness benchmark, and ImageNet-Sketch [8].
[4] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. IEEE, 2009
[5] Beyer, L., Hénaff, O. J., Kolesnikov, A., Zhai, X., and Oord, A. v. d. Are we done with ImageNet? arXiv preprint arXiv:2006.07159, 2020.
[6] Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning, pp. 5389–5400. PMLR, 2019.
[7] Hendrycks, D., Basart, S., Mu, N., Kadavath, S., Wang, F., Dorundo, E., Desai, R., Zhu, T., Parajuli, S., Guo, M., et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 8340–8349, 2021.
[8] Wang, H., Ge, S., Lipton, Z., and Xing, E. P. Learning robust global representations by penalizing local predictive power. In Advances in Neural Information Processing Systems, pp. 10506–10518, 2019.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Regarding the new-discovery contribution: as I previously stated in the weaknesses section, were the authors aware of the paper "Diversity-Aware Agnostic Ensemble of Sharpness Minimizers"? Could the authors present the main differences between this method and SharpBalance?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - The proposed method is not a significant improvement for ID scenarios, contrary to what the authors claim. I would tone that down throughout the text. Maybe clearly state that SharpBalance is indeed superior in OOD scenarios and competitive in ID settings.
- I would not claim that the phenomenon called the sharpness-diversity trade-off is a discovery; paper [1] addresses the same phenomenon and was publicly available on arXiv before submission and on OpenReview since the beginning of the year.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1 and Question
We present three key distinctions between DASH in [1] and SharpBalance. First, SharpBalance offers a comprehensive identification and rigorous analysis of the sharpness-diversity trade-off phenomenon. Second, our novel theoretical approach using random matrix theory provides precise quantifications and reveals the relationship between sharpness and diversity. Finally, our theory-motivated algorithm provably achieves greater diversity while maintaining the same level of sharpness, leading to an improved ensemble performance.
1. **(Comprehensive identification and analysis of the sharpness-diversity trade-off).** SharpBalance provides a thorough examination of the sharpness-diversity trade-off through extensive experiments across various settings, including different sharpness/diversity measures and different model capacities, and rigorous theoretical analysis with random matrix theory. Our findings theoretically proved the existence of such a trade-off and empirically demonstrated the universality of this phenomenon through extensive experiments. DASH proposed in [1], in contrast, offers an intuitive explanation of why reducing sharpness might decrease diversity. The explanation is that the decrease in diversity is a result of models being initialized closely and updated with the same mini-batch of data.
2. **(Tight quantification using random matrix theory).** SharpBalance develops a general theoretical framework using random matrix theory (RMT) to explain the interplay between sharpness and diversity in deep ensembles, which allows us to derive tight quantifications for these two important metrics and provides a more accurate characterization of the training dynamics of ensemble models. We used these quantification results to characterize the relationship between sharpness and diversity, further validating the correctness of our empirical observation. DASH, on the other hand, provides an upper bound on generalization error in terms of the sharpness of both the base learners and the ensemble.
3. **(Theory-based algorithm provably achieves improved performance).** We propose SharpBalance to balance sharpness and diversity within ensembles and theoretically show that the method achieves an improved trade-off between the two metrics. Empirical validations suggest our method indeed enhances the trade-off and therefore improves ensemble performance. The algorithm selects a sharpness-aware subset of data for each individual model to train on the sharpness objective and is simple, effective, and computationally cheap to implement. In contrast, DASH adds a KL divergence constraint on the output logits, which is different from our method which uses distinct subsets of data to train individual models. While their method will introduce diversity, our method is *theoretically guaranteed* to improve diversity. Furthermore, we have extensive experiments to demonstrate the improved diversity of SharpBalance. The two methods are in fact orthogonal and can be seen as complements of each other for promoting diversity.
## Weakness 2
We conducted new experiments to include the stronger SAM variant GSAM. The results are shown in Table 7 of the rebuttal PDF. We combine GSAM with Deep Ensemble as a new baseline method, "Deep Ensemble + GSAM", and also incorporate GSAM into our method SharpBalance. The results show that the new baseline with GSAM outperforms the original baseline in ID and OOD performance but still underperforms SharpBalance (w/ SAM). Furthermore, we enhance SharpBalance by replacing SAM with GSAM, which leads to better ID performance. The hyperparameter search and setup are consistent with Appendix D.3 of the submitted paper.
## Limitation 1 and 2
We thank the reviewer for the suggestions. In the revised draft, we will provide more precise statements on SharpBalance's performance in ID and OOD settings and clarify the contributions of [1] and our work, as suggested.
**Reference**
[1] Bui et al. Diversity-Aware Agnostic Ensemble of Sharpness Minimizers
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors thorough rebuttal and the effort they put into addressing the concerns raised. After carefully considering their responses, I have decided to raise my score from 5 to 6 as a reflection of their efforts to clarify and improve upon the points.
---
Reply to Comment 1.1.1:
Title: Thank you for raising the score!
Comment: We thank the reviewer for their positive feedback and for raising the score. We will ensure that the clarification and results are included in the updated manuscript. | Summary: Ensemble methods and sharpness-aware optimization techniques are well-known strategies for improving generalization. This work identifies a trade-off between sharpness and diversity, observing that reducing sharpness can diminish diversity and harm ensemble performance. Through theoretical and empirical analysis of this sharpness-diversity trade-off, the authors present SharpBalance, an algorithm for training ensembles with sharpness-aware solutions without sacrificing diversity. Evaluation results on CIFAR-10/100, TinyImageNet, and their corrupted variants confirm the effectiveness of SharpBalance.
Strengths: - Ensemble methods and sharpness-aware optimization techniques are both prominent approaches for improving generalization. The aim of this work, which combines these two approaches, is well-motivated.
- While the theoretical analysis uses the variance metric to indicate diversity, the experimental results show consistent trends across different diversity metrics. It suggests that the proposed analysis is widely applicable to the general concept of diversity.
- Extensive empirical results effectively validate the theoretical analysis. The summary plots of the results are generally highly readable.
Weaknesses: - The evaluation results are centered exclusively on classification accuracy; since ensembling usually highlights both predictive accuracy and uncertainty, relying solely on accuracy to assess overall performance is insufficient.
- Specifically, for the corrupted CIFAR benchmark, uncertainty metrics like negative log-likelihood or expected calibration error are more important than test accuracy, but these aspects are not currently considered.
- It seems that all experiments were conducted exclusively with residual networks. It is essential to verify if the proposed analysis and algorithm are applicable to other architecture families as well.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It appears that the current emphasis is on logit-ensemble (lines 82-83). Does the same rationale apply when ensembling categorical probabilities (i.e., probability-ensemble)?
- In the proposed SharpBalance algorithm, it seems that the training data and objective for the i-th member are defined using other members (such as members i+1, i+2, as illustrated in the figure). Does this imply that in practice, each member is trained sequentially?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Section 5 addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weakness 1 and 2
In Figure 14 of the rebuttal PDF, we present the results for negative log-likelihood and expected calibration error. These uncertainty metrics exhibit trends similar to the accuracy metrics reported in the main paper: "Deep Ensemble + SAM" outperforms "Deep Ensemble", and our method outperforms both baselines. The experiments were conducted using ResNet-18 on CIFAR100, with metrics reported on corrupted datasets. Additionally, we observe that both metrics improve as the number of ensemble members increases for all three methods.
## Weakness 3
We provide new experiments on transformer models on vision and language tasks. The results are shown in Table 8 of the rebuttal PDF. We show that "Deep Ensemble + SAM" outperforms the Deep Ensemble while SharpBalance still outperforms both baselines. This observation is consistent with the residual network results in Figure 7 of the submitted paper.
Here we describe the experimental setup. For vision tasks, we constructed the three-member ensemble by fine-tuning the pre-trained ViT-T/16 model on CIFAR100 dataset, evaluated on in-distribution and CIFAR100-C test set. For language tasks, we constructed the three-member ensemble by fine-tuning the pre-trained ALBERT-Base model on Microsoft Research Paraphrase Corpus (MRPC) dataset and evaluated the performance on its validation set. The hyperparameter search and setup are the same as in Appendix D.3 of the submitted paper.
## Question 1
Yes, the rationale behind SharpBalance also applies when ensembling categorical probabilities. Studies in [1-3] reveal that the correlation between the output probabilities of probability-ensemble can significantly affect the classification error rate and uncertainty quantification. Therefore, obtaining more diverse ensemble members following the idea of SharpBalance is certainly beneficial to probability-ensembles.
We conducted additional experiments to verify this insight, with new results shown in Figure 15 of the rebuttal PDF. Results show that SharpBalance outperforms both baseline methods, including Deep Ensemble and "Deep Ensemble + SAM".
## Question 2
SharpBalance trains each ensemble member in parallel, distinct from classical boosting strategies. In particular, the $i$-th member's objective is computed based on the current status of the other members, and hence there is no sequential dependency in the training of the individual models. In practice, the sharpness-aware subsets for each model are selected at a synchronized time step that occurs only once in the training process.
**Reference**
[1] Ryabinin et al. Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets.
[2] Brown et al. Managing Diversity in Regression Ensembles.
[3] Brocker et al. From ensemble forecasts to predictive distribution functions.
---
Rebuttal Comment 1.1:
Comment: > SharpBalance trains each ensemble member in parallel, distinct from classical boosting strategies.
Does SharpBalance only allow parallel training and not sequential training, similar to repulsive deep ensembles or particle-optimization-based variational inference? This is a crucial issue regarding scalability. If I can fit only one model in memory at a time and not multiple models, would this make SharpBalance impractical in such situations?
---
Rebuttal 2:
Title: Authors' further response to Reviewer's comments
Comment: We thank the reviewer for their response and would like to clarify how SharpBalance can be applied to sequential training when memory constraints only allow training one model at a time. Adapting our parallel training pipeline to a sequential approach is straightforward, and we outline the process below.
In sequential SharpBalance training, each model is iteratively trained to a predefined timestep using the full dataset. This timestep corresponds to the synchronization point in parallel training. Once all models reach this point, SharpBalance partitions the dataset into a sharpness-aware set and a normal set for each model. This partitioning process can be done sequentially, fitting one model at a time in memory to compute the "per-data-sample sharpness". The other models are used only to determine the partition for the $i$-th model and are not required for subsequent computations. Finally, each model is trained to completion using its respective sharpness-aware and normal sets.
Regarding the scalability of SharpBalance in sequential training:
1. The dataset partition, which is the most computationally intensive procedure, occurs only *once* throughout the entire training process.
2. The main computational bottleneck in the dataset partition is the "per-data-sample sharpness" calculation. However, this can be done efficiently by evaluating one model at a time, as it doesn't involve pairwise interactions between models. In a parallel computing scenario, only an ordered list of sharpness values needs to be transmitted from each ensemble member.
3. As the number of ensemble members ($N$) increases, the dataset partition for each model essentially becomes a set union problem of sorted arrays. Using min-heaps, this operation can be completed in $\mathcal{O}(mN)$ time, where $m$ is the number of distinct items in the union. If we consider $m$ as a constant, the time complexity of the partitioning grows *linearly* with respect to the number of ensemble members asymptotically. Further optimizations on constants are possible by exploiting set theory: the union of $N-1$ subsets can be efficiently computed by subtracting the $i$-th model's sharpness-aware subset from the union of all sharpness-aware sets. It's worth noting that these set unions are performed on index sets, resulting in small computational overheads.
4. In most of our experiments, we can demonstrate improvement using a few ensemble members. We do not see a big improvement after increasing the number of ensemble members to more than five, similar to the phenomenon reported in Section 4.2 of [1].
5. We think the technique suggested by the reviewer might be from the paper 'Repulsive Deep Ensembles are Bayesian,' which suggests that as $N \to \infty$, the KDE approximation can converge to the true Bayesian posterior. As discussed earlier, our method will not incur substantial overhead as $N$ increases, because the interactions between ensemble members occur through fast set union computations.
**Reference**
[1] Ovadia et al. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift.
---
Rebuttal 3:
Comment: Thank you for the further clarification. I believe that the NLL and ECE results for corrupted CIFAR benchmarks shared by the authors will help emphasize the advantages of the proposed SharpBalance even more. It would be helpful if you could also provide the NLL and ECE results for the existing test data in addition to the corrupted data. I am maintaining my current score as I am inclined to accept this work.
---
Rebuttal Comment 3.1:
Title: Thank you for your feedback and additional results
Comment: We thank the reviewer for the constructive feedback. The additional experiment results using ECE and NLL metrics on the in-distribution (existing) test data are provided in the following tables. Results show that SharpBalance outperforms the two other baselines in both metrics on the existing test data. We will include the new results and clarification in the updated draft.
*Table. ECE metric on CIFAR100.* "ECE" represents the expected calibration error. The model architecture is ResNet-18.
| Method / Number of models | 2 | 3 | 4 | 5 | 6 |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Deep Ensemble | 0.0310 | 0.0206 | 0.0191 | 0.0186 | 0.0180 |
| Deep Ensemble + SAM | 0.0184 | 0.0179 | 0.0160 | 0.0151 | 0.0141 |
| SharpBalance | **0.0151** | **0.0142** | **0.0130** | **0.0128** | **0.0110** |
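For reference, the ECE values in the table above follow the standard equal-width confidence-binning definition. The following is an illustrative sketch of that definition (not the authors' actual implementation; bin count and names are assumptions):

```python
def ece(confidences, correct, n_bins=15):
    """Expected calibration error: the size-weighted average gap
    |accuracy - mean confidence| over equal-width confidence bins."""
    n = len(confidences)
    total = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        total += abs(acc - conf) * len(idx) / n
    return total
```

A model that predicts with 90% confidence but is right only 80% of the time in that bin contributes a 0.10 gap, weighted by the bin's share of the test set.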
*Table. NLL metric on CIFAR100.* "NLL" represents negative log-likelihood. The model architecture is ResNet-18.
| Method / Number of models | 2 | 3 | 4 | 5 | 6 |
| -------- | -------- | -------- | -------- | -------- | -------- |
| Deep Ensemble | 0.820 | 0.785 | 0.763 | 0.749 | 0.742 |
| Deep Ensemble + SAM | 0.742 | 0.724 | 0.718 | 0.711 | 0.705 |
| SharpBalance | **0.732** | **0.720** | **0.715** | **0.709** | **0.695** | | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for the constructive feedback, which helps us improve our paper. Please refer to the attached PDF for our new experiments and see below for our responses to each comment.
Pdf: /pdf/135158339f771e13c5c8f839b1ede0b6ea5dfa6b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
S-MolSearch: 3D Semi-supervised Contrastive Learning for Bioactive Molecule Search | Accept (poster) | Summary: The paper introduces S-MolSearch, a framework for ligand-based virtual screening in drug discovery that addresses the challenges of limited and noisy binding affinity data. By utilizing molecular 3D information and semi-supervised contrastive learning, S-MolSearch processes both labeled and unlabeled data to train molecular structural encoders and generate soft labels for the unlabeled data, drawing on inverse optimal transport principles. The framework outperforms existing structure-based and ligand-based virtual screening methods, as evidenced by its superior performance on the LIT-PCBA and DUD-E benchmark datasets.
Strengths: - Well-written
- Well-organized experimental settings and comparison methods
Weaknesses: - There is a lack of discussion on the reasons behind the performance differences and improvements, with only numerical comparisons of the experimental results.
- There is insufficient experimentation and consideration regarding the time required for virtual screening.
- There are no results for experimental metrics such as AUROC or BEDROC, which were used in previous studies.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Both S-MolSearch and existing methods experience a decline in performance as the % of EF increases. Additional discussion on the reasons for this phenomenon is needed.
- Why do soft labels based on inverse optimal transport seem to have a significant impact on the DUD-E dataset but a lesser impact on the LIT-PCBA dataset?
- What aspects of the semi-supervised approach in Table 4 do the authors think primarily contributed to the performance improvement compared to fine-tuning?
- Is it possible to extend this method from a zero-shot setting to a few-shot setting? If so, how do the authors think its performance would compare to existing methods in that case?
- In virtual screening, not only performance but also processing time is important. How does the screening time compare to that in existing studies?
- How does the performance compare to existing models when using measurements like AUROC or BEDROC instead of EF #%?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper addressed limitations and potential social impacts in Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the thoughtful questions and feedback provided by you. We have carefully considered your queries and provide detailed responses below.
**Consideration of Screening Time**
We have conducted additional experiments to measure the screening time of S-MolSearch compared to traditional methods. The results, as shown in the table below, indicate that S-MolSearch has a significant advantage in search time consumption compared to traditional molecular search methods even if the molecular embeddings are not precomputed.
Notably, if the molecule database is fixed and all molecules are pre-encoded and stored in a vector database such as FAISS, S-MolSearch is expected to achieve a search speed of tens of millions of molecules per second.
| Method | Molecules/sec on DUD-E |
|------------|-----------------------|
| Shape-it [1] | 70 |
| ROSHAMBO [2] | 440 |
| S-MolSearch| 1316 |
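The pre-encoded search described above amounts to a nearest-neighbor lookup over stored embeddings. A minimal brute-force sketch (pure Python with illustrative names, standing in for a vector database such as FAISS; the actual system would use an optimized index):

```python
def top_k(db_embs, query, k=5):
    """Rank pre-computed molecule embeddings by inner-product
    similarity to the query embedding; return the top-k indices."""
    sims = [(sum(a * b for a, b in zip(emb, query)), i)
            for i, emb in enumerate(db_embs)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]
```

Because the database embeddings are computed once, each query reduces to a similarity scan (or an approximate-nearest-neighbor lookup), which is what makes million-molecule-per-second throughput plausible.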
**Lack of Results for Experimental Metrics Such as AUROC or BEDROC:**
We have expanded our experimental evaluation to include AUROC and BEDROC metrics. These additional results, found in Tables 1 and 2 of the additional PDF, provide a more comprehensive comparison of S-MolSearch with existing models and highlight its robust performance across different evaluation criteria. The results show that S-MolSearch demonstrates superior performance on AUROC and BEDROC.
On DUD-E, S-MolSearch trained on data with a 0.4 similarity achieves AUROC and BEDROC results of 84.61% and 54.22%, respectively, surpassing all baselines. S-MolSearch trained on data with a 0.9 similarity achieves AUROC and BEDROC results of 92.56% and 75.37%, respectively, exceeding the best baseline by 50% in BEDROC.
On LIT-PCBA, S-MolSearch$_{0.4}$ achieves AUROC and BEDROC results of 57.34% and 7.58%, respectively, surpassing all baselines in BEDROC and being comparable to the best baseline in AUROC. S-MolSearch$_{0.9}$ achieves AUROC and BEDROC results of 61.78% and 8.48%, respectively, achieving the best results in both AUROC and BEDROC.
**Decline in Performance as the % of EF Increases:**
We provide the specific calculation formula for EF in Appendix Section B. The upper limit of EF decreases as the top x% increases: the theoretical maximum value of EF at top x% is 100 divided by x. For example, the maximum value of EF 1% is 100, and the maximum value of EF 5% is 20.
From the perspective of virtual screening tasks, this can be attributed to the increased difficulty in distinguishing between active and inactive molecules as more molecules are considered, which tends to dilute the enrichment factor. Increasing the screening percentage implies that a more diverse array of active molecules needs to be identified, which is often more challenging.
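For concreteness, the enrichment factor at top x% is the hit rate among the top-ranked x% of molecules divided by the overall hit rate. The following is a generic sketch of that standard formula (not the authors' exact code; names are illustrative):

```python
def enrichment_factor(scores, labels, top_pct):
    """EF at top_pct%: fraction of actives in the top-ranked slice,
    divided by the fraction of actives in the whole library."""
    n = len(scores)
    k = max(1, int(n * top_pct / 100))          # size of the top slice
    ranked = sorted(range(n), key=lambda i: -scores[i])
    hits = sum(labels[i] for i in ranked[:k])   # actives retrieved
    return (hits / k) / (sum(labels) / n)
```

This makes the ceiling explicit: even if every molecule in the top x% is active, the numerator is at most 1, so EF cannot exceed 100/x (nor the library-wide inverse active rate).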
**Impact of Soft Labels on Different Datasets**
We think this is caused by differences in the benchmarks. The molecules in DUD-E have more analogues and decoy biases, making it crucial for the model to use soft labels to effectively distinguish between closely related molecules. The LIT-PCBA dataset ensures diversity in data representation, offers a broad distribution across chemical space, and effectively minimizes inherent biases, so the impact of soft labels is less pronounced.
**Contributions of the Semi-Supervised Approach in Table 4 Compared to Fine-Tuning:**
We would like to clarify once again that the fine-tuning we refer to in Table 4 involves initial pre-training with contrastive learning on an unsupervised dataset, followed by contrastive learning-based fine-tuning using active supervised data. Compared to S-MolSearch, we believe that the fine-tuning approach does not optimally integrate the information from both the unsupervised and supervised datasets.
The performance improvements observed in Table 4 can be primarily attributed to the integration of unlabeled data through our semi-supervised approach, which enhances the model's ability to generalize beyond the limitations of fine-tuning.
**Extending the Method from Zero-Shot to Few-Shot Setting**
Extending S-MolSearch from a zero-shot to a few-shot setting is feasible. We explored two few-shot settings. For the data division, we randomly selected 70% of the data from each target in DUD-E as the training set for few-shot learning; the remaining 30% of active molecules and all inactive molecules were used as test data. With this training dataset, we employed two settings. The first setting (random) considers the query molecule as part of the active molecule set bound to the target, then combines it with data bound to the same target as positive pairs and data bound to different targets as negative pairs. The second setting (fix query) fixes one molecule in each positive pair as the query molecule and selects another molecule from the active molecules to form positive pairs; the construction of negative pairs is the same as in the first setting. In the table below, ZS represents zero-shot and FS represents few-shot. The results are all obtained by S-MolSearch trained on data with a 0.4 similarity.
Because there is no universal setup for few-shot in this scenario, we do not compare it with other methods.
| Configuration | AUROC (%) | EF 0.5% | EF 1% | EF 5% |
|---------------------|-----------|----------|--------|--------|
| 0.4,ZS, fix query | 84.87 | 79.07 | 46.44 | 11.70 |
| 0.4,FS, fix query | 98.32 | 165.09 | 19.06 | 9.66 |
| 0.4,ZS, random | 85.38 | 79.08 | 47.11 | 11.82 |
| 0.4,FS, random | 97.21 | 154.90 | 86.00 | 18.48 |
Thank you very much for reading. We hope that our responses adequately address your concerns.
*Reference:*
[1] Taminau J, Thijs G, De Winter H. Pharao: pharmacophore alignment and optimization.
[2] Atwi R, Wang Y, Sciabola S, et al. ROSHAMBO: Open-Source Molecular Alignment and 3D Similarity Scoring.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' time and effort. Their rebuttal has addressed all of my concerns. As a result, I would like to raise my score to 7: Accept.
---
Reply to Comment 1.1.1:
Comment: We appreciate your insights and support in improving the quality of this work. Thank you for your valuable feedback and for raising our score. | Summary: The paper introduces "S-MolSearch," a semi-supervised contrastive learning framework designed for ligand-based virtual screening in drug discovery. This framework uniquely leverages labeled binding affinity information to produce soft labels for unlabeled molecules, integrating 3D molecular structures and binding affinity data. The paper also proposes a novel semi-supervised learning paradigm that combines contrastive learning with Inverse Optimal Transport (IOT).
Strengths: 1. The supervision idea is novel and useful, and the target application is very impactful with broad implications.
2. The paper is well-written and the experiments are comprehensive.
Weaknesses: 1. Memory Consumption Concerns: The model employs a parallel architecture with two f_\theta encoders and one g_\phi encoder, based on the Uni-Mol framework. Although utilizing pretrained models has shown significant performance benefits, the paper should address potential memory management strategies, especially for future applications involving molecules with a greater number of atoms.
2. Utilization of 3D Structures: The paper promotes a novel semi-supervised contrastive learning paradigm, yet the core contribution does not seem to revolve around the innovative use of 3D structures, as this capability primarily stems from the Uni-Mol architecture. It would be beneficial if the authors could clarify any specific enhancements made to ensure the effective preservation and utilization of geometric information within the model. Absent such enhancements, clearer distinctions should be made regarding the role of 3D structures to prevent misconceptions about the paper presenting a new geometric deep learning technique.
3. Clarity in Section 3.4: The explanation of how $\Gamma$, which approximates the distribution of $C$ under constraints from $U(p,q)$, relates to the continuous optimal transport problem is not clear. Moreover, the motivation and necessity of soft labels, beyond experimental justifications, needs further elaboration. The section would benefit from additional visual aids or high-level descriptions, akin to the clarifications provided in sections 3.3 and 3.5, to aid in comprehension.
4. Component Efficacy in Table 3: There appears to be a discrepancy in the impact of model components across different benchmarks—soft labels are pivotal for DUD-E, whereas pretraining is more crucial for LIT-PCBA, with soft labels showing minimal importance. Insights into this inconsistency would be valuable. Furthermore, an evaluation of how the Uni-Mol encoder alone performs on these tasks would provide additional context on the effectiveness of the proposed enhancements.
Minor points and typos:
L153-154 is not clear.
L162: It would be beneficial to include illustrations of $M_{sup}$ in the figures for clarity.
Formula 2 and L168: It would be better to give an intuitive explanation of $1_N$.
L184: Inconsistent notation. $g(\psi)$ or $g_\psi$?
L281: Misplaced comma.
Technical Quality: 3
Clarity: 2
Questions for Authors: The same as weakness.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The focus of the paper is predominantly on molecule binding affinity, heavily relying on a pretrained encoder. This reliance could limit the model's applicability across a broader spectrum of bioinformatics data. A more detailed discussion on the dependency on pretrained models and potential strategies to mitigate this limitation would enhance the paper's breadth and applicability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review and constructive feedback. Below, we address each of your comments and questions, aiming to clarify and enhance the understanding of our work.
**Memory Consumption Concerns:**
We measure memory consumption under different scenarios, as shown in the table below. Here, supervised learning corresponds to using one encoder, and S-MolSearch corresponds to using two encoders. The table shows that increasing the number of encoders leads to a linear increase in memory consumption. Since Uni-Mol is relatively lightweight, this does not significantly impact regular use.
Considering the potential applications involving large-scale data or large molecules in the future, memory consumption poses a challenge. In future work, we plan to explore update strategies, for example, similar to [1], to decrease memory consumption.
| Model | Memory used |
|----------------------|-------------|
| Supervised (Single Encoder) | 9.5 G |
| S-MolSearch (Two Encoders) | 22.4 G |
**Utilization of 3D Structures:**
We believe that utilizing the 3D information of molecules is advantageous in ligand-based virtual screening scenarios. While we do not introduce a new geometric deep learning technique, we ensure the preservation of geometric information by leveraging Uni-Mol.
We want to point out that one of the primary contributions of this work is the combined use of both 3D molecular information and binding affinity information for virtual screening, as described in lines 77-79, rather than providing a new backbone for molecule pretraining.
**Clarity in Section 3.4:**
We realize that our current phrasing may lead to some misunderstandings. In fact, the optimal transport form we introduce is derived from [2], which presents a smooth and sparse form of optimal transport. This form not only effectively manages the computational cost but also maintains consistency with traditional optimal transport problems.
In S-MolSearch, $\Gamma$ can be interpreted as a joint probability distribution under the given marginal probabilities. Through the design of the cost matrix $C$, we ensure that $\Gamma_{i,j}$ is positively correlated with $x_i x_j$. The additional $L_2$-norm regularization ensures the sparsity of $\Gamma$. This implies that pseudo-labels derived from low-confidence outputs of the supervised model are heavily penalized. Overall, the introduced smooth optimal transport form guarantees that signals from the supervised model are transferred more effectively to the unsupervised model, while high uncertainty is handled with appropriate regularization. We have revised the wording in the manuscript and added relevant citations accordingly.
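For intuition only, here is a tiny numpy sketch of the squared-$L_2$-regularized optimal transport problem in the spirit of [2], solved by gradient ascent on its dual. All names, sizes, and step sizes are our own illustrative choices, not the paper's code; note the sign convention, where $C$ is a cost and low-cost entries attract mass.

```python
import numpy as np

# Gradient ascent on the dual of squared-L2-regularized OT (in the spirit of [2]):
#   min_G  <G, C> + (lam / 2) * ||G||^2   s.t.  G @ 1 = p,  G.T @ 1 = q,  G >= 0,
# whose optimal coupling is G_ij = max((alpha_i + beta_j - C_ij) / lam, 0) and is
# therefore sparse: entries whose dual slack is negative are clipped to exactly zero.
def quadratic_ot(C, p, q, lam=1.0, lr=0.2, iters=10000):
    alpha, beta = np.zeros(C.shape[0]), np.zeros(C.shape[1])
    for _ in range(iters):
        G = np.maximum((alpha[:, None] + beta[None, :] - C) / lam, 0.0)
        alpha += lr * (p - G.sum(axis=1))  # push row marginals toward p
        beta += lr * (q - G.sum(axis=0))   # push column marginals toward q
    return np.maximum((alpha[:, None] + beta[None, :] - C) / lam, 0.0)

C = np.array([[0.0, 1.0, 2.0],   # toy cost matrix: low cost attracts mass
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
p = q = np.full(3, 1.0 / 3.0)    # uniform marginals
G = quadratic_ot(C, p, q)
```

In this toy instance the optimum places essentially all mass on the zero-cost diagonal, and the high-cost corners of the coupling are driven to (near) zero, which is the sparsity effect discussed above.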
**Component Efficacy in Table 3:**
We conduct an intuitive analysis to provide insights into why soft labels are pivotal for DUD-E, while pretraining is more crucial for LIT-PCBA. We believe this is related to the inherent biases of each dataset.
Regarding the pivotal role of soft labels for DUD-E [3], the molecular distribution in DUD-E has more analogues and decoy biases, making it crucial for the model to use soft labels to effectively distinguish between closely related molecules. The LIT-PCBA dataset ensures diversity in data representation, offers a broad distribution across chemical space, and effectively minimizes inherent biases, so the impact of soft labels is less pronounced. The importance of pretraining for LIT-PCBA [4] arises because the broader molecular distribution seen during pretraining allows the model to capture a wider variety of molecular features, thereby enhancing performance on this dataset.
As for the performance of the original Uni-Mol encoder, the results in the tables below demonstrate that the addition of our proposed components significantly enhances performance. The improvements underscore the value of integrating soft labels and pretraining into our framework, particularly in achieving better results across different benchmarks.
| DUDE | EF 0.5% | EF 1% | EF 5% |
|--------------|---------|-------|-------|
| Uni-Mol | 9.82 | 7.97 | 4.22 |
| S-MolSearch 0.4 | 40.85 | 34.60 | 11.44 |
| LIT-PCBA | EF 0.5% | EF 1% | EF 5% |
|--------------|---------|-------|-------|
| Uni-Mol | 3.22 | 1.94 | 1.40 |
| S-MolSearch 0.4 | 10.93 | 6.28 | 2.47 |
**Minor Points and Typos:**
Thank you for pointing these out. L162: We now emphasize $M_{sup}$ in Figure 2 to help readers understand our work more clearly. Formula 2 and L168: $1_N$ is an $N$-dimensional vector of all ones; we add this in line 170. We have also corrected the other points and typos in the manuscript.
**Limitations:**
Pre-trained models are versatile and can be utilized for a variety of tasks and Uni-Mol demonstrates strong performance across several applications. In virtual screening tasks, the model benefits from exposure to a broader molecular space, which we believe makes the use of pre-trained models reasonable and advantageous.
We also conducted ablation studies on pre-training, as shown in Table 3, which demonstrate that pre-training provides improvements.
Thank you very much for reading. We hope that our responses address your concerns and demonstrate the enhancements made to our work.
*Reference:*
[1] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning.
[2] Blondel M, Seguy V, Rolet A. Smooth and sparse optimal transport[C]//International conference on artificial intelligence and statistics. PMLR, 2018: 880-889.
[3] Mysinger M M, Carchia M, Irwin J J, et al. Directory of useful decoys, enhanced (DUD-E): better ligands and decoys for better benchmarking.
[4] Tran-Nguyen V K, Jacquemard C, Rognan D. LIT-PCBA: an unbiased data set for machine learning and virtual screening.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I maintain my score. | Summary: This paper proposes S-MolSearch, a ligand-based virtual screening method that leverages molecular 3D information and affinity information in semi-supervised contrastive learning.
Strengths: 1. The method is able to leverage both labeled and unlabeled data simultaneously and achieves excellent performance on DUDE and Lit-PCBA benchmarks.
2. The approach of using the principles of inverse optimal transport for semi-supervised learning is quite innovative and worth adopting.
3. The ablation experiments are sufficient, and the experimental section is quite robust.
Weaknesses: 1. In the method section, it is unclear to me whether during inference only the encoder $g_{\psi}$ is used, or whether both $g_{\psi}$ and $f_{\theta}$ are used simultaneously?
2. If the application scenario involves a newly provided protein without reference molecules, how should ligand-based virtual screening methods handle this situation?
Technical Quality: 3
Clarity: 2
Questions for Authors: Refer to weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: S-MolSearch predominantly focuses on the molecular affinity data, omitting broader biochemical interactions, which suggests a potential area for improvement.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for supporting our work and careful review! We have considered each of your questions, and we provide detailed responses below.
**Inference Process with Encoders**
During inference, only the encoder gψg_{\psi}gψ is used to generate the molecular embeddings. The encoder fθf_{\theta}fθ is primarily used during the training phase for generating pseudo-labels. We have clarified this in the revised manuscript in Section 3.1, line131.
**Handling New Proteins Without Reference Molecules**
Our current work is based on known query molecules. If the query molecules are unknown, some existing techniques might be helpful for this situation. One possible approach is to construct a pseudo-ligand based on the shape of the protein pocket, as demonstrated in [1]. Another feasible approach is to combine S-MolSearch with structure-based methods, such as [2]. We believe this can serve as an excellent direction for further enhancing our future work.
Thank you for your helpful suggestions. We hope that our responses adequately address your concerns.
*Reference:*
[1] Gao B, Jia Y, Mo Y, et al. Self-supervised pocket pretraining via protein fragment-surroundings alignment.
[2] Zhang X, Gao H, Wang H, et al. Planet: A multi-objective graph neural network model for protein-ligand binding affinity prediction.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I maintain my score. | Summary: The paper introduces a new method for ligand-based virtual screening based on contrastive learning and inverse optimal transport. Two molecule encoders are trained. The first encoder is trained using a contrastive loss function on the ChEMBL data by pairing compounds that are active toward the same protein, and compounds active toward different targets are treated as negative pairs. Next, the second encoder is trained by using the pseudo-labels produced by the first model. The proposed model is tested on two benchmark datasets, DUD-E and LIT-PCBA. Additionally, an ablation study is conducted, and the impact of the labeled data scale is visualized.
Strengths: Originality:
- The approach seems to be novel. I have not found any similar papers that use optimal transport for the ligand-based virtual screening task.
Quality:
- The theory described in the paper is formally proven in the Appendix.
- The proposed method obtains excellent results in both tested benchmarks.
- The quality of the learned representation is demonstrated in Figure 2.
Clarity:
- The paper is written clearly and is easy to follow.
- Figure 1 shows the idea of the model very clearly.
Significance:
- The presented method is an interesting and effective way to utilize all the available public data to build a strong model for ligand-based virtual screening.
Weaknesses: Quality:
- It would be interesting to see some qualitative examples of molecules that were found to be similar to the active compounds in the virtual screening process. Do the trained similarities correlate with the Tanimoto similarity?
Clarity:
- Does the “sup” subscript in Section 3.4 correspond to the “label” subscript in Proposition 1? What is the difference between these two sets?
Minor comments:
- A typo in line 151, “we employs InfoNCE.”
- In line 183, something is missing before “1”.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How do you solve the cases in contrastive learning where one molecule binds to multiple targets? In this example, you need to be careful not to create a negative pair, where one molecule is the one binding to both targets and the other molecule binds to only one of them.
2. How do you avoid treating two molecules as a negative pair if both can bind to the same target? For example, they are inhibitors of two similar proteins, which increases the chance of them binding to both at the same time.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations have been described.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for supporting our work and careful review! We appreciate the detailed review and constructive feedback. We have addressed each of your comments and questions below, aiming to clarify and enhance the understanding of our work.
**Qualitative Examples and Similarities**
We agree that providing qualitative examples would enhance the understanding of our model's capabilities. We have added two examples of molecules identified as similar to query molecules in DUD-E. These molecules are all active molecules. The results indicate that molecules with high Tanimoto similarity tend to have high embedding similarity. These qualitative examples can be found in Figure 1 in the attached PDF.
**Clarification on Subscripts in Section 3.4**
The “sup” subscript in Section 3.4 indeed corresponds to the “label” subscript in Proposition 1. The two subscripts have the same meaning, representing supervised learning on labeled data. For clarity, we have standardized the subscript to "sup" throughout the manuscript.
**Minor Comments**
Thank you for pointing out the typographical errors. We have corrected the typo in line 151 and added the missing context in line 183 (now it's in line 186). These corrections are reflected in the revised manuscript.
**About Negative Pairs**
In our view, both of your questions pertain to the potential occurrence of false negative pairs. False negatives may arise in scenarios where a single ligand binds to multiple targets or multiple ligands bind to the same target.
The way we construct the training data, as described in lines 60-64 of the manuscript, considers molecules binding to the same target as positive pairs, while molecules binding to different targets are considered negative pairs. This approach can mitigate the occurrence of false negatives to some extent.
In the raw ChEMBL data, approximately 21% of the ligands can bind to two or more targets. To further analyze the situation of false negatives during training, we count the false negative pairs that appear in each batch of the sampled training data over one epoch and find that false negative pairs account for an average of 0.76% of all negative pairs. This is a relatively small number, and considering the robustness of contrastive learning during training, we believe these false negative pairs will not significantly impact the current results [1].
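For concreteness, the batch-level counting described above can be sketched as follows. The activity table and the sampled batch below are made-up toy data, not ChEMBL:

```python
from itertools import combinations

# Hypothetical activity table: ligand -> set of targets it is known to bind.
activity = {
    "l1": {"tA", "tB"},
    "l2": {"tA"},
    "l3": {"tB"},
    "l4": {"tC"},
}
# A batch pairs each ligand with the target it was sampled for.
batch = [("l1", "tA"), ("l2", "tA"), ("l3", "tB"), ("l4", "tC")]

neg, false_neg = 0, 0
for (li, ti), (lj, tj) in combinations(batch, 2):
    if ti == tj:
        continue                      # same sampled target -> positive pair
    neg += 1                          # different targets -> treated as negative
    if activity[li] & activity[lj]:   # ...but the ligands share some target
        false_neg += 1                # -> actually a false negative

fraction = false_neg / neg
```

Here only the (l1, l3) pair is a false negative (both bind tB), giving a fraction of 1/5; the 0.76% figure reported above is the analogous average over training batches.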
Nonetheless, the presence of false negatives remains a concern. In the future, we plan to address this issue by designing more robust contrastive learning objectives and constructing more refined datasets to minimize the occurrence of false negatives.
We believe our response effectively addresses your concerns. If you have additional questions or need further clarification, we are open to further discussion.
*Reference:*
[1] Wu J, Chen J, Wu J, et al. Understanding contrastive learning via distributionally robust optimization.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing qualitative examples and further clarifications. This addresses my concerns. I will keep my positive score. | Rebuttal 1:
Rebuttal: We sincerely appreciate the valuable feedback and insightful comments provided by each of you. Your input has been instrumental in refining our work and enhancing the clarity and depth of our manuscript.
In response to your suggestions, we prepare an additional PDF document. It includes qualitative examples of molecular embedding similarity and Tanimoto similarity, as well as experimental results tables containing AUROC and BEDROC.
Pdf: /pdf/73da91b8fad3951dfda221eddc06a86ea45656f8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bandits with Preference Feedback: A Stackelberg Game Perspective | Accept (poster) | Summary: This paper considers bandits with preference feedback. It first constructs a novel confidence set that covers the ground truth with high probability. Then from a Stackelberg game perspective, it proposes an efficient algorithm that enjoys tighter regret bound than SOTA.
Strengths: 1. The technique used to construct the confidence set is interesting. The resulting confidence set is tighter.
2. The Stackelberg game perspective is interesting, and allows the author(s) to design the algorithm with better exploration-exploitation trade-off as demonstrated in the experiment.
Weaknesses: The major concern is the practical applicability of the algorithm. Seems that the proposed algorithm can hardly scale up to a higher dimension (e.g., dimension equals to 7). In the proposed algorithm, a complicated sequential optimization problem needs to be solved. Notably, the experiment only considers two-dimensional problem, in sharp contrast to recent works (e.g., 12-dimensional problem considered in Xu et al. [2024]).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the abstract, the paper claims that the regret bound is 'rate-optimal'. Is there a matching lower bound?
2. Can the results be generalized to other link functions?
3. In line 168, should it be $\sup_{|a|\leq B}$ instead of $\sup_{a\leq B}$?
4. Could the authors explain the linear growth of multiSBM in Fig. 1 (b)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Some major comments:
1. The paper discusses the comparisons to [Xu et al. 2024]. Seems that the theoretical gain of $T^{\frac{1}{4}}$ mainly comes from tighter confidence set rather than the design of the algorithm. If the algorithm POP-BO in [Xu et al. 2024] is equipped with this tighter confidence set, it could also achieve the same regret bound. To see which algorithm is empirically better, it would be interesting to compare the proposed algorithm with POP-BO equipped with the tighter confidence set in this paper.
Besides the points mentioned above, here are some minor comments:
1. In line 109, I guess it should be {$0, 1$} instead of $[0, 1]$.
2. In line 178 to 179, typo in "ridge estimator estimator".
3. In line 574, abuse of the notation $s$.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and the points you raised. We addressed them in the paper and we believe this will help the paper reach a broader community. Our responses to your questions follow.
> The major concern is the practical applicability of the algorithm. Seems that the proposed algorithm can hardly scale up to a higher dimension (e.g., dimension equals to 7). In the proposed algorithm, a complicated sequential optimization problem needs to be solved. Notably, the experiment only considers two-dimensional problem, in sharp contrast to recent works (e.g., 12-dimensional problem considered in Xu et al. [2024]).
We acknowledge that scalability is a crucial point for practicability. Bilevel optimization problems are known to be hard, however, due to their common large-scale applications (e.g. for data re-weighting in tuning LLMs[4]), there are efficient approximation algorithms. This is mentioned in Line 275, and given the technical tools for vectorizing, parallelizing, and utilizing multiple machines, we do not consider this to be a major concern for applicability.
To address your concern, we are adding an experiment on a **32-dimensional** problem using restaurant reviews from the Yelp open dataset. We believe this further demonstrates the scalability of our approach. For further discussion, please see our general rebuttal and the uploaded PDF that contains its cumulative regret plot [**Fig A-b**], as a proof-of-concept. We highlight that we have not tuned or modified the algorithms for this experiment, and they are being used off-the-shelf. The regret curve is best viewed as a proof-of-concept, rather than an algorithmic benchmark.
> In the abstract, the review claims that the regret bound is 'rate-optimal'. Is there a matching lower bound?
Thank you for raising this question. We realize that this claim is not accurate enough, and we have now updated the abstract.
There are *no lower bounds*, for dueling bandits or bandits with direct feedback, under the RKHS assumption. This has been an open problem [1] and is only resolved under a GP assumption [2] or for time-continuous bandits [3]. These papers hint that the *existing upper bounds are not improvable*.
Our regret rates in $T$, **match the tightest existing upper bounds** for cumulative regret in kernelized bandits, that is, $\tilde{\mathcal{O}}(\gamma_T \sqrt{T})$, *up to the choice of kernel*. We updated the abstract and specifically removed the word “rate-optimal” to avoid confusion. We have also included this remark about the rates, updating the paragraph on Lines 309-313.
> Can the results be generalized to other link functions?
Our theoretical analysis does in fact hold under a broader set of link functions $s: \mathbb{R} \to [0, 1]$ as long as $s$ is,
- monotonically increasing and symmetric around $0$, i.e., $s(x) = 1-s(-x)$ and $s(0) = 0.5$,
- differentiable and $\kappa = \sup_{a \leq B} 1/\dot{s}(a)$ exists.
We have kept the problem setting for the sigmoid function, as it has a simple probabilistic interpretation (the feedback $y$ becomes a Bernoulli random variable) and connects to the common Bradley-Terry model.
However, we have *added a remark to Section 3 and pointed out this extension*.
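As a quick numeric illustration (our own sketch, not from the paper), the logistic sigmoid satisfies both conditions: it is symmetric around $0$, and since its derivative is even and decreasing in $|a|$, the supremum defining $\kappa$ over $[-B, B]$ is attained at the endpoints $|a| = B$:

```python
import math

# Numeric sanity check of the two link-function conditions above for the
# logistic sigmoid s(x) = 1 / (1 + e^{-x}); the bound B is an arbitrary choice.
def s(x):
    return 1.0 / (1.0 + math.exp(-x))

def ds(x):  # s'(x) = s(x) * (1 - s(x)), an even function of x
    return s(x) * (1.0 - s(x))

B = 3.0
grid = [i * B / 1000.0 for i in range(-1000, 1001)]   # grid over [-B, B]
kappa = max(1.0 / ds(a) for a in grid)                # approximates sup_{|a| <= B} 1 / s'(a)
```

Since $\dot{s}$ is even, restricting the supremum to $|a| \le B$ gives the same value from either half of the interval, which is the symmetry argument invoked in the response below Line 168.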
> In line 168, should it be $sup_{|a| \leq B}$ instead of $sup_{a \leq B}$?
The derivative of the sigmoid function is an even function, therefore the two suprema are equivalent.
> Could the authors explain the linear growth of multiSBM in Fig. 1 (b)?
Thank you for raising this point. The linear regret of MultiSBM was a result of a minor error in our implementation. We corrected it and updated our results. **Fig A** and **Table B** in the uploaded PDF include the corrected performance of MultiSBM. Due to the single-page limit of the PDF, we included only the Ackley function's cumulative regret plot but have updated all figures in the paper.
MultiSBM is now a competitor to MaxMinLCB for smooth functions (e.g. Branin and Matyas), however, for the more challenging problems (e.g. Ackley and Eggholder) our algorithm maintains its performance advantage. MaxMinLCB remains the top-performing algorithm across the problem instances.
> The paper discusses the comparisons to [Xu et al. 2024]. Seems that the theoretical gain of $T^{\frac{1}{4}}$ mainly comes from tighter confidence set rather than the design of the algorithm. If the algorithm POP-BO in [Xu et al. 2024] is equipped with this tighter confidence set, it could also achieve the same regret bound. To see which algorithm is empirically better, it would be interesting to compare the proposed algorithm with POP-BO equipped with the tighter confidence set in this paper.
The POP-BO algorithm is built on the same logic as the MultiSBM that we included in our experimental results. *We have updated the paper specifying this connection*.
MultiSBM (POP-BO) performs competitively on smooth and simpler reward functions, however, our algorithm maintains its performance advantage on the more challenging functions and the Yelp experiment.
We kindly refer the reviewer to the general rebuttal for a further discussion on the similarities between MultiSBM, POP-BO, and MaxMinLCB.
---
### References:
[1] Vakili, Sattar, Jonathan Scarlett, and Tara Javidi. "Open problem: Tight online confidence intervals for RKHS elements." Conference on Learning Theory. PMLR, 2021.
[2] Scarlett, Jonathan, Ilija Bogunovic, and Volkan Cevher. "Lower bounds on regret for noisy gaussian process bandit optimization." Conference on Learning Theory. PMLR, 2017.
[3] Lattimore, Tor. "A lower bound for linear and kernel regression with adaptive covariates." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023.
[4] Pan, Rui, et al. "ScaleBiO: Scalable Bilevel Optimization for LLM Data Reweighting." arXiv preprint arXiv:2406.19976 (2024).
---
Rebuttal 2:
Title: Thanks for the rebuttal
Comment: I would like to thank the authors for the detailed responses. Most of my concerns are addressed. And I would like to maintain my rating. | Summary: This paper considers novel game-theoretic acquisition function for pairwise action selection with preference feedback. It is tailored to the setting with infinite domains and nonlinear kernelized rewards. The preference-based confidence sequences for kernelized utility functions are shown to be tight and anytime valid. The proposed algorithm MAXMINLCB is shown to satisfy a sublinear regret. Various simulations were conducted to showcase the advantage of the proposed method.
Strengths: 1. Although preference-based bandit optimization with linear utility functions is fairly well understood, such approaches cannot capture real-world problems with complex nonlinear utility functions. This paper aims to close this gap. The considered problem is timely and interesting.
2. The technical contribution is non-trivial. Although there have been attempts to prove convergence of kernelized algorithms for preference-based bandits, such works employ a regression likelihood model which requires them to assume that both the utility and the probability of preference lie in an RKHS. Moreover, a sample-efficient algorithm is lacking for such approaches. In contrast, this work uses a kernelized logistic negative log-likelihood loss to infer the utility function, and provide confidence sets for its minimizer.
3. Some theoretical result, like Kernelized Logistic Confidence Sequences in Theorem 2, is also of independent interest.
4. Despite being a theoretical paper, it is well written and easy to follow.
Weaknesses: 1. In practice, how to determine the hyper-parameters like $\gamma_t$, $L$, and $B$ in (5)? Is there any data-driven way to select them?
2. In the main Theorem 6, the regret bound is $\gamma_T^{(D)}\sqrt{T}$. The term $\gamma_T^{(D)}$ is the $T$-step information gain of the kernel, which is also a function of $T$. The authors claim that this rate improves that of Xu et al. (2024) by a factor of $T^{1/4}$. However, the cumulative regret bound in Theorem 5.2 of Xu et al. (2024) is of a similar order. Xu et al. (2024) also provided explicit regret upper bounds for various common kernels in Theorem 5.5. Hence, it is also interesting to provide an explicit form of $\gamma_T^{(D)}$ for some common kernels, and to compare these regret upper bounds in a fair way.
3. In Figure 1, the authors presented the result for the Ackley function, which shows a clear advantage of the proposed method. However, in more extensive simulations in the appendix (e.g., the Matyas function in Figure 6), the proposed method is outperformed by the competitors. It would be helpful to provide some discussion of these results and offer some insights into when the proposed method works well. This is helpful for practitioners.
Technical Quality: 3
Clarity: 3
Questions for Authors: see Weakness
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and comments. We are glad that you find our contributions to be timely and strong. We have reflected your suggestions in the paper, making it more clear for future readers.
> In practice, how to determine the hyper-parameters like 𝛾𝑡, 𝐿, and 𝐵 in (5)? Is there any data-driven way to select them?
There are two common ways:
- If there's enough budget for data acquisition, these can be selected via cross-validation on a small sampled pool.
- Another approach is to create a second bandit instance on top, to optimize for these parameters [1, 2].
In practice, we directly tune $\beta_t$, rather than separately tuning its elements (e.g., no need to tune $\gamma_t$). In our experiments, we simply set $\beta_t = 1.0$ without tuning it.
> In the main Theorem 6, the regret bound is $\gamma_T^{(D)}\sqrt{T}$. The term $\gamma_T^{(D)}$ is the $T$-step information gain of the kernel, which is also a function of $T$. The authors claim that this rate improves that of Xu et al. (2024) by a factor of $T^{1/4}$. However, the cumulative regret bound in Theorem 5.2 of Xu et al. (2024) is of a similar order.
> Xu et al. (2024) also provided explicit regret upper bounds for various common kernels in Theorem 5.5. Hence, it is also interesting to provide an explicit form of $\gamma_T^{(D)}$ for some common kernels, and to compare these regret upper bounds in a fair way.
**On the regret rate.** Please note that the definition of $\beta_T$ in [Xu et al.] is looser than ours by a factor of $T^{1/4}$. In their Theorem 5.2, the regret bound is:
$$\sqrt{\beta_T} \sqrt{T} = T^{3/4}\gamma_T$$
This looseness by a factor of $T^{1/4}$ can also be viewed in Theorem 5.5, where the regret of a linear bandit is reported to be $O(T^{3/4})$.
**Regarding the dueling information gain.** Our regret bounds are indeed in terms of $\gamma_T^{(D)}$ and Xu et al. presents their result for $\gamma_T$. We have now updated the text, *refraining from directly comparing the tightness of rates*, for the comparison to be more rigorous. However, we expect $\gamma_T^{(D)}$ and $\gamma_T$ to exhibit identical rates with $T$ which is the case for linear kernels. We are working on extending this result to more complex kernels. Our intuition is that summing two kernels (to create the dueling kernel) does not change the kernel complexity. Thank you for this suggestion!
> In Figure 1, the authors presented the result for the Ackley function, which shows a clear advantage of the proposed method. However, in more extensive simulations (e.g., Matyas function in Figure 6, ) in the appendix, the proposed method is outperformed by the competitors. It is helpful to provide some discussion on these results, and offer some insights on when the proposed method would work well. This is helpful for practitioners.
We appreciate the observations and agree that a discussion on these results is beneficial for practitioners.
Overall, we see MaxMinLCB as a robust choice demonstrated by its consistent performance across the wide range of optimization problems considered in our experimental results.
Branin, Matyas, and Rosenbrock are relatively smooth functions that present easier optimization problems. We observe less differentiated performance between the algorithms. For example, MultiSBM excels and outperforms MaxMinLCB for Branin and Matyas.
In contrast, functions such as Ackley, Eggholder, Hoelder, and Michalewicz present more challenging optimization problems characterized by larger gradients, valleys, ridges, and numerous local optima. In these cases, MaxMinLCB demonstrates a clear advantage.
**Figure C** in the uploaded PDF visualizes the Ackley, Matyas, and Branin functions to highlight these differences.
We will include a detailed discussion in the revised manuscript to provide practitioners with insights into when the proposed method is most effective and how it compares to other algorithms under various conditions.
---
### References:
[1] Berkenkamp, Felix, Angela P. Schoellig, and Andreas Krause. "No-regret Bayesian optimization with unknown hyperparameters." Journal of Machine Learning Research 20.50 (2019): 1-24.
[2] Sessa, Pier Giuseppe, et al. "Multitask learning with no regret: from improved confidence bounds to active learning." Advances in Neural Information Processing Systems 36 (2023): 6770-6781.
---
Rebuttal Comment 1.1:
Title: thanks for the clarification
Comment: the authors' clarification is helpful. I do not have any additional question and would like to maintain my rating. | Summary: The paper examines the problem of bandit optimization with preference feedback in large domains and nonlinear (kernelized) rewards. It introduces MAXMINLCB, which adopts a game-theoretic approach to action selection under comparative feedback. Additionally, it proposes kernelized preference-based confidence sets, which can be utilized in related problems.
Strengths: (1) Rather than jointly selecting the arms in dueling bandits, the proposed method jointly optimizes both actions by choosing them as the equilibrium of a two-player zero-sum Stackelberg game. This approach enables a more efficient exploration/exploitation trade-off.
(2) The regret guarantee presented in this paper is tighter by a factor of $T^{1/4}$ compared to Xu et al. (2024).
Weaknesses: (1) Although the paper uses a kernelized logistic model to approximate the rewards, this approach may remain too simplistic for capturing the complexity of rewards in real-world applications.
(2) The paper lacks a comparison in the experiments with the related work by Xu et al. (2024).
(3) In real applications, it is more common to rank between two state-action pairs. However, the paper does not consider contextual information and solely focuses on the multi-armed setting, which is less interesting and useful.
Technical Quality: 3
Clarity: 2
Questions for Authors: (1) Can you implement the algorithms compared in Section 6.2 using the original confidence sets from the references?
(2) Can this work be extended to contextual bandits with preference feedback?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time and for the points you raised. We have addressed them in the paper, and we believe this will help the paper reach a broader community. In light of these updates, we would appreciate it if you could reconsider your assessment of the scope and relevance of the paper.
> Although the paper uses a kernelized logistic model to approximate the rewards, this approach may remain too simplistic for capturing the complexity of rewards in real-world applications.
We would first like to highlight that the MaxMinLCB acquisition function is not tied to the kernelized model in this paper and may be used together with any calibrated model, e.g. a Bayesian Neural Network with valid uncertainty estimates.
We believe that a kernelized approach for adaptive learning is still relevant in real-world problems, e.g. if vector embeddings of the data are available. For complex data modalities, finding an accurate feature map is beyond the computational budget of many practitioners, and we believe there is value in developing computationally light methodology which can be set up on top of available vector embeddings. It is in fact common to use kernelized and linear models on top of LLM embeddings in Active Preference Optimization [3, 4].
We are conducting a new experiment to demonstrate this claim given vector embeddings of Yelp reviews for restaurants [**Fig. A, Tab. B** - Uploaded PDF]. We can find the preferred choices based on a user's comparative feedback and observe that **the algorithm still works on a text-based domain, even though it is designed for Euclidean spaces**. Please refer to the general response for details of this experiment. Note that none of the algorithms have been modified or tuned for this new domain; therefore, the regret curves act as a proof-of-concept rather than a benchmark.
While the *key contributions of this paper are intended to be primarily theoretical*, we aim to add this experiment to demonstrate the practical applicability of our problem setting. We hope that this experiment can make the paper more accessible to a broader community.
> Can this work be extended to contextual bandits with preference feedback? The paper does not consider contextual information and solely focuses on the multi-armed setting, which is less interesting and useful.
Thank you for the interesting question! We have been thinking about contextual extensions as future work. Here’s our take:
1) If the context is stochastic and given by the environment, **our approach almost immediately applies**.
Contextual bandits and linear/kernelized bandits are equivalent up to the choice of the feature map/kernel. This is extensively discussed in Chapters 19.1 and 19.2 of the bandit book [1] for the linear setting, and analyzed in [2] for the kernelized setting. *We updated Section 3 and added this remark*.
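To make this equivalence concrete, here is a toy construction of our own (not taken from the paper or from [1, 2]): a contextual bandit whose reward is bilinear in context $c$ and action $a$ reduces to a plain linear bandit over a joint feature map, so a kernelized/linear algorithm applies directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual setting: reward is bilinear in context c and action a,
# r(c, a) = c^T M a for some unknown matrix M.
d_c, d_a = 3, 2
M = rng.standard_normal((d_c, d_a))

# Joint feature map: phi(c, a) = vec(c a^T). Then
# r(c, a) = <vec(M), phi(c, a)>, i.e. a plain linear bandit in phi.
def phi(c, a):
    return np.outer(c, a).ravel()

theta = M.ravel()
c = rng.standard_normal(d_c)
a = rng.standard_normal(d_a)

# The two parameterizations of the reward agree.
assert np.isclose(c @ M @ a, theta @ phi(c, a))
```

The same trick extends to kernels: a product kernel over (context, action) pairs plays the role of the joint feature map.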
2) If the context can be actively chosen by the agent (e.g. choosing a prompt from a fine-tuning dataset), **we require a non-trivial extension of our algorithm**.
This is an interesting scenario, in which the agent is optimizing both for the context and action pairs. We are in fact working on a follow-up project, which uses our novel MaxMinLCB acquisition function to solve this problem, with an eye to active preference optimization [3, 4]. Here, the aim is to give a worst-case sample complexity bound. *We now mention this in the Conclusion section.*
> The paper lacks a comparison in the experiments with the related work by Xu et al. (2024).
We would like to highlight that the POP-BO algorithm proposed by Xu et al. (2024) uses the **same** action selection logic as the MultiSBM algorithm, which is already included in our benchmarks. We have updated Section 6 to specify this connection.
Our experimental results indicate that MultiSBM performs comparably or even slightly better than MaxMinLCB on smooth functions. However, MaxMinLCB outperforms MultiSBM on more challenging functions such as Ackley and Eggholder. **Figure A** and **Table B** in the uploaded PDF compare performance on smooth and wilder functions.
> Can you implement the algorithms compared in Section 6.2 using the original confidence sets from the references?
Most baselines are designed for linear settings, which is misspecified for the complex test functions that we consider [c.f. **Fig. C** in the rebuttal PDF]. This would result in *linear* regret curves [c.f. Fig. 1a of the paper]. Therefore, we implemented the baselines using our improved confidence sets to make the regret curves more informative. This allows the reader to compare the acquisition functions more fairly.
The IDS algorithm and POP-BO have kernelized confidence sets but are (theoretically) expected to be looser [c.f. Fig. 1a of the paper]. We will update the paper with a comparison to these algorithms (using their original confidence sets). Thank you for this suggestion!
---
### References:
[1] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.
[2] Krause, Andreas, and Cheng Soon Ong. "Contextual Gaussian process bandit optimization." Advances in Neural Information Processing Systems 24 (2011).
[3] Das, Nirjhar, et al. "Active Preference Optimization for Sample Efficient RLHF." ICML 2024 Workshop on Theoretical Foundations of Foundation Models.
[4] Mehta, V., Das, V., Neopane, O., Dai, Y., Bogunovic, I., Schneider, J., & Neiswanger, W. (2023). Sample Efficient Reinforcement Learning from Human Feedback via Active Exploration.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal and most of my questions have been addressed. I have updated the score accordingly. | Summary: This paper considers bandit optimization with preference feedback over continuous action spaces and kernelized reward function. The goal in this problem is to minimize the dueling regret against an optimal action over a finite time-horizon. Previous works on this problem are either restricted to finite action spaces or linear reward functions. The proposed algorithm casts the problem as kernalized logistic regression and designs confidence sets for the relative preference between two actions. It then proposes an action selection strategy based on a game theoretic Leader-Follower formulation that utilizes these confidence intervals. The paper provides a regret bound as well as empirical evaluation for the proposed algorithm.
Strengths: The main contributions of the paper are two-fold:
1. Expanding the existing literature on dueling bandits by studying kernelized reward functions under infinite and continuous action sets. This requires new techniques to bound the confidence intervals.
2. Proposing a principled game-theoretic approach to action selection in dueling bandits that can be of further interest.
In my opinion these are two important contributions to the literature. Since these ideas are likely to be relevant to other learning problems with preference feedback such as RLHF, I think that the results in this paper have a good scope. The paper is well-written and the contributions are clear.
Weaknesses: Experimental evaluation can include other algorithms that are known to perform better than RUCB such as RMED (Komiyama et al., 2015) and Double Thompson Sampling (Wu and Liu, 2016).
Technical Quality: 4
Clarity: 4
Questions for Authors: It would be interesting to see if the current approach can be extended to other link functions beyond the sigmoid such as probit.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The paper can include a section of limitations of the current work in terms of social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the valuable review and constructive suggestions. Please find below our comments on the raised questions.
> Experimental evaluation can include other algorithms that are known to perform better than RUCB such as RMED (Komiyama et al., 2015) and Double Thompson Sampling (Wu and Liu, 2016).
We appreciate the recommendation and acknowledge that RMED and Double Thompson Sampling (D-TS) perform better than some of our benchmark algorithms on the finite arm dueling bandit problem. However, our paper focuses on the continuous domain, making these algorithms less comparable for our study. **We have added the references to the related works section.**
In our experimental setup, we intentionally separated the confidence set estimation and action selection to evaluate these problems independently. Although many of the action selection algorithms we used are designed for the finite-arm problem, they can be adapted to the continuous domain with minor modifications, as detailed in Appendices D and E.
In contrast, for RMED and D-TS **we have not found straightforward extensions** to the continuous domain. For instance, D-TS depends on counting the number of times arm $i$ beats arm $j$, which is challenging to interpret in a continuous setting.
Nonetheless, we are keen to hear if you have a suggestion on how to incorporate these algorithms into our comparisons.
> It would be interesting to see if the current approach can be extended to other link functions beyond the sigmoid such as probit.
Thank you for this suggestion! Our theoretical analysis does in fact hold under a broader set of link functions $s: \mathbb{R} \to [0, 1]$ as long as $s$ is,
- monotonically increasing and symmetric around $0$, i.e., $s(x) = 1-s(-x)$ and $s(0) = 0.5$,
- differentiable, and $\kappa = \sup_{|a| \leq B} 1/\dot{s}(a)$ exists.
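As an illustration (our own numerical check, not part of the rebuttal's analysis), one can verify that the sigmoid satisfies these conditions; the bound $B$ below is an arbitrary choice.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# Symmetry around 0: s(x) = 1 - s(-x), and s(0) = 0.5.
assert abs(sigmoid(0.0) - 0.5) < 1e-12
for x in [-3.0, -0.7, 0.2, 2.5]:
    assert abs(sigmoid(x) - (1.0 - sigmoid(-x))) < 1e-12

# Monotonicity: the derivative is strictly positive everywhere.
assert all(sigmoid_grad(x) > 0 for x in [-5.0, 0.0, 5.0])

# kappa = sup_{|a| <= B} 1 / s'(a); for the sigmoid the supremum is
# attained at the boundary |a| = B, since s' decreases away from 0.
B = 2.0
kappa = 1.0 / sigmoid_grad(B)
print(kappa)  # roughly 9.52 for B = 2
```

A probit link would pass the same checks, with a different value of $\kappa$.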
We have kept the problem setting with the sigmoid function, as it has a simple probabilistic interpretation (the feedback $y$ becomes a Bernoulli random variable) and connects to the common Bradley-Terry model.
However, we have **added a remark to Section 3 and pointed out this extension**.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my questions! I would like to keep my original score. | Rebuttal 1:
Rebuttal: We thank all reviewers and chairs for their work reviewing our paper.
We are delighted to receive high-quality constructive feedback highlighting that our contributions are “clear” and our “ideas are likely to be relevant to other learning problems with preference feedback such as RLHF”. We are certain that these inputs are going to improve our work and enhance the paper’s contribution to the community.
In the following, we address questions that appear in more than one review and could be valuable in a broader context.
## Connection between MaxMinLCB, MultiSBM, and POP-BO
We would like to highlight that both POP-BO [Xu et al., 2024] and MultiSBM [Ailon et al., 2014] build on the *same* ideas. These algorithms select one action $\mathbf{x}\_t$ in each time step by maximizing an optimistic estimate of the winning probability against the action chosen in the previous time step, $\mathbf{x}\_{t-1}$. They then compare the actions $\mathbf{x}\_t$ and $\mathbf{x}\_{t-1}$.
Even though this connection is not mentioned in [Xu et al. 2024], the two algorithms are equivalent if used with the same confidence sets. Consequently, the results we present for MultiSBM in Section 6 carry over to POP-BO.
Additionally, we would like to note that the MaxMinLCB acquisition function can be viewed as a generalization of the above-mentioned strategies. One could formulate both POP-BO and MultiSBM as follows:
$\max\_{\mathbf{x} \in \\{\mathbf{x}'\_{t-1}\\}} LCB\_{t}(\mathbf{x}, \mathbf{x}'(\mathbf{x}))$
$\text{s.t. } \mathbf{x}'(\mathbf{x}) = \arg\min\_{\mathbf{x}' \in \mathcal{M}\_t} LCB\_{t}(\mathbf{x}, \mathbf{x}')$.
The main difference between this optimization problem and MaxMinLCB as defined in Eq. (7) is that the domain for $\mathbf{x}\_t$ is restricted to the singleton $\\{\mathbf{x}'\_{t-1}\\}$ instead of $\mathcal{M}\_t$; therefore, MaxMinLCB can be considered a more general acquisition function than the other two.
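For intuition, here is a minimal sketch of the resulting max-min selection over a finite candidate set; the LCB matrix below is made-up toy data standing in for the kernelized confidence bounds, not the paper's estimator.

```python
# Toy sketch of MaxMinLCB-style selection over a finite candidate set.
# lcb[i][j] stands in for LCB_t(x_i, x_j), a lower confidence bound on the
# probability that candidate i beats candidate j; the numbers are made up.
lcb = [
    [0.50, 0.55, 0.40],
    [0.35, 0.50, 0.45],
    [0.52, 0.48, 0.50],
]
n = len(lcb)

# For each candidate x, the "follower" picks the hardest opponent x'(x),
# i.e. the one minimizing LCB_t(x, x').
def worst_case_value(i):
    return min(lcb[i][j] for j in range(n))

# The "leader" then maximizes this worst-case lower bound over all
# candidates; restricting the leader's domain to the singleton {x'_{t-1}}
# instead recovers the MultiSBM / POP-BO style update described above.
x_t = max(range(n), key=worst_case_value)
print(x_t, worst_case_value(x_t))
```

In the continuous setting, the inner min and outer max are solved over $\mathcal{M}_t$ rather than by enumeration, but the game structure is the same.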
## Further experiments on applicability
To further demonstrate the scalability and applicability of our approach, we included preliminary results of an ongoing experiment in the uploaded PDF [**Fig A-b, Tab B** - Uploaded PDF] that provide evidence for scaling our approach to complex real-world problems.
In this setting, we use the Yelp open dataset of restaurant reviews to learn the preferences of users and recommend restaurants. We subset the data for restaurants in Philadelphia, USA with at least 500 reviews and users who reviewed at least 90 restaurants. **The final dataset includes 275 restaurants, 20 users, and a total of 2563 reviews.**
We consider the action space of vector embeddings of the reviews for each restaurant, computed with the *text-embedding-3-large* OpenAI embedding model with a **dimensionality of 32**. We consider each user separately and estimate their utilities for restaurants not yet reviewed using collaborative filtering.
Please note that *none of the algorithms are tuned or modified for this experiment*. The intention of releasing preliminary results is to demonstrate that 1) the computations scale easily, and 2) the kernelized approach is still applicable to a text-based domain. The regret result on the Yelp data *should be viewed as a proof-of-concept rather than a benchmark*.
In the uploaded PDF, **Figure A-b** (right) shows that the results of this larger problem align with previous conclusions. MaxMinLCB remains the best-performing algorithm with MultiSBM following second. We included the cumulative regret in **Table B** (final row) as well.
## Correction
Since the initial submission, we identified a minor error in the code for generating Table 1 in our paper. The conclusions drawn from these results remain unchanged. The correction slightly affected numerical values in the table but not qualitative relationships. We refer reviewers to Table B in the uploaded PDF for the corrected version of Table 1.
Pdf: /pdf/efe21f504b49ac000668dca6f305e9da630d1235.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FasMe: Fast and Sample-efficient Meta Estimator for Precision Matrix Learning in Small Sample Settings | Accept (poster) | Summary: This paper proposes a meta-learning method for estimating the precision matrix on a new task with small data.
The proposed method uses common edges estimated from multiple auxiliary datasets as meta-knowledge. Then, it estimates the precision matrix on the new task, assuming its true edges contain all the estimated common edges (meta-knowledge). Some theoretical guarantees are also provided.
Experiments with synthetic and real-world datasets show the effectiveness and efficiency of the proposed method.
Strengths: - The paper is generally well-written and easy to follow.
- Concrete algorithm and its theoretical guarantees are presented (but, I didn't read their proof).
- Strong performance in terms of both accuracy and efficiency in the experiments.
Weaknesses: - As the authors stated in the Limitation section, the assumption of Eq. 8 might not be held in general machine learning tasks, although it fits well in biological domains.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Weakness
Thank you for your detailed review and for highlighting this crucial aspect of our research. As mentioned in Line 149-150: "The assumption has been widely adopted [28, 18, 30] and has proven feasible and applicable in the biological and genetic domains." We have validated this assumption through numerous real-world experiments across various biological datasets, detailed in Section 6.2 and Appendix B.2.
In the Limitations Section, we acknowledge potential challenges in sparse and high-dimensional datasets from other domains, noting: "For other domains, if the edges are sparse and the data are high-dimensional, it may be difficult to recover the common structure from the data without some prior evidence." However, we would like to respectfully point out that this assumption has been successfully applied in some other fields as well. For instance, references [9, 10] (See global rebuttal for references) validate this assumption in industrial sensor and stock price datasets, respectively. Additionally, we have applied this assumption to a financial dataset, as described in Lines 566-575 of the appendix. The inherent characteristics in financial data allow for the adaptation of this assumption. For example, the relationships between ships and transport or between steel and the chemical industry (shown in Figure 6 in Appendix) are common across various economic datasets, showcasing our method's versatility.
Moving forward, we plan to extend our work by relaxing the sub-Gaussian data assumption and further reducing the sample size requirements for training meta-knowledge. This will enhance the applicability of our approach across a broader range of fields. We greatly appreciate your insightful feedback and constructive suggestions. We have added the above discussion to the revised paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I'll keep the current rating. | Summary: This paper introduces FasMe, a meta-learning approach for efficient precision matrix estimation in small sample settings. By leveraging meta-knowledge and maximum determinant matrix completion, FasMe reduces sample size requirements and improves computational efficiency. Experimental results show FasMe to be significantly faster and more accurate than existing methods, particularly in low-data environments.
Strengths: 1) Paper investigates a key issue in precision matrix estimation and proposes a reasonable method to address the problem.
2) Paper provides thorough theoretical and experimental analyses to justify the method’s ability to reduce the sample requirement and enhance learning efficiency.
3) Paper has good representation.
Weaknesses: I have few doubts about the method and experiments presented in the article.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the work primarily applicable to biological scenarios, or can it be widely used in other contexts as well?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: nan
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Response to Question
Thank you for your positive feedback and insightful question.
Yes, our work is primarily applied to biological scenarios. For one thing, high-dimensional, small-sample settings are more common in biological research, as exemplified by our case study on Cholangiocarcinoma at the beginning of the paper. For another thing, as stated in Line 149-150 of our paper, "The assumption has been widely adopted [28, 18, 30] and proved to be feasible and applicable in the biological and genetic domains [2]." This highlights the strong foundation and successful application of our work in these fields.
However, our work can also be applied to other domains, although perhaps not as widely as in the biological field. In the appendix (Line 566-575), we demonstrate the application of our method in the financial domain, presenting experimental results that show promising performance. As we mentioned above, the assumptions and dependencies leveraged by our method are more commonly found and well-studied in biological data. Nevertheless, the inherent characteristics of financial data allow for its adaptation. For example, the relationships between ships and transport or between steel and the chemical industry (shown in Figure 6 in Appendix) are common across various economic datasets, showcasing our method's versatility.
We believe this is a valuable point of discussion and have added the discussion in the main text of the revised paper to elaborate on these aspects further.
Thank you again for your thoughtful question. | Summary: The authors propose a method to estimate sparse precision matrices from few samples. Theoretical properties of the proposed method are studied, and experiments on synthetic and brain fMRI data are presented.
Strengths: Strengths:
* The paper is overall well written, and fairly easy to follow and comprehend.
* Theoretical guarantees for sub-Gaussian distributed random variables are presented, and they seem to be novel contribution.
* The experiments on the synthetic dataset clearly demonstrate improvement over the relevant competing methods.
Weaknesses: Weaknesses:
* Currently quantitative results are presented for synthetic data. It would be nice to see more quantitative evaluations on benchmark datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: * How are the meta learning tasks linked to learning the new precision matrix in the theoretical part? It would be nice if the authors can elaborate on the assumptions for the learnability of the new matrix.
2. Why is N=0 in Eq. 13?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your kind feedback and perceptive questions.
### 1. Response to Weakness
In addition to the synthetic datasets, we have conducted extensive experiments on real-world datasets. Specifically, we used the ChIP-Seq dataset from the ENCODE project and the fMRI dataset from the OpenfMRI project, as described in Section 6.2 (Page 9, Line 335-345) and Appendix B.2 (Page 13-14, Line 518-565). Additionally, we have referenced Appendix B.2 in Line 345.
The datasets used in our work have been widely utilized as the benchmark datasets in other research papers in this domain. We list some papers as evidence:
- ChIP-Seq dataset from the ENCODE project:
- Mitra et al. (2013) [1] introduced a Bayesian graphical model for inferring chromatin states from **ChIP-Seq data** on histone modifications. This approach identifies complex dependencies among various histone modifications, enabling a better understanding of their roles in gene regulation.
- Lundberg et al. (2015) [2] developed ChromNet, a computational method to infer the chromatin network from **ChIP-Seq datasets**, which identifies conditional dependencies among regulatory factors to discern direct from indirect interactions more effectively. This approach enables the analysis of large-scale ChIP-Seq data, revealing both known and previously unidentified interactions among regulatory factors.
- Ng et al. (2018) [3] developed a graphical model to visualize regulatory relationships between genome-wide transcription factor binding profiles from **ChIP-Seq datasets**, demonstrating an innovative approach to discern direct versus indirect transcription factor interactions and enhance the understanding of transcriptional regulation.
- Shu et al. (2021) [4] introduced a computational framework utilizing neural networks for the inference and visualization of gene regulatory networks (GRNs) from single-cell RNA-sequencing data. Their model significantly enhances the accuracy of gene interaction analysis within and across various cell types, leveraging **ChIP-Seq datasets** to validate inferred GRNs and providing a robust method for biological data integration and interpretation.
- fMRI dataset from OpenfMRI project:
- Ryali et al. (2012) [5] developed a method for estimating functional connectivity in fMRI datasets using stability selection-based sparse partial correlation with an elastic net penalty. This approach provides more accurate identification of functional connections, even with large **fMRI datasets**, enhancing the understanding of brain function and connectivity.
- Using **fMRI datasets**, Luo (2014) [6] proposes a hierarchical graphical model to effectively estimate large inverse covariance matrices, facilitating improved inference of functional brain networks and their hierarchical interactions. This model addresses the challenges of high dimensionality in brain data, offering a more accurate understanding of brain connectivity.
- Sulek (2017) [7] utilized graphical models with a lasso penalty to analyze functional magnetic resonance imaging **(fMRI) datasets,** focusing on brain activities during saccadic eye movement tasks. The study applied the graphical lasso method to construct sparse graphs that elucidate the connections between different brain regions, comparing these connections before and after specific tasks to understand changes in brain connectivity.
- Chung et al. (2021) [8] developed a robust graphical model that jointly estimates multiple precision matrices from **fMRI datasets**, facilitating a detailed analysis of brain networks by integrating regularized aggregation to account for individual variability and detect outliers, significantly enhancing the accuracy and reliability of connectivity assessments across subjects.
The above references can be seen in the global rebuttal.
We appreciate your suggestion and will ensure to highlight these real-world datasets more clearly in the revised version of our paper. Additionally, we will include the aforementioned references in the revised version as they provide significant evidence of the relevance and utilization of these benchmark datasets in the field. Thank you again for your valuable feedback.
### 2. Response to Q1
We appreciate the opportunity to clarify this point. Due to the space limitation in the main text, we have provided a detailed theoretical explanation of the assumptions that elaborate on the link between the auxiliary tasks and the new task in Appendix F, specifically in Definition 2 (Page 21, Lines 748-759). Additionally, we have referenced this in the main text on lines 153-154. We hope this addresses your concerns and are grateful for your attention to detail.
### 3. Response to Q2
We apologize for any confusion our notation may have caused. The $N=0$ in Eq. 13 should be read in conjunction with the subsequent $\forall(i, j) \in \mathcal{G}\_M$. To avoid any misunderstanding, we will revise this equation to $\forall(i, j) \in \mathcal{G}\_M, N_{ij}=0$. We appreciate your attention to detail and your helpful suggestion.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the rebuttal, I am keeping the initial ratings. | null | null | Rebuttal 1:
Rebuttal: ## References
[1] Mitra R, Müller P, Liang S, et al. A Bayesian graphical model for chip-seq data on histone modifications[J]. Journal of the American Statistical Association, 2013, 108(501): 69-80.
[2] Lundberg S M, Tu W B, Raught B, et al. Learning the human chromatin network from all ENCODE ChIP-seq data[J]. bioRxiv, 2015: 023911.
[3] Ng F S L, Ruau D, Wernisch L, et al. A graphical model approach visualizes regulatory relationships between genome-wide transcription factor binding profiles[J]. Briefings in bioinformatics, 2018, 19(1): 162-173.
[4] Shu H, Zhou J, Lian Q, et al. Modeling gene regulatory networks using neural network architectures[J]. Nature Computational Science, 2021, 1(7): 491-501.
[5] Ryali S, Chen T, Supekar K, et al. Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty[J]. NeuroImage, 2012, 59(4): 3852-3861.
[6] Luo X. A hierarchical graphical model for big inverse covariance estimation with an application to fmri[J]. arXiv preprint arXiv:1403.4698, 2014.
[7] Sulek T R. An application of graphical models to fMRI data using the lasso penalty[D]. University of Georgia, 2017.
[8] Chung J, Jackson B S, Mcdowell J E, et al. Joint estimation and regularized aggregation of brain network in FMRI data[J]. Journal of Neuroscience Methods, 2021, 364: 109374.
[9] Hara S, Washio T. Learning a common substructure of multiple graphical Gaussian models[J]. Neural Networks, 2013, 38: 23-38.
[10] Banerjee S, Ghosal S. Bayesian structure learning in graphical models[J]. Journal of Multivariate Analysis, 2015, 136: 147-162. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Trained Models Tell Us How to Make Them Robust to Spurious Correlation without Group Annotation | Reject | Summary: This paper addresses the problem of subpopulation generalization, also known as spurious correlations. Building on the Last Layer Retraining (DFR) method, it removes the constraints on a small subset of annotations. The paper introduces the Environment-based Validation and Loss-based Sampling (EVaLS) method. Unlike DFR, EVaLS divides the validation set $D^{val}$ into two parts: (1) $D^{LL}$, where losses from an ERM-trained model are used as a proxy for identifying minority groups for retraining, and (2) $D^{MS}$, where environment inference methods are used for partitioning environments. The paper presents theoretical insights and empirical results demonstrating the effectiveness of EVaLS.
Strengths: * The paper is well-structured and presented in a clear and organized manner, making it easy to comprehend and follow along.
* The proposed method is simple but effective and explores a relatively challenging area in existing literature (*i.e.* subgroup generalization without group annotations).
* The authors provide some theoretical analysis to support their claims.
Weaknesses: * The novelty and contribution of the proposed method may be limited for the following reasons: (1) the paper combines multiple previously proposed methods (*i.e.*, DFR [1], EIIL [2]) all at once, which inherently guarantees nontrivial performance; (2) the primary technical contribution, at least from my perspective, is the loss-based sampling, which has already been explored extensively in the noisy-label literature and has been used as a tool for pseudo-labeling.
* The paper fails to discuss recently proposed methods that also require no group annotations, such as SELF [3], BAM [4], and BPA [5]. In particular, SELF is also a direct follow-up of DFR. The authors are encouraged to discuss the limitations and strengths of loss-based schemes against the class-based schemes advocated by SELF and BAM.
* More analyses can be included to provide further understanding of the selected loss-based samples. For example, given a threshold, how much percent of the high-loss and low-loss samples are indeed the minority and majority samples and how does this percentage change with the threshold?
[1] Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations, ICLR 2023
[2] Environment inference for invariant learning. ICML 2021
[3] Towards Last-layer Retraining for Group Robustness with Fewer Annotations. NeurIPS 2023
[4] Bias Amplification Enhances Minority Performance. TMLR 2024
[5] Unsupervised learning of debiased representations with pseudo-attributes. CVPR 2022
Technical Quality: 2
Clarity: 3
Questions for Authors: * Is EVaLS sensitive to hyperparameters?
* [minor] There seem to be typos in your Corollary C.1.
* Can the authors clarify how the conditions in Proposition 3.1 are met or relaxed in practice? How does the distribution of actual experimental benchmark datasets compare to your assumptions?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Aforementioned in Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for highlighting the relevant works and for your insightful requests regarding the method’s sensitivity and the compatibility of its theoretical and practical aspects.
# Weakness 1
Please refer to the general response in the Author Rebuttal for a review of the contributions of this work. We also note that the modules in the method can be replaced while completely maintaining the overall contribution and framework. In Appendix E, we substitute loss-based sampling with other methods (Sec. E.1) and provide an ablation study on replacing EIIL with other simple environment inference techniques (Sec. E.2).
# Weakness 2
Thank you for pointing out relevant works.
As stated in Appendix A (L552-555), SELF introduces CB last-layer retraining that does not require group annotations for retraining or model selection. Another method proposed by SELF, ES disagreement, requires group annotations for model selection but not for the retraining (see Appendix A, L547-549). It also needs an early-stopped version of the trained ERM along with the final model.
BAM is a two-stage training method that works with or without group annotations for model selection. Without group annotations, BAM uses ClassDiff to select hyperparameters, ensuring balanced performance across all classes.
BPA is a clustering-based method (see L206). While BAM claims BPA doesn’t need group annotations for model selection, it is important to note that our investigation shows that it does.
All comparisons use splits and architectures similar to those of the benchmarked methods in the paper.
## Comparison with class-balanced schemes
Following Yang et al. (reference [4] in the paper): Given input $x=(x_{c}, x_{s})\in\mathcal{X}$, which consists of core feature $x_{c}$ and spurious feature $x_{s}$, and label $y\in\mathcal{Y}$, we can write the classification model:$$P(y|x)=\frac{P(x|y)}{P(x)}P(y)=\frac{P(x_{c},x_{s}|y)}{P(x_{c},x_{s})}P(y)=\frac{P(x_{c}|y)}{P(x_{c})}\frac{P(x_{s}|x_{c}, y)}{P(x_{s}|x_{c})}P(y)$$Spurious correlation occurs when $P(x_{s}|x_{c}, y)\gg P(x_{s}|x_{c})$, while class imbalance represents the scenario where $P(y)\gg P(y')$ for $y, y'\in\mathcal{Y}$. Thus, as stated in Preliminaries section (L112-123), such shifts can occur independently in datasets.
Yang et al. observe in Figure 2 of their paper that Waterbirds, CelebA, and CivilComments have significant class imbalance, but MultiNLI does not. The robustness improvements of the class-balanced schemes in SELF and BAM track the amount of class imbalance in the dataset. For example, as emphasized in various parts of the paper (L79, L257, L313-315), the CivilComments dataset does not show spurious correlation but does exhibit class imbalance. Consequently, class-balanced schemes significantly improve worst-group accuracy on this dataset. However, results for MultiNLI (which exhibits attribute imbalance but no class imbalance; see L114-116) are only slightly higher than ERM (for BAM) or even lower than it (for SELF).
Class-balanced schemes are helpful for handling class-imbalance-based subpopulation shifts but fail in addressing spurious correlations. As the authors of SELF note, these schemes can’t match state-of-the-art methods. This is due to their inability to manage subpopulation shifts caused by spurious correlations. Our approach, which involves selecting an equal number of samples from each group through a loss-based sampling scheme, creates a class-balanced retraining dataset. However, our paper clarifies (L316-325) that environment-based validation doesn’t address other types of subpopulation shifts beyond spurious correlations.
Worst group and average accuracy (in parentheses) for CB last-layer retraining (SELF) and BAM on datasets with spurious correlations is as follows.
|Method|Waterbirds|CelebA|UrbanCars|
|-|-|-|-|
|CB last-layer retraining|$92.6_{\pm0.8}(94.8_{\pm0.3})$|$73.7_{\pm2.8}(93.6_{\pm0.2})$|$21.9_{\pm13.0}(49.1_{\pm7.0})$|
|BAM + ClassDiff|$89.1_{\pm0.15}(91.4_{\pm0.31})$|$80.1_{\pm3.32}(88.4_{\pm2.32})$|-|
Although we couldn’t provide BAM results for UrbanCars due to high training computational costs, it’s evident that on this benchmark, which lacks class imbalance but has spurious correlations, the reported class-balanced scheme shows no improvement compared to ERM.
## Discussing the results of the suggested methods
Regarding UrbanCars: we were unable to provide results for BAM during the rebuttal phase due to its significant computational cost. For ES disagreement SELF, WGA and AA are $82.1_{\pm1.8}$ and $89.3_{\pm0.8}$ respectively. For BPA, WGA is $62.4$.
For other datasets, results are obtained from the respective papers.
For Waterbirds, ES disagreement SELF and BAM achieve higher WGA at the cost of lower average accuracy. EVaLS-GL outperforms these methods, which use a similar level of group supervision, on all other datasets.
# Weakness 3
Figure 2 in the paper provides exactly the information you’re looking for! Please refer to the general response.
# Q1
Please see the general response.
# Q2
You are absolutely correct. $F^2(\beta)$ should be $F^2(\alpha)$, as $tail_L$ is a function of $\alpha$.
# Q3
Due to the technical difficulty of dealing with arbitrary distributions, our proof relies on the Gaussian nature of the logit distribution. In a case study (class 2 of CelebA with 94% spurious ratio on 20000 data points), this assumption almost holds (see the Q-Q plot in Figure 4 in the attached PDF). Evaluating our theorem's conditions, the empirical variance of the minority is higher than that of the majority (see “Distribution of logits” in the general rebuttal), satisfying the necessary and sufficient condition (L619) for Proposition C.1. The theorem proves the existence of suitable $\alpha, \beta$ (see L223-225). To empirically justify our tail-based method, we tested all possible tails $\alpha, \beta$ and found 6 value pairs that make the dataset group-balanced. Thus, our practical results align with our theoretical work.
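To make the tail-selection intuition concrete, here is a minimal self-contained simulation (a sketch only; the Gaussian parameters, minority proportion, and tail sizes are illustrative choices, not the fitted CelebA values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative within-class setup: majority (spurious-aligned) logits and
# minority logits drawn from different Gaussians; minority has lower mean
# and higher variance, mirroring the condition discussed above.
n = 20_000
eps = 0.06                      # minority proportion (94% spurious ratio)
n_min = int(eps * n)
maj = rng.normal(2.0, 1.0, size=n - n_min)
mino = rng.normal(0.0, 1.5, size=n_min)

logits = np.concatenate([maj, mino])
is_min = np.concatenate([np.zeros(n - n_min, dtype=bool),
                         np.ones(n_min, dtype=bool)])

def tail_minority_fraction(alpha_k, beta_k):
    """Select the alpha_k lowest-logit (high-loss) and beta_k highest-logit
    (low-loss) samples; return the minority fraction of the selection."""
    order = np.argsort(logits)
    chosen = np.concatenate([order[:alpha_k], order[-beta_k:]])
    return is_min[chosen].mean()

# The low-logit tail is rich in minority samples, so pairing it with an
# equal-size high-logit tail raises minority representation well above
# the base rate eps in the selected subset.
frac = tail_minority_fraction(n_min, n_min)
assert frac > 2 * eps
```

Sweeping `alpha_k` and `beta_k` in this sketch is the analogue of the exhaustive tail search described above.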
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thank you for your detailed response. I think it generally addresses my concerns so I will increase my score. I would encourage the authors to include these additional discussions in the final version of the paper.
---
Rebuttal 2:
Title: Response to Reviewer 7E4z
Comment: We deeply appreciate your consideration of our rebuttal. Thank you for suggesting the inclusion of the discussions in the final version of the paper; we will certainly do so.
We have carefully addressed all your concerns based on your feedback. If there are any specific areas where you feel further clarification or justification is needed, we are eager to address them promptly. Your guidance helps us achieve the highest standards, and we hope this will be reflected in a higher score. | Summary: To address the issue of spurious correlations when group labels are unavailable, this paper proposes a new method called EVaLS. It first creates a balanced training dataset using loss-based sampling. Then, it evaluates the accuracy of the balanced training set based on the inferred environments from the validation set, and selects models accordingly.
Strengths: 1. The paper is well-written, and includes a rich set of experiments with necessary theoretical explanations.
2. It is essential to discuss the multiple (unknown) spurious features case which has been overlooked in previous studies.
Weaknesses: 1. Why is the approach of using high-loss points (considered as the minority group) more effective than directly using misclassified points (considered as the minority group) in methods like JTT? Intuitively, compared to misclassified points, high-loss points are more "implicit", with no obvious threshold, which could result in high-loss points actually belonging to the majority group, thus exacerbating the imbalance in resampling.
2. If the author can show the balance level of samples obtained through loss-based sampling compared to directly using labels (misclassified points), it could further illustrate the advantages of loss-based sampling.
3. In Section "Mitigating Multiple Shortcut Attributes", if color is treated as a known spurious attribute and shape as an unknown spurious attribute, how would the performance of EVaLS be affected? Based on my understanding, there is a possibility that simplicity bias could cause the model to prioritize learning the simpler feature, color, and struggle to learn the more complex shape attribute. Therefore, considering color as known and shape as unknown can better show the performance of EVaLS in handling complex spurious features.
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive and insightful feedback. Our response to the mentioned weaknesses is as follows:
# Weakness 1
1. We must emphasize that the optimal number of selected high/low loss samples for retraining the last layer is chosen from a set of various values, based on the worst validation group/environment accuracy. Hence, the proposed method could be considered a more general solution that encompasses selecting misclassified samples (If selecting misclassified samples yields the best validation results, this configuration is chosen automatically by our method).
2. Having a hyperparameter ($k$) for controlling the number of selected samples from loss tails gives us flexibility to handle some challenges. For example,
- There is a tradeoff between the purity and the number of selected high-loss samples: as observed in Figures 2 and 7 in the paper, minority samples are more commonly seen among samples on which the ERM model has a high loss. As we increase the number of high-loss selected samples, the proportion of minority samples among them decreases. On the other hand, retraining on a larger number of samples could improve the overall training process. Choosing flexibly among various numbers of selected samples grants EVaLS the advantage of finding an optimal point in this tradeoff. You can see how sensitive the worst validation group accuracy is to selecting $k$ in Figure 5 in the attached PDF in the general response. Choosing only misclassified samples could not handle this trade-off especially when the number of misclassified samples is too high or low.
- There are situations in which the dataset contains corrupted data or samples with label noise on which the ERM model has a high loss or may misclassify. Using such samples for retraining the last layer degrades its performance. However, choosing the number of selected samples based on the worst validation group/environment accuracy instead of using a specific number allows our method to mitigate this issue to an extent.
We also conducted an experiment in which we replaced the loss-based sampling in EVaLS with the selection of misclassified samples and an equal number of randomly selected correctly classified samples from each class. As can be seen in the results, the performance is degraded compared to EVaLS on the Waterbirds and Urbancars datasets, with a slight improvement but a higher standard deviation on the CelebA dataset.
| | Waterbirds| CelebA| Urbancars|
|:------:|:-------:|:--------:|:--------:|
| | Worst/Average| Worst/Average| Worst/Average|
|Misclassified Selection | $77.8_{\pm 5.2}$/$94.0_{\pm 0.4}$ | $85.9_{\pm 1.0}$/$89.4_{\pm 0.8}$ | $78.4_{\pm 4.5}$/$86.9_{\pm 1.4}$ |
| EVaLS| $88.4_{\pm 3.1}$/$94.1_{\pm 0.1}$ | $85.3_{\pm 0.4}$/$89.4_{\pm 0.5}$ | $82.13_{\pm 0.92}$/$88.1_{\pm 0.9}$ |
# Weakness 2
As requested, the balance levels of samples selected based on loss and misclassification are presented in the following table for the Waterbirds, CelebA, and UrbanCars datasets.
The ratio $\frac{minority}{k}$ for each class is reported for three seeds, with the average across all seeds reported in parentheses. The results for the UrbanCars dataset are reported for the minority group with the lowest number of training samples.
|Dataset|High-Loss| |Misclassified | |
|-|:---:|:----:|:----:|:---:|
| | Class 1 | Class 2 | Class 1 | Class 2 |
| Waterbirds | $\frac{53}{55} , \frac{44}{45}, \frac{25}{25}(98_{\pm 1.0})$ | $\frac{48}{55}, \frac{41}{45}, \frac{21}{25}(87.5_{\pm 3.6})$|$\frac{40}{41},\frac{40}{41}, \frac{40}{41} (97.6_{\pm 0.0})$|$\frac{17}{20}, \frac{17}{20}, \frac{17}{20} (85.0_{\pm 0.0}) $ |
|CelebA| - | $\frac{58}{250}, \frac{58}{250}, \frac{20}{50}(28.8_{\pm 9.7})$ | - | $\frac{58}{262},\frac{58}{250},\frac{58}{270} (22.3_{\pm 0.8})$ |
|Urbancars| $\frac{9}{10}, \frac{24}{30}, \frac{24}{30} (83.3_{\pm 6.0})$ | $\frac{9}{10}, \frac{20}{30}, \frac{19}{30} (73.3_{\pm 14.5})$ | $\frac{44}{66},\frac{49}{73}, \frac{48}{69} (67.8_{\pm 1.6})$ | $\frac{35}{64}, \frac{29}{50}, \frac{33}{55} (57.6_{\pm 2.7})$|
# Weakness 3
Thank you for requesting such an interesting comparison! We believe that your intuition is correct. When the unknown spurious attribute is shape instead of color, methods that use the known group annotations show higher worst group accuracy compared to the previous scenario. In other words, when the unknown spurious attribute is a weaker shortcut, group robustness becomes a simpler task. Note that since EVaLS does not rely on any information regarding spurious attributes, it does not matter which spurious attribute is known and which is unknown; the results remain the same as reported in Table 2 of the paper. You can see the results for other methods for the new settings in the table below.
|Method|Worst Group Accuracy|
|-|:-:|
|AFR|$60.97_{\pm2.64}$|
|AFR + EIIL|$61.54_{\pm1.85}$|
|DFR|$71.5_{\pm3.2}$|
|EVaLS-GL|$65.5_{\pm0.6}$|
We found that utilizing group annotations for a simpler attribute (a stronger shortcut) enhances results compared to the previous setting where the unknown attribute was simpler. Regarding DFR, we believe that part of its performance is due to the larger number of retraining samples compared to EVaLS and EVaLS-GL (as stated in L355), rather than the provided group annotations. To verify this, we evaluated the performance of DFR when its number of retraining samples is reduced to the number of samples used for EVaLS. In this case, the performance of DFR drops to $66.3_{\pm1.15}$, which is lower than that of EVaLS.
Additionally, the performance gap between AFR and AFR + EIIL, as well as between EVaLS-GL and EVaLS (using environment-based validation instead of ground truth validation group labels of the known spurious attribute), decreases.
---
Rebuttal Comment 1.1:
Comment: Dear author,
Thank you for your reply and the additional experimental results.
After reading the reply and the opinions of other reviewers, I maintain my score. Although I agree with Reviewer 7E4z's point that this work is of limited novelty, I think the author's research is complete and detailed with enough experiments to show the effectiveness of EVaLS. In addition, I appreciate the discussion on multiple (unknown) spurious features. Therefore, I keep my score.
Best,
Reviewer SN9B
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer SN9B,
Thank you for taking the time to read our rebuttal. We are pleased to hear that you find our research complete and detailed, with sufficient experiments demonstrating its effectiveness.
We must emphasize that the title of our work reflects its content. We propose a method to derive all necessary information from the model itself to enhance its robustness against spurious correlations. Section 3, which introduces EVaLS, presents solutions such as loss-based sampling for creating a balanced dataset and partitioning the validation set into environments for model selection. We have also conducted multiple ablations in Section E to evaluate different modules while maintaining the overall framework. Thus, contrary to Reviewer 7E4z's statement, our work has never claimed the mere use and combination of previous methods as its novelty.
Our work demonstrates the following points, as stated at the beginning of the general author rebuttal:
- In contrast to what is considered a requirement for previous methods (see L528-549 in Appendix A-Related Work), we show that ideal group discovery is not required for model selection in robustness to spurious correlations. Instead, identifying environments with group shifts (Sec. 3.2) works effectively (see Table 1).
- Relying on what a trained model learns, rather than auxiliary (even ground-truth) information (as in methods like AFR, DFR, etc.), could be more effective in the case of unknown spurious correlations (see Table 2 and Table 1-UrbanCars).
- Loss-based sampling (Sec. 3.1) is not only an effective method for robustness to group shifts (compared to other methods such as loss-based weighting schemes like AFR or upweighting misclassified samples like JTT; see EVaLS and EVaLS-GL in Table 1), but it is also supported by a theory for data balancing with general assumptions (Sec. 3.3 - Theoretical Analysis).
Best regards,
Authors of Submission 19478 | Summary: The paper studies how to improve the model’s robustness to multiple spurious correlations when the group labels (indicator for spurious correlation) are unknown in general. The proposed approach, EVaLS, leverages the loss from a base ERM model to sample a balanced subset to prevent learning from spurious correlations. In addition, a new synthetic dataset (Dominoes-CMF) for multiple spurious attributes is crafted. Empirically, the proposed approach sometimes has advantages over the rest of the baselines when using the same amount of additional information (group label).
Strengths: 1. The main paper is generally well-written.
2. The theoretical analysis in Section 3.3 (with derivations and proofs in Appendix) shows that for one-dimensional Gaussian distributions, choosing the tails on the two sides of the distributions creates balanced groups, even though the original data distribution is skewed.
3. Environment inference technique is demonstrated to be useful for separating the dataset into groups with different distributions of the subpopulations and then for model selection.
4. The proposed technique only requires last-layer retraining on part of the validation set, which is generally more efficient.
Weaknesses: 1. Figure 2 attempts to illustrate more minority samples have high loss while the majority samples have low loss. However, in each of the plots, only the % of one of the minority or majority groups is shown. The illustration can be improved by showing the % of both majority and minority groups in the same plot, and showing the actual distribution of the loss for the groups.
2. Though the idea is straightforward, it is unclear how the loss-based instance sampling is actually implemented. It is helpful to provide an algorithm or pseudocode to improve the presentation.
3. The theoretical analysis is generally sound but limited to a case without discussing the use of the loss (which may not be Gaussian) and the spurious correlations (which involve at least two dimensions of core and spurious features [1]).
4. The experimental results are less polished and sometimes the advantages are not so clear over other baselines. Some results are missing for datasets such as UrbanCars and MultiNLI. Only a few baselines are compared for the new dataset in Table 2. There is also no convincing and fine-grained analysis (e.g., ablation study) to understand how the proposed approach ensures data balancing and improves group robustness.
5. The paper initially focuses on improving group robustness when multiple spurious correlations are present, but the experimental results are lacking for these more challenging datasets.
[1] Sagawa, Shiori, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. "An investigation of why overparameterization exacerbates spurious correlations." In *International Conference on Machine Learning*, pp. 8346-8356. PMLR, 2020.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Since it’s an essential prerequisite for loss-based sampling, how well are the majority/minority groups separated in the logit space?
2. How is the number of balancing samples $k$ chosen? How balanced are the samples when they are selected from the optimal $k$?
3. I am curious about why the result for Civilcomment dataset is so much higher than the other baselines. Is the evaluation consistent with the baseline methods?
4. Minor typo: line 233 should be without loss of generality (w.l.o.g.).
5. Other minor issues with the experiments: The best-performing results in each category of approaches should be bold. The column header “best” should be “average” in Tables 4 and 5.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed the limitations in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed reviews and insightful questions.
# Weakness 1
Figure 2 in the paper **does not** show the proportion of minority (majority) samples that have high (low) loss. Instead, it depicts the proportion of minority/majority samples among the top x% of samples with the highest/lowest loss values. At the 100% mark on the x-axis, it shows the proportion of minority/majority groups in the entire dataset.
For example:
- In the second plot from the left, about 80% of the 50% samples with lowest loss in both class 1 and 2 of the Waterbirds dataset belong to majority groups.
- In the second plot from the right, over 30% of the highest-loss samples (top 1%) in the unbalanced class (class 2) of CelebA belong to minority groups.
When n% of samples in the top x% of samples with the highest/lowest loss belong to minority/majority groups, the remaining (100-n)% belong to majority/minority groups. To clarify, we will update the x-axis title to “% of selected samples from samples with highest/lowest loss values”.
As requested, Figure 1 in the attached PDF of the general author’s rebuttal shows the % of both majority and minority groups in the same plot, and Figure 3 illustrates the loss value distribution for these groups.
Please let us know if any further information or clarification is required.
# Weakness 2
We appreciate the suggestion to include pseudocode. We present a pseudocode in the “General Response”.
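For convenience, here is a minimal Python sketch of per-class loss-based sampling (an illustrative implementation under our own assumptions, not the exact pseudocode from the General Response; the tuning of $k$ via worst environment accuracy is omitted):

```python
import numpy as np

def loss_based_sample(losses, labels, k):
    """For each class, pick the indices of the k highest-loss and the k
    lowest-loss samples, yielding a 2k-per-class retraining subset.

    Illustrative sketch: in our description, high-loss tails are rich in
    minority samples and low-loss tails in majority samples, so the union
    approximates a group-balanced set.
    """
    losses = np.asarray(losses, dtype=float)
    labels = np.asarray(labels)
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        order = idx[np.argsort(losses[idx])]  # ascending loss within class
        selected.extend(order[:k])            # low-loss tail
        selected.extend(order[-k:])           # high-loss tail
    return np.array(selected)

# Toy usage: two classes, five samples each, k = 1 tail sample per side.
losses = [0.1, 0.2, 2.0, 0.3, 1.5, 0.05, 1.9, 0.4, 0.2, 2.2]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
picked = loss_based_sample(losses, labels, k=1)
# picked -> indices [0, 2, 5, 9]
```

In practice, the candidate values of $k$ would be scored by worst group/environment accuracy on the validation split, as described above.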
# Weakness 3
The scenario you described, involving multiple dimensions for spurious and core features, already aligns perfectly with our framework. As noted in L217-219 (and elaborated further in L585-589 in Appendix C), we assume that both minority and majority groups follow a Gaussian distribution in the *feature space*. This assumption leads to Lemma C.1 (line 590), where the Gaussian distribution for our logits (**not losses**) is derived. Using a Gaussian distribution in the feature space is a common practice due to the technical challenges associated with other distributions. The paper you referenced also assumes a Gaussian distribution for both core and spurious dimensions. So, we believe that our assumption subsumes the one you mentioned.
As mentioned in L215-216, the order of samples in logit space and loss space is monotonic. Therefore, it is unnecessary to focus on the distribution of the loss or assume any specific distribution (Gaussian or anything else) for it.
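The feature-to-logit step (a linear last layer maps a Gaussian feature distribution to a Gaussian logit distribution with closed-form moments) can be sanity-checked with a small simulation; the dimension, covariance, and weights below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
mu = rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T                 # a valid (PSD) covariance matrix
w = rng.normal(size=d)          # last-layer weights (illustrative)
b = 0.5                         # bias

# Gaussian features -> logit L = w.x + b of a linear last layer.
X = rng.multivariate_normal(mu, Sigma, size=200_000)
L = X @ w + b

# A linear map of a Gaussian is Gaussian: N(w.mu + b, w^T Sigma w).
mean_true = w @ mu + b
var_true = w @ Sigma @ w
assert abs(L.mean() - mean_true) < 0.1 * max(1.0, abs(mean_true))
assert abs(L.var() - var_true) < 0.05 * var_true
```

This mirrors the role of Lemma C.1: only the induced one-dimensional logit distribution, not the full feature structure, enters the tail-selection argument.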
# Weakness 4
**”results are missing for datasets such as UrbanCars and MultiNLI.”**
The missing results (depicted by - in Table 1 in the paper) are as follows:
|Method|Dataset|Worst Group Accuracy|Average Accuracy|
|-|:-:|:-:|:-:|
|GDRO|UrbanCars|$73.1_{\pm2.0}$|$84.2_{\pm1.3}$|
|GDRO + EIIL|UrbanCars|$76.5_{\pm2.6}$|$85.4_{\pm2.1}$|
|GDRO + EIIL|MultiNLI|$61.2_{\pm0.5}$|$79.4_{\pm0.2}$|
|JTT|UrbanCars|$79.5_{\pm5.1}$|$86.3_{\pm1.0}$|
**“Only a few baselines are compared for the new dataset in Table 2.”**
Table 2 shows that loss-based sampling and environment-based validation effectively handle unknown spurious correlations, focusing on methods with similar setups (last-layer retraining). We also include the worst group accuracy for a new method (AFR) and its environment-based validation version. The worst group accuracies are:
AFR: $54.27_{\pm2.73}$
AFR + EIIL: $61.54_{\pm1.85}$
**“There is also no convincing and fine-grained analysis (e.g., ablation study) to understand how the proposed approach ensures data balancing and improves group robustness.”**
*Regarding data balancing*:
Figure 2 in the paper shows the proportion of minority/majority samples among those with the highest/lowest loss at various thresholds. The balance level for the chosen $k$ is detailed in the response to Q2 in this rebuttal.
*Regarding group robustness*:
We conducted several ablation studies and analyses to understand how EVaLS works:
- Group shifts under environment inference (Appendix-Table 3)
- Ablations on environment-based validation methods (Sec. E.2) and alternatives to loss-based sampling (Sec. E.1)
- Experiments on benchmarks with single and multiple spurious correlations (e.g., UrbanCars)
- A new dataset (Dominoes-CMF) for studying multiple independent spurious correlations
For further analysis, please let us know.
# Weakness 5
Regarding datasets with multiple spurious correlations, we use UrbanCars (as outlined in L308-312) and also propose the Dominoes-CMF dataset with two independent spurious correlations (Sec. 2.2). Table 2 shows that EVaLS, which doesn’t require group annotation, improves model robustness to unknown spurious correlations compared to more supervised solutions.
Finally, note that EVaLS is a robustification method to unknown spurious correlations that are learnt by a trained model. So, as illustrated in the results of the paper, it improves robustness for various benchmarks with single or multiple spurious correlations.
# Q1
Please refer to the “Distribution of logits for training datasets” section in the general response.
# Q2
The optimal number of balancing samples, $k$, is determined by the worst accuracy on inferred environments. The tables below report the proportion of minority group in selected high-loss samples, as well as the proportion of majority group within the selected low-loss samples for a sample seed:
Waterbirds
|Class 1 (#Min/#High loss)|Class 1 (#Maj/#Low loss)|Class 2 (#Min/#High loss)|Class 2 (#Maj/#Low loss)|
|:-:|:-:|:-:|:-:|
|$44/45$|$38/45$|$41/45$| $42/45$|
Class 2 of CelebA
|#Min/#High loss|#Maj/#Low loss|
|:-:|:-:|
|$58/250$| $249/250$|
# Q3
Refer to L313-315. The CivilComments dataset has class imbalance, so fine-tuning/retraining on class-balanced data can significantly improve group robustness. EVaLS-GL’s effectiveness is due to using a class-balanced dataset for retraining, as also noted in reference [2] (see L553-554).
# Q4 and Q5
Thanks for informing us about the typos. You are right. We will correct them in the final version.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thanks for providing the detailed responses including experimental results, pseudo code, useful statistics, etc.
W3: To clarify, what I meant was that the proposed approach utilizes “**loss**-based sampling” (class-dependent), but the analysis in proposition 3.1 is based on the **logits** (class-independent). It’s perfectly fine to assume Gaussian distribution in the features space and hence logit space, but when the loss is involved, it becomes a bit more complicated. This is because now we have different classes even for logistic regression. The logistic loss is monotonic for each class but when the label flips it’s no longer the same function. As assumed there are two Gaussian distributions for the **logits**, for binary classification, there will be four resulting distributions on the **losses**, because there are two transformations $-log(\sigma(L))$ and $-log(1-\sigma(L))$. Does the analysis for logits still hold in this scenario with losses?
Furthermore, I referenced [1] because they considered two feature dimensions (core and spurious), the majority/minority groups are clearly defined as spuriously-correlated/non-spuriously-correlated w.r.t. the features and labels. When you reduce the analytical setup to majority and minority distributions, the relationship with spurious correlation becomes blurred. That’s why I mentioned it as a limitation.
W4: Thanks for filling in the missing results. Please also include the few baselines pointed out by reviewer 7E4z to make the empirical comparison more comprehensive, but just to clarify why CivilComments and MNLI are out of the scope of EVaLS?
---
Reply to Comment 1.1.1:
Title: Response to Reviewer RE7s
Comment: Thank you for considering our rebuttal and paying special attention to the theoretical aspects of our work. We have tried to answer all your concerns in our response and hope you find them helpful. Please let us know if there is anything that needs to be clarified, as we find these discussions very helpful.
**W3**: We would like to thank you for clarifying your arguments. We think we now understand better what you meant.
We should emphasize that **both** the loss-based sampling and the theoretical analysis are **class**-dependent. We state that our theoretical arguments are class-dependent in L215-218, and also L584-587 in Appendix C. Consider the setting we have assumed for the classification task and the loss function (L214 in Sec. 3.3, L583 in Appendix C). As you have stated, in the case of logistic loss (sigmoid function), loss values are calculated as $-log(\sigma(L))$ and $-log(1-\sigma(L))$ (one for each class). Therefore, the order of samples in logit and loss spaces is monotonic (L215, L584) within **each class**. The only difference between the two classes is the direction of the monotonicity. But as our analysis is **class**-dependent, there is no concern about applying two different transformations, as the transformations themselves are **class**-dependent.
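This per-class monotonicity can be verified numerically (a minimal sketch, independent of our codebase; $\sigma$ is the sigmoid):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

logits = np.linspace(-5.0, 5.0, 201)
loss_pos = -np.log(sigmoid(logits))        # class y = 1: decreasing in logit
loss_neg = -np.log(1.0 - sigmoid(logits))  # class y = 0: increasing in logit

# Within each class the logistic loss is strictly monotonic in the logit,
# so ranking samples by loss equals ranking them by logit; only the
# direction of the ranking flips between the two classes.
assert np.all(np.diff(loss_pos) < 0)
assert np.all(np.diff(loss_neg) > 0)
```

Hence sorting per-class losses is equivalent to sorting per-class logits, which is why the logit-space analysis transfers to loss-based sampling.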
Please note that our theoretical analysis aims to show that loss-based sampling can create a balanced dataset. Focusing on logit space is sufficient to demonstrate sampling thresholds. Consequently, as seen in Proposition 3.1 (and C.1), we assume that we have $1-\epsilon$ spurious correlation, and minority (proportion $\epsilon$) and majority (proportion $1-\epsilon$) samples in each class are drawn from different Gaussian distributions in the feature and hence logit spaces (as you have noted). Additional assumptions about the feature space structure (as in the paper [1] you mentioned) are more restrictive. We do not see how assuming a more structured feature space would further develop the theory, since our current assumptions are strictly less restrictive.
**W4**: Thank you. As shown in our response to reviewer 7E4z, while the mentioned methods target different subpopulation shifts or have higher supervision, our method outperforms them in almost all experiments (the only exception is Waterbirds, where some have higher WGA but lower average accuracy). We will include the new baseline results in the final paper.
As stated in Sec. 2.1 (L112-L123), there exist multiple types of subpopulation shifts (see our response to reviewer 7E4z for class imbalance). Refer to reference [4] in the paper for a categorization of subpopulation shifts and their formulation (Table 1 in their paper). Our focus in designing environment-based validation (EV) is on spurious correlation. The types of subpopulation shifts in CivilComments and MultiNLI are class imbalance and attribute imbalance, respectively (see the definition of these datasets in Appendix D.3, L714-725).
Regarding your question, please refer to L316-324. As explained there, patterns distinguishing groups in CivilComments and MultiNLI are not predictive for the target class (see tables below). This reduces their visibility in the model’s final layers (reference [29] in the paper discusses the role of various layers of neural networks in different types of shifts). Since environment inference algorithms like EIIL (see Appendix E.2 for further investigation) depend on the last layers of a trained model, they cannot infer environments with notable group shifts (defined in L195-198) in CivilComments and MultiNLI. The group shifts in CivilComments and MultiNLI (L323-324) are significantly lower than those of the datasets that are reported in Table 3. Thus, the focus of environment-based validation is on datasets with spurious correlation.
Nevertheless, loss-based sampling (LS) is effective for CivilComments and MultiNLI. Our EVaLS-GL, using ground-truth group labels for model selection and loss-based sampling for retraining, outperforms all other methods on CivilComments and those with similar group supervision on MultiNLI. Only GDRO, with full group annotations during training, performs better on MultiNLI.
**CivilComments and MultiNLI training sets statistics**
CivilComments:
|Group |Class|Attribute |# Train Data|
|-|:-:|:-:|:-:|
|$G_1$|0|No Identities|148186 (55%)|
|$G_2$|0|Has Identities|90337 (33%)|
|$G_3$|1|No Identities|12731 (5%)|
|$G_4$|1|Has Identities|17784 (7%)|
MultiNLI:
|Group|Class|Attribute|# Train Data|
|-|:-:|:-:|:-:|
|$G_1$|0|No Negations|57498 (28%)|
|$G_2$|0|Has Negations|11158 (5%)|
|$G_3$|1|No Negations|67376 (32%)|
|$G_4$|1|Has Negations|1521 (1%)|
|$G_5$|2|No Negations|66630 (32%)|
|$G_6$|2|Has Negations|1992 (1%)|
---
Rebuttal 2:
Comment: Dear Reviewer RE7s,
We’re glad our clarification was helpful. It’s good to know that the pseudo-code and highlights have been effective in preventing confusion. We appreciate your suggestion to include them in the final version and will incorporate them.
### Regarding the limitations that you have stated
As you mentioned, we state in the Discussion section (L356-357) that environment-based validation is limited to spurious correlation and not other types of subpopulation shifts.
### Regarding SELF [2]
Please refer to the response to Reviewer 7E4z. As we described there and also in Appendix A-Related Work (L547-554), reference [2] in the paper proposes two methods:
*Class-balanced (CB) last layer retraining*: See L552-554. As we discuss comprehensively in the response to Reviewer 7E4z, this method is not helpful in targeting robustness to spurious correlation, but *it improves robustness to class-imbalance, another type of subpopulation shift which is independent of spurious correlation*. For details, refer to the formulations in the response to Reviewer 7E4z.
*Early-stop (ES) disagreement SELF*: As noted in the response to Reviewer 7E4z, ES disagreement requires group annotations for model selection, but not for retraining (L547-551). It also needs an early-stopped version of the trained ERM and the final model.
In the following table, we restate the worst group accuracy for our methods and those of SELF on spurious correlation benchmarks. We also add the results of CB last-layer retraining and ES disagreement SELF for Dominoes-CMF.
|Dataset|Group Info (Train/Val)|Waterbirds|CelebA|UrbanCars|Dominoes-CMF|
|-|-|-|-|-|-|
|ES disagreement (SELF)|$\text{x}/\checkmark$|$93.0_{\pm0.3}$|$83.9_{\pm0.9}$|$82.1_{\pm1.8}$|$60.5_{\pm1.3}$|
|EVaLS-GL (ours)|$\text{x}/\checkmark$|$89.4_{\pm0.3}$|$84.6_{\pm1.6}$|$82.3_{\pm1.2}$|$63.6_{\pm1.3}$|
|CB last-layer retraining (SELF)|$\text{x}/\text{x}$|$92.6_{\pm0.8}$|$73.7_{\pm2.8}$|$21.9_{\pm13.0}$|$53.3_{\pm5.0}$|
|EVaLS (ours)|$\text{x}/\text{x}$|$88.4_{\pm3.1}$|$85.3_{\pm0.4}$|$82.1_{\pm0.9}$|$67.1_{\pm4.2}$|
|ERM|$\text{x}/\text{x}$|$66.4_{\pm2.3}$|$47.4_{\pm2.3}$|$18.7_{\pm2.0}$|$50.6_{\pm1.0}$|
*Waterbirds*: SELF’s methods outperform EVaLS. As SELF illustrates in their Sec. 3 (Preliminaries), since CB last-layer retraining is performed on validation data, which is group-balanced for Waterbirds, CB last-layer retraining is effectively group-balanced for Waterbirds. When the retraining dataset is not group-balanced, CB last-layer retraining achieves a worst group accuracy of $77.4_{\pm0.3}$ (Table 7 in reference [2] of the paper (SELF)).
*CelebA*: As you can see, ***EVaLS and EVaLS-GL outperform both CB last-layer retraining and ES disagreement***. We believe there may have been an oversight in your response, as it incorrectly suggests their method outperforms ours. CB last-layer retraining underperforms all other benchmarked methods in Table 1.
*UrbanCars*: EVaLS-GL outperforms all other methods. Both EVaLS and ES disagreement (with higher group supervision) show similar worst group accuracy. CB last-layer retraining shows no significant improvement over ERM. This is expected, as UrbanCars’ training data is class-balanced, unlike Waterbirds and CelebA. Therefore, further class-balancing for CB last-layer retraining makes no difference.
*Dominoes-CMF*:
EVaLS outperforms other methods. EVaLS-GL also outperforms methods in SELF [2]. CB last-layer retraining does not show any significant improvement compared to ERM. This is expected, as Dominoes-CMF also has class-balanced training data like UrbanCars.
#### Conclusion:
CB last-layer retraining targets class-imbalance shifts and is ineffective for spurious correlation. Except for the Waterbirds dataset, our methods (EVaLS and EVaLS-GL) outperform those of SELF [2] with similar group supervision on all other datasets. EVaLS shows comparable or better results (***e.g., in CelebA***) compared to methods with higher group supervision. See Table 1 in the paper for more comparisons.
---
Rebuttal Comment 2.1:
Title: Evaluation on CivilComments Dataset
Comment: Previously in Q3, I asked about why the performance of the proposed approach is much higher for the CivilComments dataset.
The author responded that "The CivilComments dataset has class imbalance, so fine-tuning/retraining on class-balanced data can significantly improve group robustness."
However, as I looked closely into the details, that might not be the case. In SELF[2], the worst-group accuracy is calculated over **4** groups by aggregating all identity categories into one spurious feature. In other benchmarks such as JTT, DFR, and AFR, the worst-group accuracy is calculated over **16** groups (8 attributes and 2 labels). Therefore, the result of EVaLS-GL in CivilComments column might not be directly comparable to each other except with SELF (added later) and the ERM/DFR implemented by SELF. Please look into this carefully and correct/rerun the misleading results if necessary.
Relevant public GitHub repos:
https://github.com/tmlabonte/last-layer-retraining
https://github.com/AndPotap/afr
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer RE7s,
We deeply appreciate you pointing this out. Our setting follows the practice in the GitHub repository `izmailovpavel/spurious_feature_learning` in `datasets.py`, which is referenced in the official implementation of DFR as an extension of their code functionality. Both (line 266 in their code and line 129 in `data/civilcomments.py` in our code) collapse the grouping into “Has Identities”/“No Identities” in each class (4 groups in total). We realize that all the methods reporting ~79-80% worst group accuracy on CivilComments (like the class-balanced schemes suggested by Reviewer 7E4z and the previously reported results for EVaLS-GL) use the 4-group setting of CivilComments, while the methods that, as you mentioned, report on the 16-group setting obtain around ~69-70% worst group accuracy.
We carefully considered your notice and reran the experiment of EVaLS-GL on the CivilComments dataset with 16 groups. **EVaLS-GL on CivilComments with 16 groups has a worst group accuracy and average accuracy (in parentheses) of $\boldsymbol{68.0_{\pm0.5}(89.2_{\pm0.3})}$%**. Evaluating ERM on worst-group accuracy for 16 groups also resulted in $56.3_{\pm4.8}$ (the average accuracy does not differ from the 4-group setting).
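To make the 4-group vs. 16-group distinction concrete, here is a minimal sketch (ours, not from either codebase) of how worst-group accuracy is computed once groups are defined as (class, attribute) pairs; collapsing the attribute to a binary “has identity” flag yields the 4-group evaluation, while keeping all identity attributes yields the 16-group one.

```python
import numpy as np

def worst_group_accuracy(preds, labels, attrs):
    # Groups are (class label, attribute) pairs; WGA is the minimum
    # per-group accuracy over all groups present in the data.
    groups = set(zip(labels.tolist(), attrs.tolist()))
    accs = {g: np.mean(preds[(labels == g[0]) & (attrs == g[1])] == g[0])
            for g in groups}
    return min(accs.values()), accs

# Toy example: 2 classes x 2 attributes = 4 groups.
preds  = np.array([0, 0, 1, 1, 1, 0])
labels = np.array([0, 0, 1, 1, 0, 1])
attrs  = np.array([0, 1, 0, 1, 0, 1])

wga, per_group = worst_group_accuracy(preds, labels, attrs)
# Groups (0,0) and (1,1) each have one misclassified sample here,
# so the worst-group accuracy is 0.5.
```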
We should restate that, as mentioned by previous works (reference [4]-Table 2) and as seen in the following table, CivilComments is not an instance of a dataset with spurious correlation (L313). In other words, no identity attribute in CivilComments is predictive for any of the labels (see L316-318). However, loss-based sampling with ground-truth group annotations (EVaLS-GL) still significantly improves worst group accuracy compared to initial ERM.
Proportion of attributes in each class for CivilComments dataset:
|Toxicity (Class)|Male|Female|LGBTQ|Christian|Muslim|Other Religions|Black|White|
|-|-|-|-|-|-|-|-|-|
|0|0.11|0.12|0.03|0.10|0.05|0.02|0.03|0.05|
|1|0.14|0.15|0.08|0.08|0.10|0.03|0.1|0.14|
### Conclusion
EVaLS-GL results in $68.0_{\pm0.5}$% worst group accuracy when CivilComments is examined with 16 groups (identity attributes in the table above in each class), and $80.5_{\pm0.4}$% when it is examined with 4 groups (“Has Identities”/”No Identities” attributes in each class).
We acknowledge the discrepancy in the reported results of EVaLS-GL on the CivilComments dataset and appreciate your feedback. We hope the above explanations provide clarity for a thorough comparison. We will incorporate this discussion regarding the EVaLS-GL results on CivilComments into the final version of the paper.
Finally, we again greatly appreciate your attention to this matter.
---
Below, we present the updated main table (Table 1) of the paper, including the results you requested to be added to it in this rebuttal thread for a more comprehensive comparison. If a method’s worst group accuracy is better than others with a similar or weaker level of assumption about the availability of group annotations during training/validation phases, it is underlined. Worst group accuracies are bolded if a method outperforms all other methods. SC stands for spurious correlation datasets. Results for CivilComments are calculated for the 16-group setting (∗ denotes results for the 4-group setting). Other notations are similar to the main table. We would like to remind you that environment-based validation only works for spurious correlation subpopulation shift (L316-324).
|Method|Group Info (Train/Val)|Waterbirds (SC)|CelebA (SC)|UrbanCars (SC)|CivilComments|MultiNLI|
|-|-|-|-|-|-|-|
|GDRO|$\checkmark/\checkmark$|$91.4$|$\underline{\boldsymbol{88.9}}$|$73.1_{\pm 2.0}$|$69.9$|$\underline{\boldsymbol{77.7}}$|
|DFR|$\text{x}/\checkmark\checkmark$|$92.9_{\pm0.2}$|$88.3_{\pm1.1}$|$79.6_{\pm2.2}$|$\underline{\boldsymbol{70.1_{\pm0.8}}}$|$74.7_{\pm0.7}$|
|GDRO+EIIL|$\text{x}/\checkmark$|$77.2_{\pm1.0}$|$81.7_{\pm0.8}$|$76.5_{\pm2.6}$|$67.0_{\pm2.4}$|$61.2_{\pm0.5}$|
|ES disagreement SELF|$\text{x}/\checkmark$|$\underline{\boldsymbol{93.0_{\pm0.3}}}$|$83.9_{\pm0.9}$|$82.1_{\pm1.8}$|$79.1_{\pm2.1}^*$|$70.7_{\pm2.5}$|
|JTT|$\text{x}/\checkmark$|$86.7$|$81.1$|$79.5_{\pm 5.1}$|$\underline{69.3}$|$72.6$|
|AFR|$\text{x}/\checkmark$|$90.4_{\pm1.1}$|$82.0_{\pm0.5}$|$80.2_{\pm2.0}$|$68.7_{\pm0.6}$|$73.4_{\pm0.6}$|
|EVaLS-GL (Ours)|$\text{x}/\checkmark$|$89.4_{\pm0.3}$|$84.6_{\pm1.6}$|$\underline{\boldsymbol{82.3_{\pm1.2}}}$|$68.0_{\pm0.5}$/$80.5_{\pm0.4}^*$|$\underline{75.1_{\pm1.2}}$|
|EVaLS (Ours)|$\text{x}/\text{x}$|$88.4_{\pm3.1}$|$\underline{85.3_{\pm0.4}}$|$\underline{82.1_{\pm1.0}}$|Not SC|Not SC|
|Class-balanced last-layer retraining|$\text{x}/\text{x}$|$\underline{92.6_{\pm0.8}}$|$73.7_{\pm2.8}$|$21.9_{\pm13.0}$|$80.4_{\pm0.8}^*$|$64.7_{\pm1.1}$|
|ERM|$\text{x}/\text{x}$|$66.4_{\pm2.3}$|$47.4_{\pm2.3}$|$18.7_{\pm2.0}$|$56.3_{\pm4.8}$|$64.8_{\pm1.9}$|
---
Rebuttal 1:
Rebuttal: We are thankful for the time and consideration that reviewers have dedicated to reviewing our work. Our work is in continuation of numerous efforts towards annotation-free group robustness (see Appendix A, L538-L549). The main contributions are as follows:
1. **Environment-based validation drops the requirement of group annotation for model selection**: RE7s (strength 3), SN9B (summary), and 7E4z (summary) point out that we use environment inference methods to achieve environments with group shifts. These environments are used for model selection via worst environment accuracy (WEA) (as described in Sec. 3.2 and Figure 1-b,d). As stated in the abstract (L15-24) and introduction (L64-69), and as our results show, for the first time, we observe that using environment inference methods to achieve environments with group shifts (as illustrated in the Appendix, Table 3) suffices for model selection to mitigate spurious correlation. Thus, we drop the requirement for the availability of group annotations for model selection, which, as we discuss in Appendix A (L549-551) is a limitation of previous efforts.
2. **Enhancing the robustness of trained models to unknown spurious correlations they rely on**: Real-world cases often involve unknown (unlabeled) spurious correlations (Abstract, L6-9), which previous methods have not adequately addressed (refer to Appendix, L538-551). Although EVaLS does not require group annotations, results show its robustness against learned shortcuts in scenarios with unknown spurious correlations. It is also effective in settings like the UrbanCars dataset, which has multiple spurious correlations that are partially known or completely unknown (SN9B, strength 2). We also propose Dominoes-CMF (Sec. 2.2) as a benchmark featuring two independent spurious patterns, one of which is unknown (RE7s, summary). Notably, EVaLS, even without group supervision, achieves a higher worst group accuracy compared to methods that rely on higher levels of group supervision for the known pattern (AFR, DFR, EVaLS-GL).
3. **Providing theoretical explanations and guarantees regarding loss-based sample selection for the retraining phase**: As stated in the Introduction (L46-51), while it has been known in the literature that loss is a plausible proxy for detecting minority samples, we demonstrate (in Sec. 3.3 and more comprehensively in Appendix C) for the first time, that there are conditions for the effectiveness of loss-based sampling in mitigating spurious correlations. RE7s (strength 2) and 7E4z (strength 3) point out the analysis as a strength of our work. The insights gained from these theoretical findings help us derive loss-based sampling as an effective balancing method.
We are glad that reviewers find the paper well-written (RE7s (strength 1), SN9B (strength 1)), well-structured and presented in a clear and organized manner (7E4z (strength 1)), with a set of rich experiments and necessary theoretical explanations (SN9B (strength 1)), and also find EVaLS simple (7E4z (strength 1)), effective (7E4z (strength 1)), and efficient (RE7s (strength 4)).
## Missing Results
The missing results of Table 1 and 2 are completed (see the answer to RE7s) and will be reported in the revised version.
## Loss-Based Sampling and Model Selection Pseudocode
**Input:** Held-out dataset `D`, ERM-trained model `f_ERM`, maximum `k` value `maxK`
**Output:** Optimal number of samples `k*`, Best model `f*`
1. Split the held-out dataset `D` into train and validation:
- `D_MS, D_LL = splitDataset(D)`
2. Infer environments from the validation split:
- `envs = inferEnvs(D_MS)`
3. Sort `D_LL` samples by their loss:
- `sortedSamples = sortByLoss(f_ERM, D_LL)`
4. Initialize `wea* = 0, k*= 0, f*= None`
5. For `k` from 1 to `maxK`:
- Select top-`k` high-loss samples:
- `highLossSamples = sortedSamples[:k]`
- Select top-`k` low-loss samples:
- `lowLossSamples = sortedSamples[-k:]`
- Combine samples:
- `selectedSamples = {highLossSamples, lowLossSamples}`
- Retrain the last layer:
- `f = retrainLastLayer(selectedSamples)`
- Evaluate the retrained model:
- `wea = evaluateWEA(f, envs)`
- If `wea > wea*`:
- `wea* = wea`
- `f* = f`
- `k* = k`
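For illustration only, the loop above can be sketched end-to-end in NumPy on synthetic data. Every helper and the toy data below are our own stand-ins (a minimal logistic-regression trainer for last-layer retraining, random pseudo-environments in place of an environment-inference step), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logreg(X, y, lr=0.1, steps=500):
    # Minimal logistic-regression trainer, standing in for last-layer retraining.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

# Synthetic stand-ins: frozen-ERM embeddings and binary labels.
X = rng.normal(size=(400, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(float)

# Split held-out data into a model-selection part and a retraining part.
X_ms, y_ms, X_ll, y_ll = X[:200], y[:200], X[200:], y[200:]
envs = rng.integers(0, 2, size=200)  # placeholder inferred environments

# Rank the retraining split by the per-sample loss of an initial ERM model.
w_erm = fit_logreg(X_ll, y_ll)
p = 1.0 / (1.0 + np.exp(-(X_ll @ w_erm)))
loss = -np.log(np.where(y_ll == 1, p, 1 - p) + 1e-12)
order = np.argsort(loss)  # ascending loss

best_wea, best_k, best_w = -1.0, 0, None
for k in range(10, 101, 10):
    # Combine the k lowest-loss and k highest-loss samples, retrain,
    # and validate via worst-environment accuracy (WEA).
    idx = np.concatenate([order[:k], order[-k:]])
    w = fit_logreg(X_ll[idx], y_ll[idx])
    wea = min(accuracy(w, X_ms[envs == e], y_ms[envs == e]) for e in (0, 1))
    if wea > best_wea:
        best_wea, best_k, best_w = wea, k, w
```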
## Sensitivity to $k$ and $l_1$
The set of hyperparameters used can be found in Appendix D.4. Note that the parameters $k$ (number of selected samples from each loss tail) and $\lambda$ (the $l_1$ regularization factor) are selected automatically by the environment-based validation scheme described in the work.
You can see heatmaps for the sensitivity of the worst validation group accuracy (WGA) of our method to $k$ and $\lambda$. The differences between the lowest and highest WGA among all combinations for Waterbirds, CelebA, and UrbanCars are around 10%, 16%, and 25%, respectively.
## Regarding Figure 2 in the paper
Note that Figure 2 illustrates the percentage of samples that are indeed minority or majority for various thresholds of x% of samples with the highest/lowest loss. In particular, if n% of the samples within the x% with the highest/lowest loss belong to the minority/majority groups of a class, the remaining (100-n)% belong to the majority/minority groups of that class.
## Distribution of logits
Below, we report characteristics of the distribution of our datasets in the logit space for Waterbirds and CelebA. We use the Earth Mover's Distance on logits (which share the same units) to quantify the distance between groups. We also report the per-group mean and std so that the Earth Mover's Distance can be interpreted.
| | Waterbirds | |CelebA|
|:-:|:-:|:-:|:-:|
| |Class 1|Class 2|Class 2|
| |Min/Maj|Min/Maj|Min/Maj|
|Mean|$-6.77$/$-19.17$|$2.55/11.39$|$-1.02$/$6.42$|
|STD|$6.31$/$6.23$| $6.97$/$4.75$ | $7.64$/$6.48$|
|Earth Mover’s Distance|$12.40$|$8.84$|$7.43$|
In addition to the table, the overall distribution of logits per group of Waterbirds and CelebA datasets is available in Figure 2 of the attached PDF.
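As a sanity check of the reported distances, the following small NumPy sketch (ours, illustrative only) recovers the Waterbirds class-1 value from the Gaussian summary statistics in the table; for two equal-size 1-D samples, the Earth Mover's (1-Wasserstein) distance is the mean gap between their sorted values, which for Gaussians with nearly equal std is close to the difference of means.

```python
import numpy as np

rng = np.random.default_rng(0)

def emd_1d(u, v):
    # 1-D Earth Mover's distance between two equal-size samples:
    # average absolute gap between the sorted values.
    return np.mean(np.abs(np.sort(u) - np.sort(v)))

# Gaussian logit samples using the Waterbirds class-1 minority/majority
# means and stds from the table above.
minority = rng.normal(-6.77, 6.31, size=20000)
majority = rng.normal(-19.17, 6.23, size=20000)

d = emd_1d(minority, majority)  # close to the mean gap of 12.40
```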
Pdf: /pdf/a838883e6665b2b7ddf0136fde8e91f0529f351e.pdf
Source: NeurIPS_2024_submissions_huggingface, 2024
---
Title: Algorithmic Collective Action in Recommender Systems: Promoting Songs by Reordering Playlists
Decision: Accept (poster)
Summary: This paper explores the possibility of boosting the recommendation of a song in an automatic playlist continuation (APC) system through a collective strategy of adding the song to the training playlists of the APC at a specific position. The paper shows that adopting a strategy that targets low-frequency contexts makes it possible to very significantly boost the exposure of the song in the output of the APC.
Strengths: - Very interesting idea which is novel in the domain of music recommendation
- Quite surprising experimental result that is worth sharing (the possibility of boosting the exposure of a song with the DirLoF strategy).
Weaknesses: - Limited scope:
- The idea is tested in the limited context of automatic playlist continuation and is likely not adaptable to broader applications.
- Only one APC model was tested, so the results may be specific to this model. It would have been a good idea to test the impact of the collective action on other models (such as the baseline proposed in the reference paper for the APC model). Also, it’s unclear whether the effect would be robust to hyperparameter changes in the APC model.
- the method is tested in a static context (not in the real world), so the actual impact of the method on usage (for instance, whether a user exposed through the APC to a song boosted by the collective strategy would listen to it or not) is not tested (this would require access to the production of a streaming music service, though)
- Very limited insights on the design of the efficient strategy (DirLof) and why it works. The presentation of the strategy is actually quite unclear, while it’s central to the paper.
- There may be other, simpler baselines worth testing. The paper shows that inserting the song at the end of the playlists performs worse than inserting it randomly, which suggests that inserting it earlier in the playlist sequence may help. So a baseline that always inserts the song at the beginning of the sequence could have decent results (the very first position is likely to be avoided, as the song would then never appear as the target during training of the APC, but early positions that regularly appear as targets in training could be considered).
Technical Quality: 3
Clarity: 4
Questions for Authors: - Could the authors rephrase/clarify the definition of the DirLoF strategy and the rationale behind it? Something very surprising is that “the collective selects the anchor songs s0 with the smallest overall song frequency”. But the smallest song frequencies likely belong to songs appearing only once among playlists, and these songs are then mostly noise, so it is hard to see why those anchors would help improve recommendation of the inserted song. Also, statistics on songs with low frequency are likely much more difficult to obtain (the statistics will be much less reliable than for popular songs).
- It’s very unclear what the distributions over playlists are (P, P0, P*). It seems it’s almost used as a synonym of dataset, and the expectations are actually just empirical expectations. I’d like the author to clarify this point and to explain why it’s necessary to introduce such notations.
- In table 2, it’s unclear how statistics are estimated when there is no training data to compute them from (while there is still a significant amplification).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 1
Limitations: - As mentioned in the weaknesses, there are other simple baselines that would be worth testing. That would help support the claim that the specially designed DirLof strategy is actually efficient.
- Also the authors should comment on the possible transferability of their results to other APC models, and to modifications of hyperparameters of the model.
On the ethical side, the method is a bit too much presented as a way for artists to attack a recommender system, somewhat rerank their songs and make money out of it, which has some important ethical implications in terms of fairness among artists: As the payment system of most streaming services is based on subscription (so a fixed amount of money), artificially boosting an artist comes at the detriment of other artists.
I would then rather avoid this message of "opportunity for artists" in the paper (and even in the title) and turn it instead as a warning to music streaming platforms that recommender systems may lack robustness to attacks and that they should be aware of it and take action. I think presenting the paper in this second way would solve this ethical issue, but the current message is, to me, ethically borderline.
Minor comments
- The authors claim that “most large platforms have shifted from relying on collaborative-based models for APC to building deep learning based recommenders …” but 1) the references are only Spotify-based and 2) the references are research papers, which doesn’t mean that the shift actually happened or that platforms no longer use collaborative-based models as part of their complex architectures.
- In equation (1):
- The authors likely want to specify that the recommended song shouldn’t be part of the playlist, i.e., S’ **∩** p = **∅** in the argmax
- The argmax actually corresponds to the top K elements in terms of similarity, which is quite straightforward. But the sum in the equation makes it not very clear at first read. It could be worth it to state it explicitly before the equation.
- There are several notations that use the letter h (song embedding, playlist embedding, playlist mapping). I think using different letters may help make things clearer.
- In Figure 3, it seems like s* is substituted for s_i (there is no s_i in (b)), while just before equation 4 it’s said that only insertions are considered.
- “*Existing data poisoning techniques, for example, operate in different settings, not complying with Constraint 1 and typically assuming white-box access to the model or involving test-time profile manipulations.*” ⇒ this needs references.
- *“Thus, collective action can make a tremendous difference for these artists: suppose an artist’s song is streamed 10,000 times, yielding a revenue for the artist of $40 for royalties of $0.004 per stream [32]; an amplification of 25 would increase this revenue to $1,000.”* I understand this is purely illustrative, but the figures won’t reflect reality much. First, $0.004 per stream is an average that depends on the platform, but also on the user's type of subscription and on the user's country, with quite important variations. Second, several platforms (Spotify, Deezer…) are shifting to a payment model where not all streams have the same value; in particular, recommended streams are “discounted”, and some “real” artists may get a boost. Once again, I get that it’s for illustrative purposes, but it brings confusion about how the music payment system works and should therefore be avoided in my opinion.
- *“Moreover, when considering s* as a relevant recommendation, collective action even enhances the system’s performance.”* Isn’t this completely obvious? If the presentation of s* is boosted (which happens, given the amplification), other recommendations are barely affected (as shown in Figure 6, solid line), and s* is considered a valid recommendation, then recommendation metrics will surely increase, won’t they?
Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the careful reading and the detailed feedback. We have performed the additional investigations in response to your suggestions. Let us elaborate on the individual comments below.
**Other baselines.** We have implemented baselines where the song is placed in earlier positions in the playlist. The resulting amplification is shown in Table 1 in the supplementary PDF. None of the simple baselines is better than random.
**Robustness to hyperparameters.** We have modified several hyperparameters of the Deezer model (number of attention heads, learning rate, dropout rate, weight decay). The results, presented in Table 3 of the supplementary PDF, show that the DirLoF strategy consistently outperforms the random strategy across all configurations.
These results are not surprising, as the strategy builds on a generalizable statistical intuition rather than specifics of the model architecture. This makes the strategies robust to model configurations, provided the model approximates the conditional probabilities in the training data sufficiently well.
**Rationale behind DirLoF.** A sequential recommender is trained to predict the K most likely songs to follow a given seed context. The lever that DirLoF exploits is that certain contexts are overrepresented in the playlists the collective controls. Taking the random baseline as a reference, Fig. 4 shows that for $\alpha<0.001$, randomly inserting $s^*$ is unlikely to lead to recommendations at test time. But by concentrating effort on overrepresented contexts the collective can meet the threshold more often. To identify overrepresented contexts, DirLoF uses that certain songs with global frequency $< \alpha$ still appear in the collective. DirLoF targets contexts that end on such overrepresented songs.
As you mentioned, DirLoF relies on global frequency estimates of songs in the overall training data, and estimating small frequencies from few data points is hard in general. To address this, we have implemented the strategy with only approximate statistics, and we find that it is still effective; knowing only 10% of the data is sufficient to achieve decent gains (see Fig. 12 in the Appendix). Even scraping public song statistics that do not match the year of the data (we can only scrape current statistics, e.g., total number of streams) provides some useful signal. This convinced us that it is a worthwhile and feasible approach for small collectives to pursue.
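A toy sketch of this anchor-selection intuition (the data and all names below are ours, not the paper's code): among the songs appearing in the collective's playlists, candidate anchors are those whose global frequency is below the participation fraction alpha, i.e., songs that are rare overall yet overrepresented in the playlists the collective controls.

```python
from collections import Counter

alpha = 0.2  # fraction of playlists the collective controls (toy value)
all_playlists = [
    ["a", "b"], ["a", "c"], ["b", "c"], ["a", "d"], ["e", "d"],
    ["a", "b"], ["c", "b"], ["a", "e"], ["b", "d"], ["a", "z"],
]
# Suppose the collective controls these three playlists.
collective = [all_playlists[0], all_playlists[1], all_playlists[9]]

n = len(all_playlists)
# Global song frequency = fraction of playlists containing each song.
global_freq = Counter(s for p in all_playlists for s in set(p))
collective_songs = {s for p in collective for s in p}

# Candidate anchors: rare overall (frequency < alpha) but present in the
# collective, hence overrepresented in the playlists it controls.
anchors = sorted(s for s in collective_songs if global_freq[s] / n < alpha)
# Here song "z" appears in 1 of 10 playlists (0.1 < alpha), so it is
# selected, while "a", "b", "c" are too frequent globally.
```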
**Notation.** The notation ($P$,$P^*$,$P_0$) serves to frame our work within the original framework of algorithmic collective action by Hardt et al. 2023. There is a dataset $D$ of $N$ playlists sampled from a universe of playlists $P_0$, and a fraction $\alpha$ for these playlists is modified according to the strategy $h$. As a result, the platform does not observe samples from the base distribution, but from a mixture $P$. Training is done using empirical expectations. At test time, the model is evaluated against unseen samples from $P_0$. We will make this more clear in the write-up.
**Table 2.** We show average recommendations of (anchor) songs at test time and the difference in recommendations ($\Delta R$) with vs. without collective action. These statistics are computed based on results from 5 different train-test fold splits. For each fold, we report the mean and the 95% confidence intervals.
Questions related to framing and fairness:
The focus of algorithmic collective action is to demonstrate that participants can exert control over the algorithm a platform deploys through strategic data reporting. This is a powerful message because it gives users a lever in an otherwise highly imbalanced power relationship. As the title suggests, we show how ‘participants can promote songs through sequence reordering’.
As you point out, how this lever is used can lead to different solutions. The redistribution of recommendations is inherently zero-sum among artists due to the limited budget of recommendations. But it is worth noting that this applies to any updates to a recommendation algorithm, as well as features like ‘Showcase’ on Spotify where one can pay for recommendations.
In an ideal world, this can be used for bottom-up unfairness mitigation in recommender systems, however, a positive effect on overall welfare is by no means guaranteed. But it is not justified to assume that every update a platform implements is beneficial and ethical, or that every update participants promote is harmful. Thus, we think it is valuable to take an approach complementing much of the existing literature and present collective action as a potential opportunity, rather than only a threat, as users often pursue legit interests. Understanding how interactions can be designed to promote positive change while suppressing exploitation is a very intriguing question. What our work contributes is empirical evidence that the fully adversarial model is not sufficiently nuanced to address this question.
**Collective action increases system performance.** This is only obvious once you take the perspective of participants being economic agents pursuing their own legitimate interests. If incentives were completely misaligned such a gain would not be possible.
**Example about revenue.** The purpose of this example is to illustrate how recommendations relate to monetary value. A study of collective action is incomplete without discussing these incentives, and it is hard to deny that more recommendations imply more revenue. But we will make this point without explicit examples to avoid misunderstanding.
We hope to have convinced the reviewer of the robustness of our proposed strategies and the valuable perspective for the NeurIPS community. In any case, we will dedicate a separate section in the final version to expand on the results in Appendix D.4 and discuss effects on other artists in more detail, making negative externalities and the need for more work very clear. Thank you for the feedback!
---
Rebuttal 2:
Title: Acknowledgement
Comment: I'd like to thank the authors for the very detailed answer to my review. Most of my concerns are now addressed.
However, I have to disagree on one of the most important ones: presenting the proposed method as a way to empower users to support their artist is, to me, oversimplistic and a bit disconnected from the reality of streaming services.
Collective actions (not linked to recommendation) to divert revenue has been widely used in the music streaming industry, first by legit artists (see the Sleepify album by vulfpeck) but then (much more widely) by fraudsters. Both cases are unfair because they trick the payment system, but it's obviously even worse when done by fraudsters.
The proposed method also amounts to tricking royalty payments through the recommender system, which raises important fairness issues. It is obviously also open to fraudsters (who usually control many accounts) and could likewise be used by major labels (which have strong marketing power) to divert revenue.
I therefore think the work has critical ethical implications that must be discussed and acknowledged, and that the positioning of the proposed method as an opportunity is problematic.
I won't change my rating if this aspect is not considered in the paper.
---
Rebuttal 3:
Comment: We are glad we could address most of your concerns with the additional experiments and clarifications. Let us follow up on the last one, related to the potential for abuse of collective action strategies. We agree with the reviewer that misuse is a concern and should not be understated. But we still think it is important that the potential for abuse does not prevent the community from working on algorithmic collective action as an interesting and valuable research direction.
That being said, we have revisited the paper with your comments in mind, and we can see how it could benefit from additional discussion and a broader perspective. We are happy to use the additional page in the final version to move the broader impact statement to the main body, as suggested by the ethics review, to acknowledge the limitations more prominently. While we have mentioned the threat that a single individual could control many playlists, we will also be more specific that a popular artist could have an advantage in gathering large collectives. To further mitigate your concern, we will also complement this by expanding the study of negative externalities on other artists in Section 4.3. We hope this is satisfactory and that you will consider voting for acceptance after these changes.
---
Rebuttal Comment 3.1:
Comment: Thanks for the extra comment. Based on this, I will increase my ratings. | Summary: They propose a strategy for streaming platform users to act collectively in order to promote targeted songs. The promotion efficacy is measured by the targeted songs' recommendation frequency boost at testing time. This strategy is shown to be effective through simulation experiments. Another finding is that this strategy has minimal impact on the performance of the recommendation system as a whole, i.e., it preserves the user experience for non-targeted songs.
Strengths: * This is a novel idea, presented in an interesting domain with a good amount of related work.
* The writing of the paper is clear. The motivation is sound.
* The experiment performed by the author successfully verified the efficacy of the collective strategy.
Weaknesses: * Limited scope. See limitations below.
* Lack of technical novelty and contribution. The evaluation results would make a good report, but I would recommend the authors seek publication in a different conference.
Technical Quality: 2
Clarity: 3
Questions for Authors: The context selection methods discussed in Section 3.2 seem pretty arbitrary to me. What reason or motivation made you choose the `InClust` or `DirLoF` approach? How would you prove these are guaranteed to work?
What if I say a good approach could be to find the most similar song (embedding similarity higher than x) that is at least y popular and place our targeted song $z$ positions after it ($z$ could be -1, 1, 2, 3)?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The strategy could be effective, but it is built on the assumption that the serving platform has deployed minimal controls against collective behavior. As far as I am aware, various user-side anomaly detection mechanisms are usually deployed in production streaming services like the one described in the paper. As an example, there could be real-time monitoring of such collective behaviors. Such an anomaly detection system could be constructed by building a user-entity graph; in this use case, we could use artists or songs as the entity. When bursty events related to an entity occur within a short period of time (e.g., users promoting a song in their playlists), this could be an important indicator that something unusual is happening (because this is not a popular song, it does not receive so many promotions on average). A follow-up action in the system could tag the relevant data as "spurious" so that it never enters the training data of the transformer model (for continuous training). As a result, the collective action could have a significantly smaller effect when such monitoring is turned on, depending on how the thresholds are set.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. Let us elaborate on your comments below.
**Intuition for the proposed strategies.** The design of our strategies relies on the idealized assumption that transformer-based models perform next-song prediction by learning to model the conditional probability of songs, given context, in the training data. Ensuring that the song is among the K most likely next songs implies a recommendation for similar contexts at test time. With this in mind, it makes sense to select a context $c$ and put the song $s^*$ right after this context to influence the conditional probability $p(s^*|c)$. DirLoF and InClust are two different ways to select contexts $c$ to target. DirLoF exploits the overrepresentation of certain contexts in the playlists of the collective to pick those where it can make a larger difference. InClust identifies a set of similar contexts that appear multiple times in the collective's playlists and targets them in a coordinated fashion. These are two principled levers from a statistical perspective; how we implement them is designed to make them practical.
- **DirLoF.** DirLoF targets contexts that end on a song that occurs infrequently in the overall training data (typically once among the playlists of the collective). If these contexts have an overall probability smaller than $\alpha$, they are overrepresented. Given the long-tail nature of the song distribution, such songs are not that infrequent. Compared to inserting the song at random positions, the top K threshold can be met more often in this way (see Fig. 4).
- **InClust.** InClust targets contexts that precede a song $s_0$ that frequently occurs in the playlists controlled by the collective. Thereby it implicitly targets contexts that are similar from the perspective of the recommender, and close to $\phi(s_0)$ in embedding space, by how the model is trained.
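For concreteness, the two selection heuristics can be sketched in a few lines of Python. This is a simplified illustration: the function names, data structures, and thresholding below are ours and not part of the actual implementation.

```python
from collections import Counter

def dirlof_contexts(playlists, global_counts, alpha):
    """DirLoF-style selection (illustrative sketch): pick contexts that end on
    a song that is rare in the overall training data; if such a context has
    overall probability below alpha, it is overrepresented in the collective."""
    total = sum(global_counts.values())
    targets = []
    for pl in playlists:
        for i, song in enumerate(pl):
            if global_counts.get(song, 0) / total < alpha:
                targets.append((pl, i))  # place s* right after position i
    return targets

def inclust_anchor(playlists):
    """InClust-style selection (illustrative sketch): find the song s0 most
    frequent within the collective; each member places s* right before an
    occurrence of s0, so similar contexts are targeted in a coordinated way."""
    counts = Counter(song for pl in playlists for song in pl)
    return counts.most_common(1)[0][0]
```

In both cases only song counts are needed; no access to model weights or embeddings is required.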
**Alternative strategy.** There might be other ways to design strategies, but we offer a strong initial baseline for a widely underexplored question. In contrast to the strategy you proposed, it is crucial that our approaches do not rely on knowledge of the embedding function. The collective does not require access to the model parameters to implement our strategies, nor to train a surrogate model to approximate them. It just needs to gather song statistics to identify the contexts worth targeting.
The fact that the strategies are designed based on a statistical intuition about sequential generation implies that they are not specific to the model architecture. We demonstrate this robustness with additional experiments in which we vary the hyperparameters of the model (see Table 3 in the supplementary PDF). With the additional ablation, we hope to have convinced the reviewer of the robustness of the assumption underlying the design of our strategies.
It is also in line with our argument that the effectiveness of the strategy decreases if model training is stopped early (see Table 2 in the supplementary PDF).
**Theoretical guarantees.** End-to-end theoretical guarantees, beyond intuition in an idealized setting, are a lot to ask for, given that related data poisoning strategies, including popular shilling attacks, rarely come with any guarantees in the context of deep learning-based recommender systems. Instead, we opted for a rigorous empirical evaluation using the example of an industry-scale recommender system that has been deployed in production.
**Anomaly detection.** We agree that countermeasures could reduce the effectiveness of our strategies. However, they usually come at a cost for the firm. What makes the problem interesting is that it is not a zero-sum game between the platform and the users, in contrast to adversarial attacks: it is possible that agents pursue legitimate interests. In fact, our work is the first to provide empirical evidence that collective action strategies are *not* necessarily zero-sum. With this in mind, inspecting which use cases the platform would want to protect against is an interesting and open question for future work.
**Contribution.** We are the first to study the impact of authentic sequence modifications on sequential recommender systems. We show that such strategies are effective and might be a realistic aspect we have to take into account when building future systems. Thus, our work empirically carves out a novel space of questions around machine learning in a broader social-economic context. We provide solid empirical evidence that the typical adversarial threat model is not applicable to algorithmic collective action and call for a more nuanced study of incentives around ML models. This seems relevant for the NeurIPS community and an important contribution. Is there anything else we can do to convince you of this? | Summary: The paper shows that strategic collective action by a small fraction of the population can lead to significant amplification of a particular song in a recommender system. The authors propose two strategies for the collective (for a transformer-based song recommender) that achieve this amplification.
Strengths: - The setup is original and focuses on strategic collective action by a fraction of users in a recommender system and how this can increase a target song’s reach.
- The performance loss for the platform is negligible, and the collective's strategies are not adversarial (e.g., no fake profiles or artificial perturbations) but based on 1-edit distance to the original playlist. The paper shows that recommendations are largely preserved.
Weaknesses: - The experiments follow the MF-Transformer in [7], to make the paper self-contained it would be beneficial to have a description of $\phi(.)$ and of $g(.)$ and the loss function in Section 2.1 or the Appendix.
- I found the strategies in Sec 3.2 hard to parse, perhaps a figure showing the original playlist and the possible changes a user in the collective can do under the two strategies would be helpful.
- Minor: I think the notation h(.) is overloaded for the song/playlist embedding in 2.1 and for the strategy mapping which inserts s* into a playlist. Also, fig 6 could use different colors for the different strategies.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Can you clarify the intuition and communication required by the collective for the two playlist modification strategies in 3.2?
InClust seems to place the target song before the most popular songs in the collective’s playlist whereas DirLoF places the target song after a low popularity song?
Both strategies provide amplification in the experiments. How much communication among the collective do both require, is one modification easier practically?
- Section 1.1 mentions “Our strategies exploit the properties of the attention function in the transformer model, without requiring knowledge of the model parameters”, can you elaborate?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors discuss limitations in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback; we will incorporate your suggestions and adjust the notation. In response to your comment, we have also decided to add pseudocode to make the strategies clearer. It can be found in the supplementary PDF.
In the following let us elaborate on your questions:
**Our strategies exploit the properties of the attention function in the transformer model, without requiring knowledge of the model parameters.**
The assumption we operate under when designing the strategies is that the system aims to approximate the conditional probability of the next song, given prior songs within a given context window. We place the target song after a specific context with the goal that the model picks up on these correlations from the training data. This is a probabilistic assumption on the sequence model and it is not specific to the details of the model architecture or the training algorithm used. Thus, it is not surprising that the performance does not change much as hyperparameters of the model are varied (see the new Table 3 in the supplementary PDF). It is worth noting that similar approaches have proven to be very fruitful when designing collective action strategies in classification, where strategies have been designed under a Bayes-optimality assumption (see Ref. [24] in the manuscript).
**Intuition for the strategies.** Within the context of the last paragraph, our strategies implement two heuristics to select the context $c$ after which to place $s^*$. For comparison, it is useful to have the random baseline in mind: for $\alpha<0.1$, placing the song $s^*$ after a randomly selected context is not sufficient for it to be among the top K for any of the contexts (it leads to no recommendation at test time, see Fig. 4).
- The DirLoF strategy aims to exploit contexts that are overrepresented in the data the collective controls to increase the chance of meeting the top K threshold. To this end, it selects contexts $c$ that end on a low-frequency song (for small collectives, these typically appear only once in the controlled playlists). If these contexts have an overall probability smaller than $\alpha$, it means that they are overrepresented. As a result, targeting these contexts is more likely to effectively result in a recommendation at test time compared to random selection, which is what we see in the experiments.
- The InClust strategy aims to target contexts that appear multiple times among the playlists of the collective in a coordinated fashion. The intuition is that for contexts $c$ that precede a song $s_0$ that is popular within the collective, the context embeddings $g(c)$ form a cluster around the song embedding $\phi(s_0)$, due to the way embeddings are trained. As a result, the collective can achieve large mass for this specific region of the context space by placing $s^*$ before $s_0$, hoping this is sufficient to meet the top K threshold for similar contexts at test time. Here we find that $\alpha>1\%$ is necessary for this to be effective. The ablation in Figure 5 illustrates how this strategy increases the similarity between $s^*$ and the targeted contexts, leaving other similarity values largely untouched.
**Communication.** Both strategies require communication among the members of the collective to agree on a target song $s^*$ and to coordinate the execution time. In addition, each strategy requires one more statistic to be gathered. The InClust strategy requires participants to pool information about their playlists to identify the songs that occur most frequently in the playlists they control; no additional data needs to be collected. The DirLoF strategy requires global song statistics to get a sense of which songs are overrepresented. These statistics are then shared with other members of the collective. We envision the aggregation and communication of song counts happening through an app or an online forum. Notably, there is a growing literature on developing tools for coordinating platform participants (see e.g., [1,2]) which could be used here.
While both strategies are easy to implement, DirLoF comes with some additional overhead for identifying low-frequency songs. These statistics need to be gathered from external sources but can be shared among the participants. For example, approximate statistics based on scraping public playlists and streaming statistics are sufficient for the strategy to be effective (Fig. 12). Beyond this, neither strategy requires access to model weights, the model architecture, or the training algorithm used, and neither requires any computations beyond simple song counts, as they do not use surrogate models.
References:
[1] Do, K., De Los Santos, M., Muller, M., & Savage, S. (2024, May). Designing Gig Worker Sousveillance Tools. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-19).
[2] Imteyaz, K., Flores-Saviaga, C., & Savage, S. (2024). GigSense: An LLM-Infused Tool for Workers' Collective Intelligence. arXiv preprint arXiv:2405.02528.
---
Rebuttal Comment 1.1:
Title: Acknowledging and raising score.
Comment: Thanks for addressing my questions comprehensively and for the figures for the two strategies, I am raising my score to a strong accept.
As a general note directed at another reviewer, it would be nice if, as a community, we provided constructive feedback and suggestions for improvement, rather than attacks like "good as a report ... seek publication in a different conference".
Under the following assumptions:
1. Fans can collaborate to promote a specific song by collectively reordering playlists.
2. The visibility of a song in a playlist affects its recommendation frequency.
3. Users are influenced by the position of songs in playlists when making listening choices.
4. The impact of collective action on song visibility is measurable and significant.
The authors suggest that fans strategically reorder playlists to promote a targeted song, thereby increasing its visibility in the recommender system. By leveraging algorithmic collective action, even small groups of fans can substantially impact the recommendation frequency of the promoted song. This strategy aims to enhance the visibility (discoverability) of songs and artists, benefiting both fans and musicians in the music streaming industry.
The evaluation focuses on quantifying the amplification of recommendations achieved by strategically placing songs in playlists, using metrics such as recommendation probability and change in the number of recommendations for a song. The evaluation also includes the impact on the recommendations of other songs and the overall performance of the recommender system. The analysis of results reveals that the collective action strategies can lead to a substantial increase in the recommendation frequency of the targeted song, with up to 25x higher recommendation probability compared to average songs.
The main contributions are:
1. The paper introduces **two innovative collective action strategies** where participants strategically insert a target song into their playlists to promote an emerging artist. These strategies aim to increase the recommendations of the targeted song at test time, thereby boosting the artist's visibility on the platform.
2. The research demonstrates that even **small collectives**, controlling less than 0.01% of the training data, **can achieve significant amplification of recommendations** by strategically placing songs in playlists. This finding highlights the effectiveness of algorithmic collective action in promoting songs without major disruptions to the user experience.
3. **Preservation of other recommendations**: the study reveals that while promoting a specific song through collective action, the recommendations of other songs are largely preserved. This indicates that the proposed strategies can enhance the visibility of targeted songs without significantly compromising the overall recommendation system's performance.
Strengths: - Its innovation is a significant strength, as it provides a new approach to increasing the visibility of emerging artists on music streaming platforms.
- Empirical Validation and Real-World Application: the research is empirically validated using an open-source APC model deployed on Deezer, a platform with millions of users.
- Important result: the study demonstrates that even small collectives can achieve substantial amplification of recommendations by strategically placing songs in playlists.
- The findings show the potential for diverse artist promotion, which can enable fairer use of the platforms and also help fight the long-tail problem in recommender systems. It can also promote the serendipity effect.
Weaknesses: - The paper assumes that users are influenced by the position of songs in playlists when making listening choices. This assumption **may oversimplify user behavior and overlook other factors** that influence song recommendations and user engagement, potentially leading to biased results.
Technical Quality: 3
Clarity: 3
Questions for Authors: Even if the problem is not exactly the same, could you relate your work with the one described in this cite: Walid Bendada, Guillaume Salha-Galvan, Thomas Bouabça, and Tristan Cazenave. 2023. A Scalable Framework for Automatic Playlist Continuation on Music Streaming Services. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '23). Association for Computing Machinery, New York, NY, USA, 464–474. https://doi.org/10.1145/3539618.3591628 ?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper does not explicitly discuss possible limitations of the approach to addressing problems of privacy and fairness. However, considering the ethical implications of data manipulation and collective action in recommender systems is crucial for ensuring transparency and equity in algorithmic interventions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and the positive assessment of our work.
To respond to your question, we relate our work to Bendada et al. (2023). The authors describe a transformer-based recommender system for automatic playlist continuation (APC) that Deezer has deployed in production. The model implements the represent-then-aggregate paradigm to achieve better scalability, and the authors demonstrate that it achieves state-of-the-art performance on the Spotify Million Playlist Dataset. We use their model as a case study for our work: it is one of the few industry-scale APC models that are publicly available, allowing us to retrain it and inspect the effect of playlist modifications on recommendations. In contrast to Bendada et al. (2023), we are interested in the sensitivity of the model to strategic playlist modifications in the training data, and we inspect the recommendations of specific artists. This dimension has not been considered in prior work, which focused on aggregate performance metrics of a model trained on a fixed dataset. We are the first to focus on algorithmic collective action in this context.
Regarding limitations, we will add a dedicated section in the final version to discuss the limitations of collective action and the negative externalities on other artists in more detail. We refer to our response to Reviewer tKsL for an extended discussion. We agree that this is valuable for putting our work in context.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you very much for your reply | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their insightful feedback. Based on the reviewers' comments, we ran additional experiments and made some updates to the write-up, which we describe in detail in the individual rebuttals.
To support our discussion, we provide additional experiments and illustrations in the supplementary PDF. Figure 1, along with Algorithms 1 and 2, offers a visual explanation of the proposed strategies. Additional empirical results, including ablations related to model hyperparameters and the evaluation of alternative baseline strategies, can be found in Tables 1-3.
Pdf: /pdf/6ac5f5a3d97d5b7cf29fcf32eaa546ee24b5aa24.pdf | NeurIPS_2024_submissions_huggingface | 2024 | null | null | null | null | null | null | null | null |
Diffusion Models are Certifiably Robust Classifiers | Accept (poster) | Summary: This paper derives an upper bound of the Lipschitz constant for diffusion classifiers. Then, it proposes Exact Posterior Noised Diffusion Classifier (EPNDC) and Approximated Posterior Noised Diffusion Classifier (APNDC) by deriving ELBO upper bounds on $\log p (x_\tau)$ and thereby enabling classifying noisy images. The APNDC achieves state-of-the-art certified robustness.
Strengths: The theory is cool and the math is intriguing. I like this direction because it leverages the shared Gaussian structure in diffusion models and randomized smoothing while circumventing the challenges of attacking diffusion models. The empirical evaluation results (especially Figure 2a) are also impressive.
Weaknesses: In my opinion, some of the contents are not explained very clearly. Please see below and the contents of the "Questions" section.
- Table 4 is nice. However, I wish it were in the main text instead of the appendix, because the current main text omits the discussion of how to calculate the certified robustness for the proposed models.
- Figure 2 doesn't present the certified radius with a conventional diffusion classifier, as derived in Eq. (11). Since (11) is an important contribution of this work, I believe it should be included.
- I would like to see an ablation study on $\sigma_\tau$, but could not find this result.
Overall, this is still a nice paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 145: Why are the mentioned quantities bounded in the range $[0, 1]^D$? Do you clip $h_\theta (x_t, \sigma_t)$ to $[0, 1]^D$? How about $\|\| h_\theta (x_t, \sigma_t) \|\|_2^2$?
- Eq (12): The notation $q$ was introduced in Eq (1) to represent probabilities in the forward diffusion process. So, how to compute $q (x_t | x_{t+1})$? It would also be nice to add subscripts to each of the nested expectations, so that it's clear what variables the expectations are taken over.
- Also Eq (12): does the method become computationally cheaper when $\tau$ increases? Is it correct that different values of $\tau$ can reuse some computation, so if you set $\tau$ to 0, you simultaneously get the ELBO bound for $\tau = 0, 1, \ldots, T$?
- Remark 3.4: when would it make sense to use $\tau = 0$? When $\tau$ is $0$, is $\sigma_\tau$ also 0? Does this mean randomized smoothing is not used?
- Line 224: the APNDC reconstruction loss $\|\| h_\theta (x_\tau, \sigma_\tau) \|\|_2^2$ bears a strong resemblance to the training objective of consistency models. Does this intuitively imply that consistency models are more suitable or less suitable for APNDC? I see a short discussion about consistency models in the appendix, but it does not fully address my curiosity.
- Line 247 mentions that $\sigma_\tau \in \\{ 0.25, 0.5, 1.0 \\}$. What are the corresponding $\tau$ values? Which setting is used for which results? Are the different radii in Table 1, Table 2, and Figure 2 evaluated with the same $\sigma_\tau$ or different ones?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As is the case for numerous diffusion classifier papers, the computational complexity, while improved in this paper, is far from ideal. This paper evaluates the method on a small subset of CIFAR-10 and ImageNet, probably for this reason.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the topics and contributions of our paper. We are greatly encouraged by your appreciation. Below, we address your detailed comments and hope that you find our responses satisfactory.
***Weakness 1: Table 4 should be put in the main text.***
Thank you for your advice. We will move Table 4 into the main text.
***Weakness 2: Certified radii of diffusion classifiers should be included in the main table.***
Thank you for your advice. We will incorporate these results into Table 1.
***Weakness 3: Ablation studies in \\(\sigma_\tau\\).***
Thank you for your advice. However, we are unable to perform these experiments during the rebuttal phase because, for each \\(\sigma\_\tau\\), we would need to obtain predictions from our model over the dataset 10,000 times. Nearly all previous work uses \\(\sigma\_\tau \in \\{0.25, 0.5, 1\\}\\), and these studies have already demonstrated that these three noise levels are sufficient for obtaining the upper bound of the certified radius. Therefore, we did not perform this ablation study. We adopt this setting to enable fair comparisons and to avoid re-implementing all previous baselines.
***Question 1: \\(h_\theta(x_t,t)\\) are bounded.***
Yes, we clip the output of the UNet \\(h_\theta(x_t,t)\\) to \\([0,1]\\), thus it is bounded.
\\(\mathbb{E}\_{x\_t}[\|h\_\theta(x\_t,t)\|\_2^2]\\) is the expectation of a bounded function over a Gaussian distribution, which has exactly the form of a randomized-smoothed function; thus its Lipschitz constant can be bounded.
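For reference, once the smoothed quantity is bounded in this way, the standard \\(\ell_2\\) certificate of Cohen et al. [1] applies. A quick illustrative sketch of that radius computation (not our certification code) is:

```python
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """L2 certified radius from Cohen et al. [1]:
    R = sigma / 2 * (Phi^{-1}(p_A) - Phi^{-1}(p_B)),
    where p_A lower-bounds the top-class probability under Gaussian noise
    with std sigma, and p_B upper-bounds the runner-up probability."""
    phi_inv = NormalDist().inv_cdf  # standard normal inverse CDF
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))
```

In the common two-class evaluation one sets \\(p\_B = 1 - p\_A\\), which is the special case used in most certified accuracy curves.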
***Question 2: About \(q\) distribution.***
Thank you for your suggestion. We will add subscripts to each of the nested expectations to clarify the expression:
\\[
\log p(\mathbf{x}\_\tau) \geq -\sum\_{t=\tau}^{T}w\_t^{(\tau)} \mathbb{E}\_{q(\mathbf{x}\_{t+1}|\mathbf{x}\_\tau)}\left[\\|\mathbb{E}\_{q(\mathbf{x}\_t|\mathbf{x}\_{t+1},\mathbf{x}\_\tau)}[\mathbf{x}\_t]-\mathbb{E}\_{p(\mathbf{x}\_{t}|\mathbf{x}\_{t+1})}[\mathbf{x}\_{t}]\\|^2\right] + C\_2,
\\]
in the revision.
We do not calculate \\(q(\mathbf{x}\_t|\mathbf{x}\_{t+1})\\) directly; instead, we only calculate the expectation of the reverse Gaussian distribution \\(q(\mathbf{x}\_t|\mathbf{x}\_{t+1})\\), as shown in Eq. (13), \\(\mathbb{E}\_{q(\mathbf{x}\_t|\mathbf{x}\_{t+1},\mathbf{x}\_\tau)}[\mathbf{x}\_t]=\frac{(\sigma\_{t+1}^2-\sigma\_t^2)\mathbf{x}\_{\tau}+(\sigma\_t^2-\sigma\_\tau^2)\mathbf{x}\_{t+1}}{\sigma\_{t+1}^2-\sigma\_\tau^2}\\).
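For intuition, this mean is simply a convex combination of \\(\mathbf{x}\_\tau\\) and \\(\mathbf{x}\_{t+1}\\), with weights formed from the squared noise levels. A scalar sanity-check sketch (variable names illustrative, not part of our implementation):

```python
def posterior_mean(x_tau, x_tp1, s_tau, s_t, s_tp1):
    # Mean of q(x_t | x_{t+1}, x_tau): the weights below sum to one, so the
    # mean interpolates between x_tau (at s_t = s_tau) and x_{t+1} (at s_t = s_{t+1}).
    denom = s_tp1 ** 2 - s_tau ** 2
    w_tau = (s_tp1 ** 2 - s_t ** 2) / denom
    w_tp1 = (s_t ** 2 - s_tau ** 2) / denom
    return w_tau * x_tau + w_tp1 * x_tp1
```

At \\(\sigma\_t=\sigma\_\tau\\) the mean reduces to \\(\mathbf{x}\_\tau\\), and at \\(\sigma\_t=\sigma\_{t+1}\\) it reduces to \\(\mathbf{x}\_{t+1}\\), as expected.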
***Question 3: About Eq. (12).***
Diffusion can be viewed as a continuous stochastic differential equation (SDE) over the range \\([\sigma_\tau, \sigma_T]\\). The number of function evaluations (NFEs) depends on both \\(\sigma_T - \sigma_\tau\\) and the discretization interval \\(\sigma_{i+1} - \sigma_i\\). If the discretization interval is kept unchanged, the NFEs decrease as \\(\tau\\) increases or \\(\sigma_T\\) decreases.
We are impressed by your idea of reusing computations to calculate all evidence lower bounds (ELBOs) simultaneously. Since the neural network output \\(p(\mathbf{x}\_t|\mathbf{x}\_{t+1})\\) does not depend on \\(\tau\\), we can reuse this part when calculating ELBOs for different \\(\tau\\). Given that the neural network's forward pass is the computational bottleneck compared to other parts of the ELBO calculations, this means that for a given \\(\mathbf{x}\_0\\), we can compute all ELBOs over \\(\tau\\) simultaneously in the time it takes to compute a single ELBO.
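To make the reuse idea concrete, here is a simplified scalar sketch. In it, `denoise` stands in for the expensive UNet evaluation, `xs[t]` stands in for the closed-form (and cheap) posterior mean at timestep `t`, and `weights[tau][t]` plays the role of \\(w\_t^{(\tau)}\\); all names are illustrative.

```python
def elbos_all_tau(xs, denoise, weights):
    """Compute ELBOs for every tau while sharing network evaluations.

    xs: list of per-timestep values (stand-in for posterior means);
    denoise(x, t): stand-in for the expensive network prediction of
    E_p[x_t | x_{t+1}]; weights[tau][t]: tau-dependent ELBO weights.
    """
    T = len(xs) - 1
    # One expensive forward pass per timestep, reused by every tau.
    cached = {t: denoise(xs[t + 1], t) for t in range(T)}
    elbos = []
    for tau in range(T):
        # The per-tau part is only a cheap reweighting of cached errors.
        elbos.append(-sum(weights[tau][t] * (xs[t] - cached[t]) ** 2
                          for t in range(tau, T)))
    return elbos
```

The key point is that the number of network calls stays at one per timestep, independent of how many values of \\(\tau\\) are evaluated.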
***Question 4: In the case of \\(\tau=0\\).***
When \\(\tau=0\\), the ELBOs of EPNDC reduce to the ELBOs in DDPM, and the EPNDC reduces to vanilla diffusion classifiers. In this case, we cannot use randomized smoothing.
***Question 5: Similarity between APNDC and consistency models.***
We agree that there is a significant similarity between the training loss of consistency models and APNDC's ELBO. However, consistency models (CD) seem to lose their ability to classify and instead overfit to the generation task. Specifically, when \\(t\\) is small, the predictions for all labels \\(h(x_t, t, y)\\) are nearly identical, with their cosine similarity exceeding 0.99. We suspect this is due to the distillation phase causing consistency models to overfit to the generation task. We plan to investigate this issue further by training consistency models ourselves to understand the underlying reasons.
***Question 6: About \\(\sigma\_\tau\\).***
Karras et al. [2] emphasize that since there is a bijection between \\(t\\) and \\(\sigma\\), and \\(t\\) depends on discretization, it is better to use \\(\sigma\\) as the variable to describe the noise added to the input images. Therefore, finding \\(\tau\\) for \\(\sigma\_\tau=0.25\\) is meaningless. In practice, we directly set \\(\sigma\_\tau=0.25\\) and determine \\(T'\\) discretization steps within \\([\sigma\_\tau, \sigma\_T]\\) to calculate the MSE loss at these \\(T'\\) timesteps.
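A sketch of one way to place such a discretization, using the \\(\rho\\)-schedule from Karras et al. [2] (\\(\rho=7\\) follows that paper; the endpoint \\(\sigma\_T=80\\) and \\(T'=18\\) below are illustrative, not the paper's settings):

```python
def karras_sigmas(sigma_min, sigma_max, n, rho=7.0):
    """Monotonically decreasing noise levels from sigma_max down to sigma_min."""
    inv = lambda s: s ** (1.0 / rho)
    return [(inv(sigma_max) + i / (n - 1) * (inv(sigma_min) - inv(sigma_max))) ** rho
            for i in range(n)]

grid = karras_sigmas(0.25, 80.0, 18)   # T' = 18 steps inside [sigma_tau, sigma_T] = [0.25, 80]
```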
For Tables 1 and 2, we follow previous work by calculating the certified radius using \\(\sigma\_\tau=0.25, 0.5, 1\\), respectively, and selecting the maximum one to include in the tables and figures.
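A hedged sketch of this selection step in the spirit of Cohen et al. [1]. The counts are made up, and a simple Hoeffding lower confidence bound stands in for the tighter Clopper-Pearson bound typically used in practice:

```python
import math
from statistics import NormalDist

def certified_radius(sigma, n_correct, n_total, alpha=0.001):
    """l2 radius sigma * Phi^{-1}(p_lower); abstain (radius 0) when p_lower <= 1/2."""
    p_hat = n_correct / n_total
    p_lower = p_hat - math.sqrt(math.log(1 / alpha) / (2 * n_total))  # Hoeffding bound
    return sigma * NormalDist().inv_cdf(p_lower) if p_lower > 0.5 else 0.0

# Made-up counts; in practice each sigma has its own smoothed classifier and counts.
radii = [certified_radius(s, 990, 1000) for s in (0.25, 0.5, 1.0)]
best = max(radii)   # the maximum over noise levels is what enters the tables
```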
[1] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified adversarial robustness via randomized smoothing." international conference on machine learning. PMLR, 2019.
[2] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." Advances in neural information processing systems 35 (2022): 26565-26577. | Summary: The authors investigate the certified robustness of diffusion classifiers. For this purpose, they first show that these classifiers have O(1) Lipschitzness and subsequently achieve tighter robustness bounds through Bayes' theorem and the ELBO.
Strengths: S1: Using diffusion models to generate large amounts of synthetic data is one of the most promising approaches to improve empirical and certified robustness in recent years. The authors utilize diffusion models directly to achieve high certified robustness.
S2: While prior work has investigated the robustness of diffusion classifiers, they do not provide certified guarantees. This gap is addressed in this work.
S3: The work provides both relevant empirical and theoretical contributions.
Weaknesses: W1: References could be ordered by appearance (minor)
W2: The nature of diffusion classifiers induces a considerable computational overhead compared to standard classifiers. The authors try to address this issue through their sift-and-refine algorithm, but a comparison between the different methods w.r.t. inference time (possibly including standard classifiers) would still have been informative. Note that I would not consider large computational cost a negative point concerning paper acceptance; I just believe that a comparison would be helpful for the reader. Since the appendix already provides some information w.r.t. time complexity, I view this as a minor issue.
W3: Appendix D is very short and could be incorporated into the paper (at least in the camera-ready version)
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: Could the authors provide a computational cost comparison between different methods?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are included in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing the contribution of our work and for providing valuable feedback. Below we address your detailed comments and hope that you find our responses satisfactory.
***Weakness 1: References could be ordered by appearance.***
Thank you for your suggestion. We will revise the references to be ordered by appearance in the final version.
***Weakness 2: Appendix D can be incorporated into the main text.***
Thank you for your advice. We will incorporate the Limitation section into the end of the main text in the final version.
***Weakness 3 and Q1: A table for comparing time complexity.***
Thank you for your valuable suggestion. We strongly agree that a table presenting the comparison of time complexity is necessary. The results are shown below:
CIFAR10:
| Method | Architecture | Certifying NFEs | Certifying Real Time | Inference NFEs | Inference Real Time |
|:---:|:---:|:--:|:---:|:----:|:----:|
| RS/SmoothAdv/Consistency/MACER | ResNet-110 | O(N) | \\(10.81\\) | O(1) | \\(0.001\\) |
| Carlini | UNet+WRN-72 | O(N) | \\(1.78 \times 10^3\\) | O(1) | \\(0.030\\) |
| Xiao | UNet+WRN-72 | O(400N) | \\(7.44 \times 10^4 \\) | O(400) | \\(1.14\\) |
| DC/APNDC/EPNDC (variance-reduction) | UNet | O(1250N) | \\(2.92 \times 10^4 \\) | O(1250) | \\(2.92\\) |
ImageNet:
| Method | Architecture | Certifying NFEs | Certifying Real Time | Inference NFEs | Inference Real Time |
|:-------:|:---:|:------:|:-----:|:------:|:---------:|
| RS/SmoothAdv/Consistency/MACER | ResNet-50 | O(N) | \\(26.701\\) | O(1) | \\(0.004\\) |
| Carlini | UNet+ResNet-50 | O(N) | \\(1025.586\\) | O(1) | \\(0.108\\) |
| Xiao | UNet+ResNet-50 | O(400N) | \\(432000\\) | O(400) | \\(43.2\\) |
| DC/APNDC/EPNDC (sift-and-refine) | UNet | undetermined | \\(1.1 \times 10^5\\) | undetermined | \\(112.3\\) |
As shown:
- When using sift-and-refine, the NFEs for each image depend on the specific image. Thus, the time complexity is undetermined.
- During certification, different samples are computed in parallel, so the total certification time is much lower than Inference Real Time * N.
- The value of N for the Diffusion Classifier is ten times smaller than for other models.
- The time complexity of diffusion classifiers is slightly higher than that of the approach by Xiao et al. (2022). We recognize this as a primary limitation of both our approach and generative classifiers in general. We are actively working to reduce the time complexity of diffusion classifiers so that it becomes independent of $K$.
---
Rebuttal Comment 1.1:
Title: Concerns addressed
Comment: I thank the authors for their response. My concerns are appropriately addressed and I am happy to raise my score.
I found reading the paper quite enjoyable.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for raising your score. We are delighted to hear that you found reading our paper enjoyable and that our responses addressed your concerns. Your feedback is invaluable to us, and we appreciate your support. | Summary: This work proves that diffusion classifiers possess inherent robustness to adversarial attacks by demonstrating their O(1) Lipschitzness and establishing their certified resilience. By generalizing these classifiers to handle Gaussian-corrupted data and using evidence lower bounds for likelihood approximation, the research demonstrates the superior certified robustness of Noised Diffusion Classifiers (NDCs).
Strengths: The paper showcases the robustness of the proposed Noised Diffusion Classifiers (NDCs), achieving high certified robustness on the CIFAR-10 and ImageNet 64x64 datasets. The study also provides a proof of O(1) Lipschitzness for diffusion classifiers.
Weaknesses: 1. The proposed method combines two existing techniques, diffusion classifiers and randomized smoothing, which is not sufficiently novel. The paper needs to better highlight what sets this approach apart from existing methods and how it fundamentally advances the field. Although the authors attempt to establish a theoretical framework, the derivation of the Lipschitz constant and its implications are not sufficiently detailed, leaving unanswered questions about the robustness guarantees.
2. The experimental evaluation relies heavily on the small CIFAR-10 and ImageNet 64x64 datasets. Expanding the experiments to include larger datasets, such as ImageNet-1K, would provide a more comprehensive assessment.
3. The paper discusses techniques to reduce time complexity but does not convincingly demonstrate the practicality of the proposed methods with experimental results, such as throughput or inference latency. A more detailed analysis and comparisons of computational efficiency, especially in relation to existing methods, are needed.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please address the weaknesses mentioned above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the strong results of our methods. Below we address the detailed comments, and hope you may find our response satisfactory and update the score accordingly.
***Weakness 1: Insufficient Novelty.***
With the highest respect, we disagree. We justify the novelty from two aspects. First, diffusion classifiers have demonstrated superior empirical robustness and are increasingly used in robust classification tasks. However, a comprehensive theoretical understanding of their robustness is still lacking. There are concerns about whether their robustness is overestimated or if they might be vulnerable to potentially stronger adaptive attacks in the future. In this work, **we address these concerns by using proper mathematical tools (e.g., randomized smoothing) to provide a theoretical foundation for diffusion classifiers**. This contribution is novel and valuable to our community.
Second, the analysis of diffusion classifiers is highly nontrivial, with several key technical contributions that are also new:
1. **Theoretical Foundation for Diffusion Classifiers:**
Diffusion classifiers are traditionally **not** compatible with randomized smoothing because they struggle to handle noisy data. To address this, we derive the analytical solution of the diffusion classifiers' gradient and bound their gradient norm (Lipschitz constant in Appendix A.2). This provides a lower bound and demonstrates the inherent robustness of diffusion classifiers, offering insights into the source of their robustness.
2. **Generalization to Noisy Data and Remarkable Certified Robustness Record:**
Bounding robustness using the maximum Lipschitz constant \\(L\\) can only provide a certified robustness of at most \\(\frac{1}{2L}\\). To achieve a tighter certified bound, we generalize diffusion classifiers to handle noisy data by deriving two noisy ELBOs, for which we can establish their point-wise Lipschitzness. These generalized diffusion classifiers achieve remarkable certified robustness, breaking previous records by exceeding 70\% certified robustness at \\(\epsilon\_2=0.5\\), which is less than 10\% below the empirical upper bound.
3. **Reduction in Time Complexity:**
We reduce the time complexity of diffusion classifiers by more than 10x using our proposed variance reduction and sift-and-refine methods, making diffusion classifiers more practical for large-scale applications.
Overall, **we fundamentally address the key concerns about the robustness of diffusion classifiers by proving their robustness lower bound.** These contributions are significant and new, as agreed by reviewers LwbB, p9kw, and yRsf.
***Weakness 2: Experiments on ImageNet-256x256.***
Thank you for the valuable suggestion. We agree that experiments on ImageNet-256x256 are crucial for demonstrating the scalability of diffusion classifiers. However, there are currently no conditional diffusion models available at 256x256 resolution **in the RGB space**.
Existing ImageNet diffusion models are trained either at 64x64 resolution (e.g., EDM [3]) or in a 64x64 latent space (e.g., Stable Diffusion [2]). Although the latent diffusion models achieve state-of-the-art generation performance, they cannot be applied to adversarial defense tasks because their encoder is vulnerable to attacks [4].
We recognize this as a primary limitation of our work. We will continue to address this issue by employing more advanced diffusion training methods (e.g., [1]).
***Weakness 3: Comparison of inference latency.***
Thank you for the valuable suggestion. We provide further results on time complexity:
CIFAR10:
| Method | Architecture | Certifying NFEs | Certifying Real Time | Inference NFEs | Inference Real Time |
|:---:|:---:|:--:|:---:|:----:|:----:|
| RS/SmoothAdv/Consistency/MACER | ResNet-110 | O(N) | \\(10.81\\) | O(1) | \\(0.001\\) |
| Carlini | UNet+WRN-72 | O(N) | \\(1.78 \times 10^3\\) | O(1) | \\(0.030\\) |
| Xiao | UNet+WRN-72 | O(400N) | \\(7.44 \times 10^4 \\) | O(400) | \\(1.14\\) |
| DC/APNDC/EPNDC (variance-reduction) | UNet | O(1250N) | \\(2.92 \times 10^4 \\) | O(1250) | \\(2.92\\) |
ImageNet:
| Method | Architecture | Certifying NFEs | Certifying Real Time | Inference NFEs | Inference Real Time |
|:-------:|:---:|:------:|:-----:|:------:|:---------:|
| RS/SmoothAdv/Consistency/MACER | ResNet-50 | O(N) | \\(26.701\\) | O(1) | \\(0.004\\) |
| Carlini | UNet+ResNet-50 | O(N) | \\(1025.586\\) | O(1) | \\(0.108\\) |
| Xiao | UNet+ResNet-50 | O(400N) | \\(432000\\) | O(400) | \\(43.2\\) |
| DC/APNDC/EPNDC (sift-and-refine) | UNet | undetermined | \\(1.1 \times 10^5\\) | undetermined | \\(112.3\\) |
As shown:
- When using sift-and-refine, the NFEs for each image depend on the specific image. Thus, the time complexity is undetermined.
- During certification, different samples are computed in parallel, so the total certification time is much lower than Inference Real Time * N.
- The value of N for the Diffusion Classifier is ten times smaller than for other models.
- The time complexity of diffusion classifiers is slightly higher than that of the approach by Xiao et al. (2022). We recognize this as a primary limitation of both our approach and generative classifiers in general. We are actively working to reduce the time complexity so that it becomes independent of $K$.
## Reference:
[1] Matryoshka diffusion models. ICLR, 2023.
[2] High-resolution image synthesis with latent diffusion models. CVPR, 2022.
[3] Elucidating the design space of diffusion-based generative models. NeurIPS, 2022.
[4] Pixel is a Barrier: Diffusion Models Are More Adversarially Robust Than We Think. arXiv, 2024.
---
Rebuttal Comment 1.1:
Comment: Although diffusion classifiers and randomized smoothing are two established techniques, there are indeed some gaps when applying randomized smoothing to diffusion classifiers, as mentioned in the rebuttal. Despite the lack of novelty in combining these two existing components, addressing the gaps that hinder such a combination is also a valid contribution, especially when the mitigation method is supported by a theoretical foundation.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful review and for recognizing the value of our contributions. We are glad that you found our theoretical approach to addressing these gaps meaningful. We are also grateful for your decision to raise the score. | Summary: This paper presents a theoretical analysis of the enhanced robustness in diffusion-based classifiers and introduces a generalized Noised Diffusion Classifier, EPNDC. The authors utilize the Evidence Lower Bound (ELBO) of each conditional log-likelihood $\log p(x_\tau | y) $and Bayes' theorem as the logits for each class. They identified that EPNDC is time-consuming due to the iterative computation of two conditional ELBOs. To address this, they leverage the ELBO of an ensemble of EPNDC to approximate the expected likelihood as logits without additional computational cost. Additionally, they developed variance reduction and sift-and-refine techniques to reduce time complexity. Experimental results demonstrate that APNDC achieves significantly better robustness without requiring extra training data, fewer diffusion steps, and a reduced number of samples needed to estimate the Lipschitz bound.
Strengths: 1. The entire paper is logically structured with a clear progression, enabling readers to understand it well. From Algorithm 1 to Algorithm 2 to Algorithm 5, the authors continuously explore problems, improve algorithms, and provide thorough analysis and theoretical proofs.
2. The experiments are comprehensive, and compared to the benchmark, EPNDC shows significant improvements in certified accuracy. This demonstrates EPNDC's high scalability in handling large datasets with numerous categories.
Weaknesses: 1. Some causal relationships are unclear or lack citations, requiring further explanation from the authors. For instance, in Line 156: What does the "nabla operator" refer to? It is neither explained nor cited. In Lines 161-164: "However, similar to the weak law of randomized smoothing, such certified robustness has limitations because it assumes the maximum Lipschitz condition is satisfied throughout the entire perturbation path. As a result, the robust radius is less than half of the reciprocal of the maximum Lipschitz constant." The causal relationship here is unclear and needs further clarification.
2. Although the diffusion classifier is highly scalable and robust, its clean accuracy on ImageNet is still far behind the current state-of-the-art (90%+). More details can be found at [https://paperswithcode.com/sota/image-classification-on-imagenet](https://paperswithcode.com/sota/image-classification-on-imagenet).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Refer to weaknesses.
2. Eq (15) uses the diffusion model $h_\theta$ one more time than Eq (12). Why does APNDC not increase the computational overhead compared to EPNDC (Line201)? Please explain further.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors didn't address the limitations of APNDC, but Diffusion Classifiers are still far behind the current SOTA in classification accuracy. These are some potential limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for appreciating the writing and contribution of our paper. We are deeply encouraged by your kind words. Below we address the detailed comments, and hope you may find our response satisfactory.
***Weakness 1: Nabla operator is unclear.***
Thank you for pointing this out. In our paper, the Nabla operator refers to the gradient operator. We will make this clearer in the final version.
***Weakness 2: Lines 161-164 is unclear.***
Thank you for your suggestion. We will modify this section in the final version to:
> However, similar to the weak law of randomized smoothing, such certified robustness has limitations because it assumes the maximum Lipschitz condition is satisfied throughout the entire perturbation path, i.e., it assumes that equality always holds in \\(|f(\\mathbf{x}\_{adv})\_y - f(\\mathbf{x})\_y| \leq L\\|\\mathbf{x}-\\mathbf{x}\_{adv}\\|\_2\\) when \\(f\\) has Lipschitz constant \\(L\\). As a result, equality also holds in \\( f(\\mathbf{x}\_{adv})\_y \geq f(\\mathbf{x})\_y - L\\|\\mathbf{x}-\\mathbf{x}\_{adv}\\|\_2 \\) and \\(f(\mathbf{x}\_{adv})\_{\hat{y}} \leq f(\mathbf{x})\_{\hat{y}} + L\\|\mathbf{x}-\mathbf{x}\_{adv}\\|\_2\\) for the runner-up class \\(\hat{y} = \arg\max\_{\hat{y} \neq y} f(\mathbf{x})\_{\hat{y}}\\). To guarantee that the prediction is unchanged (i.e., \\(f(\mathbf{x}\_{adv})\_y \geq f(\mathbf{x}\_{adv})\_{\hat{y}}\\)), the perturbation \\(\\|\mathbf{x}-\mathbf{x}_{adv}\\|_2\\) must be less than \\(\frac{1}{2L}\\).
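A 1-D worst-case illustration of this bound, with hypothetical numbers: if the true-class logit falls at rate \\(L\\) while the runner-up rises at rate \\(L\\), the prediction survives exactly up to the margin divided by \\(2L\\):

```python
L, margin = 2.0, 0.6                      # hypothetical Lipschitz constant and margin
radius = margin / (2 * L)                 # certified radius = margin / (2L) = 0.15

f_y   = lambda d: margin - L * d          # true-class logit, falls at rate L
f_hat = lambda d: 0.0 + L * d             # runner-up logit, rises at rate L

assert f_y(radius) >= f_hat(radius)                  # prediction unchanged at the radius
assert f_y(radius + 0.01) < f_hat(radius + 0.01)     # it can flip just beyond it
```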
***Weakness 3: Still not comparable with discriminative classifiers on ImageNet.***
We acknowledge and agree that generative classifiers, including diffusion classifiers, currently lag behind discriminative classifiers in terms of clean accuracy on ImageNet. We recognize this as the primary limitation of our work and will include it in the limitation section of our paper. However, generative classifiers are still promising due to their robustness, interpretability, certifiability, and strong mathematical foundation, which are advantages not typically found in discriminative classifiers. We will continue to focus on advancing generative classifiers and believe they will play a crucial role in security applications.
***Question 1: APNDC has one more NFE than EPNDC.***
Thank you for your meticulous observation. APNDC does require one more forward pass to compute \\(h\_\theta(x\_\tau, \tau)\\). We have revised our claim about the time complexity of APNDC and EPNDC to: "This nearly free ensemble can be executed with only one more forward pass of UNet."
---
Rebuttal Comment 1.1:
Comment: Thanks for editing the article and solving my confusion. Overall, it's a great paper!
---
Reply to Comment 1.1.1:
Comment: Thank you for your kind words and for taking the time to review our paper. We're glad that our revisions were helpful and that you enjoyed the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Decision Mamba: Reinforcement Learning via Hybrid Selective Sequence Modeling | Accept (poster) | Summary: This paper investigates an emerging foundation model, Mamba, in Reinforcement Learning (RL) scenarios and compares it with Transformer in terms of effectiveness and efficiency. The authors find that in-context RL methods with Mamba as the backbone are generally more efficient than Transformer, but there is no significant improvement in effectiveness. Then, this paper proposes a Hybrid Mamba (HM) with the merits of transformers and Mamba in high-quality prediction and long-term memory. Finally, this paper conducts experiments on three benchmarks to exhibit its improved effectiveness and efficiency.
Strengths: 1. The paper is commendably well-written and coherent, effectively explaining complex ideas in an accessible manner. The authors explored the potential of the widely discussed model Mamba in the context of RL and compared it with Transformer in terms of effectiveness and efficiency.
2. The authors proposed a novel hybrid model that inherits the merits of both Transformer and Mamba in a goal-conditional manner. The main advantage of the hybrid structure appears when the time horizon is very long, as in the D4RL tasks, or when several episodes/trials are required for good in-context learning, as in the larger Dark Room and Tmaze environments.
3. HM improves training and inference speed by reducing the horizon of the transformer model. This can be particularly important in applications such as robotics, which require high-frequency control.
4. The experimental evaluation, meticulously designed to include several baselines and diverse tasks, demonstrates the algorithm's strengths.
Weaknesses: 1. The baseline AD (Mamba) in Figure 2 and the baseline DM in Figure 3, which appear to be AD (Transformer) and DT variants, are crucial for the readers' understanding of how Mamba replaces the Transformer architecture. However, the lack of explanation of these two baselines in the experimental setup section might confuse readers.
2. Some experimental settings are not explained clearly. In Section 5.3, the authors do not explain what GPU device they used. Although the device is introduced in Appendix A, it is recommended that it be explained clearly in the main text.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In Table 1 and Table 2, the author used AD (Mamba) as the primary baseline. However, in Figure 3, the author used the DM baseline instead. What is the main difference between AD (Mamba) and DM?
2. The experiments demonstrated that the online testing of HM in the long-term task is 28 times faster than the transformer-based method. However, can this hybrid structure also inherit Mamba's high efficiency in terms of training cost?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The author discusses limitations and potential negative societal impacts in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are particularly encouraged that Reviewer UASH finds our method effective.
### [I].Reply to the weakness
>**[1/2]W1. The baseline AD (Mamba) in Figure 2 and the baseline DM in Figure 3, which appear to be AD (Transformer) and DT variants, are crucial for the readers' understanding of how Mamba replaces the Transformer architecture. However, the lack of explanation of these two baselines in the experimental setup section might confuse readers.**
Thanks for the reviewer's suggestions. The difference between AD (Mamba) and DM is whether the context covers multiple trajectories. We highlight the advantages and disadvantages of Mamba by comparing it with two transformer-based methods, AD (Transformer) and DT. In the revised manuscript, we will add a more detailed explanation of these two baselines to the experimental setup in Section 5.1.
>**[2/2]W2. Some experimental settings are not explained clearly. In Section 5.3, the authors do not explain what GPU device they used. Although the device is introduced in Appendix A, it is recommended that it be explained clearly in the main text.**
Experiments are carried out on NVIDIA GeForce RTX 3090 GPUs and NVIDIA A10 GPUs. Besides, the CPU type is Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz. As suggested, we will emphasize the device in the revised manuscript.
### [II].Reply to the questions
>**[1/2]Q1. In Table 1 and Table 2, the author used AD (Mamba) as the primary baseline. However, in Figure 3, the author used the DM baseline instead. What is the main difference between AD (Mamba) and DM?**
The main difference between AD (Mamba) and DM is whether the context covers multiple trajectories. Since the Tmaze requires the policy to recall the first step at the current trajectory, we show the DT and DM comparison results in Figure 3. In contrast, the Grid World and D4RL tasks require the policy to improve itself based on the historical trajectories. Thus, we use AD (Mamba) as the primary baseline.
>**[2/2]Q2. The experiments demonstrated that the online testing of HM in the long-term task is 28 times faster than the transformer-based method. However, can this hybrid structure also inherit Mamba's high efficiency in terms of training cost?**
Yes, we are glad that the reviewer found our method also highly efficient in terms of training cost. Assume the sequence length is $n$. The computational complexity of the transformer is $O(n^2)$. In HM, the sequence is divided into a hierarchical structure. In the Mamba level, the sequence length is $\frac{n}{c}$, and the complexity is $O(\frac{n}{c})$, where the hyperparameter $c$ denotes the timestep interval at which each sub-goal is set to guide the transformer. In the transformer level, the sequence is divided into $\frac{n}{c}$ subsequences of length $c$, and the complexity becomes $O(\frac{n}{c}\cdot c^2)=O(nc)$. As the sequence length $n$ increases, the computational complexity of HM will be significantly lower than that of the transformer-based method, and thus, the training will be faster.
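A back-of-envelope operation count illustrating this comparison, with constants dropped and the standard assumptions that attention cost is quadratic and Mamba cost is linear in sequence length (the values of $n$ and $c$ below are made up):

```python
def transformer_cost(n):
    return n * n                       # full self-attention over n tokens

def hybrid_cost(n, c):
    mamba = n // c                     # linear-cost Mamba pass over n/c sub-goal tokens
    transformer = (n // c) * c * c     # n/c transformer windows of length c each
    return mamba + transformer

n, c = 10_000, 100
# For fixed c the hybrid count grows linearly in n, the transformer quadratically.
assert hybrid_cost(n, c) < transformer_cost(n)
```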
---
Rebuttal Comment 1.1:
Title: Looking forward to your reply
Comment: Dear Reviewer UASH,
We value your positive feedbacks and constructive suggestions for our paper and sincerely appreciate your effort in reviewing it. We hope we have effectively addressed all the concerns raised. As the end of the discussion is approaching, we are wondering if there are any additional potential clarifications or suggestions that you think would help us improve this manuscript.
Thank you again for your dedicated review and invaluable insights.
Kind regards,
Paper5823 Authors | Summary: This paper presents Hybrid Mamba (HM), a method that combines the Mamba model and Transformer to enhance reinforcement learning (RL) performance. HM leverages Mamba to generate high-value sub-goals, which then condition the transformer, leading to significant improvements in online testing efficiency and task-solving capabilities.
Strengths: 1. The paper is well-written and clear to read.
2. HM significantly accelerates testing speed, achieving up to 28 times faster results than baseline methods.
3. HM demonstrates superior performance across various benchmarks, such as D4RL, Grid World, and Tmaze.
Weaknesses: 1. This paper claims to present an in-context RL approach. The motivation of this paper concerns the problems encountered with the gradient-free in-context approach (line 28), where the policy network does not require parameter updates. However, this paper uses a global update approach, which is closer to gradient-based and conditional-based offline RL. It seems to contradict the original intention of this paper.
2. HM benefits from using a powerful subgoal encoder (Mamba in this case) and conditioning the policy network with subgoals. The performance improvement is expected and unsurprising due to the advantages inherent in conditional-based RL algorithms. Hence, it is necessary to further explain the unique contributions of combining Mamba and causal transformer in this paper.
3. If the sub-goal encoder are replaced with other advanced and efficient sequence encoders (e.g., flash-attention1/2 [1,2], x-lstm [3]), would it also yield better or more efficient performance?
4. The experiments demonstrating HM's efficacy in capturing long-term dependencies are unconvincing. Achieving good results in tasks with an arbitrary horizon (e.g., Tmaze) does not necessarily prove effective long-term memory embedding. It is crucial to test the stability and performance of HM with varying horizon lengths or other length-based settings. For example, Mamba's original paper [4] demonstrated the ability to capture long-term dependencies through scaling laws.
5. Could the authors clarify in which specific aspects HM's training time is faster than DT's? Since HM appears to be a combination of Mamba and DT.
6. There are parts of the paper that are not clearly explained. For instance, in lines 228-233, it is mentioned that the transformer predicts a c-step action sequence (named $a_1$ here) through the sub-goal $z_t$ and another c-step action sequence (named $a_2$) through valuable sub-goals from offline data. How are $a_1$ and $a_2$ subsequently updated or processed?
7. (minor) The paper contains some typos and inconsistencies in tense usage. For example, in the related work section, the part on Mamba uses the present tense, while the part on in-context RL uses the past tense. These should be corrected for consistency. In addition, what is the meaning of the different Gaussian distribution figures in Figure 1?
*Reference:*
[1] Dao T, Fu D, Ermon S, et al. Flashattention: Fast and memory-efficient exact attention with io-awareness. NeurIPS 2022.
[2] Dao T. Flashattention-2: Faster attention with better parallelism and work partitioning. ICLR 2024.
[3] Beck M, Pöppel K, Spanring M, et al. xLSTM: Extended Long Short-Term Memory. arXiv 2024.
[4] Gu A, Dao T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weakness. If I have misunderstood some parts of the paper, I welcome corrections and further discussion.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors raise some limitations, for example, how to control the setting of the hyperparameter $c$, which is not addressed in this paper but is claimed to be solved in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's positive appraisal, insightful comment, and criticism of our paper.
### [I].Reply to the Weakness
>**[1/7]W1. This paper claims to present a in-context RL approach. The motivation of this paper is concerned with the problems encountered with the no-gradient updates in-context approach (line 28), where the policy network does not require parameter updates. However, this paper uses a global update approach, which is closer to gradient-based and conditional-based offline RL. It seems to contradict the original intention of this paper.**
We want to clarify the setting of transformer-based in-context RL, which is divided into two phases: training and testing. In the training phase, in-context RL designs the architecture of policy models and trains the model by predicting the action tokens in carefully designed sequences, such as multiple trajectories in AD [1]. This process involves parameter updates and can be regarded as a kind of pre-training. In the testing phase, we deploy the model in a new task and can only make inferences in the context obtained by interacting with the environment without involving parameter updates.
In the method section, we introduced the architecture of HM and how to construct sequences for training. Then, at the end of the section, we summarized its testing process. We roll out the HM with multiple trajectories and report the return of the last trajectory. In-context learning is reflected by the HM, which can predict better actions by recalling historical trajectories from the context. In Figure 2 in the manuscript, the ascending reward curves are achieved by learning in the contexts without requiring parameter updates.
>**[2/7]W2. HM benefits from using a powerful subgoal encoder (Mamba in this case) and conditioning the policy network with subgoals. The performance improvement is expected and unsurprising due to the advantages inherent in conditional-based RL algorithms. Hence, it is necessary to further explain the unique contributions of combining Mamba and causal transformer in this paper.**
Thanks for the reviewer's suggestion. Mamba is well known for achieving effectiveness competitive with transformer-based models while having linear complexity in sequence length. Its outstanding performance in NLP and visual tasks has attracted increasing attention. Therefore, our first contribution is to explore the potential of Mamba in decision-making tasks. In traditional RL tasks, we demonstrate that the Mamba model is more efficient but slightly inferior to the transformer model in terms of effectiveness. We therefore propose the HM method, which inherits the merits of both Mamba and transformers, achieving high effectiveness and efficiency in long-term RL tasks. HM is inspired by the idea of conditional RL, showing that this hybrid structure is a promising way to leverage the strengths of Mamba and the Transformer while compensating for their respective weaknesses.
>**[3/7]W3. If the sub-goal encoder is replaced with other advanced and efficient sequence encoders (e.g., flash-attention1/2, x-lstm), would it also yield better or more efficient performance?**
It is possible to use other sequence encoders to generate sub-goals, such as the flash-attention and xLSTM suggested by the reviewer. xLSTM was proposed only recently, and we could not find an open-source implementation, so we tested HM by replacing the Mamba module with flash-attention. Due to limited time and compute, we report the results of flash-attention-1 across three random seeds.
**Table 1. The performance of different sub-goal encoders.**
|Sub-goal encoder|HalfCheetah-Med-Expert|HalfCheetah-Med|HalfCheetah-Med-Replay|Hopper-Med-Expert|Hopper-Med|Hopper-Med-Replay|Walker2d-Med-Expert|Walker2d-Med|Walker2d-Med-Replay|
|-|-|-|-|-|-|-|-|-|-|
|Mamba|96.12 $\pm$ 0.28|45.45 $\pm$ 0.35|45.26 $\pm$ 0.4|117.19 $\pm$ 0.65|83.15 $\pm$ 0.63|98.36 $\pm$ 0.51|118.21 $\pm$ 0.56|88.29 $\pm$ 0.76|95.66 $\pm$ 1.16|
|flash-attention|94.91 $\pm$ 0.18|44.22 $\pm$ 0.16|43.98 $\pm$ 0.35|116.09 $\pm$ 0.68|81.58 $\pm$ 0.33|96.85 $\pm$ 0.64|117.19 $\pm$ 0.49|86.26 $\pm$ 0.61|94.12 $\pm$ 0.88|
The results show that flash-attention is slightly inferior to Mamba on D4RL tasks. This is because the process of predicting sub-goals is non-autoregressive. As we clarified in the manuscript, the sub-goal is represented by a vector that is not explicitly present in the training data, and this non-autoregressive process may not fully exploit the power of flash-attention. In contrast, although Mamba was also first proposed for NLP tasks, its structure is closer to an RNN, a commonly used architecture in RL. Similarly, xLSTM should also perform efficiently, as it is likewise an RNN-like model.
>**[4/7]W4. The experiments demonstrating HM's efficacy in capturing long-term dependencies are unconvincing. Achieving good results in tasks with an arbitrary horizon (e.g., Tmaze) does not necessarily prove effective long-term memory embedding. It is crucial to test the stability and performance of HM with varying horizon lengths or other length-based settings.**
There is a misunderstanding about our testing of HM's ability to recall long-term memory. We followed the setting of the previous in-context RL method AMAGO [2] and extended the horizon of tasks until we ran out of GPU memory. The horizon for HM also varies with the tasks, from short-horizon to long-horizon. Although different from indicators such as perplexity or accuracy in NLP tasks, the cumulative reward (return) in RL plays a similar role and is used to evaluate the model's performance. The results in Figure 3(a) of the manuscript show that HM remains stable and performs well across varying horizon lengths. In summary, we want to show that HM can effectively handle tasks that require recalling long-term memory, as tested by previous methods.
Due to word limitations, we will answer the remaining questions in the next comment.
---
Rebuttal 2:
Title: Rebuttal of the remaining questions
Comment: Due to word limitations, we answer the remaining questions in this comment. Please review this response after the rebuttal part.
### [I].Reply to the Weakness
>**[5/7]W5.Could the authors clarify in which specific aspects HM's training time is faster than DT's? Since HM appears to be a combination of Mamba and DT.**
Thanks for the reviewer's suggestion. Although HM combines Mamba and DT, its sequence length is the same as that of DT. Assume the sequence length is $n$. The computational complexity of DT is $O(n^2)$. In HM, the sequence is divided into a hierarchical structure. In the Mamba level, the sequence length is $\frac{n}{c}$, and the complexity is $O(\frac{n}{c})$, where the hyperparameter $c$ denotes the timestep interval at which each sub-goal is set to guide the DT. In the DT level, the sequence is divided into $\frac{n}{c}$ subsequences of length $c$, and the complexity becomes $O(\frac{n}{c}\cdot c^2)$. As the sequence length $n$ increases, the computational complexity of HM will be significantly lower than that of DT, and thus, the training will be faster. We will incorporate the above complexity analysis into the revised manuscript.
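The complexity argument above can be checked with a back-of-the-envelope operation count (illustrative Python, not the authors' code; the counts stand in for attention operations only):

```python
# Rough count of attention operations: DT attends over the full length-n
# sequence (O(n^2)); HM splits it into n/c chunks of length c for the
# transformer (O((n/c) * c^2) = O(n*c)) plus a linear-time Mamba pass over
# the n/c sub-goal tokens (O(n/c)).

def dt_ops(n: int) -> int:
    return n * n  # full self-attention over n tokens

def hm_ops(n: int, c: int) -> int:
    mamba_level = n // c          # linear scan over n/c sub-goal tokens
    dt_level = (n // c) * c * c   # n/c sub-sequences, each with c^2 attention
    return mamba_level + dt_level

n, c = 4096, 16
print(dt_ops(n))     # 16_777_216 operations for DT
print(hm_ops(n, c))  # 65_792 operations for HM
```

For fixed $c$, HM's cost grows linearly in $n$ rather than quadratically, which matches the claim that training becomes faster as sequences lengthen.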
>**[6/7]W6.There are parts of the paper that are not clearly explained. For instance, in lines 228-233, it is mentioned that the transformer predicts a c-step action sequence (named $a_1$ here) through the sub-goal $z_t$ and another c-step action sequence (named $a_2$) through valuable sub-goals from offline data. How are $a_1$ and $a_2$ subsequently updated or processed?**
In lines 228-233, we introduce the training process of HM using a sub-goal $z_t$ as an example. Assume a c-step sequence $(s_g,s_t,a_t^*,s_{t+1},a_{t+1}^*,\dots,s_{t+c-1},a_{t+c-1}^*)$ exists in the offline data, where $s_g$ is the valuable sub-goal selected by the weighted average of accumulated rewards. In the training process, the Mamba model in HM generates a sub-goal $z_t$ and guides the transformer to predict the c-step action sequence (named $a_1$ here). Meanwhile, we replace $z_t$ with $s_g$ and predict the c-step action sequence again (named $a_2$ here). Finally, we update the model parameters so that both predicted action sequences ($a_1$ and $a_2$) are close to $(a_t^*,a_{t+1}^*,\dots,a_{t+c-1}^*)$. In the testing process, the trained HM is deployed in a new environment without parameter updates. This process has no access to $s_g$; instead, HM relies on generating better sub-goals $z$ from the context to improve its performance.
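The dual-prediction objective described above can be sketched as follows (illustrative code, not the authors' implementation; `predict_actions` is a hypothetical stand-in for the transformer's action head):

```python
import numpy as np

rng = np.random.default_rng(0)
c, act_dim = 4, 6
a_star = rng.normal(size=(c, act_dim))  # ground-truth c-step actions a*

def predict_actions(sub_goal_name):
    # stand-in for the transformer's action head; a real model would
    # condition on the sub-goal, which this sketch only records by name
    return a_star + 0.1 * rng.normal(size=a_star.shape)

a_1 = predict_actions("z_t")  # conditioned on Mamba's generated sub-goal
a_2 = predict_actions("s_g")  # conditioned on the offline high-value sub-goal

# both branches are regressed toward the same target action sequence
loss = np.mean((a_1 - a_star) ** 2) + np.mean((a_2 - a_star) ** 2)
```

The key point is that one shared target supervises both the $z_t$-conditioned and the $s_g$-conditioned branches, pushing Mamba's sub-goals toward the high-value ones.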
>**[7/7]W7.minor) The paper contains some typos and inconsistencies in tense usage. For example, in the related work section, the section on Mamba uses the present tense, while the section on in-context RL uses the past tense. These should be corrected for consistency. In addition, what's the meaning of the different gaussian distribution figures in Figure 1?**
Thanks for the reviewer's corrections. We will revise the typos and inconsistencies in tense usage. In Figure 1 in the manuscript, the different Gaussian distributions indicate that Mamba generates a different subgoal for the transformer's predictions every c steps. It is possible to predict the sub-goal directly by using a fixed representation. However, sampling from a multi-variate Gaussian distribution introduces variability and flexibility in representing information. This approach can generate diverse sub-goals or latent variables, allowing for exploration and capturing more nuanced aspects of the input data.
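The Gaussian sub-goal sampling discussed above can be sketched with the reparameterization trick (a minimal sketch; the diagonal-Gaussian form, dimensions, and names here are assumptions for illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                                # assumed sub-goal dimension

# Mamba would output the distribution parameters every c steps
mu = rng.normal(size=d)               # predicted mean
log_sigma = 0.1 * rng.normal(size=d)  # predicted log standard deviation

# reparameterized sample: z = mu + sigma * eps, eps ~ N(0, I)
eps = rng.standard_normal(d)
z = mu + np.exp(log_sigma) * eps      # sampled sub-goal

# a "fixed representation" baseline would simply use mu directly
z_fixed = mu
```

Sampling (rather than using `z_fixed`) is what introduces the variability and exploration mentioned in the reply.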
[1] In-context Reinforcement Learning with Algorithm Distillation. ICLR 2023.
[2] Amago: Scalable in-context reinforcement learning for adaptive agents. ICLR 2024.
---
Rebuttal Comment 2.1:
Title: Looking forward to your reply
Comment: Dear Reviewer YUg4,
We value your positive feedback and constructive suggestions for our paper and sincerely appreciate your effort in reviewing it. We hope we have effectively addressed all the concerns raised. As the end of the discussion period is approaching, we wonder whether there are any additional clarifications or suggestions that you think would help us improve this manuscript.
Thank you again for your dedicated review and invaluable insights.
Kind regards,
Paper5823 Authors | Summary: This paper investigates utilizing the Mamba [1] architecture for the in-context RL task. Addressing this task with the Transformer architecture is effective but very inefficient due to the Transformer's quadratic computation overhead. Mamba can reduce this overhead dramatically while largely sustaining performance. The application of State-Space Models (SSMs) to the in-context RL task was studied in [2], but different from [2], this work combines Mamba and a Transformer as high-level memory and low-level (short-term) memory, respectively. Additionally, by having Mamba predict sub-goals for the Transformer's short-term memory, the authors improve performance. Through this modeling, they achieve better performance than previous works while improving efficiency.
[1] Gu, Albert, and Tri Dao. "Mamba: Linear-time sequence modeling with selective state spaces." arXiv preprint arXiv:2312.00752 (2023).
[2] Lu, Chris, et al. "Structured state space models for in-context reinforcement learning." Advances in Neural Information Processing Systems 36 (2024).
Strengths: - Appropriate modeling is applied in this study. While the effectiveness of hybrid modeling of SSMs and local Attention has been previously explored in [1], the authors effectively implement this concept for the In-Context RL task with new functionalities, such as predicting high-value sub-goals.
- The introduction and methodology sections are well written. The motivation is clearly articulated, and the logical flow of their method proposal is coherent. The empirical analysis comparing Mamba and Transformer in RL tasks convincingly demonstrates the need for more advanced modeling.
- The paper provides extensive empirical analyses. It shares experimental results on multiple benchmarks, including ablation studies and performance changes with varying hyperparameter values.
[1] De, Soham, et al. "Griffin: Mixing gated linear recurrences with local attention for efficient language models." arXiv preprint arXiv:2402.19427 (2024).
Weaknesses: - The high-level encoding is done by encoding the intervalled trajectories (e.g., every $c$ -th trajectory), which might miss important information in the middle of the interval.
- The section on Hybrid Mamba with Valuable Sub-goals is initially confusing, especially regarding the relationship between Mamba’s sub-goal prediction and the collected valuable sub-goals. Discussing this relationship at the beginning of the Valuable Sub-goal section could help readers understand the content more easily.
- One of the experimental results differs from my expectations, but the paper does not provide an analysis for this. I will address this in the Questions section.
Technical Quality: 4
Clarity: 4
Questions for Authors: - I am curious why AD (transformer) shows worse performance than HM. I thought AD (transformer) performance would be the upper bound of HM while HM is more efficient. However, AD (transformer) performance is generally worse than HM in your tests, especially for Grid World in Figure 2. Why is AD (transformer) performance poor in Grid World? Did you use a smaller context size for the Grid World test? If not (using the same context size), what could be the reasons for the significant performance gap?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors properly addressed their limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are particularly encouraged that Reviewer wVw1 finds our method effective.
### [I].Reply to the Weakness
>**[1/2]W1. The high-level encoding is done by encoding the intervalled trajectories (e.g., every $c$-th trajectory), which might miss important information in the middle of the interval.**
It is possible to provide complete trajectories to the high-level encoding. In fact, we tested non-intervalled trajectories during the paper's preparation and found that this setting did not bring the expected improvement. We designed an HM variant that constructs a context sequence containing the complete trajectories but still generates sub-goals every $c$ steps.
**Table 1. The performance of intervalled trajectories and non-intervalled trajectories.**
| Trajectories | HalfCheetah-Med-Expert | HalfCheetah-Med| HalfCheetah-Med-Replay | Hopper-Med-Expert | Hopper-Med| Hopper-Med-Replay | Walker2d-Med-Expert | Walker2d-Med| Walker2d-Med-Replay |
|-|-|-|-|-|-|-|-|-|-|
| Intervalled trajectories | 96.12 $\pm$ 0.28 | 45.45 $\pm$ 0.35 | 45.26 $\pm$ 0.43 | 117.19 $\pm$ 0.65 | 83.15 $\pm$ 0.63 | 98.36 $\pm$ 0.51 | 118.21 $\pm$ 0.56 | 88.29 $\pm$ 0.76 | 95.66 $\pm$ 1.16 |
| Non-intervalled trajectories | 95.98 $\pm$ 0.21 | 45.22 $\pm$ 0.18 | 44.97 $\pm$ 0.28 | 117.21 $\pm$ 0.61 | 82.98 $\pm$ 0.65 | 98.12 $\pm$ 0.38 | 117.97 $\pm$ 0.62 | 88.25 $\pm$ 0.68 | 95.62 $\pm$ 0.96 |
Table 1 shows no significant performance gap between the two variants of HM. This is because (1) the transformer retains the complete short-term trajectory for action prediction, and (2) the cross-episodic context in Mamba is long enough that the uniformly sampled trajectory preserves sufficient information for predicting the sub-goals.
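The two context constructions compared in Table 1 can be sketched as follows (illustrative, not the authors' code; integers stand in for transitions):

```python
# Intervalled: keep every c-th transition for the high-level Mamba context.
# Non-intervalled: keep all transitions. Sub-goals are generated every c
# steps in both variants.
def high_level_context(trajectory, c, intervalled=True):
    return trajectory[::c] if intervalled else list(trajectory)

traj = list(range(12))  # stand-in for 12 transitions
print(high_level_context(traj, c=4))                     # [0, 4, 8]
print(high_level_context(traj, c=4, intervalled=False))  # all 12 transitions
```

The intervalled variant shortens the high-level sequence by a factor of $c$, which is where the efficiency gain comes from.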
>**[2/2]W2. The section on Hybrid Mamba with Valuable Sub-goals is initially confusing, especially regarding the relationship between Mamba’s sub-goal prediction and the collected valuable sub-goals. Discussing this relationship at the beginning of the Valuable Sub-goal section could help readers understand the content more easily.**
Thanks for the reviewer's suggestion. We will explain their relationship in the revised manuscript. In summary, the valuable sub-goals collected can be regarded as additional signals, and we hope that the Mamba model can generate similar sub-goals to prompt the transformer's prediction.
### [II].Reply to the Questions
>**[1/1]Q1. I am curious why AD (transformer) shows worse performance than HM. I thought AD (transformer) performance would be the upper bound of HM while HM is more efficient. However, AD (transformer) performance is generally worse than HM in your tests, especially for Grid World in Figure 2. Why is AD (transformer) performance poor in Grid World? Did you use a smaller context size for the Grid World test? If not (using the same context size), what could be the reasons for the significant performance gap?**
As far as we know, AD was the first method to introduce in-context learning into transformer-based RL. It uses the simplest (state, action, reward) tuples to form the context sequence, so its performance is not outstanding, and it usually serves as a baseline for subsequent advanced methods, such as Amago [1] and DPT [2]. In the Grid World, both AD and our method HM use the same context covering the same number of trajectories. The performance gap can be attributed to two aspects: token modality and model architecture.
Our approach adds two token types compared to AD: the c-step cumulative reward and the done flag. Since the environments provided by Grid World are mostly sparsely rewarded, cumulative-reward tokens are more advantageous. On the other hand, recent methods such as AT [3] demonstrated that done-flag tokens are beneficial for in-context RL. Regarding model architecture, the hierarchical architecture of HM also brings performance improvements, analogous to the fact that hierarchical RL is better at handling long-horizon and sparse-reward tasks.
According to the reviewer's suggestion, we will incorporate the above analysis into the revised manuscript.
[1]Amago: Scalable in-context reinforcement learning for adaptive agents. ICLR 2024.
[2]Supervised Pretraining Can Learn In-Context Reinforcement Learning. NeurIPS 2023.
[3]Emergent Agentic Transformer from Chain of Hindsight Experience. ICML 2023.
---
Rebuttal Comment 1.1:
Title: Reply to the rebuttal
Comment: Thank you to the authors for the rebuttal. The authors properly addressed my concerns and I keep my score.
---
Reply to Comment 1.1.1:
Title: Appreciation to Reviewer wVw1
Comment: Dear Reviewer wVw1,
We are grateful for your constructive suggestions and believe that incorporating the corresponding revision into the manuscript will significantly improve this paper. Thank you again for reviewing our paper and giving valuable feedback!
Kind regards,
Paper5823 Authors | Summary: The paper proposes Hybrid Mamba (HM) for in-context RL. Existing in-context RL methods are predominantly based on the Transformer architecture. Transformers come with quadratic complexity of self-attention and are computationally costly. Consequently, the authors propose a hybrid architecture that uses Mamba to compute sub-goals from long-context, which are fed into a low-level Transformer policy. The authors conduct experiments on grid-worlds and D4RL to evaluate their method.
Strengths: **Relevance**
The paper aims at deploying the Mamba architecture for in-context RL, which is very relevant given the quadratic complexity of the Transformer architecture.
This results in clear benefits in terms of time complexity.
**Experimental results**
Empirical results on simple gridworld environments and D4RL seem convincing and their method exhibits significant gains compared to Transformers.
Weaknesses: **Presentation**
The methodology raises some questions and should be improved, in particular:
- What is the reasoning behind sampling the sub-goal from a multi-variate Gaussian?
- How does this compare to using a fixed representation? (e.g., similar to CLS token)
- Why is the done-flag in Hybrid Mamba necessary? Do other methods (e.g., AD [1]) use this as well?
- What does “Extra high-value states” mean?
- What is the intuition behind removing actions from the Mamba context?
- What effect would dropping actions have in other methods?
Furthermore, the construction of “valuable sub-goals” is unclear.
One way to improve clarity would be to shorten the section on preliminaries and instead add more details to the Method section.
Figure 2 and Table 2 are missing the performance curves/scores for HM without valuable subgoals.
Finally, Figure 1 can be improved to enhance clarity.
**Significance of results**
The authors evaluate primarily on simple grid-world environments and rather simple robotics tasks. However, it is unclear how well HM generalizes to more complex tasks as used in other works [2].
**Evaluation**
The authors change their evaluation methodology from improvement curves on gridworlds (Figure 2) to average performance scores on D4RL (Table2).
On D4RL, HM seems to clearly outperform other methods.
However, the authors do not show in-context improvements, which raises the question whether HM actually learns to improve in-context. Can the authors clarify why no in-context improvement curves are shown for D4RL?
**Ablation studies**
Some ablation studies are missing and would add more depth to understanding the proposed method, in particular:
- What is the impact on performance of including the done-flag in Mamba?
- What effect does it have on other methods?
- What is the impact on performance of removing the action condition in HM?
- What effect does the same intervention have on other methods?
[1] Laskin et al., In-context Reinforcement Learning with Algorithm Distillation, ICLR 2023
[2] Raparthy et al., Generalization to New Sequential Decision Making Tasks with In-Context Learning, ICML 2024
Technical Quality: 2
Clarity: 2
Questions for Authors: - Did the authors consider techniques such as key-value caching for improving inference speed of Transformers for results reported in Table 4?
- Why is Mamba worse in effectiveness (Table 1)? What is a particular (theoretical) reason for this? Why does Mamba shorten the training time?
- How does performance generally change, when making the models bigger? Do bigger models help on these tasks? How large are the considered models?
- How well does the construction of valuable sub-goals generalize to other environments (e.g., with sparse rewards)?
- How do in-context improvement curves look like on D4RL?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors highlight that setting the context length $c$ to a fixed value is a current limitation of their method. However, a notable limitation is missing, namely that their evaluation is limited to simple environments, and it is unclear how well HM performs on more complex or new tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive appraisal, insightful comments, and constructive criticism of our paper.
### [I].Reply to the Weakness
>**[1/6]W1.What is the reasoning behind sampling the sub-goal from a multi-variate Gaussian? How does this compare to using a fixed representation?**
It is possible to predict the sub-goal directly by using a fixed representation. The multivariate Gaussian distribution is inspired by the method of predicting continuous variables in RL tasks, such as SAC[1]. In fact, sampling from a multivariate Gaussian distribution introduces variability and flexibility in representing information. This approach can generate diverse sub-goals or latent variables, allowing for exploration and capturing more nuanced aspects of the input data.
>**[2/6]W2. Why is the done-flag in Hybrid Mamba necessary? Do other methods (e.g., AD) use this as well? What does “Extra high-value states” mean?**
Our use of the "done-flag" is inspired by Amago [2] and AT [3], which investigated contexts consisting of different tokens and different numbers of episodes in D4RL. Their ablation experiments found that the done-flag helps in-context RL improve its performance from multiple historical trajectories.
The "extra high-value state" denotes a state with a high weighted average of accumulated rewards over future steps. During the training phase, we introduce extra high-value states to associate the actions generated by the transformer and to help Mamba generate similar sub-goals that align with these states. On the other hand, although the high-value sub-goal usually appears at a future time step of the current trajectory, it may also appear in a historical trajectory, encouraging Mamba to extract information from the long-sequence context.
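Selecting a high-value state can be sketched as follows; the exponential discounting used as the weighting here is an assumption for illustration, not necessarily the exact weighting in the paper:

```python
import numpy as np

# Score each state by a (here, exponentially discounted) accumulation of its
# future rewards, then pick the state with the highest score as the
# "high-value" sub-goal candidate.
def discounted_returns(rewards, gamma=0.99):
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

rewards = [0.0, 0.0, 1.0, 0.0, 5.0]        # stand-in per-step rewards
scores = discounted_returns(rewards)        # one score per state
high_value_idx = int(np.argmax(scores))     # index of the high-value state
```

With this weighting, the state just before a cluster of future rewards scores highest, which matches the intuition of a sub-goal worth steering toward.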
>**[3/6]W3. What is the intuition behind removing actions from the Mamba context?**
We clarify that adding action tokens to the context of Mamba is possible. There are two reasons why we do not introduce action tokens. First, unlike the transformer, the Mamba model does not generate actions but rather representations of sub-goals. Because such sub-goals do not appear explicitly in the training data, Mamba cannot use them as input for autoregressive generation; we therefore omit action tokens from Mamba's input to distinguish this from the autoregressive generation of actions in the transformer. Second, our Mamba model predicts a sub-goal every $c$ steps, so its context is not composed of consecutive time steps. One-step actions can therefore hardly help Mamba predict future states multiple time steps ahead, as they mainly assist in predicting the next adjacent state. To be clear, we are not saying that actions are unimportant: in contexts of consecutive time steps, actions are essential tokens, and they are included in our transformer and in the other baselines.
>**[4/6]W4.The authors evaluate primarily on simple grid-world environments and rather simple robotics tasks. However, it is unclear how well HM generalizes to more complex tasks as used in other works.**
The grid world is one of the most commonly used benchmarks for in-context RL, used by, e.g., AMAGO [2], AD [4], and DPT [5]. This is because the grid world provides many tasks that are difficult to solve via zero-shot transfer, making it very suitable for testing a method's ability to learn from context. In addition, we also tested on large variants of the grid world, where task difficulty is greatly increased. Figure 2 in our manuscript shows that all baselines exhibit significant performance degradation on the large-variant tasks. We are also encouraged by the new work pointed out by the reviewer, published in ICML 2024, which exploits the potential of in-context RL in a completely different setting. Due to time constraints for the rebuttal, we will add their proposed benchmark in the revised manuscript.
>**[5/6]W5. On D4RL, HM seems to clearly outperform other methods. However, the authors do not show in-context improvements, which raises the question whether HM actually learns to improve in-context.**
In the current manuscript, we followed prior in-context RL work by showing different styles of results for D4RL [3] and the grid world [4]. We supplement the D4RL improvement curves in Figure 1 of the global PDF to avoid misunderstanding.
>**[6/6]W6.What is the impact on performance of including the done-flag in Mamba? What is the impact on performance of removing the action condition in HM?**
According to the reviewer's suggestions, we report the ablation studies on the done-flag and action condition. As we discussed in W2 and W3, the results in Table 1 show the corresponding conclusions. In addition, we show the improvement curve in Figure 1 in the global PDF.
**Table 1. The ablation study on the done-flag and action condition.**
|Token-flag|HalfCheetah-Med-Expert|HalfCheetah-Med|HalfCheetah-Med-Replay|Hopper-Med-Expert|Hopper-Med|Hopper-Med-Replay|Walker2d-Med-Expert|Walker2d-Med|Walker2d-Med-Replay|
|-|-|-|-|-|-|-|-|-|-|
| HM | 96.12 $\pm$ 0.28 | 45.45 $\pm$ 0.35 | 45.26 $\pm$ 0.43 | 117.19 $\pm$ 0.65 | 83.15 $\pm$ 0.63 | 98.36 $\pm$ 0.51 | 118.21 $\pm$ 0.56 | 88.29 $\pm$ 0.76 | 95.66 $\pm$ 1.16 |
| HM without done-flag | 93.95 $\pm$ 0.26 | 42.21 $\pm$ 0.38 | 41.92 $\pm$ 0.36 | 112.21 $\pm$ 0.62 | 78.89 $\pm$ 0.58 | 93.21 $\pm$ 0.32 | 113.57 $\pm$ 0.75 | 86.24 $\pm$ 0.65 | 90.52 $\pm$ 0.66 |
| HM with actions token | 96.03$\pm$ 0.12 | 45.51 $\pm$ 0.28 | 44.98 $\pm$ 0.18 | 117.22 $\pm$ 0.60 | 83.14 $\pm$ 0.65 | 98.25 $\pm$ 0.46 | 118.31 $\pm$ 0.58 | 88.15 $\pm$ 0.67 | 95.61 $\pm$ 1.05 |
Due to word limitations, we will answer the remaining questions in the next comment.
---
Rebuttal 2:
Title: Rebuttal of the remaining questions
Comment: Due to word limitations, we answer the remaining questions in this comment. Please review this part of the response after the rebuttal part.
### [II].Reply to the Questions
>**[1/5]Q1. Did the authors consider techniques such as key-value caching for improving inference speed of Transformers for results reported in Table 4?**
The transformer model we measured in Table 4 applies key-value caching: when calculating the action for a new time step, we do not recalculate the keys and values of previous time steps. However, even with caching, the transformer's total computational cost grows quadratically with sequence length, which makes it much slower than Mamba on long-sequence contexts.
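The point about caching can be checked with a simple operation count (illustrative sketch): caching avoids recomputation, but each new step still attends over all cached keys, so the total cost over a rollout remains quadratic.

```python
# Total attention operations over T decoding steps.
def total_attention_ops(T: int, cached: bool) -> int:
    if cached:
        # step t attends to t cached keys -> sum is T*(T+1)/2, still O(T^2)
        return sum(t for t in range(1, T + 1))
    # without caching, step t recomputes the full t x t attention
    return sum(t * t for t in range(1, T + 1))

T = 1000
print(total_attention_ops(T, cached=True))   # 500_500 — quadratic in T
print(total_attention_ops(T, cached=False))  # far larger, cubic in T
```

So KV caching gives a large constant-factor saving but does not change the asymptotic gap to a linear-time model like Mamba.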
>**[2/5]Q2. Why is Mamba worse in effectiveness (Table 1)? What is a particular (theoretical) reason for this? Why does Mamba shorten the training time?**
Mamba models long-sequence dependencies through an input-dependent selection mechanism. This mechanism acts independently on each token, so its computational complexity is lower than that of the pairwise attention mechanism. Mamba's computational cost therefore grows linearly with sequence length, making it naturally superior to transformers in training and inference speed. However, even under the MDP assumption, the states in RL tasks are not independent, which causes Mamba to perform worse than the transformer in terms of effectiveness.
>**[3/5]Q3. Do bigger models help on these tasks? How large are the considered models?**
As shown in the hyperparameters in Table 3 of the manuscript, our Transformer model follows the setting of the AT method [3], and the Mamba model follows the setting of the DM method [6]. According to the conclusions in the Mamba paper, larger models generally improve performance. Due to resource and time constraints, we will add tests of larger models in the revised manuscript. However, it is worth mentioning that HM, with the current model size, can handle these tasks well.
>**[4/5]Q4. How well does the construction of valuable sub-goals generalize to other environments (e.g., with sparse rewards)?**
The construction of valuable sub-goals also generalizes well to environments with sparse rewards. Figures 2 and 3 in the manuscript report the tasks in a sparse rewards setting, showing sub-goals' effectiveness. In particular, we will try our best to test HM on the benchmarks suggested by the reviewers in the revised manuscript.
>**[5/5]Q5. How do in-context improvement curves look like on D4RL?**
As we clarify in W5, the curves of D4RL are shown in Figure 1 in the global PDF file.
[1]Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. ICML 2018.
[2]Amago: Scalable in-context reinforcement learning for adaptive agents. ICLR 2024.
[3]Emergent Agentic Transformer from Chain of Hindsight Experience. ICML 2023.
[4]In-context Reinforcement Learning with Algorithm Distillation. ICLR 2023.
[5]Supervised Pretraining Can Learn In-Context Reinforcement Learning. NeurIPS 2023.
[6]Decision Mamba: Reinforcement learning via sequence modeling with selective state spaces. 2024.
---
Rebuttal Comment 2.1:
Title: Looking forward to your reply
Comment: Dear Reviewer yEub,
We value your positive feedback and constructive suggestions for our paper and sincerely appreciate your effort in reviewing it. We hope we have effectively addressed all the concerns raised. As the end of the discussion period is approaching, we wonder whether there are any additional clarifications or suggestions that you think would help us improve this manuscript.
Thank you again for your dedicated review and invaluable insights.
Kind regards,
Paper5823 Authors | Rebuttal 1:
Rebuttal: Dear Reviewers,
We are very grateful to the reviewers for their valuable suggestions, which further improve our work. We provide the learning curves of HM and ablation studies on D4RL tasks in the submitted one-page PDF.
Thank you again for your careful review and helpful comments.
Kind regards,
Paper5823 Authors
Pdf: /pdf/5b73c1a2b9a4570135fba98900749c08a497e705.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Communication Efficient Distributed Training with Distributed Lion | Accept (poster) | Summary: The paper introduces Distributed Lion, a variant of the Lion optimizer, tailored for distributed training environments. Lion, known for its memory and computational efficiency, is adapted to reduce communication costs between workers and a central server. This is achieved by communicating binary or low-precision vectors rather than high-precision floating-point vectors. The paper presents theoretical convergence properties and empirical results that demonstrate Distributed Lion’s robustness and efficiency across various tasks, worker counts, and batch sizes. It shows comparable performance to standard Lion or AdamW optimizers but with significantly reduced communication bandwidth.
Strengths: + Innovation in Communication Efficiency: The use of binary or low-precision vectors for communication significantly reduces bandwidth requirements, which is a critical factor in distributed training.
+ Theoretical Validation: The paper provides a solid theoretical foundation confirming the convergence properties of Distributed Lion.
+ Empirical Evidence: Extensive experiments demonstrate the robustness and efficiency of Distributed Lion across a variety of tasks, making a strong case for its practical applicability.
Weaknesses: - Incompatible with Allreduce: after converting the gradients to binary or low-precision, Allreduce cannot be used for gradient synchronization. One of my concerns is about its communication efficiency in real-world distributed systems, especially training with a high number of workers.
- Computation Overhead: While the communication cost is reduced, the overhead of converting updates to binary or low-precision vectors and back might offset some of the gains in certain scenarios. It would help if an end-to-end training throughput comparison were reported.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the fundamental difference between distributed Lion and SIGNUM-like algorithms?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Incompatible with all reduce.**
Our current algorithm indeed requires a customized all_reduce, but we believe the code should be relatively simple to apply to various real-world applications. Additionally, we are exploring ways to optimize the communication process for low-bit information.
You are right: due to the limited operator support in practical all-reduce implementations such as PyTorch's, which use ring-based reduction, it is hard to broadcast low-bit information and preserve it through the whole ring. This issue affects all our baselines as well. Our current practical solution is to pack and encode the binary information into a low-bit tensor and then use an int8 tensor as a container to broadcast through the ring all-reduce. This still compresses the information by 2-4x depending on the number of workers; we show the empirical results under different numbers of workers below, together with question 2.
We look forward to core updates from the NCCL library or PyTorch team supporting more low-bit-friendly operators or data types such as fp4 and fp8 during communication, which would greatly help low-bit communication algorithms.
**2. Computation overhead.**
Please check the common response.
**3. The difference between Distributed Lion and SIGNUM-like algorithms.**
The main differences are two-fold:
- Lion has an additional reweighting before the sign(), as in sign( $\beta_1 m_t + (1 - \beta_1) g_t$ ), and a weight decay term, both of which are crucial for performance improvement. This unique form gives rise to Lion's special $\ell_\infty$-constrained optimization view.
- Usually, when applying SIGNUM to distributed training, we aggregate the gradients and then apply the sign operation. In distributed Lion, we transmit the signed results and take their majority vote, not the gradients. The weight decay, as a result, is applied locally in distributed Lion. | Summary: Large-scale AI model training places increasingly high demands on time, cost, and environmental impact, so developing efficient optimizers is crucial. As an emerging optimizer, Lion has advantages in memory, computation, and sample efficiency compared with AdamW. The paper proposes Distributed Lion, an innovative adaptation of the Lion optimizer to distributed training environments. Exploiting the sign operations in Lion, Distributed Lion only requires binary or low-precision vectors to be communicated between worker nodes and the central server, significantly reducing communication costs.
Strengths: 1. Distributed Lion significantly reduces communication overhead by communicating only binary or low-precision vectors between workers, which is particularly beneficial for large-scale distributed training.
2. The paper provides theoretical analysis to prove the convergence of Distributed Lion.
3. Experimental results show that Distributed Lion can achieve comparable performance to the standard Lion or AdamW optimizer while reducing communication bandwidth.
Weaknesses: 1. The actual update applied to local worker parameters uses gradients, while the communicated messages are signs. While the theoretical analysis shows that this update scheme guarantees convergence, the update style resembles quantization. Important baselines such as QSGD and SignSGD are missing.
2. The performance of Distributed Lion can be sensitive to hyperparameter choices, especially those related to communication and aggregation strategies.
3. The code is not provided. Thus the reproducibility of the experiments is weakened.
4. The experimental performance on CIFAR-10 is very low. Considering that validation accuracy on CIFAR-10 is well known to reach 94%, the proposed results of around 90% are surprising. Why does the performance decrease?
5. The important baseline SGD with momentum is not provided.
6. The convergence curves on training with ImageNet and OpenWebText are not provided. This makes it hard to identify the convergence speedup between different optimizers.
7. The wall-clock time is not provided. The quantization operation and the majority vote require extra time; it would be better to show whether this optimizer improves real-world throughput.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How do you ensure that the hyper-parameters of the different optimizers are set to suitable combinations for each of them? Suitable hyper-parameter settings ensure a fair comparison.
If the above weaknesses and questions are addressed, I'm happy to raise the score.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Two minor points considering the realistic device constraints:
1. The experiments scale to 32 GPUs, which is not a large-scale distributed training setting.
2. The training model is small-scale with less than 1B parameters.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Comparison to quantization methods.**
The actual update on each worker is not the gradient, but rather Lion's update (the sign() plus weight decay). To our knowledge, quantization methods often quantize the gradients before feeding the quantized gradients to the optimizer. In our case, the sign() operation is applied to the reweighted momentum, which is computed using exact gradients. Moreover, unlike in quantization methods, the sign() in distributed Lion is a feature, not an approximation.
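To make the distinction concrete, here is a minimal NumPy sketch of a single local Lion step (illustrative only; the function name and hyperparameter values are our placeholders, not the paper's settings):

```python
import numpy as np

def lion_local_update(param, grad, momentum, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.01):
    """One illustrative Lion step: sign() is applied to the reweighted
    momentum (computed from exact gradients), not to the gradient itself,
    and the decoupled weight decay is applied locally."""
    update = np.sign(beta1 * momentum + (1 - beta1) * grad)  # binary +/-1 vector
    new_param = param - lr * (update + wd * param)           # decoupled weight decay
    new_momentum = beta2 * momentum + (1 - beta2) * grad
    return new_param, new_momentum, update

# In distributed Lion, only `update` (a sign vector) is transmitted,
# so the message per parameter is one bit rather than a 32-bit float.
```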
Given the limited time for rebuttal, we were not able to run QSGD and SignSGD in a distributed setting. However, note that the Ternary gradient baseline is a specific quantization method, in which the gradient is quantized to ternary values. Moreover, we provide a comparison against a stronger baseline than SignSGD (SignSGD with momentum, a.k.a. SIGNUM), as well as SGD with momentum (as requested by the reviewer), in the following:
| Algorithm | worker = 4 | worker = 8 | worker = 16 |
|------------------|------------|------------|-------------|
| SGD Mom | 79.3 | 81.8 | 82.5 |
| SIGNUM | 88.3 | 87.2 | 85.5 |
| Dist Lion (MaVo) | 90.9 | 89.6 | 88.9 |
| Dist Lion (Avg) | 90.5 | 87.9 | 87.9 |
From the result, we see that the distributed Lion still results in better performance.
**2. Sensitivity to hyperparameters.**
Empirically we observe that the performance of distributed Lion is not sensitive to hyperparameters. In our experiments, we did not tune the betas, and chose the default betas suggested in the Lion paper, and only tuned the learning rate and weight decay, which are the two hyperparameters shared by all optimizers. **According to our findings in Table 4, the best hyperparameters for distributed Lion are the same as in the Global Lion.** Therefore, we expect that the default hyperparameters for Lion will always be a good configuration for distributed Lion.
**3. Reproducibility.**
The actual implementation requires further internal review to be made publicly available. But we provide an implementation of our distributed Lion optimizer in this [link](https://anonymous.4open.science/r/dist_lion-6789/dist_lion.py).
**4. Performance on CIFAR-10.**
Note that as mentioned in Line 224, we are applying all optimizers to ViT models with 6 layers and 8 heads, as it is easier and faster to train. We chose ViT models as they are broadly used in practice. As a result, given the limited size of the ViT model, a performance with nearly 90% accuracy is already pretty high. The overall architecture is similar to the ViT small from this codebase [https://github.com/kentaroy47/vision-transformers-cifar10/tree/main] (note that it is ViT small, not the ViT small (timm transfer)). The training result from their repo is around 80% accuracy.
**5. Convergence Curve.**
We provide the training curves of Distributed Lion, Global Lion, and Global AdamW on ImageNet training in the uploaded PDF from the common response. From the curve, we can tell that Distributed Lion performs slightly better throughout the training.
**6. Wall-clock time comparison.**
We refer the reviewer to the common response for this question.
---
Rebuttal Comment 1.1:
Title: Thanks for the responses
Comment: I appreciate the authors' efforts. I have two follow-up questions:
- 1. Why does SIGNUM outperform SGD Mom? As a compression method, shouldn't SGD Mom be an upper bound for SIGNUM? Maybe the hyper-parameters of SGD Mom are not well tuned?
- 2. Could you give some explanation of why the Optimizer State Communication (ms) of MaVo is only about half that of the baseline? Since the communicated messages are signs, I would expect them to be 16x or 32x smaller than the original messages.
---
Rebuttal 2:
Title: Reply to Reviewer
Comment: We thank the reviewer for your followups :)
**1. Why SIGNUM outperforms SGD Mom.**
This is in fact expected: the sign() operator is not merely compressing the momentum information, it also **normalizes** the update. Take the Adam optimizer as an example. At the first step, Adam's bias-corrected update reduces elementwise to $\mathrm{sign}(g)$, a normalized gradient step, and at each subsequent step it does roughly the same. It is known that such **normalized gradient descent** methods perform well in practice. One reason is that they move every entry of the parameters at a similar pace (therefore resembling a second-order method); another is that they can avoid saddle points (as the update norm is relatively constant). SIGNUM (and similarly Lion) mimics what Adam does by applying the sign operation (their update norm is constant). In our experience, SGD with momentum works better for convolution-based networks, but for Transformer models, Adam-like optimizers often work better.
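As a quick numerical check of this normalization point (a sketch assuming bias-corrected moments and $\epsilon = 0$ for clarity; the function name is ours), Adam's very first update direction reduces elementwise to the sign of the gradient, i.e. a constant-norm update:

```python
import numpy as np

def adam_first_step_direction(g, beta1=0.9, beta2=0.999, eps=0.0):
    """Adam's bias-corrected first-step direction m_hat / (sqrt(v_hat) + eps).
    At step one, m_hat = g and v_hat = g**2, so the direction is sign(g)."""
    m_hat = ((1 - beta1) * g) / (1 - beta1)     # bias-corrected first moment
    v_hat = ((1 - beta2) * g**2) / (1 - beta2)  # bias-corrected second moment
    return m_hat / (np.sqrt(v_hat) + eps)

g = np.array([3.0, -0.5, 10.0])
# Every entry of the direction is +/-1, regardless of the gradient's magnitude.
```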
**2. Why is the Optimizer State Communication (ms) only halved?**
As we mentioned in the common response, currently during our implementation, we are using the int8 all reduce instead of binary all reduce. This is because in practice all-reduce in PyTorch does not support low-bit all-reduce in a ring-based fashion, which is the default way to perform distributed training. Our current practical solution is packing and encoding our binary information into a low-bit tensor and then using an int8 tensor as a container to broadcast through the ring all reduce. This can still achieve a 2-4x compression, depending on the number of workers.
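To illustrate the packing idea in the simplest single-tensor case (a sketch with our own helper names, not the actual implementation; aggregating vote counts across workers needs extra headroom per entry, which is why the practical compression is 2-4x rather than the full 8x shown here):

```python
import numpy as np

def pack_signs(update):
    """Pack a +/-1 sign vector into a uint8 buffer, 8 signs per byte."""
    bits = (update > 0).astype(np.uint8)  # map -1 -> 0, +1 -> 1
    return np.packbits(bits), update.size

def unpack_signs(packed, n):
    bits = np.unpackbits(packed)[:n]      # drop any padding bits
    return bits.astype(np.int8) * 2 - 1   # map 0 -> -1, 1 -> +1

signs = np.where(np.random.randn(1000) >= 0, 1, -1).astype(np.int8)
packed, n = pack_signs(signs)
# 1000 one-byte signs fit into 125 bytes; the round trip is lossless.
```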
On the other hand, we look forward to core updates from the NCCL library or PyTorch team supporting more low-bit-friendly operators or data types such as fp4 and fp8 during communication, which would greatly help low-bit communication algorithms. As a result, our current work focuses mainly on demonstrating the theoretical convergence and empirical performance of distributed Lion, rather than on the practical aspects of implementation.
---
Rebuttal Comment 2.1:
Title: Thanks for the further explanations
Comment: I appreciate the further explanations, which clearly address my concerns. I'd like to increase my score to 5.
Strengths: The convergence analysis provided in Section 3 gives some reassurance to this non-conventional optimization method. Results are promising and experimental conditions seem adequate.
Weaknesses: The proposed method is a trivial extension of Lion to data parallel distributed training, so the only interesting contribution seems to be the convergence analysis.
The main contribution of this work is supposed to be the reduction of communication overhead, but there are no results showing the actual breakdown of the training time. Therefore, it is not possible to determine whether the reduction of communication volume is actually contributing to the reduction of the overall training time. Since the results seem to vary quite a bit for different models and datasets, such information is useful for determining whether the experiments are conducted for configurations that actually show a significant impact on the training time. There remains a possibility that the current method does not work as well for extremely large models trained with ZeRO 3 data parallelism, which is where the communication overhead really becomes a problem.
Technical Quality: 3
Clarity: 3
Questions for Authors: How different are the global binary updates between averaging and majority vote?
Are the results similar because the updates are similar, or despite a large difference between them?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations pointed out above are not explicitly stated in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. Wall-clock time reduction.**
We refer the reviewer to the common response for this question.
**2. Compatibility with ZeRO3 data parallelism.**
Although large-scale parallelism techniques such as ZeRO3 and FSDP require additional inter-node gather operations that cannot be accelerated by our algorithm, we can still optimize intra-node optimizer state synchronization. In the common large-scale training scenario where the model size increases and intra-node latency becomes significant, our optimizer can reduce communication time by around 50%. While our algorithm does not reduce inter-node gather time in this case, it is compatible with other gradient-size compression algorithms like Galore[1]. In optimal cases, combining these algorithms can lead to even greater reductions in communication time.
**3. Difference between majority vote and averaging.**
Based on our empirical observations, we do not see much difference between the two schemes, and either one could perform better than the other on different datasets. The two reduction schemes are indeed similar, as the majority vote method can be seen as applying a sign() activation on top of the averaging.
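This relationship is easy to check numerically (an illustrative sketch; note that with an even number of workers, exact ties would yield 0 and need a tie-breaking rule):

```python
import numpy as np

def majority_vote(worker_signs):
    """Majority vote over workers' sign vectors, i.e. sign() applied
    on top of the averaged update (MaVo = sign(Avg))."""
    return np.sign(np.mean(worker_signs, axis=0))

votes = np.array([[ 1, -1,  1],
                  [ 1,  1, -1],
                  [-1,  1,  1]])  # three workers, three parameters
# The averaged update is [1/3, 1/3, 1/3]; the majority vote is [1, 1, 1].
```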
---
[1] Zhao et al, GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the actual training time and its breakdown. I still feel that more emphasis should be put on the reduction of communication time for different model sizes, batch sizes, and other modes of parallelism such as tensor, pipeline, and ZeRO. The communication in int8 does reduce the volume by half, but this method has the potential to do much better. The effort to extend Lion to a distributed Lion using only data parallelism is minimal, so I feel that the paper should include more detailed studies on the actual reduction of communication time. I am sticking to my original score. | Summary: This paper proposes Distributed Lion, a new variant of the Lion optimizer for distributed training. The proposed algorithm only requires communicating binary or lower-precision vectors between workers and the central server, significantly reducing the communication cost. The theoretical analysis proves the convergence of the proposed algorithms. The empirical results show comparable model performance on CV/NLP applications with significantly less communication overhead than the baselines.
Strengths: 1. This paper proposes Distributed Lion, a new variant of the Lion optimizer for distributed training.
2. The proposed algorithm only requires communicating binary or lower-precision vectors between workers and the central server, significantly reducing the communication cost.
3. The theoretical analysis proves the convergence of the proposed algorithms. The empirical results show that the proposed algorithms have comparable model performance on CV/NLP applications but with significantly less communication overhead compared to the baselines.
Weaknesses: 1. According to Assumption 3.1, the convergence requires i.i.d. local datasets, while real-world distributed training typically uses non-i.i.d. local data.
2. In the empirical results, there seems to be no wall-clock time for training is reported. Note that the overall goal of communication reduction is to reduce the training time. Thus, it is important to report loss/acc vs. wall-clock time in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to provide a convergence analysis based on non-i.i.d. data?
2. For the experiments, are the local dataset on each worker i.i.d. or non-i.i.d.?
3. Since the proposed algorithm can compress the communication to an extreme extent, I wonder whether it could also be applied to the federated learning scenario, where the local datasets are not only non-i.i.d. but also highly heterogeneous.
4. Is there any empirical results reporting wall-clock time of training?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are well discussed and addressed in this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **1. i.i.d assumption.**
Indeed, we currently assume the data are i.i.d. (the dataset on each worker is pre-sharded before training). We leave showing the convergence of distributed Lion under a non-i.i.d. setting as future work.
**2. Wall-clock time comparison and communication reduction.**
Please check the common response.
**3. Whether the method can be applied to federated learning?**
Yes, in principle we expect the method to work in a federated fashion. However, when distributed Lion is applied in a local-SGD fashion, the local update will involve multiple steps of the Lion update, so the aggregated server update will be quantized instead of being binary.
---
Rebuttal Comment 1.1:
Comment: My major concern about the wall-clock time comparison is well addressed. And I guess the non-i.i.d. and federated learning settings are worth another paper in the future.
Thus, I will keep the positive score. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their valuable feedback. In the following, we provide the general response and address individual concerns separately in individual responses.
**1. Wall-clock time comparison.**
Several reviewers have requested a wall-clock time comparison. Our study primarily focuses on the theoretical and empirical performance of distributed Lion algorithms, rather than the practical aspects of implementation. Currently, we employ a majority vote strategy using int8 combined with a built-in all-reduce operation, which already achieves a notable speedup in wall-clock time and reduces communication overhead. In particular, for a 3B Transformer model with 1K token length, across 64 A100 40G devices with batch size 1, our current implementation gives:
| Method | Forward Time (ms) | Backward Time (ms) | Optimizer State Communication (ms) | Total (ms) |
|-------------------------|-------------------|--------------------|------------------------------------|-------|
| AdamW | 18 | 141 | 132 | 291 |
| Lion | 18 | 137 | 133 | 288 |
| Distributed Lion (MaVo) | 18 | 128 | 61 | 207 |
As we can see, the communication overhead is reduced by more than 50% even with our int8 implementation. This could be further improved if a customized all-reduce supporting binary inputs were available.
Reviewer gUvC raised concerns about the limited support for low-bit operations in ring-based all-reduce frameworks like PyTorch. This challenge affects not only our approach but also all our baseline comparisons. We address this by packing binary information into low-bit tensors and then broadcasting it using an int8 tensor container through the ring all-reduce.
We anticipate future enhancements from the NCCL library or the PyTorch team that introduce more efficient low-bit operations or data types, such as fp4 or fp8. Such developments would significantly improve the efficiency of low-bit communication algorithms.
**2. Release of the Code.**
We release our implementation of the distributed Lion [here](https://anonymous.4open.science/r/dist_lion-6789/dist_lion.py). We intend to publicly release the method for the convenience of future research.
Pdf: /pdf/99f41776560fb081fbbd19cacdeeb822c9fed21d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Spatially-Aware Language and Audio Embeddings | Accept (poster) | Summary: This paper presents an approach to learning spatially-aware language and audio representations. The authors propose a contrastive representation model that integrates spatial context into language representations, aiming to enhance the performance of tasks that require spatial reasoning. The model combines audio and textual data to create embeddings that are sensitive to spatial relationships. The contributions include the architecture of the proposed model, extensive experimental results demonstrating improved performance on spatial reasoning benchmarks, and an analysis of the model's ability to generalize across different spatial contexts.
Strengths: 1. The paper makes a strong contribution to spatial reasoning. The integration of spatial context into text-to-audio generation models is an important and underexplored area, and this work offers a novel and effective solution.
2. The experimental setup is rigorous, with well-designed experiments that effectively validate the model's performance.
3. The paper is well-written, with clear explanations of the methodology and results.
4. The findings have significant implications for improving spatial language understanding in various applications.
Weaknesses: 1. The reliance on synthetic datasets may limit the generalizability of the findings. The authors could explore the way to train the model on in-the-wild data.
2. The current interpretation experiments (Sec. 5.4) only study a four-class classification ("left," "right," "up," "down"), which is insufficient for real-life scenarios. For instance, spatial audio applications often require more nuanced classifications, such as distance perception (e.g., strong/weak reverb in indoor/outdoor settings), which are critical for capturing and representing spatial information. The authors should consider extending the experiment to handle a wider range of spatial attributes to enhance its applicability in diverse settings. For example, the authors should consider using prompts like "xxx is making a sound in the distance" and "xxx is making a sound nearby" to see whether the results differ.
3. The paper could benefit from a more detailed error analysis, identifying common failure cases and understanding why the model fails in certain scenarios. This analysis would provide insights for further improvement and refinement of the model.
4. While the model performs well on tasks like retrieval and source localization, its ability to generalize to spatial text-to-audio generation remains to be seen.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limitations of their work by discussing the datasets used and the model's computational requirements. Additional limitations can be found in the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide comment and feedback on our submission. We address the questions and concerns below. Please let us know if further clarification is required.
***The reliance on synthetic datasets may limit the generalizability of the findings. The authors could explore the way to train the model on in-the-wild data.***
The scarcity of publicly available, open vocabulary, labeled spatial audio datasets presents a challenge. While modalities like text, image, and video benefit from vast amounts of internet-sourced data, that is not the case for spatial audio. Collecting and annotating high-quality spatial audio data is considerably more complex, requiring specialized equipment (suitable microphones, headphones, and room measuring equipment) and human expertise for accurate spatial perception and labeling. We acknowledge the importance of exploring in-the-wild data and consider it a valuable future step in our research.
***The current interpretation experiments (Sec. 5.4) only study a four-class classification ("left," "right," "up," "down"), which is insufficient for real-life scenarios. For instance, spatial audio applications often require more nuanced classifications, such as distance perception (e.g., strong/weak reverb in indoor/outdoor settings), which are critical for capturing and representing spatial information. The authors should consider extending the experiement to handle a wider range of spatial attributes to enhance its applicability in diverse settings. For example, the authors should consider using prompts like "xxx is making a sound in the distance" and "xxx is making a sound nearby" to figure out if the results are different.***
Experiments in Section 5.4 were designed to better understand the embedding space of the audio and text. The experiment referred to in question (paragraph 1 of Section 5.4) does directional classification using a shallow (one hidden and one output layer) MLP on the audio embeddings, and uses the MLP to then classify corresponding text embeddings. For completeness, we have also run these experiments for **distance** and **elevation**, and obtained accuracies of 76.5% and 55.1% respectively.
We are a little unsure how best to address the suggestion about natural prompts (`xxx is making a sound in the distance`....). Table 2 actually presents results for zero-shot classification accuracy using diverse prompts of the suggested form. For every spatial attribute, we take language descriptors (as detailed in Table A.3 of the appendix) and insert them into a prompt (`Sound coming from <>`). We then run classification using the cosine similarity of the text and audio embeddings and report the accuracies. Please let us know if we have misunderstood your question; we would be happy to clarify further or run any additional experiments.
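For reference, that zero-shot protocol can be sketched as follows (a minimal illustration with made-up two-dimensional embeddings standing in for the text and audio encoders; the function name is ours):

```python
import numpy as np

def zero_shot_classify(audio_emb, prompt_embs):
    """Return the index of the prompt embedding with the highest cosine
    similarity to the audio embedding."""
    a = audio_emb / np.linalg.norm(audio_emb)
    p = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    return int(np.argmax(p @ a))

# Hypothetical embeddings of prompts of the form "Sound coming from <descriptor>"
prompts = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.7, 0.7]])
audio = np.array([0.1, 0.9])
# The audio embedding is closest to the second prompt (index 1).
```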
***The paper could benefit from a more detailed error analysis, identifying common failure cases and understanding why the model fails in certain scenarios. This analysis would provide insights for further improvement and refinement of the model.***
Thank you for raising this issue. As discussed in the global response (Exp. B.), we have carried out this analysis for the direction-of-arrival error according to azimuth, elevation, distance, floor area, mean T30, and TUT Sound Events 2018 semantic classes. Results show little variance across all those conditions, though errors are higher at the extrema of the conditions.
***While the model performs well on tasks like retrieval and source localization, its ability to generalize to spatial text-to-audio generation remains to be seen.***
This study is an initial investigation into aligned representations of spatial-audio and spatial-captions. To measure the success of ELSA, we chose perception-based tasks involving retrieval and source localization because these are well-understood and have clear ways of measuring performance objectively. We agree that spatial text-to-audio generation is an important task, however the success of the generation requires either user-studies or the development of new metrics to measure the quality of spatial audio generation. We are not aware of any established benchmarks for reporting generation quality of _spatial_ audio. For these reasons we opted for tasks for which we can directly measure performance, and leave generative tasks for future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing most of my concerns. I appreciate your effort on the error analysis, and please make sure to include this in a revision. I’m happy to increase my score to WA.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Error analysis will be included in the final iteration. We appreciate your thoughtful review of ELSA and your valuable insights. Thank you! | Summary: This paper describes a method for learning to represent spatial audio (and text). The proposed model is trained on synthetically spatialized audio data with corresponding text prompts. The authors evaluate the system on audio captioning/retrieval and localization tasks, showing that the proposed model effectively represents both semantics and locality of audio.
Strengths: The paper is well written, clearly organized, and easy to follow. The proposed method makes intuitive sense and appears to be effective. The empirical evaluation is reasonably thorough and the choice of tasks and baseline models seem appropriate. Spatial audio is a growing area (in the context of machine learning / event detection / representation learning), and I think the work described here does fill an unaddressed area of the literature in an interesting way. Overall I think they did a great job here.
Weaknesses: I don't have much to fault here, but there are a few points that I think could be expanded to improve clarity and help readers understand the contribution here. I'll elaborate on these points in the questions section, but the high-level gloss is:
- While the spatial representation part of the work (ie FOA-derived input) is explained well, there is almost no explanation of how the spatialization was implemented.
- There is little (if any) qualitative analysis of the results, only aggregate scores reported in tables.
Technical Quality: 4
Clarity: 4
Questions for Authors: - How was the spatialization implemented? I expect this was done via standard methods (ISM implemented by pyroomacoustics? something else?), but there is no mention of this in either the main text or the appendix. Additionally, I think some of the details from the appendix (Table A.2) should be mentioned directly in the main text, such as the range of room sizes, angle distributions, etc.; these details are important, and do not take much space. (If you do need to sacrifice anything, I don't think the definition of log-mel transformation is as critical to include since it is standard.)
- Since TUT2018 is a finite vocabulary dataset, it would be informative (and entirely possible) to see a per-class and per-environment breakdown of the evaluations reported in table 1. This would be informative because it's not necessarily a given that your spatialization is equally effective across categories (or rooms / room sizes). If the model does turn out to perform consistently across categories - great! If not, it may suggest a weakness in either the spatial rendering or prompt generation. (If you do compute these results, it may or may not make sense to store in the appendix, depending on how interesting the results are.)
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations section seems sufficient to me.
One potential caveat here is that the authors do not explicitly mention any limitations imposed by the accuracy of the spatialization process, e.g., whether it will only work well for simulated closed environments (shoebox model) or if it can accurately capture open environments. This, I think, would be easy to resolve with a bit more information about the process and an extra line in the limitations section (if necessary).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time reviewing the paper and providing valuable feedback and suggestions. We will do our best to clarify the points that you have raised below. Please let us know if there are further questions.
***While the spatial representation part of the work (ie FOA-derived input) is explained well, there is almost no explanation of how the spatialization was implemented. How was the spatialization implemented? I expect this was done via standard methods (ISM implemented by pyroomacoustics? something else?), but there is no mention of this in either the main text or the appendix. Additionally, I think some of the details from the appendix (Table A.2) should be mentioned directly in the main text, such as the range of room sizes, angle distributions, etc.; these details are important, and do not take much space. (If you do need to sacrifice anything, I don't think the definition of log-mel transformation is as critical to include since it is standard.)***
Thank you for the suggestion regarding Table A.2. We will move it over to the main paper in the camera ready version. As per your suggestion we will also include more details on the simulation algorithm. Please let us know if there are any other suggestions as well.
The augmentation pipeline mirrors that of Spatial LibriSpeech [1], employing a combination of geometrical- and boundary-element acoustical simulation. This allows us to physically model the way that sources radiate sound, as well as its propagation in both enclosed and open spaces.
[1] Sarabia et al. (2023) Spatial LibriSpeech: An Augmented Dataset for Spatial Audio Learning. InterSpeech.
***There is little (if any) qualitative analysis of the results, only aggregate scores reported in tables. + Since TUT2018 is a finite vocabulary dataset, it would be informative (and entirely possible) to see a per-class and per-environment breakdown of the evaluations reported in table 1. This would be informative because it's not necessarily a given that your spatialization is equally effective across categories (or rooms / room sizes). If the model does turn out to perform consistently across categories - great! If not, it may suggest a weakness in either the spatial rendering or prompt generation. (If you do compute these results, it may or may not make sense to store in the appendix, depending on how interesting the results are.)***
Thank you for this suggestion. Please refer to the general response and attached PDF for a detailed error analysis (Exp. B.). Briefly, we find little variation across different attributes, though the errors are higher at the extrema of the ranges. We will update the manuscript appendix with this analysis. | Summary: The paper presents ELSA (Embeddings for Language and Spatial Audio), a novel model designed to learn spatially-aware audio and text embeddings using multimodal contrastive learning. The primary aim is to address the limitations of existing audio foundation models, which lack spatial awareness, and sound event localization and detection models, which are constrained to a fixed number of classes and absolute positional descriptions. The authors spatially augment several classical open-source audio datasets in order to train ELSA. Results show that ELSA is able to capture spatial attributes and semantic meaning of the audio.
Strengths: - The focus of this paper is on learning spatial audio embeddings associated with natural language description, which is a very interesting and rewarding problem for which there is a lack of models.
- These authors synthesize large amounts of non-spatial audio data under various spatial configurations, which is a valuable contribution to the field of spatial audio understanding.
Weaknesses: - For this paper, my biggest concern is the generalizability of the model to real scenarios.
While the synthetic dataset is extensive, there is a risk that the model might not generalize well to real-world scenarios due to potential biases in simulated environments. To show model generalization to real scenarios, experiments on only a small real-world dataset appear too thin. Would it be possible to test ELSA in other real scenarios, for example, in some of the tasks in the latest DCASE competition, e.g., Sound Event Localization?
- For paper writing, too much important information is put in appendices, such as the structure figure of the whole model. Perhaps the layout of the writing could be adjusted to make it easier to read.
- The citation format of the article is highly problematic and needs to be standardized.
Technical Quality: 3
Clarity: 2
Questions for Authors: The experiments in Table 2 confuse me a lot. In Sec. 5.2, the authors mentioned that "The ELSA text embeddings for such captions are extracted from the pre-trained encoder and compared in a zero-shot fashion with ELSA audio embeddings for samples from the test set using cosine similarity. We classify the match as correct if the spatial attribute in the closest audio sample matches the spatial attribute of the query caption".
However, the number of classes of a spatial attribute is very limited (For instance, there are only two classes "far" and "near" for the "distance" attribute), which means there are only two captions that will be used for the "distance" attribute? Wouldn't there be very few captions being used for testing totally?
Hopefully, the authors can explain the experimental configuration a bit more.
- To train ELSA on single-channel audio, the authors repeat the single channel 4 times to fake a 4-channel FOA audio and compute Intensity Vectors. However, the way IV is calculated possibly doesn't make sense for this kind of faked 4-channel audio. Why is it designed this way? Why not try to design a separate feature extractor for single-channel audio?
- It is natural to understand computing bearing information from spatial audio, which is essentially a bit similar to calculating the "Time Difference of Arrival" based on different channels. But how to understand that the model can get distance information from spatial audio? In other words, where does the information about distance come from?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - As the authors mentioned, for creating a spatial audio caption dataset, using LLM to rewrite the caption might lead to hallucinations.
- Model performance in real scenarios is yet to be verified.
- ELSA looks very suitable to be used as a spatial audio encoder for a caption model to conduct spatial audio captioning, but unfortunately, the authors did not show this kind of capability in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to provide comments and suggestions. We address the points raised below.
***Would it be possible to test ELSA in other real scenarios, for example, in some of the tasks in the latest DCase competition, e.g. Sound Event Localization?***
***Model performance in real scenarios is yet to be verified***
Table 1 reports localization performance on the TUT sounds dataset, which is a dataset of real-world room impulse responses encoded as FOA and convolved with clean recordings. We use this dataset specifically because it has properties similar to the synthetic samples we have used for model training. In particular, each sample has a single and stationary point source, which allows us to focus specifically on performance differences due to the sim-to-real gap. Other datasets, such as STARSS23 (the latest dataset in the DCASE SELD task), contain overlapping and moving sound sources (features we will support in future work).
Tables 2 and A.6 present the spatial and semantic performance of our model on the spatial-RWD dataset (our real world dataset). We acknowledge that this dataset is small in size. However, it is important to note that its data distribution is significantly different from the training data used for our model. As such, the performance metrics on this dataset offer valuable insights into the model's generalization beyond simulated data.
***ELSA looks very suitable to be used as a spatial audio encoder for a caption model to conduct spatial audio captioning .....***
We are thankful for this suggestion. As mentioned in the global response we trained a prefix encoder to perform spatial audio captioning. Please refer to the global response (Exp. C.) for implementation details and results.
***For paper writing, too much important information is put in appendices, such as the structure figure of the whole model......***
We apologize for the inconvenience caused by the paper structure. We will move the model architecture figure to the main paper in the camera-ready version. Please let us know if there are any other suggestions.
***The citation format of the article is highly problematic and needs to be standardized.***
Apologies. We have standardized the references and ensured everything is consistent for the camera-ready version of the paper.
***The experiments in Table 2 confuse me a lot. In Sec. 5.2, ... . However, the number of classes of a spatial attribute is very limited (For instance, there are only two classes "far" and "near" for the "distance" attribute), which means there are only two captions that will be used for the "distance" attribute? Wouldn't there be very few captions being used for testing totally? Hopefully, the authors can explain the experimental configuration a bit more.***
This task measures accuracy and alignment of spatial attributes encoded in the audio and text embeddings. For each spatial attribute, we have a finite number of values (e.g., `left`, `right`, `front`, and `back` for the attribute `direction`; the full list of attribute/value pairs is detailed in Table A.3 of the appendix). We then generate a text caption for each spatial attribute value (e.g., for direction, it would be “Sound from the **left**”, “Sound from the **right**”, and so on). Finally, the class assigned to a test sample is the class of the text embedding with the highest cosine similarity to the audio embedding. This evaluation is conducted over the entire test set. You are correct that there are a limited number of classes per spatial attribute for this task. However, we also report open-vocabulary spatial caption retrieval performance in Table A.6 of the appendix. This table reports retrieval accuracy over a much larger (~1000-sample) set of complex spatial captions (e.g., "Sound of an alarm coming from the far left of a large room.") for Spatial-Clotho and Spatial-AudioCaps.
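For readers unfamiliar with this evaluation, the matching step can be sketched as follows; the toy two-dimensional embeddings and attribute values below are hypothetical stand-ins for real ELSA outputs, not the actual model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def zero_shot_classify(audio_emb, caption_embs):
    """Assign the class of the caption embedding closest to the audio embedding."""
    return max(caption_embs, key=lambda c: cosine(audio_emb, caption_embs[c]))

# Toy caption embeddings for the `direction` attribute (hypothetical values).
captions = {"left": [1.0, 0.1], "right": [-1.0, 0.1],
            "front": [0.0, 1.0], "back": [0.0, -1.0]}
audio = [0.9, 0.2]  # an audio clip whose source is on the left
assert zero_shot_classify(audio, captions) == "left"
```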
***To train ELSA on single-channel audio, the authors repeat the single channel 4 times to fake a 4-channel FOA audio and compute Intensity Vectors. However, the way IV is calculated possibly doesn't make sense for this kind of faked 4-channel audio. Why is it designed this way? Why not try to design a separate feature extractor for single-channel audio?***
This is a good question. The duplicated channels result in identical intensity vectors for the x, y, and z dimensions. The spatial attributes branch learns to associate this condition with lack of spatial bias. We tried alternative methods: using zeros for the other three channels, using random values for the other three channels, as well as selectively using the spatial attributes branch only for spatial audio samples. Empirically, our current design produced the best results. We will update the manuscript with this information.
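As a simplified illustration of why channel duplication yields identical x/y/z intensity vectors (real FOA pipelines typically compute intensity vectors per time-frequency bin; this time-domain version is only a sketch):

```python
def intensity_vectors(w, x, y, z):
    """Simplified time-domain intensity vectors: per-sample product of the
    omni channel W with each directional channel (X, Y, Z)."""
    ix = [wi * xi for wi, xi in zip(w, x)]
    iy = [wi * yi for wi, yi in zip(w, y)]
    iz = [wi * zi for wi, zi in zip(w, z)]
    return ix, iy, iz

mono = [0.3, -0.5, 0.2, 0.7]          # a mono (non-spatial) clip
w, x, y, z = mono, mono, mono, mono   # duplicate the channel four times
ix, iy, iz = intensity_vectors(w, x, y, z)
# Duplicated channels yield identical x/y/z intensity vectors, a condition
# the spatial branch can learn to associate with "no spatial bias".
assert ix == iy == iz
```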
***It is natural to understand computing bearing information from spatial audio, which is essentially a bit similar to calculating the "Time Difference of Arrival" based on different channels. But how to understand that the model can get distance information from spatial audio?....***
Distance is primarily encoded in the DRR (direct-to-reverberant ratio) of the soundfield, as well as the absolute level of the sound. Since our data synthesis is based off physical simulation of soundfields, these biases are inherently present in the resulting training examples, and may be learned by both branches of the audio encoder.
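A minimal sketch of the DRR computation mentioned above, assuming access to a room impulse response with a known direct-path onset; the 2.5 ms direct window is a common convention, not necessarily the one used in this work:

```python
import math

def drr_db(ir, fs, direct_ms=2.5, onset=0):
    """Direct-to-reverberant ratio in dB: energy in a short window around
    the direct-path onset vs. the remaining (reverberant) energy."""
    n_direct = int(fs * direct_ms / 1000)
    direct = sum(s * s for s in ir[onset:onset + n_direct])
    reverb = sum(s * s for s in ir[onset + n_direct:])
    return 10 * math.log10(direct / reverb)

fs = 16000                       # 2.5 ms -> 40 samples
ir = [0.0] * 200
ir[0] = 1.0                      # strong direct path (a nearby source)
for i in range(40, 200):         # weak decaying reverberant tail
    ir[i] = 0.05 * math.exp(-i / 60)
# Nearby sources have high DRR; farther sources have lower DRR.
assert drr_db(ir, fs) > 0
```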
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed replies, which address many of my concerns. The new experiments are very valuable. Please add them to the revised paper if possible. Consequently, I have raised my score to "Weak Accept".
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: The new results will be duly incorporated into the revised paper. We extend our sincere gratitude for your review, suggestions and experiment ideas. Thank you! | Summary: The paper presents ELSA (EMbeddings for Language and Spatial Audio), a spatially aware-audio and text embedding model. The training data is created by synthesizing spatial audio in ambisonic format and augmenting text captions with spatial information. A small real world data is also collected for evaluations. The model training itself largely follows standard CLIP/CLAP training by using contrastive losses. Additional losses for direction and distances are added for the spatial part. Evaluations are done on both semantic retrieval tasks and spatial tasks.
Strengths: – The paper addresses a key part of multimodal contrastive embeddings. Sounds contain a significant amount of spatial information, and humans naturally rely on directional information from sounds. Considering this, it is expected that embeddings with spatial information are created. The paper is a good step in the right direction.
– For the most part, the paper is well done. Spatial audio can present several challenges with respect to data (more so in multimodal settings, training approach). Considering the challenges around learning from spatial audio, the paper presents a good approach for learning spatially-aware language embeddings. The experiments are also reasonably good.
– The paper is also well written and mostly clear.
-------
Score increased after rebuttal.
Weaknesses: There are a few weaknesses which are worth addressing in the paper.
– For table 2, I would be curious to see what CLAP on its own can achieve. It would be good to contrast this zero-shot classification on the spatial task.
– How were the non-spatial audio-text pairs used in training (as shown in Table 3, last row) ?
– Using non-spatial audio-text pairs seems crucial for good semantic retrieval. This is evidenced by A.6 as well, where the models trained on just spatial audio-text pairs do not do well on the semantic retrieval task. This is a bit surprising: the CLIP loss is still present in training, and the semantics are also intact in spatial audio-text pairs. Why should there be a performance drop in that case? It would be good to provide a discussion and justification.
– In Table A.7, the performance of the model trained on Spatial Clotho and AudioCaps is better on RWD data than even on Clotho and AudioCaps itself. That is a bit surprising. We would expect the model to be better in its own domain. The difference is also pretty big.
– The discussion in Section 5.4 is a bit adhoc. I would suggest not referring to anecdotal observations. The experiments could be better designed.
– Several of the classification experiments end up using 3-4 layer MLPs. I think a shallower model (maybe even just a linear classifier) would provide a better confirmation of what information the embeddings store. Otherwise, such deeper networks are able to push the numbers on their own, and it's not clear how good the embeddings are.
– Some form of clustering and distance visualization would be good. It has been incorporated in some form in Table 2, but it would be good to explicitly show how the distances between embedding represent the spatial information.
– All the spatial mapping in terms of language is very discrete (A.2). The ranges for distance, direction, etc. can appear a bit arbitrary and forced. While this is perhaps a good first attempt, a more continuous form of “spatial-language” is desirable. Alternatively, a perception-driven approach could be taken where the boundaries are decided by what people generally perceive as left or right w.r.t. sound direction.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the weaknesses above
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please add some limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions and comments, which we address here. Please let us know if anything remains unclear.
***For table 2, I would be curious to see what CLAP on its own can achieve.***
Performance using a pre-trained CLAP checkpoint is close to random for all tasks, which is expected as CLAP is not trained with spatial audio or captions that describe spatial features of audio.
| | S-Clotho | S-Audiocaps | S-RWD |
| ----------------- | --------------- | ------------- | ------------- |
| Distance | 48 | 54.3 | 53 |
| Direction | 28.2 | 29.3 | 27.3 |
| Elevation | 56.7 | 51.3 | 59.4 |
| Room Area | 46.3 | 66.5 | N/A |
| Reverb | 57.3 | 52.5 | N/A |
***How were the non-spatial audio-text pairs used in training (as shown in Table 3, last row) ?***
We merge the spatial and non-spatial datasets, and use all samples from this mixed dataset in each epoch to learn ELSA. Samples are input to **both** the spatial attributes encoder and HTSAT (semantic encoder). For the HTSAT encoder, the non-spatial audio is used as is and we pass just the first channel for the spatial audio. For the spatial attributes encoder, which expects four-channel FOA, we use the spatial audio as is and we repeat the single channel non-spatial audio four times. The loss is weighted the same for both forms of input. We will revise Section 4.1 and Figure A.1 for clarity in the camera-ready version.
***Using non-spatial audio-text seems crucial for good semantic retrieval....***
This is a great question and something we investigated during the architecture design for ELSA. Table A.6 evaluates retrieval on _spatial captions_ while Table 3 (main paper) and the table below evaluate retrieval on _semantic captions_. It is worth noting that the task in Table A.6 is harder than Table 3 as the model must retrieve a caption with **both** semantic and spatially correct elements. We will be clearer about this distinction in the camera-ready version of the paper.
| Model | Training Data Audio | Training Data Captions | AC T→A R@1 | AC T→A R@5 | AC T→A R@10 | AC A→T R@1 | AC A→T R@5 | AC A→T R@10 | Clotho T→A R@1 | Clotho T→A R@5 | Clotho T→A R@10 | Clotho A→T R@1 | Clotho A→T R@5 | Clotho A→T R@10 |
|-------|---------------------|------------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| CLAP | Non-spatial | Non-spatial | 32.7 | 68.8 | 81.5 | 40.7 | 74 | 84.7 | 14.4 | 37.6 | 50.7 | 18.3 | 40.5 | 55.1 |
| ELSA | Spatial | Non-spatial | 27.1 | 62.7 | 76.1 | 36.6 | 68.7 | 78.4 | 11.3 | 32.6 | 44.4 | 12.4 | 28.3 | 50 |
| ELSA | Spatial | Spatial | 25.3 | 59.3 | 72.5 | 34.8 | 64.5 | 75.2 | 9.9 | 31 | 39.8 | 12.1 | 35.3 | 47.3 |
| ELSA | Mixed | Mixed | 33.2 | 68.2 | 81 | 40.9 | 74.4 | 86.1 | 15 | 36.7 | 50.8 | 20.1 | 43.2 | 55.4 |
We remark that changing from non-spatial audio only to spatial audio only results in drops of 5.6 and 3.1 points in T→A R@1 for AudioCaps and Clotho, respectively. Similarly, training only with spatial captions results in further drops of 1.8 and 1.4 points in T→A R@1 for AudioCaps and Clotho, respectively. Mixing both captions and audio restores the performance to the original levels.
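For reference, the R@K retrieval metric reported in these tables can be computed from a query-candidate similarity matrix as follows (toy scores, assuming paired data where candidate i is the correct match for query i):

```python
def recall_at_k(sims, k):
    """sims[i][j] = similarity between query i and candidate j; the correct
    candidate for query i is assumed to be index i (paired data)."""
    hits = 0
    for i, row in enumerate(sims):
        topk = sorted(range(len(row)), key=lambda j: row[j], reverse=True)[:k]
        hits += int(i in topk)
    return hits / len(sims)

# Toy 3x3 text-to-audio similarity matrix (hypothetical scores).
sims = [[0.9, 0.2, 0.1],
        [0.3, 0.5, 0.8],   # correct audio ranked 2nd for this query
        [0.1, 0.2, 0.7]]
assert recall_at_k(sims, 1) == 2 / 3
assert recall_at_k(sims, 2) == 1.0
```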
***The discussion in Section 5.4 is a bit adhoc. I would suggest not referring to anecdotal observations.***
Apologies, we should not have used the word anecdotal. The experiments are indeed formal. We will rephrase for the camera-ready version of the paper.
We have since run experiments for the remaining attributes to understand the alignment between captions and audio (extension of the experiment in paragraph 1, section 5.4) and obtained accuracies of 76.5% for distance and 55.1% for elevation. We have also performed clustering visualizations on the embeddings (Exp. A. in global response). We are happy to incorporate further changes or experiments that would strengthen this section.
***Several of the classification experiments end up using 3-4 layers MLP...***
We will correct Section 4.3: all classifiers consist of a two-layer MLP with a 64-dimensional hidden layer and an output layer. We will add the parameter counts (33,345 parameters, or only 0.02% of ELSA).
***Some form of clustering and distance visualization would be good.***
We address this question in the global response (Exp.B).
***All the spatial mapping in terms of the language is very discrete (A.2). The range for distance, direction etc. can appear a bit arbitrary and forced....***
We selected our mappings to correspond to general terms commonly used for broad spatial descriptions. We wholeheartedly agree that incorporating more natural spatial language is desirable. The idea of perceptually motivated boundaries is also excellent. Ideally, both of these would be derived by running user studies: first, to align with human perception of spatial attributes (e.g., at what azimuth a human considers a sound to be coming from the left), and second, to understand the distribution of language people use to refer to spatial attributes. These user studies are not something we would be able to set up and have approved within this rebuttal period, but they are something we will strongly consider for future work.
---
Rebuttal Comment 1.1:
Title: Erratum
Comment: We made a small mistake when addressing the following point:
***Several of the classification experiments end up using 3-4 layers MLP. I think a more shallower model (maybe even just linear classifier) would provide a better confirmation of what information the embeddings store.***
In our response, we only mention Section 4.3, but we meant to refer to both **Sections 4.3 and 5.4**. That is, all the downstream classifiers and regressors we reported in the paper had one hidden layer with 64 units. We will make sure to update the manuscript accordingly. | Rebuttal 1:
Rebuttal: We are pleased to see such a strong positive sentiment from the reviewers about our work. Reviewers highlighted how **interesting and rewarding** (SQBX), **strong and significant** (rKMo), and how **rigorous** (rKMo) our work is. Reviewers also mentioned how **well-written** (JMmV, USXb) and **intuitive** (USXb) the proposed approach is. We agree that the research area is **under-addressed** (USXb) and **under-explored** (rKMo), and appreciate that reviewers feel ELSA addresses this gap in a **novel** and **effective** (rKMo) way. We want to thank all reviewers for their time and thoughtful feedback.
In response to the reviewers' suggestions, we have conducted the following additional experiments that we believe will be of interest to all:
**Exp. A. Visualization of ELSA Embeddings**
Figure R.1 in the rebuttal document shows a UMAP projection of the ELSA embeddings from the test sets of Spatial-AudioCaps and Spatial-Clotho. Note that the UMAP projection was guided with the embeddings and labels of the training sets of both datasets. Additionally, we computed the Wasserstein distances between the 512-dimensional embeddings of both test sets:
| | left | right | front | back |
| -------- | -------- | -------- | -------- | -------- |
| left | 0.00 | 1.04 | 0.94 | 0.98 |
| right | 1.04 | 0.00 | 0.92 | 0.97 |
| front | 0.94 | 0.92 | 0.00 | 0.81 |
| back | 0.98 | 0.97 | 0.81 | 0.00 |
Overall, both results show that the data clusters well with the direction labels, though there is some degree of confusion between back and front. We carried out the same analysis for distance and floor area, and obtained similar results. We are happy to share with the reviewers if they are interested. We will add these results to the appendix of our paper.
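The exact Wasserstein estimator used for these 512-dimensional embeddings is not specified in the rebuttal; one common tractable choice for comparing high-dimensional embedding sets is the sliced Wasserstein distance, sketched here in plain Python with toy 2-D embeddings:

```python
import math, random

def wasserstein_1d(a, b):
    """W1 between equal-size 1-D samples: mean absolute difference of
    the sorted values (the closed-form empirical estimator)."""
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def sliced_wasserstein(A, B, n_proj=64, seed=0):
    """Average 1-D Wasserstein distance over random unit projections; a
    common proxy for Wasserstein distance between embedding sets."""
    rng = random.Random(seed)
    dim = len(A[0])
    total = 0.0
    for _ in range(n_proj):
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
        pa = [sum(c * e for c, e in zip(v, emb)) for emb in A]
        pb = [sum(c * e for c, e in zip(v, emb)) for emb in B]
        total += wasserstein_1d(pa, pb)
    return total / n_proj

# Identical sets have zero distance; a shifted set has positive distance.
A = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]]
B = [[x + 3.0, y] for x, y in A]
assert sliced_wasserstein(A, A) == 0.0
assert sliced_wasserstein(A, B) > 0.0
```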
**Exp. B. Fine-grained direction-of-arrival error analysis**
We analyzed the errors of a two-layer MLP trained to regress the direction-of-arrival (same setting as Table 1, last column in the paper). We observe how the errors vary along the following dimensions: source azimuth, source elevation, source distance, room floor area, room mean T30, and TUT Sound Events 2018 semantic classes. Results are rendered as boxplots in Figure R.2 in the attached PDF.
Overall, we find there is little variability in the direction-of-arrival error across the studied dimensions. However, we note the errors tend to be higher at the extrema of the dimensions. Regarding the TUT Sound Events 2018 semantic classes, we again find few differences between the different sound classes, with `drawer` having the lowest error and `phone` having the highest one.
**Exp. C. Spatial Audio-Caption Generation**
Inspired by reviewer SQBX’s suggestion, we trained a spatial audio caption generator. Decoding multimodal embeddings into natural language can be achieved by prefixing an autoregressive causal language model [1-4], where the prefix is constructed from a projection of the multimodal embeddings. To facilitate audio captioning using ELSA, we fine-tune a GPT model with 12 attention layers, each having 12 heads. The ELSA embeddings are projected onto the prefix using a single dense layer. With the ELSA encoder frozen, we train the GPT model on the audio embeddings of Spatial AudioCaps, and perform early stopping to avoid overfitting. The following results are obtained with fine-tuning on only 150K embedding-caption pairs from the train splits of Spatial Clotho and Spatial AudioCaps.
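A shape-level sketch of the prefix construction described above, using random weights and scaled-down dimensions; the prefix length and dimensions here are illustrative assumptions, not the authors' actual configuration (the rebuttal specifies 512-d ELSA embeddings and a 12-layer GPT):

```python
import random

# Scaled-down dims for illustration only.
ELSA_DIM, PREFIX_LEN, GPT_DIM = 64, 4, 96

def make_linear(in_dim, out_dim, seed=0):
    """Random dense projection weights (stands in for the single dense layer)."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 0.02) for _ in range(out_dim)] for _ in range(in_dim)]

def project_to_prefix(embedding, weight, prefix_len, token_dim):
    """Map one frozen audio embedding to `prefix_len` pseudo-token vectors
    that are prepended to the caption tokens fed to the language model."""
    flat = [sum(e * weight[i][j] for i, e in enumerate(embedding))
            for j in range(prefix_len * token_dim)]
    return [flat[k * token_dim:(k + 1) * token_dim] for k in range(prefix_len)]

W = make_linear(ELSA_DIM, PREFIX_LEN * GPT_DIM)
prefix = project_to_prefix([0.01] * ELSA_DIM, W, PREFIX_LEN, GPT_DIM)
assert len(prefix) == PREFIX_LEN and len(prefix[0]) == GPT_DIM
```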
We evaluate results on the test splits of both Spatial AudioCaps and Spatial Clotho. We report metrics from the Audio Captioning task of the DCASE Challenge [5] (the metric descriptions are taken verbatim from their associated GitHub page).
| Metric | Short description | Range | Spatial Clotho | Spatial Audio Caps |
| ------ | ----------------- | ----- | -------------- | ------------------ |
| SPIDEr | Mean of CIDEr-D and SPICE | [0, 5.5] | 0.19 | 0.34 |
| FENSE | Combines SBERT-sim (Cosine-similarity of Sentence-BERT embeddings) and Fluency Error rate (fluency errors in sentences with a pretrained model) | [-1, 1] | 0.59 | 0.68 |
| Vocab | Number of unique words in candidates. | [0, $\infty$] | 1103 | 1258 |
Additionally, we also present examples of generated samples from both datasets:
(1) Generated caption
> In a medium-sized room located at the far back, an electric motor is emitting a high-pitched whine, accompanied by a whirring noise. In the background, adult male voice can be heard speaking.
(1) Ground truth caption from test set:
> From deep within a medium-sized room, the noise of a robust industrial engine can be heard whirring loudly.
(2) Generated caption:
> The sound of water flowing and splashing is emanating from the front of a room.
(2) Ground truth caption from test set:
> The sound of gentle rowing and paddling in the water is emanating from the vicinity of a medium-sized room.
(3) Generated caption:
> The sound of cheering coming from a crowd is heard near the medium-sized room.
(3) Ground truth caption from test set:
> The sound of applause, indicating that people are praising the musicians after their performance, is emanating from the medium-sized room.
[1] Mokady et al. (2021) ClipCap: CLIP Prefix for Image Captioning. ArXiV preprint.
[2] Gu et al. (2023) I Can't Believe There's No Images! Learning Visual Tasks Using only Language Supervision. CVPR.
[3] Kim et al. (2023) Prefix tuning for automated audio captioning. ICASSP.
[4] Deshmukh et al. (2024) Training audio captioning models without audio. ICASSP.
[5] https://dcase.community/challenge2024/task-automated-audio-captioning
Pdf: /pdf/e0f9e9a27d2ddd6023f08653af533bd88a2ac45a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FactorSim: Generative Simulation via Factorized Representation | Accept (poster) | Summary: This work presents FACTORSIM, a framework that converts any language specification into a complete simulation for training RL agents. FACTORSIM decomposes the input prompt into steps and uses a factored Partially Observable Markov Decision Process (POMDP) to minimize the context needed for each generation step. It also introduces a method to benchmark generative simulation and demonstrates the capability of FACTORSIM to be used in robotics settings.
Strengths: 1. Develop a robust pipeline for constructing game environments from language descriptions, which could significantly enhance the scalability of training generalist agents.
2. Formalize the framework as a Partially Observable Markov Decision Process (POMDP), reducing the need for full context during generation and improving outcomes.
3. Demonstrate the potential of this method to generalize to embodied scenarios.
Weaknesses: 1. The evidence for generalizing to embodied scenarios is limited.
2. The success rate in Table 1 and Figure 3 is low. Could there be some potential way to improve it?
Technical Quality: 3
Clarity: 3
Questions for Authors: I have one primary concern: how could this be applied to the field of robotics and embodied AI?
**From this concern, several questions arise:**
- How is the background (or scenario) generated within this pipeline? In a 2D game setting, detailed descriptions generate the scenarios, but this can become extremely tedious as scenes become more complex, such as in an embodied environment. While language-guided scene generation could be a solution, how will it fit into the POMDP framework?
- The framework addresses three main components: controller, model, and view (rendering). In robotics, these aspects are typically handled by a physics simulation. How will this framework further contribute to the field of robotics? Currently, the paper shows potential for task generation in tabletop scenarios only.
- I am still unclear on how the Robotics Task Generation part is achieved by this pipeline.
**Some suggestions:**
- While it might be challenging to address this concern with experiments during the rebuttal period, more discussion on approaches and challenges would be beneficial.
- Any additional experiments that could demonstrate the pipeline's usage in robotics or embodied AI could help.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback. We are glad that you share the excitement of the implication of this to the potential of scaling up the training of embodied generalist agents.
# How can this be applied to the field of robotics and embodied AI?
We present a novel method for factorizing simulation codebases, based on a factorized POMDP representation, which allows for efficient context selection. Since most robotics tasks can be formulated as a POMDP, we believe that our method can be beneficial in the field of robotics and embodied AI, specifically in assembly-related tasks, as they require many dependent steps. To further demonstrate the superiority of our method, we also conduct additional experiments on 12 assembly-related tasks where FactorSim outperforms all Chain-of-Thought baselines used in GenSim by a large margin, due to its ability to factorize the multi-step process into subtasks and generate these subtasks in separate steps, using only the relevant context needed.
# Details on the robotics experiment
Refer to Figure 2 in the newly attached pdf for an overview of our robotics experiment. Below, we explain our experimental settings in more detail and how we apply FactorSim to generate robotics tasks.
The paper GenSim [1] proposed a benchmark to investigate how well Large Language Models (LLMs) can design and implement simulation tasks and how to improve their performance on task generation. This benchmark includes a list of tasks described in the following format:
- Task: Build-Car
- task-name: build-car
- task-description: Construct a simple car structure using blocks and cylinders.
- assets-used: [block/block.urdf, ball/ball-template.urdf]
After the task code is generated, GenSim builds on top of CLIPort and evaluates the tasks by feeding them through a series of checkers that validate "syntax-correctness," "runtime-verification," and "task-completion." We followed the same experimental setting as described in their paper, except that we introduced an additional metric, "human-pass-rate," to measure prompt alignment. This "human pass rate" checks whether the generated task adheres to the input prompt upon completion.
FactorSim achieved state-of-the-art performance on their benchmark. To clarify how we apply FactorSim to the benchmark, we first describe how the POMDP is defined in this setting:
- States: object states in the environment (color, object pose, size, scale)
- Reward function: a set of functions that define how rewards are given (e.g., a reward of X is given when the blue cube is in the bowl). This is analogous to how the scoring functions are generated in the case of RL game generation.
- Transition function and observation function: the underlying physics simulation and the robot assumed in CLIPort handle these aspects, so we don't generate them.
We apologize if we didn't make it clear how we applied FactorSim to generate robotic tasks. To summarize, a robotic task is no different from an RL game in that the final task can be modeled as a POMDP. While the transition dynamics in this case are handled by a physics simulator, we still need to generate the reward function, in the form of goal states. Objects in the scene and their arrangements are essentially state variables, and the generated functions operate on these state variables to define whether a state should be rewarded. This is analogous to how, in RL games, we implement a reward function that takes in a set of state variables and updates them.
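To make the analogy concrete, here is a minimal, hypothetical sketch of a reward function defined as a goal condition over state variables; the variable names and tolerance are illustrative assumptions, not the paper's actual generated code.

```python
# Hypothetical sketch: a generated "reward function" as a goal condition
# over state variables (names and tolerance are illustrative).

def dist(a, b):
    # Euclidean distance between two poses given as (x, y, z) tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reward(state):
    # Goal state: the ball rests inside the container (within a tolerance),
    # mirroring "a reward of X is given when the blue cube is in the bowl".
    ball = state["ball_pose"]
    container = state["container_pose"]
    return 1.0 if dist(ball, container) < 0.05 else 0.0

state = {"ball_pose": (0.10, 0.20, 0.02), "container_pose": (0.11, 0.20, 0.0)}
print(reward(state))  # 1.0
```

A physics simulator would supply the object poses at each step; the generated code only has to express the goal condition.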
# Additional robotics task generation experiment
To further showcase FactorSim's effectiveness in generating robotics tasks, we form a list of 12 assembly tasks (i.e., build-related tasks) by combining the assembly tasks in GenSim's benchmark with additional assembly tasks generated by GPT-4, such as "BuildLampPost" or "BuildDogHouse". As shown in Table 1 of the pdf attached to our general response, FactorSim outperforms all baselines by a large margin, including GenSim's two Chain-of-Thought baselines. We also provide visualizations of tasks that FactorSim is able to generate, along with demonstration collection using the oracle agent, in the accompanying figure. These tasks use primitive assets like blocks and balls to form more complicated structures. We believe that scene generation fits into the POMDP framework, similar to what we achieved in our robotics task generation experiment.
# Can we improve upon the results in Table 1 and Figure 3
The low success rates testify to the difficulty of the tasks in the benchmark. We additionally ran AgentCoder [1], a SOTA code generation method that uses a multi-agent system to perform CoT prompting, on our benchmark. Please find the results in the General Response. While AgentCoder claims to refine code iteratively, it performs poorly because it relies on a test designer agent and a test executor agent to write quality test cases. However, due to the complexity of the tasks in our benchmark, the test designer agent tends to write incorrect or infeasible tests, leading to negative feedback. This points to FactorSim being an improvement over standard "role-based" Chain-of-Thought decompositions.
# How is the background generated?
In Pygame learning environments, the game states are represented as a non-visual state, as exemplified by the documentation of the Catcher game: https://pygame-learning-environment.readthedocs.io/en/latest/user/games/catcher.html.
[1] Huang, Dong, et al. "Agentcoder: Multi-agent-based code generation with iterative testing and optimisation." arXiv preprint arXiv:2312.13010 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Most of my concerns have been addressed.
While the main technical contribution appears to be in Code Synthesis, extending these skills to the field of robotics—a critical application area for code generation—also represents an important aspect of this paper. The authors have addressed this point well in their rebuttal.
As a result, I am inclined to raise my assessment to a weak accept for this paper.
BTW, it would be beneficial to improve the quality of the figures in the future. For instance, in Rebuttal Figure 2, the red dashed square intersects both the code and the outer square, which could be refined.
---
Rebuttal 2:
Comment: Dear Reviewer 6Px5,
Thank you for taking the time to read our rebuttal. We have refined our figures, as you suggested.
Best regards,
Authors. | Summary: The paper proposes an LLM prompting method to generate full game / robot simulations in code based on text descriptions. Given a long text description, the method first utilizes an LLM to decompose it into multiple sentences, and then uses them to iteratively generate and update simulation code. For each iteration, the code is generated and updated separately as three modules, i.e., controller, model, and view. The update happens in a factorized way: the authors use the LLM to identify relevant state and context to avoid feeding the full generated code into the LLM.
In experiments, the method is evaluated on game and robot simulation code generation benchmarks. The method shows superior results against other LLM prompting baselines in generating valid simulation code that aligns with text description.
Strengths: - The proposed method exploits the structure of simulation to modularize and factorize code generation. This strategy significantly improves LLM's capability to generate full simulation code.
- The method is comprehensively evaluated on game and robot benchmarks.
- The paper is well written and easy to follow.
Weaknesses: The major contribution of the paper seems to be a prompting technique crafted for the specific task of simulation code generation. While such a technique does improve performance on the task, my concern is it is neither fundamental nor sufficiently novel. The proposed prompting technique highlights two key designs:
- modularize simulation code generation manually, which aligns with the common practice to manually decompose a complex task into sub-tasks for LLMs to handle more effectively.
- extract relevant portion of code for LLM to consume and update, which is also an implementation-wise design that many works have already incorporated.
While the paper presents factorized POMDP formulations, they don't seem to make a difference in how the prompting method is implemented. So I'm concerned that the contribution of this paper is more a practical application than a general method or framework.
Technical Quality: 2
Clarity: 3
Questions for Authors: I'm curious what the failure modes of FactorSim are like.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See Weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for dedicating your time to review our paper and for providing insightful feedback. We are glad to learn that you find our paper to be well-written and comprehensively evaluated.
# Novelty
We present a novel method for generating coded simulations that allows for efficient context selection; our method outperforms the Chain-of-Thought baselines by a significant margin in generating complex code, e.g., RL environments (see Table 1 in the main paper) and robotics tasks (see Figure 3 in the main paper and Table 1 in the pdf attached to our general response). More specifically, our method differs from existing Chain-of-Thought methods by introducing a principled way to select context from the codebase, based on the factorized POMDP representation. It can be applied to any code generation task where the target can be modeled as a Partially Observable Markov Decision Process (POMDP), a flexible representation. The key advantage of our proposed representation is that it helps us model dependencies between different state variables and functions, instead of having to consider the entire codebase when making modifications or having to retrieve context code snippets based on pre-defined similarity scores (e.g., akin to Retrieval-Augmented Generation). To further showcase the novelty and effectiveness of FactorSim, we ran an additional SOTA baseline that uses a "role-based" chain of thought to perform code generation.
| RL Env| GPT-4 w/ AgentCoder | GPT-4 w/ FactorSim |
|----------------------|:----------------:|:----------------:|
| Flappy Bird | 0.18 | 0.78 |
| Catcher | 0.45 | 0.66 |
| Snake | 0.27 | 0.44 |
| Pixelcopter | 0.43 | 0.78 |
| Pong | 0.43 | 0.61 |
| Puckworld | 0.33 | 0.81 |
| Waterworld | 0.20 | 0.62 |
| Monster Kong | 0.23 | 0.44 |
While AgentCoder claims to refine code iteratively, it performs poorly because it relies on a test designer agent and a test executor agent to write quality test cases. However, due to the complexity of the tasks in our benchmark, the test designer agent tends to write incorrect or infeasible tests, leading to negative feedback. This points to FactorSim being an improvement over the standard "role-based" Chain of Thought decompositions, and that it is non-trivial to generate simulations from complex textual specifications.
# Failure modes and Limitations
The improvement of FactorSim stems from better context provided in the prompts, allowing the LLMs to be more focused during code generation. Below, we discuss one failure mode and one limitation of FactorSim. The main failure mode we observe with FactorSim is when the context is selected incorrectly, leading to incorrect implementation. However, we find that the benefit of FactorSim outweighs the occasional errors LLMs make when selecting contexts, as shown in our experiments.
In our experiment of generating robotics tasks, we find that all baselines, including our method, often ignore physical constraints necessary for the task to be completed by a robot. It is difficult for LLMs to consider context related to these "constraints" necessary for the task to be completed without being explicitly prompted. For example, if the LLM is prompted to generate the task "build a bridge", LLMs might generate a "bridge" block that is too small to span the two bottom items' distance. When the LLM is prompted to generate the task "put the ball in the container," the generated task might consist of a base container that is much smaller than the size of the ball. We leave this to future work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal! It's good to see how FactorSim performs against a recent baseline, AgentCoder, and what its failure modes are like. I thought about it carefully and agree with the claimed novelty of the proposed method.
I'm happy to increase the score to Weak Accept, given the clear writing and comprehensive evaluation of the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer yVST,
Thank you for taking the time to read our rebuttal!
Best regards,
Authors. | Summary: The paper proposed a factorized approach to generating simulated games via LLM code synthesis. The core idea is that one doesn't need to generate the entire code at once, but rather generate different parts of a POMDP game, such as the controller, model, and view. The generated simulation allows RL policies to be trained on top of it. The authors introduced a benchmark to evaluate the proposed framework and show good results in terms of prompt alignment, transfer ability, and human evaluation.
Strengths: The paper investigates an important problem, simulation generation. The evaluation over the mentioned environments is solid, spanning from automated tests to human evaluations.
Weaknesses: 1. The paper is poorly written. I have hands-on experience with almost all important concepts mentioned in the paper, yet still have a hard time understanding the paper, and have to read again and again including some code. Rather than talking about abstract terms like POMDP / factorization first, I think the authors can start easy with intuitions and explanations. The figure can also be improved. The main method figure shall spend more time showing what's special about "Factored POMDP" compared to prior methods. The benchmark claim should have its own section. The motivation is not clearly narrated either. The world model section in related work doesn't seem to fit there.
2. One of the main contributions the authors listed in the introduction section is a benchmark. However, I think this benchmark seems to lack the technical depth I was expecting as a standalone contribution. I feel it's just a set of small metrics rather than a benchmark.
3. The paper just lacks the level of technical contribution that meets my criteria for a Neurips paper. While there are many other prompting papers like CoT, ToT, the problem the paper is trying to solve is also very specific.
4. While I have experience with both LLM and robotics, I believe the authors should not put Robotics as primary area, but NLP or code synthesis community.
Technical Quality: 2
Clarity: 2
Questions for Authors: In figures like figure 6, is the human pass rate based on the previous stage e.g. only executable code.
It seems that on open source model like llama 3, gensim with CoT is very close to factor sim. Can you explain the insights?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The author discussed the limitations of automated evaluation by conducting a human study. No outstanding negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback! We are glad that you find this an important problem and that you find our evaluation solid and comprehensive.
# Clarification of our Motivation and Novelty and why we chose Robotics as the primary area
Recent advancements in foundational models have demonstrated their value in cognitive planning in robotics. The use of these models for obtaining lower-level policy is relatively underexplored, with many works focusing on direct control outputs [6, 7]. Our work leverages foundational models to generate simulations for training (i.e., Generative Simulation), a promising approach for training RL and robotics agents [1, 2]. However, LMs often fail at generating large amounts of complex code, leading to incorrect simulations and significant distributional shifts in downstream environments.
By representing our codebase as a factorized POMDP, we can decompose the code into simpler chunks, reducing the need for extensive context. This method improves the accuracy of generating simulations with complex logic, outperforming all Chain-of-Thought-based baselines in generating prompt-aligned coded simulations. In zero-shot transfer experiments, more accurate code generation for simulations proves pivotal for the downstream training of RL and robotics agents. FactorSim’s zero-shot performance reaches 50% of direct training results in 5 out of 8 environments, whereas baselines show significant transfer in only 1-2 environments. Given that most robotics tasks can be formulated as a POMDP, our method could significantly impact the robotics community.
# Presentation
Thank you for your detailed feedback on our presentation. We understand the importance of legibility and have made the following improvements to our paper:
- We included intuitive explanations for our method. We propose Factored POMDP representations to model an existing simulation codebase as a hypergraph, with nodes as state variables and hyperedges as functions relating to one or more state variables. Based on instructions for module additions, LLMs first select relevant state variables, then include only these variables and related functions in subsequent prompts.
- We added a new figure to demonstrate how our formulation directly corresponds to prompt construction, with general explanations and concrete examples.
- We included two new figures to explain our robotics task generation experiment and show visualizations of tasks that FactorSim successfully generates, which all other baselines fail to achieve.
- We moved the world model part of the related section to the appendix.
# Clarification on our Benchmark
Pygame Learning Environment [3] is a Reinforcement Learning benchmark used in various existing works [4,5]. We extend these environments to include prompts with detailed specifications of game logic paired with system tests to test for “precise” prompt adherence ability in simulation code generation. When coupled with the original RL environments, our extension enables the evaluation of not only the accuracy of the generated code but also its usefulness in solving RL tasks (i.e., zero-shot transfer).
# Additional experiments on the benchmark
We additionally ran AgentCoder [8], a SOTA code generation method that uses a multi-agent system to perform CoT prompting, on our benchmark.
| RL Env| GPT-4 w/ AgentCoder | GPT-4 w/ FactorSim |
|----------------------|:----------------:|:----------------:|
| Flappy Bird | 0.18 | 0.78 |
| Catcher | 0.45 | 0.66 |
| Snake | 0.27 | 0.44 |
| Pixelcopter | 0.43 | 0.78 |
| Pong | 0.43 | 0.61 |
| Puckworld | 0.33 | 0.81 |
| Waterworld | 0.20 | 0.62 |
| Monster Kong | 0.23 | 0.44 |
Additionally, we added two Atari games to our benchmark to increase the difficulty of our tasks. Here we provide some preliminary results:
| RL Env | GPT-4 | GPT-4 w/ self debug | GPT-4 w/ Factor Sim |
|----------------------|:----------------:|:----------------:|:----------------:|
| Breakout | 0.13 | 0.10 | 0.40 |
| Space Invaders | 0.10 | 0.14 | 0.28 |
This benchmark allows us to investigate whether classical RL environments can be solved through the paradigm of generative simulation.
# In figures like figure 6, is the human pass rate based on the previous stage e.g. only executable code.
Yes. These four metrics (i.e., "syntax-correct," "runtime-verified," "task-completed," "human-pass") have an incremental structure, from syntax to runtime to successfully generating demonstrations to human verification, where failing an earlier metric implies failing the later ones.
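This incremental structure can be sketched as a staged pipeline; the checker implementations below are illustrative stand-ins, not the benchmark's actual validators.

```python
# Hypothetical sketch of the incremental metric structure: each checker is
# attempted only if all earlier checkers passed, so failing an earlier
# metric implies failing the later ones.

STAGES = ["syntax-correct", "runtime-verified", "task-completed", "human-pass"]

def evaluate(task, checkers):
    """Run the stage checkers in order; stop at the first failure."""
    passed = []
    for stage in STAGES:
        if not checkers[stage](task):
            break
        passed.append(stage)
    return passed

# Illustrative stand-in checkers keyed on simple task flags.
checkers = {
    "syntax-correct": lambda t: t["compiles"],
    "runtime-verified": lambda t: t["runs"],
    "task-completed": lambda t: t["solvable"],
    "human-pass": lambda t: t["prompt_aligned"],
}
task = {"compiles": True, "runs": True, "solvable": False, "prompt_aligned": True}
print(evaluate(task, checkers))  # ['syntax-correct', 'runtime-verified']
```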
# Explain the insights re: open source models’ performance
In our experiments, llama3 performs very well on generating specific tasks but exhibits instability, performing exceptionally on some tasks while poorly on others. Empirically, we observe that it resorts to "memorization" more, suggesting that if the memorized code is correct, it excels at generating those tasks.
GenSim with CoT, both bottom-up and top-down, are competitive baselines that use a chain of 4-7 prompts. Despite this, FactorSim outperforms it, demonstrating its superior performance in task generation.
[1] Xian, Zhou, et al. "Towards generalist robots: A promising paradigm via generative simulation."
[2] Katara, Pushkal, Zhou Xian, and Katerina Fragkiadaki. "Gen2sim: Scaling up robot learning in simulation with generative models."
[3] Tasfi, Norman. "Pygame learning environment." GitHub repository (2016).
[4] Riemer, Matthew, et al. "Learning to learn without forgetting by maximizing transfer and minimizing interference." ICLR 2019.
[5] Tucker, Aaron, Adam Gleave, and Stuart Russell. "Inverse reinforcement learning for video games."
[6] Wang, Yen-Jen, et al. "Prompt a robot to walk with large language models."
[7] Nasiriany, Soroush, et al. "Pivot: Iterative visual prompting elicits actionable knowledge for vlms."
[8] Huang, Dong, et al. "Agentcoder: Multi-agent-based code generation with iterative testing and optimisation."
---
Rebuttal 2:
Comment: I acknowledge that I've read the rebuttal and other reviewers' opinions.
Figure 1 in the rebuttal pdf definitely made the paper much more intuitive, allowing me to verify the original interpretation of the paper is correct. I recommend the authors to further improve its quality and put it in the paper whether it's accepted to Neurips or not.
I still believe robotics should not be the primary area for reviewer allocation even given the author's response - just as reviewer EKr6 mentioned, the robot experiments seem like an afterthought. LLM / code synthesis shall be the better pool of reviewers, and I believe the AC shall consider this when making the final decision from our reviews.
I am raising the score for the presentation & overall rating a bit since I find the general response/pdf helpful to understanding, yet still deem the paper unable to meet the acceptance threshold. I am willing to defend my rating, although other reviewers may have different opinions.
---
Rebuttal Comment 2.1:
Title: Thank you for reviewing our rebuttal
Comment: Thank you for taking the time to read our rebuttal and for your constructive feedback. We really appreciate your acknowledgment of the improvements we made.
The subject area was indeed a challenging decision for us, and we would have chosen LLMs or Code Synthesis if they had been available. Looking back at our discussion before the submission, we first carefully listed down all available options:
- Machine vision
- Natural Language Processing
- Speech and Audio
- Deep Learning Architectures
- Generative Models
- Diffusion-based models
- Optimization for deep networks
- Evaluation (methodology, meta-studies, replicability, and validity)
- Online Learning
- Bandits
- Reinforcement Learning
- Active Learning
- Infrastructure (libraries, improved implementation and scalability, distributed solutions)
- Machine learning for healthcare
- Machine learning for physical sciences (for example: climate, physics)
- Machine learning for social sciences
- Machine learning for other sciences and fields
- Graph neural networks
- Neuroscience and cognitive science (neural coding, brain-computer interfaces)
- Optimization (convex and non-convex, discrete, stochastic, robust)
- Probabilistic methods (for example, variational inference, Gaussian processes)
- Causal Inference
- Robotics
- Interpretability and explainability
- Fairness
- Privacy
- Safety in machine learning
- Human-AI Interaction
- Learning theory
- Algorithmic game theory
- Other (please use sparingly, only use the keyword field for more details)
We first narrowed it down to Generative Models, Reinforcement Learning, Natural Language Processing, and Robotics. Then, we ruled out Generative Models as we are not creating a new generative model. Between Reinforcement Learning, Natural Language Processing, and Robotics, we ultimately chose Robotics as the primary area for a few reasons: (a) we are not trying to solve a core NLP or RL problem, (b) the downstream application of our code generation method is RL and robotics, and (c) the message we hope to communicate to the community and the broader implications beyond our empirical results are more aligned with the robotics domain, as highlighted in our rebuttals.
As you may see, none of these areas are a perfect match for our submission. We hope this clarifies the concern about our submission's subject area. Please don't hesitate to let us know if there are any other concerns that we may address to meet the acceptance threshold. | Summary: The paper introduces an LLM-based method for generating code for simulations. After generating simulations of famous games based on their manuals and descriptions, the authors show that policies trained in these environments transfer well to the real games.
Strengths: - **S.1 Great results.** I think that the results from Fig.3 are very impressive. Zero-shot transfer is very hard, and doing so much more reliably than vanilla GPT-4 is impressive.
- **S.2 Overall good idea.** The idea to deconstruct the game development task into M-V-C makes a lot of sense to me. I just thought that's already kind of captured in the POMDP formulation.
- **S.3 Good presentation.** The overall presentation and writing are good, although there is much left to be desired in terms of implementation details.
Weaknesses: - **W.1 Implementation details and relationship to formulas.** I'm happy that there was code provided with the submission and I hope that it will be released publicly because based purely on the main body of the paper, the method is not reproducible. Including the prompts in the appendix helps but I wish they were commented a bit more on why certain phrases and sections are there. And I'm wondering if the theory that's presented in the paper holds water wrt the actual instructions in the appendix. Because as far as I understand, there aren't any restrictions on what code the LLM can generate for each component, right? Also, the paper mentions graphs every so often and I don't know how they fit into this. I also think the context selection is crucial to your method and from the main body of the paper, it's completely unclear how that's implemented.
- **W.2 Missing examples.** Along a similar idea, I'd have loved some examples throughout the paper to illustrate what some of these instructions actually mean.
- **W.3 Unclear robotic experiments.** The robotic experiments seem to be more of an afterthought in the paper, and it's unclear what existing assets there are, what control code is assumed given, what camera parameters are assumed, etc.
- **W.4 Unclear input mapping.** The appendix mentions that the controller is given or that button presses always mean the same thing. I completely don't understand what's meant by that.
Overall, I think the paper shows a great idea and is probably beneficial to the community, but some work should go into tweaking the main body of the paper and making the method more clear and reproducible.
Technical Quality: 3
Clarity: 3
Questions for Authors: I don't have any major questions wrt the work. There were a couple of points throughout the paper in the methods section, where I asked myself why this is relevant, but then this was cleared up a paragraph later.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately address the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and for providing constructive feedback. We are glad you share our excitement about FactorSim's performance on the challenging task of zero-shot transfer. We appreciate your feedback and would like to address your questions.
# Missing examples
Thank you for your great suggestion! Following your advice, we have included two new figures in the pdf file attached to our general response: one illustrates FactorSim with a concrete example, and the other explains our robotics experiment in more detail. See more detail below.
# The connection between FactorSim and the formulations
While LLMs are free to generate code, our formulation in the paper (i.e., Algorithm 1) directly informs what is included in the prompt as context. To clearly illustrate how FactorSim's prompts are constructed and map to the theory presented in Algorithm 1, we made a new figure and attached it to the PDF of our general response. FactorSim uses five prompts, as highlighted in grey boxes in the newly added figure. The first prompt corresponds to Equation 1 in our main paper. It decomposes the long and complex simulation specification into a list of steps to be implemented sequentially. The output from that is a list of instructions. For example, a step instruction could be "Introduce a blue agent rendered as a blue circle, allowing players to control it with arrow keys. The red puck should respawn when it collides with the blue agent." Then, at each step, FactorSim would add and modify the code of the existing codebase to incorporate the instructed change.
During each step, FactorSim uses the second prompt (i.e., the Context Selection Prompt, as shown in the figure) to select a set of state variables from the list of all state variables in the current codebase. This step corresponds to Equations 9 and 10 in Algorithm 1 in our main paper. The output of this prompt consists of a list of state variables, including state variables that are considered relevant to this module and new state variables that the LLMs deem necessary to add to the system. After the list of context states is obtained, we use it to retrieve the relevant functions in the codebase. A function is defined as relevant to a state variable if the state variable is used in the function. Our factorized POMDP representation essentially arranges a codebase into a (hyper-)graph where state variables are nodes and functions are hyperedges. After we obtain the list of relevant states and functions, they serve as the codebase context in the following prompts, which ask the LLMs to generate the "model," "view," and "controller" parts of the module. Note that not all steps require all three components; for instance, in our robotics experiment, only the "model" is generated, as the "view" and "controller" are determined by the robot and the physics engine.
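The state-to-function retrieval described here can be sketched roughly as follows; the toy codebase and all names are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch: the codebase as a (hyper-)graph, with state variables
# as nodes and functions as hyperedges. A function is relevant to a state
# variable if the variable is used in its body. Names are illustrative.

codebase = {
    # function name -> state variables used in its body
    "update_puck": ["puck_pos", "puck_vel"],
    "handle_input": ["agent_pos", "pressed_keys"],
    "check_collision": ["puck_pos", "agent_pos", "score"],
}

def relevant_functions(selected_states, codebase):
    # Keep every function that touches at least one selected state variable.
    return sorted(
        fn for fn, used in codebase.items()
        if any(s in used for s in selected_states)
    )

# Context for a step that changes the puck's respawn behaviour:
print(relevant_functions({"puck_pos"}, codebase))  # ['check_collision', 'update_puck']
```

Only the selected states and their retrieved functions are then placed in the prompt, rather than the whole codebase.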
# Clarification on our robotics experiment
In our experiments, we apply FactorSim to the benchmark proposed by GenSim [2]. Built on top of CLIPort [3], the benchmark is designed to investigate how well Large Language Models (LLMs) can design and implement simulation tasks and how to improve their performance on task generation. We follow the same experimental setup described in their paper, with the addition of a new metric, "human pass rate," to assess prompt alignment. The benchmark includes a list of tasks described in the following format:
- task-name: build-car
- task-description: Construct a simple car structure using blocks and cylinders.
- assets-used: [block/block.urdf, ball/ball-template.urdf]
The assets needed are provided in the prompts. The robot control and camera parameters are assumed to be given. Refer to Figure 2 in the attached PDF for an overview of our robotics experiment. We apply FactorSim to generate robotics tasks in a very similar way to how we generate RL games, as both can be modeled as POMDPs.
To further demonstrate FactorSim's effectiveness in generating robotics tasks, we curate a list of 12 assembly-related tasks and show that FactorSim outperforms all Chain-of-Thought baselines used in GenSim by a large margin due to its ability to factorize the multi-step process into subtasks and generate these subtasks in separate steps, by using only the relevant context needed. Due to space limitations, please find more details in our response to reviewer 6Px5.
# Unclear input mapping
In our zero-shot transfer experiment, we train RL agents on generated code and test them on the original game implementation in the Pygame Learning Environment benchmark (PLE) [1]. In the Pygame Learning Environment (PLE), an action is defined as a "keyboard button." When we say, "The same button always means the same thing," we mean that a specific keyboard key consistently maps to the same in-game action during RL agent training. For example, pressing the spacebar in Flappy Bird will always trigger the flap action in training and testing environments. We will rewrite the sentence in our paper for better clarity. Thank you for pointing it out!
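As a small illustration of such a fixed mapping (the key and action names below are made up for the example, not PLE's actual constants):

```python
# Hypothetical sketch: a fixed key-to-action mapping that is identical in the
# generated training environment and the original test environment, so an RL
# agent's learned key presses transfer directly. Names are illustrative.

KEY_TO_ACTION = {
    "K_SPACE": "flap",      # e.g., Flappy Bird's single action
    "K_UP": "move_up",
    "K_DOWN": "move_down",
}

def action_for(key):
    """Resolve a keyboard key to its in-game action (None if unmapped)."""
    return KEY_TO_ACTION.get(key)

print(action_for("K_SPACE"))  # flap
```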
# Reproducibility
We are committed to releasing the code publicly. We will ensure that the code repository includes comprehensive documentation and examples to facilitate the reproduction of our results. We have sent our code, with instructions on how to run it, via an anonymous link to the AC, following the conference guidelines.
### References
[1] Tasfi, Norman. "Pygame learning environment." GitHub repository (2016).
[2] Wang, Lirui, et al. "Gensim: Generating robotic simulation tasks via large language models." ICLR 2024
[3] Shridhar, Mohit, Lucas Manuelli, and Dieter Fox. "Cliport: What and where pathways for robotic manipulation." Conference on robot learning. PMLR, 2022. | Rebuttal 1:
Rebuttal: We are grateful for the insightful feedback provided on our paper. We are encouraged that the reviewers found our paper well presented and comprehensively evaluated, noted its impressive zero-shot results, and shared our excitement about its applicability to training generalized agents. Below, we address the points raised by more than one reviewer and summarize the improvements we have made. We respond individually to points specific to each reviewer.
# Clarification of our contribution
With our goal of generating simulations with long and complex prompt logic for downstream training in mind, our approach is to generate code for the various simulation modules step by step, modifying and expanding the codebase. Every prompt has two components: the context (i.e., the existing codebase and how the new function will be used in conjunction with it) and the task specification (i.e., what to implement). Existing works have shown that Retrieval-Augmented Generation can improve LLMs' performance, while irrelevant context can hurt it [1,2]. FactorSim constructs the "context" portion of the prompts dynamically in a principled manner. While we agree that task decomposition and Chain-of-Thought (CoT) prompting have been proposed in existing work, our method for factorizing and representing dependencies in such code generation tasks is, to the best of our knowledge, novel.
To clarify our method, we provide a new figure in the PDF illustrating how the formulation in Algorithm 1 directly corresponds to how the prompts are constructed. FactorSim consists of five prompts, highlighted in gray boxes. FactorSim first decomposes the long and complex prompt into a series of steps. Then, at each step, FactorSim modifies the codebase according to the step instruction. If the codebase were written in an object-oriented paradigm, almost the entire codebase (i.e., 120 lines of code) would have to be included in the prompt as context, since this logic pertains to many entities in the game. Instead, FactorSim maintains a factorized POMDP representation of the codebase, which is essentially a graph with state variables as nodes and functions as hyperedges:
| Name| Type|
|-|- |
|score| State Variable |
| game_over| State Variable |
| green_puck_position| State Variable |
| red_puck_speed| State Variable|
| red_puck_radius| State Variable |
| blue_agent_position| State Variable |
| … | State Variable |
| red_puck_respawn| Function |
| check_game_over_condition | Function|
| …| Function |
FactorSim uses PROMPT 2 to select a set of relevant state variables and then retrieves the functions that pertain to those state variables according to the maintained graph structure. FactorSim constructs the “context” portion of the subsequent code generation prompts from just the relevant state variables and the functions that modify one or more of them. In this example, instead of providing 120 lines of the codebase in the "context" part of the prompt, FactorSim only needs to include around 23 lines of code in the code generation prompts (i.e., prompts 3, 4, and 5 in Figure 1). Refer to Figure 1 for an overview of FactorSim, alongside an illustrative example.
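As an illustration, the graph-based context retrieval described above can be modeled as follows. This is a hypothetical sketch, not the actual FactorSim implementation; the class name, method names, and example entries are made up for exposition.

```python
class FactoredCodebase:
    """Illustrative factorized codebase: state variables are nodes and
    functions are hyperedges over the variables they read or modify."""

    def __init__(self):
        # function name -> set of state variables that function touches
        self.hyperedges: dict[str, set[str]] = {}

    def add_function(self, name: str, variables: set[str]) -> None:
        self.hyperedges[name] = variables

    def relevant_functions(self, selected_vars: set[str]) -> list[str]:
        # Retrieve every function that touches at least one selected
        # state variable; only these go into the prompt "context".
        return [f for f, vs in self.hyperedges.items() if vs & selected_vars]


cb = FactoredCodebase()
cb.add_function("red_puck_respawn", {"red_puck_speed", "red_puck_radius"})
cb.add_function("check_game_over_condition", {"score", "game_over"})

# A prompt like PROMPT 2 might select {"score", "game_over"}; we then retrieve:
print(cb.relevant_functions({"score", "game_over"}))  # → ['check_game_over_condition']
```

In this toy example, only the one function touching the selected variables would be placed in the context, rather than the whole codebase.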
# Broader Implications of our contribution
We believe in the significance of our work and FactorSim's broader implications. First, we demonstrate LLMs' ability to self-refine their prompt contexts for improved performance, unlike existing works that rely on similarity-based metrics to retrieve relevant code context [4].
Second, we believe the results of our zero-shot transfer experiment have important implications. FactorSim's zero-shot performance reaches 50% of the performance of training directly on the testing environments in 5 out of 8 Reinforcement Learning environments, compared to 1-2 out of 8 for the best baseline. This suggests the importance of automated task generation and its potential for scaling up the training of generalist embodied agents.
# Improvements we have made:
1. **New Figure**: We include a new figure that explains FactorSim with concrete examples and shows how our formulation directly corresponds to how the prompts are constructed. Refer to Figure 1 in the PDF.
2. **Additional code generation experiments**: We include a comparison between FactorSim and the SOTA code generation method (i.e., AgentCoder [3]). More details on the experimental results can be found in our response to reviewer p3c7.
|Env|GPT-4 w/ AgentCoder| GPT-4 w/ FactorSim|
|-|:-:|:-:|
|Flappy Bird|0.18|0.78|
|Catcher|0.45|0.66|
|Snake|0.27|0.44|
|Pixelcopter|0.43 | 0.78 |
|Pong | 0.43 | 0.61 |
|Puckworld | 0.33 | 0.81 |
|Waterworld | 0.20 | 0.62 |
|Monster Kong | 0.23 | 0.44 |
3. **Details on our robotics experiments**: We include a new figure that overviews how our robotics experiments are conducted and how FactorSim is utilized. Refer to our response to reviewer 6Px5.
4. **Additional robotics task generation experiments**: refer to Table 1 in the pdf and our response to reviewer 6Px5.
5. **Benchmark Extension**: We have extended our benchmark to include two classical Atari RL environments (i.e., breakout and space_invaders), which are included in the code we submitted. Results can be found in our response to reviewer p3c7.
6. **Reproducibility**: To mitigate concerns regarding reproducibility, we have also sent an anonymous link to our codebase, with instructions on how to run our code, to the AC in an official comment, following the conference guidelines.
References:
[1] Shi, Freda, et al. "Large language models can be easily distracted by irrelevant context."
[2] Levy, Mosh, Alon Jacoby, and Yoav Goldberg. "Same task, more tokens: the impact of input length on the reasoning performance of large language models."
[3] Huang, Dong, et al. "Agentcoder: Multi-agent-based code generation with iterative testing and optimisation."
[4] Zhang, Fengji, et al. "Repocoder: Repository-level code completion through iterative retrieval and generation."
Pdf: /pdf/db86ed710587ae379dd4dc9a00b57b76312b2f61.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Learning Structured Representations with Hyperbolic Embeddings | Accept (poster) | Summary: The paper introduces a novel regularization method, HypStructure, which utilizes hyperbolic geometry to improve the embedding of hierarchical relationships within feature representations. This approach enhances the learning of structured representations, reducing distortion and boosting generalization in low-dimensional scenarios. It also demonstrates superior performance in Out-of-Distribution (OOD) detection across various datasets through extensive empirical evaluation. Additionally, the paper includes an eigenvalue analysis that provides deeper insights into the structured representations, correlating positively with improved OOD detection performance. This advancement extends structured representation learning to hyperbolic spaces, achieving more discriminative and interpretable features that effectively capture the inherent hierarchies in complex datasets.
Strengths: 1. The paper is the first to formally characterize properties of hierarchy-informed features via an eigenvalue analysis, and also relate it to the OOD detection task.
2. The paper is easy to read and follow, making complex concepts accessible. The use of clear definitions, structured methodology sections, and detailed discussions helps in understanding both the theoretical underpinnings and practical implications of HypStructure. Visual aids and empirical results are presented in a manner that clearly supports the claims made.
3. The significance of this work lies in its potential impact on a range of applications that require understanding and leveraging hierarchical relationships in data, such as image recognition and OOD detection.
Weaknesses: 1. The main concern of this paper is the novelty. I believe the method proposed by the author in this work has been explored in many previous works. For instance, in "Hyperbolic Image Embeddings," "Hyperbolic Contrastive Learning for Visual Representations beyond Objects", etc. Although the paper characterizes properties of hierarchy-informed features via an eigenvalue analysis, the contribution is not significant enough to be accepted.
2. The writing is also not good enough for me. For instance, the two examples starting from line 107 need not be included in the main paper; they would be better placed in the supplementary materials. There are also some repetitive expressions, for instance, in line 351 and line 353 ("While the current work").
3. In summary, I believe the technical contribution of this paper is not significant enough to be accepted.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: As I said in the previous section, the main concern is the novelty. Potential improvements include extending the application scenarios of hyperbolic structured representations on more vision or language tasks that have not been explored before. Further theoretical investigation into establishing the error bounds of CPCC-style structured regularization objectives is of interest as well. The writing needs to be improved as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below.
## Concerns about the novelty
We humbly disagree with the reviewer's statement, since our learning setting is **different from the existing works** in the hyperbolic geometry literature: our focus is on using hyperbolic geometry to **embed a given class hierarchy.** We believe our work is a suitable contribution to the space of structured representation learning and Hyperbolic geometry, and we make the following key points to argue for the novelty of our work, which we hope will highlight our technical contributions to the field.
1. **Implicit vs explicit incorporation of hierarchy in Hyperbolic geometry and complementary nature of HypStructure**: Most previous works in Hyperbolic geometry assume a latent **implicit** hierarchy in distributions/parameters, while our work focuses on an **explicit** knowledge injection of external hierarchy into the representation space. So while the methods and motivations may _seem_ similar, there are important differences in the nature of the problem, and the statement that the “method proposed by the author has been explored in many previous works” is an overly simplified generalization.
Specifically, the “Hyperbolic Image Embeddings” paper replaces Euclidean layers with Hyperbolic classifiers, constraining the layer parameters to the Hyperbolic space; it bears no similarity to HypStructure, which instead utilizes an external hierarchy for embedding regularization. Furthermore, “Hyperbolic Contrastive Learning for Visual Representations Beyond Objects” proposes a Hyperbolic supervised contrastive loss that implicitly assumes the existence of an unknown scene-object hierarchy.
However, we want to emphasize that our approach offers a flexible framework that is complementary and can be used in conjunction with other Hyperbolic losses/backbones assuming implicit hierarchies for performance gains. For instance, it can be combined with a more recent variant of the Hyperbolic classifier-based backbone (Clipped HNN), as well as the Hyperbolic SupCon loss, for further performance gains. Please refer to the detailed results in the global response above; we share these performance numbers here for reference.
| Method | CIFAR10 | CIFAR100 | ImageNet100|
|---|:---:|:---:|:---:|
| L_hyp (HypSupCon) | 94.64 | 75.77| 90.02|
| L_hyp (HypSupCon) + HypStructure (Ours)| **95.06**| **76.08** | **90.31** |
| Method | CIFAR10 | CIFAR100 |
|---|:---:|:---:|
| Clipped HNN | 95.08 | 77.42 |
| Clipped HNN + HypStructure | **95.19** | **78.05** |
2. **Address limitations of the l2-CPCC with strong empirical performance**: The central theme of our paper is **accurately** embedding **external knowledge** available in the form of hierarchical information in the representation space. While our work draws inspiration from recent contributions in Hyperbolic machine learning, we propose a novel, practical, and effective framework that addresses the severe limitations of the CPCC objective in embedding this hierarchical information. This is clearly demonstrated by the strong empirical performance on large-scale datasets and by the visualization of the learned representations using HypStructure.
3. **Theoretical understanding of structured representation learning and OOD detection**: To the best of our knowledge, our work is one of the first to draw theoretical connections between the paradigm of hierarchy-informed representation learning and tasks such as OOD detection, and we provide a provable hierarchical embedding that fills a theoretical gap in the broader field. These insights can be useful in designing theoretically grounded, efficient, and accurate representation learning methods that learn general representations useful across tasks, as we have demonstrated for classification, OOD detection, and maintaining interpretable representations.
Based on these grounds, we feel our work makes many novel contributions in terms of explicit modeling of structured hierarchy, improving and preserving representations with other Hyperbolic objectives and backbones, empirical effectiveness across tasks, and drawing theoretical connections to the OOD detection task (as you have rightly noted). We also foresee our contribution as motivating more works in the practical domain of embedding structured information explicitly using Hyperbolic geometry. We sincerely hope this provides a satisfactory justification to the reviewer regarding our contribution.
---
## Concerns about writing and examples
We have included these examples in the motivation section of our paper to emphasize the practical concerns and severe limitations of l2-CPCC, which serve as our motivation to develop better methods. These examples, related to our label tree setting, are very informative for understanding the problem setup and its challenges (as has also been noted by Reviewer dBDC as a strength of our paper). Regarding writing, thanks for your suggestions; we will proofread to improve readability and remove repetitive expressions in the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. The rebuttal strengthens my understanding of this paper. Combined with the other reviews and rebuttals, I still believe the plug-in approach is useful, but the improvements on CIFAR10 and CIFAR100 are not obvious. However, the theoretical understanding part and the mathematical proofs are valuable to the community. I would like to raise the score to 5. Good luck!
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 3c64
Comment: Thanks for raising your score and for your support of our work and its contributions! We greatly appreciate the time you spent reviewing our paper, going through the rebuttal and the other reviews, and offering valuable suggestions for improving our work. We hope that the improvements across datasets, tasks, and settings in terms of accuracies, distortion metrics, and OOD scores are clearer in our updated set of results (as shared in the response to reviewer gWWe, as well as the global comment with new results), and we are grateful that our rebuttal could address your concerns. We would be happy to discuss further if you have any more questions about our work. | Summary: The paper presents a novel approach, HypStructure, for learning structured representations. Compared with the existing method, the proposed method adds a regularizer computed from hyperbolic geometry. This approach aims to reduce distortion and improve generalization performance, particularly in low-dimensional scenarios. Extensive experiments are conducted on both the classification task and the out-of-distribution detection task.
Strengths: The paper is well organized. It extends the existing L2-CPCC to the hyperbolic space, which effectively reduces distortion and enhances the representation of hierarchical relationships in the data. The paper also conducts comprehensive experiments as well as a detailed theoretical analysis of the eigenspectrum of structured representations.
Weaknesses: If my understanding of the proposed loss term is correct, $L_{flat}$ is not calculated in hyperbolic geometry. Have you tried $L_{flat}$ with a hyperbolic network or hyperbolic geometry? I hope the authors could provide more explanation of the combined loss in different geometries.
In Section 2.1, it mentions that $D_i$ is the subset of data points with a specific class label, and $d(D_i, D_j)$ is the distance between the feature vectors of $D_i$ and $D_j$. However, it is not mentioned how the vectors for $D_i$ and $D_j$ are calculated. Is it simply the average of all the feature vectors in the subset?
For Example 1 in Section 2.2, tree $T$ and nodes $G, C, D, E$ are referenced in a way that implies there should be a figure accompanying the example. While Figure 1 is referenced shortly before this, it is meant to accompany Example 2. Has a figure been omitted here?
Also, I would recommend proofreading the paper to correct all grammatical errors. For example, in the paragraph of Section 2.1, the first sentence “Using a composite objective as defined in Equation (2), we can enforce the distance relationship between a pair of representations in the feature space, to behave similarly as the tree metric between the same vertices” should be corrected to “Using a composite objective as defined in Equation (2), we can enforce the distance relationship between a pair of representations in the feature space to behave similarly to the tree metric between the same vertices.” This version removes the unnecessary comma and corrects “behave similarly as…” to “behave similarly to…”.
Technical Quality: 2
Clarity: 3
Questions for Authors: see my comments in the Weaknesses section.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There are two limitations mentioned in this paper. The first is that L2-CPCC can only approximate the structured information, since it is impossible to “embed a tree $T$ into $L2$ without distortion,” and as this work extends L2-CPCC, it would similarly have this limitation. However, this is never explicitly stated in Section 7, which mentions only that HypStructure is limited to Euclidean classification losses. More could have been said about the limitations, but it does suffice.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below.
## Hyperbolic $L_{flat}$
We choose the Supervised Contrastive Loss (SupCon) loss as the $L_{flat}$ loss in our experiments primarily to provide a fair comparison with the prior works, whereas our HypStructure methodology is flexible and can be combined with any other loss function.
To demonstrate the wider applicability of our method, based on the reviewer's suggestion we evaluate the performance using the Hyperbolic Supervised Contrastive loss $L_{\text{hyp}}$ proposed in [1] instead of the SupCon loss, and provide a comparison with using HypStructure in conjunction with this Hyperbolic loss in the table below (fine accuracy).
| Method | CIFAR10 | CIFAR100 | ImageNet100|
|---|:---:|:---:|:---:|
| L_hyp (HypSupCon) | 94.64 | 75.77| 90.02|
| L_hyp (HypSupCon) + HypStructure (Ours)| **95.06**| **76.08** | **90.31** |
In addition to the aforementioned loss, we vary the backbone network, using the Clipped HNN model, a type of Hyperbolic Neural Network, instead of the Euclidean ResNet-style architectures, and provide the results for that setting below.
| Method | CIFAR10 | CIFAR100 |
|---|:---:|:---:|
| Clipped HNN | 95.08 | 77.42 |
| Clipped HNN + HypStructure | **95.19** | **78.05** |
We observe that even with a classification loss function that optimizes for separability in the Hyperbolic space, or with a Hyperbolic Neural Network as the backbone, our proposed methodology HypStructure is complementary and provides performance gains by better embedding the hierarchical relationship in the Hyperbolic space. (More details regarding the setup are in the global response above.)
We will include the results with the experiments on hyperbolic losses and backbone in the revised version of the paper as well.
[1] Hyperbolic Contrastive Learning for Visual Representations Beyond Objects, Ge et al., CVPR ‘23
---
## Definition of dataset distance $d(D_i, D_j)$
As we mention in lines 75-76, the dataset distance $d(D_i, D_j)$ can vary depending on the choice of the distance metric and the design setup; hence we provide a general expression in Sec. 2.1 rather than an explicit definition. We follow this up with an example of the specific dataset distance for the l2-CPCC case in lines 97-98, where $d(D_i, D_j) = \rho_{l2} =$ _the Euclidean distance between the two averages of all feature vectors_ in each subset. For our proposed methodology HypStructure, one can either compute the metric $d(D_i, D_j)$ in Hyperbolic space, by first applying the exponential map (Eq. 4) and then the Klein average (Eq. 6), followed by the Poincare distance computation (lines 180-181), or compute the means of the Euclidean features as in $\rho_{l2}$, project the centroids to the Hyperbolic space using the exponential map, and then compute the Poincare distance.
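To make the second route concrete, here is a minimal, illustrative sketch (not the authors' implementation; the centroid values are made up) that projects two Euclidean class centroids with the exponential map at the origin and compares them with the standard curvature -1 Poincare-ball distance:

```python
import math

def exp_map_origin(v, c=1.0):
    """Exponential map at the origin of the Poincare ball (curvature -c)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return list(v)
    scale = math.tanh(math.sqrt(c) * norm) / (math.sqrt(c) * norm)
    return [scale * x for x in v]

def poincare_distance(x, y):
    """Geodesic distance on the Poincare ball with curvature -1."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    nx = sum(a * a for a in x)
    ny = sum(b * b for b in y)
    return math.acosh(1.0 + 2.0 * sq / ((1.0 - nx) * (1.0 - ny)))

# Two hypothetical Euclidean class centroids, projected and then compared:
ci = exp_map_origin([0.3, 0.1])
cj = exp_map_origin([-0.2, 0.4])
print(poincare_distance(ci, cj))
```

The projected points always lie strictly inside the unit ball, so the distance is well defined; the resulting pairwise distances are what a CPCC-style correlation with the tree metric would consume.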
---
## Figure for Example 1 and 2
The accompanying figures for both Example 1 and Example 2 are provided in Figure 1. More specifically, Example 1 only uses the tree nodes on the left part of Figure 1, which is what we refer to as the tree $\mathcal{T}$.
Example 2 uses both the left part of the figure, to describe nodes C, D, and E in the tree, and the right part, which visualizes the corresponding setup in an attempt to embed the tree in the Euclidean space, based on the arguments described in the text of Example 2. We will make this more explicit in the paper description and in the figure caption.
---
## Grammatical errors
Thanks for the suggestions, we will proofread to improve readability and remove any grammatical errors in the camera ready version.
---
## Clarification for limitations
The aforementioned comments misunderstand the limitations of our proposed method, and we would like to provide a few clarifications. First, the $l_2$-CPCC methodology suffers from the challenge of embedding a tree **exactly**, owing to the limited representational capacity of the Euclidean space, which has zero curvature. This leads to high distortion when embedding a tree-like hierarchy using $l_2$-CPCC, as also shown by the distortion metrics in Table 1. In contrast, Hyperbolic spaces allow tree-like data to be embedded in finite dimensions with minimal distortion owing to their negative curvature; hence our proposed methodology, HypStructure, does not suffer from the same limitation as $l_2$-CPCC and therefore reduces the distortion in embedding the hierarchy (Table 1).
Additionally, while our submitted work experimented primarily with a Euclidean classification loss (hence the corresponding statement in the limitations), over the rebuttal period, based on the reviewers’ suggestions, we have experimented with other Hyperbolic losses and backbones, including the Hyperbolic SupCon loss and the Clipped Hyperbolic Neural Network, which also show improvements when combined with our proposed method; hence our work is not limited to Euclidean classification losses. We will rephrase this statement to avoid further confusion. We do foresee one potential limitation: since our proposed method relies on the availability (or construction) of an external hierarchy for computing the HypCPCC objective, it can be challenging to apply if the hierarchy is unavailable or noisy. We will include this in the limitations section.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal. Personally, I think that the new results in the rebuttal have marginal improvements, and I am not sure if such improvements are statistically important, which decides the novelty of the proposed method. Therefore, I decide to keep my rate unchanged.
---
Reply to Comment 1.1.1:
Title: Response to concern about the statistical significance of the new results (Part 1)
Comment: Thanks for taking the time to go through the rebuttal. Owing to the rebuttal timeline and limited space, we were only able to report results corresponding to a single seed in our initial response; we now share a more exhaustive set of results for the hyperbolic settings, with more of the evaluation metrics used in our paper, below (averaged over 3 seeds). **Additionally, to address your concerns about statistical significance, we performed a t-test for experiments with vs. without HypStructure, and highlight the higher performance number only if the p-value is smaller than 0.05.**
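For reference, a significance check of this kind can be sketched from reported seed means and standard deviations via Welch's t-test. The snippet below is illustrative only (it is not the exact script used for the paper, and it assumes the "(std)" values are standard deviations over the 3 seeds), plugging in the CIFAR10 fine-accuracy numbers reported in these tables as an example:

```python
from scipy.stats import ttest_ind_from_stats

# Welch's t-test from summary statistics: 95.04 (0.02) with HypStructure
# vs. 94.58 (0.04) without, each averaged over 3 seeds.
res = ttest_ind_from_stats(mean1=95.04, std1=0.02, nobs1=3,
                           mean2=94.58, std2=0.04, nobs2=3,
                           equal_var=False)
print(res.pvalue < 0.05)  # → True
```

With unequal per-group variances, `equal_var=False` (Welch's variant) is the safer choice over the pooled-variance Student's t-test.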
## Experiments with the Hyperbolic Supervised Contrastive Loss
Fine accuracies, coarse accuracies, and distortion measurements with the L_hyp loss.
| Method | Dataset | $\delta_{rel}$ (lower) | CPCC (higher) | Fine Accuracy (higher)| Coarse Accuracy (higher)|
|---|:---:|:---:|:---:|:---:|:---:|
| L_hyp (HypSupCon) | CIFAR10| 0.128 (0.007) | 0.745 (0.017) | 94.58 (0.04) | 98.96 (0.01)|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|CIFAR10 |**0.017 (0.001)**| **0.989 (0.001)** | **95.04 (0.02)** | **99.36 (0.02)**|
| | | | | |
| L_hyp (HypSupCon) |CIFAR100|0.168 (0.002)|0.664 (0.012)|75.81 (0.06)|85.26 (0.07)|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|CIFAR100|**0.112 (0.005)**|**0.773 (0.008)**|**76.22 (0.14)**|**85.83 (0.06)**|
| | | | | |
| L_hyp (HypSupCon) |ImageNet100|0.157 (0.004)|0.473 (0.004)|89.87 (0.01)|90.41 (0.01)|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|ImageNet100|**0.126 (0.002)** |**0.714 (0.003)**|**90.26 (0.01)**|**90.95 (0.01)**|
Comparative Evaluation of OOD detection performance on a suite of OOD datasets with different ID datasets.
OOD Detection AUROC Results on CIFAR10 (ID)
| Method | SVHN | Textures| Places365 | LSUN | iSUN | Mean |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| L_hyp (HypSupCon) |89.45 (0.18)|93.39 (0.24)|90.18 (0.09)|98.18 (0.19) |91.31 (0.31)|92.502|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|**91.11 (0.21)**|**94.45 (0.13)**|**93.52 (0.33)**|**99.05 (0.17)**|**95.24 (0.42)**|**94.674**|
OOD Detection AUROC Results on CIFAR100 (ID)
| Method | SVHN | Textures| Places365 | LSUN | iSUN | Mean |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| L_hyp (HypSupCon) |80.16 (0.08)|79.61 (0.26)|74.02 (0.45)|70.22 (0.18)|**82.35 (0.19)**|77.272|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|**82.28 (0.19)**|**83.51 (0.29)**|**77.95 (0.31)**|**86.64 (0.54)**|69.86 (0.87)|**80.048**|
OOD Detection AUROC Results on ImageNet100 (ID)
| Method | SUN | Places365| Textures | iNaturalist |Mean|
|---|:---:|:---:|:---:|:---:|:---:|
| L_hyp (HypSupCon) |91.96 (0.18)|90.74 (0.26)|**97.42 (0.21)** |94.04 (0.19)|93.54|
| **L_hyp (HypSupCon) + HypStructure (Ours)**|**93.87 (0.05)**|**91.56 (0.13)**|97.04 (0.16)|**95.16 (0.24)**|**94.41**|
## Experiments using the Clipped Hyperbolic Neural Network Backbone
Fine accuracies, coarse accuracies, and distortion measurements with the Clipped HNN backbone.
| Method | Dataset | $\delta_{rel}$ (lower) | CPCC (higher) | Fine Accuracy (higher)| Coarse Accuracy (higher)|
|---|:---:|:---:|:---:|:---:|:---:|
| Clipped HNN | CIFAR10|0.084 (0.008)|0.604 (0.004)|94.81 (0.23)|89.71 (2.04)|
| **Clipped HNN + HypStructure (Ours)**|CIFAR10 |**0.013 (0.002)**|**0.988 (0.001)**|94.97 (0.12)|**98.35 (0.22)**|
| | | | | |
| Clipped HNN |CIFAR100|0.098 (0.001)|0.528 (0.009)|76.46 (0.26)| 49.26 (0.73)|
| **Clipped HNN + HypStructure (Ours)**|CIFAR100|**0.064 (0.006)**|**0.624 (0.005)**|**77.96 (0.14)**|**55.46 (0.61)**|
Comparative Evaluation of OOD detection performance on a suite of OOD datasets with different ID datasets.
OOD Detection AUROC Results on CIFAR10 (ID)
| Method | SVHN | Textures| Places365 | LSUN | iSUN | Mean |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Clipped HNN |92.63 (0.24)|90.74 (0.18)|88.46 (0.19)|95.66 (0.11)|92.41 (0.25)|91.98|
| **Clipped HNN + HypStructure (Ours)**|**95.41 (0.44)**|**93.91 (0.32)**|**92.31 (0.41)**|**96.87 (0.21)**|**94.92 (0.31)**|**94.68**|
OOD Detection AUROC Results on CIFAR100 (ID)
| Method | SVHN | Textures| Places365 | LSUN | iSUN | Mean |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| Clipped HNN |89.94 (0.16)|83.77 (0.23)|77.26 (0.33)|82.87 (0.29)| 82.35 (0.15)|83.23|
|**Clipped HNN + HypStructure (Ours)**|**91.56 (0.21)**|**84.31 (0.09)**|**78.45 (0.28)**|**87.53 (0.52)**|**83.44 (0.37)**|**85.06**|
... (continued in the next response)
---
Reply to Comment 1.1.2:
Title: Response to concern about the statistical significance of the new results (Part 2)
Comment: ... (continued from Part 1 of this response)
**Summary**: Based on the above results over multiple seeds, we can clearly observe that our proposed method leads to a **statistically significant** and consistent improvement in classification accuracies, together with a clear reduction in other metrics, including the distortion of the representations, and a higher mean AUROC on the OOD detection tasks, with improvements of up to 3% across a suite of OOD datasets.
>I think that the new results in the rebuttal have marginal improvements, and I am not sure if such improvements are statistically important, which decides the novelty of the proposed method
We humbly disagree with the reviewer's comment here; based on the above results, we argue that these results are a strong addition demonstrating the wide applicability of our proposed method **across tasks**, as has also been noted by Reviewer kf6R. Note that multiple prior works [1] have discussed that improving accuracy on both the classification and OOD detection tasks is fairly challenging.
Furthermore, we would also like to point the reviewer to the global response where we clearly outline the novelty of our contribution which is not limited to the performance with hyperbolic losses/backbones, but is a lot more general, where **our work addresses the challenges of embedding explicit hierarchies and shows strong performance across tasks, learns interpretable representations and also draws important theoretical connections to tasks such as OOD detection, which are all very different from the prior works in the literature**.
We hope that this clarifies the questions raised by the reviewer about the statistical significance and the novelty of the results. We would be happy to discuss any further questions about the work, and would greatly appreciate an increase in score if the reviewer's concerns have been addressed.
[1] OpenOOD: Benchmarking Generalized Out-of-Distribution Detection,Yang et. al, NeurIPS ‘22, Datasets and Benchmarks Track
[2] Learning Structured Representations by Embedding Class Hierarchy, Zeng et. al, ICLR ‘23 | Summary: This work introduces a regularization scheme based on Cophenetic Correlation Coefficient to more appropriately embed semantic label hierarchical structure in the representation. The method exploits the hierarchical benefits of hyperbolic space reformulating the CPCC regularization term to operate on the Poincare ball. The proposed method sees improvement in empirical performance demonstrating the effectiveness of the approach to learn a more separable embedding space for classification.
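As background for the discussion that follows, the Cophenetic Correlation Coefficient (CPCC) is the Pearson correlation between pairwise tree (cophenetic) distances and pairwise embedding distances. Below is a minimal sketch (illustrative only, not the paper's implementation, which optimizes a CPCC-based objective with hyperbolic distances):

```python
import numpy as np

def cpcc(tree_dists, embed_dists):
    """Cophenetic Correlation Coefficient: the Pearson correlation between
    pairwise tree (cophenetic) distances and pairwise embedding distances."""
    t = np.asarray(tree_dists, dtype=float).ravel()
    d = np.asarray(embed_dists, dtype=float).ravel()
    tc, dc = t - t.mean(), d - d.mean()
    return float(tc @ dc / np.sqrt((tc @ tc) * (dc @ dc)))

# Toy example: 4 leaf classes, two sibling pairs under a common root.
# Pairwise tree distances in the order (0,1),(0,2),(0,3),(1,2),(1,3),(2,3):
tree = [2, 4, 4, 4, 4, 2]
# Embedding distances that respect the hierarchy give a CPCC close to 1.
embed = [1, 3, 3, 3, 3, 1]
print(cpcc(tree, embed))  # ~1.0
```

A regularizer that maximizes this quantity pushes the embedding distances to mirror the label-tree distances.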
Strengths: ⁃ The authors present the work clearly with effective use of visual and writing structure. All figures/diagrams are useful in supporting the narrative and findings.
⁃ The method is simple, highly generalizable, and leads to improved performance on benchmark tasks. It can therefore be seen as an advantageous tool in hyperbolic learning that could lead to impact and adoption by practitioners in the field.
⁃ The theoretical results and analysis are generally good, with the eigenspectrum analysis supporting your claims of hierarchical structure for the most part. This is a useful analysis that provides confidence in the findings, supported by appropriate proofs.
⁃ Extensive details to support replication are provided.
Weaknesses: ⁃ From the visualizations presented of the embedding space, notably the UMAP, your embeddings seem to have collapsed to the boundary in many places, limiting the inherent hierarchy the embeddings can represent. This, in turn, leads me to question the extent of the hierarchies learnt, given the core intention of the work and the claims made. One would expect that greater performance could be achieved if this had been addressed. I am aware that boundary collapse is still an unsolved problem, but careful tuning can limit its effects.
⁃ The approach is simple but arguably not significantly novel given it is a hyperbolic reformulation of CPCC with minimal changes. With that being said, these simple methods do work somewhat well in practice and are useful to progressing the field.
⁃ The use of a Euclidean linear evaluation is a confusing direction. You are aiming to learn a hyperbolic embedding space that preserves hierarchy, yet for downstream tasks you employ a Euclidean classifier, why? You will lose the desirable properties you are aiming to capture.
⁃ Further experimentation on different hyperbolic models and downstream tasks would have helped demonstrate the generalization of the regularization to all of hyperbolic learning. Although this cannot be expected in the rebuttal, it would have helped support the findings to present the work as a more generalized approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: A broader impact statement and discussion of societal impact are seemingly missing from this work. These limitations, including those of the analysis, should be highlighted in more detail; you mention the Euclidean loss as a limitation. If I have missed them in the text, however, please correct me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed comments, and address each of the questions and weaknesses below.
## Boundary collapse
Part of the design of our HypStructure methodology, **including the centering loss and the embedding of internal nodes**, mitigates boundary collapse. Embedding the internal tree nodes in HypStructure - $T_{\text{int}}$ (as compared to only leaf nodes in the prior work CPCC) and placing the root node at the center of the Poincaré disk with the $l_{\text{center}}$ loss (discussed in lines 100-104, 186-194 in the main paper, lines 931-946 in the Appendix) embeds the hierarchy more accurately and mitigates the aforementioned issues. To demonstrate this intuition and evaluate its impact on the learnt hierarchy and representations, we first visualize the learnt representations from HypStructure with and without these components - i.e., embedding internal nodes with a centering loss vs. leaf-only nodes - via UMAP in Figures 14 (CIFAR100) and 15 (ImageNet100) of the rebuttal PDF. We also provide a performance comparison (fine acc.) in the table below:
|Method|CIFAR10|CIFAR100|ImageNet100|
|-------|:-------------:|:-----:|:-----:|
| HypStructure (leaf only) | 94.54 | 76.22 | 89.85 |
| HypStructure (with internal nodes and centering) | **94.79** | **76.68** | **90.12** |
First, based on Figures 14 and 15, one can note that in the leaf-only setting without embedding internal nodes and the centering loss (figures on the left), the samples belonging to fine classes which share the same parent (same color) are in close proximity, reflecting the hierarchy; however, they lie close to the boundary. With the embedding of internal nodes and a centering loss (right), the representations are spread from the center (root) to the boundary and across the Poincaré disk, which is more representative of the original hierarchy. This also leads to performance improvements, as can be seen in the table above.
Note that we used the same $\beta = 0.01$ penalty parameter across datasets for convenience and easy reproducibility, and with a more careful tuning of $\beta$, the boundary collapse can be mitigated to a further extent. However, forcing the representations to be too close to the origin can lead to inter-class confusion and performance degradation as discussed in prior work [1] (Figure 4, page 7 of the paper).
[1] Hyperbolic Busemann Learning with Ideal Prototypes, Atigh et al., NeurIPS ‘21
---
## Novelty
Thanks for your comments in support of the simplicity of our work. We would also like to refer the reviewer to our response regarding technical novelty in the global response above.
---
## Euclidean linear evaluation
We provide **Euclidean linear evaluation on the Euclidean features** as an added evaluation in the Appendix. Besides, it allows us to perform a fair comparison between important baselines, particularly l2-CPCC vs. HypStructure. To clarify, let us revisit our HypStructure pipeline: we first train a Euclidean-loss SupCon model which gives us a Euclidean feature (Fig. 6c), and then apply the exponential map to get the Poincaré feature for the CPCC computation. Because SupCon takes Euclidean inputs, it is reasonable to use Euclidean space for the classification evaluation. From the visualizations in Figure 6b vs. 6c, we note that the Hyperbolic regularizer empirically makes the geometry of the features more structured even in Euclidean space, and leads to performance improvements (Table 1, Fig. 7a, c).
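For concreteness, the exponential map at the origin that lifts a Euclidean feature onto the Poincaré ball can be sketched as follows (the standard curvature-$c$ formula; our exact implementation may differ in details such as clipping):

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-15):
    """Exponential map at the origin of the Poincare ball with curvature -c:
    lifts a Euclidean (tangent) vector into the open unit ball."""
    v = np.asarray(v, dtype=float)
    norm = max(np.linalg.norm(v), eps)   # guard against the zero vector
    sqrt_c = np.sqrt(c)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

u = expmap0([3.0, 4.0])        # Euclidean norm 5 -> ball norm tanh(5) < 1
print(np.linalg.norm(u))       # strictly inside the unit ball
```

The map preserves direction and squashes the Euclidean norm through `tanh`, so every mapped feature lands strictly inside the ball.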
On the other hand, Hyperbolic classifiers are applicable if we change the backbone model. We experiment with a Hyperbolic model, Clipped HNN [1] and present the fine acc. results below (more details about the setup in the global response above), evaluated with Hyperbolic Multinomial Logistic Regression Layer in [2]:
| Method | CIFAR10 | CIFAR100 |
|---|:---:|:---:|
| Clipped HNN | 95.08 | 77.42 |
| Clipped HNN + HypStructure | **95.19** | **78.05** |
We observe that HypStructure provides performance improvements with Hyperbolic backbones as well.
[1] Clipped hyperbolic classifiers are super-hyperbolic classifiers, Guo et al., CVPR ‘22
[2] Hyperbolic neural networks, Ganea et al., NeurIPS ‘18
---
## Further Experimentation
We would also like to point the reviewer to additional experiments that we have performed with non-euclidean setups and we discuss more details about them in the global response.
---
## Broader impact and limitations
Thanks for pointing this out. We would like to clarify that we experiment with the Euclidean loss primarily to provide a fair comparison with prior works; it is not a requirement (or limitation) of our HypStructure methodology. It can be combined with any other loss function (such as HypSupCon) or backbone (Clipped HNN), as we have demonstrated above, highlighting the wide applicability of our method.
In addition to the discussion in Section 7, we enumerate other limitations which we foresee for future research and building on our methodology - our proposed method relies on the availability (or construction) of an external hierarchy for computing the HypCPCC objective, which might be challenging if the hierarchy is unavailable or noisy. In terms of Broader Impact, one potential area of impact that we recognize is the AI for science domain, where HypStructure can be helpful in learning more interpretable representations which are more reflective of the underlying relationships in the domain space. We include this in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your insightful rebuttal, and clarification on points raised during the review.
I'm grateful for your empirical evaluations and clarification on the boundary collapse and agree with the points made, however, I would still argue it would be beneficial to empirically demonstrate that collapse is not occurring given your strong claims made for hierarchy. Arguably, the small improvements when applying your centring method and accuracy metrics alone cannot refute the collapse claim or argue that the hierarchies you are aiming to capture are indeed captured. If I have misunderstood, or missed any further results that counter my points please do correct me.
Thanks for the additional evaluation on hyperbolic networks this is a strong addition that addresses one of the identified weaknesses.
I would emphasise the importance for a revised section outlining limitations, broader impact and societal for any revised manuscript, however, since you have identified these points in the rebuttal I have confidence the authors will provide this.
Given my original review is positive, and that I still have a positive outlook for this work, I have maintained my score for now, and will revisit after further discussion with other reviewers.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer kf6R
Comment: Thanks for the suggestion and for the support of our work! We will include the visualizations (Fig. 14 and 15) and the new results on hyperbolic settings provided in the rebuttal in the camera ready version, along with a section outlining the limitations, and broader impact as well. | Summary: The paper introduces HypStructure, a novel approach for learning structured representations using hyperbolic embeddings, which are well-suited for modeling hierarchical relationships due to their tree-like structure. The method incorporates a hyperbolic tree-based representation loss and a centering loss to embed label hierarchies into feature spaces with minimal distortion.
Experiments demonstrate HypStructure's effectiveness in improving classification accuracy, especially in low-dimensional scenarios, and enhancing Out-of-Distribution (OOD) detection performance without compromising in-distribution accuracy.
Strengths: 1. Although it is already theoretically proved in the related works [70, 72] that it is not possible to embed a tree in Euclidean space without loss, it is still informative to see that Examples 1 and 2 in Section 2.2 give good counter-examples to show this property.
2. The paper is well-written and easy to follow, and the proposed model is simple yet effective.
3. The paper provides a formal eigenvalue analysis that links the geometry of the learned representations to their performance.
Weaknesses: 1. Sections 2.1, 2.2, and 3.1 are all from existing literature, which limits the contribution of the paper. Although the operations described in Section 3.1 are common hyperbolic operations, this section still lacks proper references to the related papers.
2. In HypCPCC, the authors proposed two alternatives of the loss,
* Map the Euclidean vectors to Poincaré space, then average.
* Average the Euclidean vectors, then map to Poincaré space.

In the 1st alternative, the use of the Klein weighted average incurs extra computation; is it worth doing so? The 2nd alternative is exactly the same as [r2], which also calculates the prototypes for each class in hyperbolic space and then maps to Poincaré space. [r2] also deploys supervised contrastive learning, but the reference is missing and a comparison is not stated.
3. The statement in Theorem 5.1 is incorrect: an entry of $K$, denoted as $r^h$, should be a vector, but the theorem states that $\lambda_0 = 1 - r^1$, which does not make sense.
4. Incorrect (but fixable) definition in line 708: the definition uses $\| u \| = \| v \| = 1-\epsilon$, but in the proof the authors used the fact that $\| u \| = \| v \| = 1-\epsilon^2$.
5. Incorrect proof in Corollary A.1: the last row of the proof does not hold. The Poincaré distance cannot be the same as the Euclidean distance, and "growing in the same trend" does not mean "proportional to".
[r2] Long, Teng, et al. "Searching for actions on the hyperbole." CVPR 2020
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the rationale behind choosing the Klein weighted average for the first alternative in HypCPCC, considering the extra computation it incurs?
2. Can the authors provide a comparison to [r2], which is (a special case of) their second alternative of HypCPCC?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors did not discuss the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for detailed comments, and address each of the questions and weaknesses below.
## Contribution
While our work builds on prior research, this does not diminish its contribution. We request the reviewer to kindly refer to our note on novelty in the global response, where we clearly outline the key differences between our work and prior works.
---
## Reference to Hyperbolic ops.
Thanks for pointing this out; we will include references to common Hyperbolic operations [1] in the revised manuscript.
[1] Hyperbolic trigonometry and its applications in the Poincare ball model of Hyperbolic geometry, Ungar A., Computers & Mathematics with Applications, 2001
---
## Comparison to [r2]
Thanks for pointing out this work; indeed, [r2] is a related work. We will cite it in our revised manuscript and have attempted to provide a comparison below. First, however, we discuss the key differences between [r2] and our work.
1. **The usage of hierarchy and “prototypes” differ significantly.** [r2] is a two-step method, where first an action *word* Hyperbolic embedding is created based on action hierarchies (each word is a single tree node), and second a video embedding is learnt using positions of action words (called prototypes in [r2]) in the Hyperbolic space. In contrast, our method learns the hierarchical image embeddings end-to-end, does not depend on word embeddings, and performs a centroid computation on the label hierarchy tree nodes (many images belong to a single label node, hence class prototypes are computed by averaging).
2. **The nature of components to learn hierarchical embeddings are different**. In [r2], both losses $L_H$ (Eq. 2 in [r2], as in [2]) and $L_2$ (Eq. 6 in [r2], as in [3]) are ranking losses, whereas HypCPCC factors in absolute values of the node-to-node distances. The learned hierarchy with HypCPCC will not only (implicitly) encode the correct parent-child relation like [2, 3], but also learn more complex and weighted hierarchical relationships more accurately (Fig. 16 in the rebuttal PDF).
We attempt to compare HypStructure and [r2] now. Since losses $L_H$ and $L_2$ have similar motivation as HypStructure, we add them as regularizers to the Hyperbolic classification backbone Clipped HNN and compare with our method:
|Method|CIFAR10|CIFAR100|
|------|:-----:|:------:|
|Clipped HNN|95.08|77.42|
|Clipped HNN + $L_H$|94.78|77.44|
|Clipped HNN + HypStructure|**95.19**|**78.05**|
We see that HypStructure achieves better performance than $L_H$. We faced NaN issues while training with $L_2$ and other losses, and discuss the setup details in the global response.
[2] Poincaré embeddings for learning hierarchical representations, Nickel et al., NIPS ‘17
[3] Hyperbolic entailment cones for learning hierarchical embeddings, Ganea et al., ICML ‘18
---
## Rationale behind Klein avg.
We experiment with both HypCPCC variants mentioned in lines 177-185. The 2nd variant, however, to our understanding, is not a special case of [r2], since it still requires a Euclidean averaging step for the class prototype computation. We empirically observe performance improvements in accuracy using the 1st variant across the datasets below (fine acc.).
|Method|CIFAR10|CIFAR100|ImageNet100|
|------|-------|--------|-----------|
|Euc. (2)|94.56|75.64|90.08|
|Klein (1)|**94.79**|**76.68**|**90.12**|
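For reference, the Klein weighted average used in the 1st variant can be sketched with the standard Einstein-midpoint formulas (a minimal illustration, not our exact implementation): convert Poincaré points to the Klein model, take the Lorentz-factor-weighted mean, and convert back.

```python
import numpy as np

def poincare_to_klein(p):
    return 2.0 * p / (1.0 + np.sum(p * p, axis=-1, keepdims=True))

def klein_to_poincare(k):
    return k / (1.0 + np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True)))

def klein_average(points):
    """Einstein midpoint: Lorentz-factor-weighted mean computed in the Klein
    model, for points given (and returned) in Poincare-ball coordinates."""
    k = poincare_to_klein(np.asarray(points, dtype=float))
    gamma = 1.0 / np.sqrt(1.0 - np.sum(k * k, axis=-1, keepdims=True))
    midpoint = (gamma * k).sum(axis=0) / gamma.sum()
    return klein_to_poincare(midpoint)

# The average of a point and its reflection is the origin (root).
print(klein_average([[0.5, 0.0], [-0.5, 0.0]]))   # ~[0, 0]
```

The extra computation over naive Euclidean averaging is just the two model conversions and the Lorentz factors, which is cheap relative to the backbone forward pass.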
---
## Theorem 5.1 entry of K
Since $K$ is $\mathbb{R}^{n\times n}$ (line 289), an *entry* of $K$ is $K_{i,j}$, the scalar value in the $i$-th row and $j$-th column of $K$. Each entry of $K$ can be denoted as $r^h$, where $h$ is the height of the lowest common ancestor (LCA) of the two samples represented by the $i$-th row and $j$-th column respectively, so the entry is indeed a scalar. We also refer the reviewer to the matrix in lines 714-715, where each entry of $K$, denoted as $r^h_{i,j}$, is a scalar value (the $i,j$ subscripts can be ignored, see lines 718-719, since every two leaves whose LCA has the same height have the same tree distance). We will make this more explicit in the revised version to avoid confusion.
---
## “The proof used $|𝑢|=|𝑣|=1−𝜖$…”
Thanks for pointing out this typo due to our cluttered notation! We intended $|u|^2 = 1 - \text{(a small number)}$; however, the conclusion of the proof still holds with this change, as the reviewer noted. Let us rewrite the proof: in practice, with the $\text{clip}^1$ projection, to be consistent with the notation in the main body, we set $|u|=|v| = 1 - \epsilon$ where $\epsilon$ is a very small number. Then $|u|^2 = (1 - \epsilon)^2 = 1 - 2\epsilon + \epsilon^2$. Since $\epsilon$ is very small, we define $\xi := 2\epsilon - \epsilon^2$, making $|u|^2 = 1 - \xi$ where $\xi$ is still a small number. Then, replacing all $\epsilon$ with $\xi$ in lines 708-709, we reach the same conclusion. We will correct this error in the revised version.
---
## “Incorrect proof in Corollary A.1...”
Apologies for the confusion. By the statement "growing in the same trend", we meant that the Poincaré distance between $u$ and $v$ is monotonically increasing in the Euclidean distance between them, and the proof after Corollary A.1 does not depend on the metric, as can be seen from the comments in line 710. This holds because, with the $\text{clip}^1$ transformation, the Poincaré distance is approximately a log-scale transform of the Euclidean distance. The monotonically increasing property ensures that the relative order of any two entries of $K$ is the same for the Euclidean CPCC and Poincaré CPCC matrices. Then, we can argue that $K$, whether Euclidean or Poincaré, has the hierarchically diagonalized structure shown in Fig. 8a. But indeed, technically, due to the logarithm, this is not "proportional to". We will remove this notation and make it clearer.
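The monotonicity claim is easy to verify numerically. Below is a small sketch (our illustration, not part of the proof): with both points clipped to the same norm $1-\epsilon$, the Poincaré distance increases strictly with the Euclidean distance.

```python
import numpy as np

def poincare_dist(u, v):
    """Poincare-ball distance: arccosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    arg = 1.0 + 2.0 * np.sum((u - v) ** 2) / (
        (1.0 - np.sum(u * u)) * (1.0 - np.sum(v * v)))
    return np.arccosh(arg)

eps = 1e-3
r = 1.0 - eps                          # both points clipped to norm 1 - eps
u = np.array([r, 0.0])
angles = np.linspace(0.1, np.pi, 50)   # sweep v away from u along the circle
vs = [r * np.array([np.cos(a), np.sin(a)]) for a in angles]
euc = np.array([np.linalg.norm(u - v) for v in vs])
hyp = np.array([poincare_dist(u, v) for v in vs])
# Both distance sequences are strictly increasing: same relative order.
print(bool(np.all(np.diff(euc) > 0) and np.all(np.diff(hyp) > 0)))  # True
```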
---
## Limitations
Another potential limitation - our method relies on the availability (or construction) of an external hierarchy for computing the HypCPCC objective, which might be challenging if the hierarchy is unavailable or noisy. We will include this in the next iteration of the paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and the clarifications provided. I appreciate the effort you put into addressing my concerns. After reviewing your explanation, most of my concerns have been resolved.
Although I acknowledge that the novelty and technical contribution might be somewhat limited, as pointed out by Reviewer 3c64, I believe that your proofs and the extensive experiments you conducted will still provide value to the community. In light of this, I will be raising my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer dBDc
Comment: Dear Reviewer dBDc,
Thanks for your valuable feedback in the support of our work and for raising your rating! We are happy to see that our responses could address your concerns. We will include the clarifications and corrections to the proofs based on your suggestions, in the revised manuscript, and if you have any further questions for our work, we would be happy to continue the discussion. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for your valuable reviews and constructive feedback that has helped us improve our work. We first share the results of additional experiments we conducted based on these suggestions which demonstrate the wide applicability of our method, and then summarize our technical contributions and its novelty.
## Additional experiments
1. **Hyperbolic Backbones and Losses**
According to the suggestions from reviewers dBDc, kf6R, and gWWe, we experimented with a Hyperbolic backbone, Clipped HNN [1], which builds on the Hyperbolic MLR layer from Hyperbolic Neural Networks [2]. We also experiment with a composite objective combining this backbone and our HypStructure regularizer.
For comparison, we also experiment with the objective function of the seminal Poincaré Embedding work [3] as a regularizer, since that methodology also aims to learn Hyperbolic (word) embeddings (as suggested by reviewer dBDc for learning action embeddings in [r2]). Following the setup in our paper, we use the first alternative for both HypStructure and PoincaréEmb in this comparison, i.e., we first ensure the features are in the Poincaré ball, then perform Klein averaging for each class node, and apply the regularizer accordingly. The fine accuracy results are as follows:
|Method|CIFAR10|CIFAR100|
|---|:---:|:---:|
|Clipped HNN|95.08|77.42|
|Clipped HNN+PoincareEmb|94.78|77.44|
|Clipped HNN+HypStructure|**95.19**|**78.05**|
We observe that HypStructure is a flexible regularizer compatible with Hyperbolic backbones and it improves the fine-level classification accuracy from the Clipped HNN baseline, and performs better than Clipped HNN + Poincare Embedding [3] as well. We also experimented with the Poincare Embedding [3] regularizer with the Supervised Contrastive (SupCon) loss, however the training runs into NaN issues due to numerical instability in the PoincareEmb computation. Note that for this experiment, we could not provide results on ImageNet100, since in our main experiments in Table 1 for ImageNet100, we rely on fine-tuning a pre-trained ResNet backbone on ImageNet-1K, whereas we were unable to find a comparable pretrained Hyperbolic model for a fair comparison to results in Table 1.
We also replaced the Euclidean Supervised Contrastive Loss with the **Hyperbolic Supervised Contrastive Loss** $L_{\text{hyp}}$ proposed in [4], and experiment with HypStructure in combination with this Hyperbolic loss. We report the results on different datasets below (fine acc.):
|Method|CIFAR10|CIFAR100|ImageNet100|
|---|:---:|:---:|:---:|
|L_hyp (HypSupCon)|94.64|75.77|90.02|
|L_hyp (HypSupCon) + HypStructure (Ours)|**95.06**|**76.08**|**90.31**|
We observe that HypStructure works well with Hyperbolic contrastive losses as well, and in fact leads to improved absolute accuracies on CIFAR10 and ImageNet100 compared to our previous results using the supervised contrastive loss.
[1] Clipped hyperbolic classifiers are super-hyperbolic classifiers, Guo et al., CVPR ‘22
[2] Hyperbolic neural networks, Ganea et al., NeurIPS ‘18
[3] Poincaré embeddings for learning hierarchical representations, Nickel et al., NIPS ‘17
[4] Hyperbolic Contrastive Learning for Visual Representations Beyond Objects, Ge et al., CVPR ‘23
[r2] Searching for actions on the hyperbole, Long et al., CVPR ‘20
2. **Understanding the components of HypStructure**
Based on the suggestions from reviewers dBDc and kf6R, we perform additional experiments and visualizations to understand the workings of HypStructure, specifically:
- 2a. **The role of centering loss for the root and embedding internal nodes** - see Table below and comparison of visualizations in Figures 14 and 15 in the PDF
|Method|CIFAR10|CIFAR100|ImageNet100|
|---|:---:|:---:|:---:|
|HypStructure (leaf only)|94.54|76.22|89.85|
|HypStructure (with internal nodes and centering)|**94.79**|**76.68**|**90.12**|
- 2b. **The role of Klein averaging vs Euclidean averaging** for improved empirical performance
|Method|CIFAR10|CIFAR100|ImageNet100|
|------|-------|--------|-----------|
|Euc. (2)|94.56|75.64|90.08|
|Klein (1)|**94.79**|**76.68**|**90.12**|
- 2c. **The capacity of HypStructure to learn complex hierarchical relationships** with differently weighted trees - Figure 16 in the PDF.
---
## Novelty & Contribution
1. **Complementary nature to prior Hyperbolic works, addressing limitations of l2-CPCC**: While our work draws inspiration from recent works in Hyperbolic machine learning, we propose a novel, practical, and effective framework that addresses the limitations of l2-CPCC, as we clearly demonstrate with strong empirical performance on large-scale datasets and visualizations of the learned embeddings. Furthermore, our approach offers a flexible framework that is complementary to, and can be used in conjunction with, other Hyperbolic losses/backbones for performance gains. We want to highlight that our work is very different from the majority of prior works in Hyperbolic learning, which assume a latent implicit hierarchy in the learning process, while our work solves the problem of leveraging explicit external hierarchical knowledge in representation learning.
2. **Provable structured representation learning**: To the best of our knowledge, our work is one of the first to draw theoretical connections between the paradigm of hierarchy-informed representation learning and OOD detection. These insights can be very useful in designing theoretically grounded, efficient and accurate representation learning methods that can learn general representations useful across tasks in practice as we have demonstrated for classification and OOD detection, while maintaining interpretable representations.
We believe our work makes several novel contributions, and we anticipate that our work will inspire further research on embedding structured information and usage of Hyperbolic geometry. We hope this adequately justifies our contributions to the reviewers.
Pdf: /pdf/a6a892e50711c971f45867217dddbf803e8e8d5e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Energy-Based Modelling for Discrete and Mixed Data via Heat Equations on Structured Spaces | Accept (poster) | Summary: This article, "Energy-based modelling for discrete and mixed data via heat equations on structured spaces", proposes to train EBMs with the Energy Discrepancy (ED) loss in the setting of multi-modal datasets that mix continuous inputs with discrete (categorical) ones. The work describes in detail how to parametrize the inclusion of discrete variables in different settings, and applies the method to various datasets.
The main contributions are the design of the continuous-time Markov chain transition probability that lies at the heart of the ED approach, and the application to tabular datasets, for which generative approaches are usually hard.
Strengths: The authors show how their method can be efficiently used on tabular datasets. In particular, they apply it to several datasets and show that, on average, the EBM trained with Energy Discrepancy using a discrete implementation of the Markov chain transition probability outperforms the competing approach, ref [Xu et al. 2019].
The authors also show experimental results on image modelling.
Weaknesses: The authors extend the formalism of Energy Discrepancy to the case of including discrete states in addition to continuous features. Whether or not this justifies an entire publication can be debated, although it should be emphasized that the datasets under consideration are quite original.
It might be because I'm not an expert on ED, but while traditional EBM training relies on MCMC to compute the gradient, ED does not. However, it is not clear to me whether sampling from an EBM trained in such a way needs MCMC to be generative. If so, the article should provide more details on the implementation. The authors should also check that the trained model is at equilibrium (the generated samples correspond to the equilibrium properties of the model).
More importantly, the comparison for the tabular datasets is only done at the level of the AUC curves. Can the authors at least compare the frequencies and correlations among the generated samples and the true ones?
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors said that their work is one of the first dealing with tabular data; I at least found this one dealing with EBMs and tabular data: https://arxiv.org/abs/1912.09382. The authors might check whether it is relevant w.r.t. their work. Also, this article https://arxiv.org/abs/2405.15376 deals with generative modelling of categorical variables for EBMs.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are correctly discussed in the article.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We first want to comment on the weaknesses of our paper mentioned in your review:
> The authors extend the formalism of Energy Discrepancy to the case of including discrete state in addition to continuous features. Whether or not this justifies an entire publication can be debated
>
Here is why we hope that this work is interesting to the community: Currently, discrete EBM training methods are hard to tune and don’t translate to mixed state spaces. We want to introduce a simple performant training method to address these limitations.
To do so, we introduce a graphically informed noising process which may also be of great interest in other domains, e.g., discrete diffusion modelling. The theoretical understanding of the diffusion allows automated tuning of the time parameter $t$ as a function of the state space size (Theorem 2), which is a critical new contribution of this work, substantially different from prior work and important for tabular data, where state space sizes typically vary across features.
Our theoretical analysis is supported by the experimental results that show good performance, thus unlocking a new tool for density estimation on non-continuous domains.
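To make the noising idea concrete, here is a generic sketch (our illustration, using the standard combinatorial graph Laplacian, not necessarily the paper's exact construction) of heat-kernel transition probabilities on a small cycle graph over the states of one discrete feature; the time parameter `t` plays the role tuned via Theorem 2:

```python
import numpy as np

def heat_kernel(adj, t):
    """Transition matrix exp(-t L) of the continuous-time random walk whose
    generator is minus the graph Laplacian L = D - A (A symmetric)."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    w, vecs = np.linalg.eigh(lap)            # symmetric -> spectral expm
    return vecs @ np.diag(np.exp(-t * w)) @ vecs.T

# Cycle graph over K = 5 ordered states of one categorical feature.
K = 5
adj = np.zeros((K, K))
for i in range(K):
    adj[i, (i + 1) % K] = adj[i, (i - 1) % K] = 1.0

P = heat_kernel(adj, t=0.5)
print(P.sum(axis=1))   # each row sums to 1: valid transition probabilities
```

Since the Laplacian annihilates the constant vector, every row of `exp(-tL)` sums to one, and as `t` grows the transitions approach the uniform distribution over the `K` states.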
> [...] it is not clear to me if sampling the EBM trained in such way need MCMC to be generative ?
>
To use EBM as a generative model, one needs to solve two challenges: Training the model and sampling from the model. This paper addresses the training part, only. As you say, sampling methods have to be used to obtain synthetic samples from the model. We employ a fairly standard method for sampling with interleaved updates of the numerical and discrete state. The implementation can be found in the appendix (Algorithm 2).
Importantly, tuning a sampler at inference time is much easier than tuning it for a stable training algorithm. Indeed, we were unable to implement the MCMC-based training for tabular datasets, while sampling from our model with MCMC was unproblematic.
It should be noted that EBMs are particularly useful in non-generative downstream tasks like out of distribution detection and calibration of classifiers, because unlike other generative models they output the unnormalised data density, explicitly. For this reason we focussed on contributing to better EBM training.
> They should also check that the trained model is at equilibrium (the generated samples corresponds to the equilibrium properties of the model).
>
As stated in lines 78-81, ED offers the theoretical guarantee that $p_{\theta^\ast}(\mathbf x) = p_\mathrm{data}(\mathbf x)$ after training has converged. Hence, if we run a long-run MCMC sampler on our learned model with a Metropolis-Hastings acceptance step, the produced samples are at equilibrium by construction (the MH acceptance guarantees detailed balance, which is equivalent to the samples being at equilibrium).
The same guarantee does not exist for MCMC-based training approaches because short-run MCMC has to be used in practice, introducing biases into the optimisation, see e.g. [1]. For empirical evidence of these favourable properties, see Figures 2, 7, and 8, which contrast ED and CD results.
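The detailed-balance argument can be checked on a toy example: a long Metropolis-Hastings chain with a symmetric proposal on an explicit three-state energy reproduces the known Gibbs distribution. A minimal sketch (toy energy, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Explicit three-state energy; the equilibrium (Gibbs) distribution is known.
U = np.array([0.0, 1.0, 2.0])
p_target = np.exp(-U) / np.exp(-U).sum()

def mh_frequencies(n_steps=200_000):
    """Long-run Metropolis-Hastings chain with a symmetric (uniform) proposal.
    Detailed balance holds at every step, so the visit frequencies converge
    to the Gibbs distribution of U."""
    counts = np.zeros(len(U))
    x = 0
    for _ in range(n_steps):
        prop = int(rng.integers(0, len(U)))
        if np.log(rng.random()) < U[x] - U[prop]:  # MH acceptance
            x = prop
        counts[x] += 1
    return counts / n_steps

freqs = mh_frequencies()
```

For a well-mixing chain, `freqs` matches `p_target` to within Monte-Carlo error.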
> More importantly, the comparison for the tabular dataset is only done at the level of the AUC curves. Can at least the authors compare the frequency and correlations amongst the generated samples and the true ones ?
>
Thanks for your suggestion. We include two new metrics here: single-column density similarity and pair-wise correlation similarity. These metrics measure the similarity between the generated and the true table in terms of single-column frequencies and pair-wise column correlations, respectively. They can be computed using the open-source API [SDMetrics](https://docs.sdv.dev/sdmetrics/reports/quality-report/single-table-api). The results show that the proposed ED approaches outperform the baselines or achieve comparable performance on most datasets.
**single column density similarity score**
| | Adult ↑ | Bank ↑ | Cardio ↑ | Churn ↑ | Mushroom ↑ | Beijing ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| CTGAN | .814 | .866 | .906 | .901 | .951 | .799 |
| TVAE | .783 | .824 | .892 | .899 | .965 | .711 |
| TabCD | .719 | .790 | .824 | .845 | .618 | .799 |
| TabED-Un | .791 | .760 | .948 | .905 | .918 | .843 |
| TabED-Grid | .751 | .766 | .945 | .846 | .951 | .951 |
| TabED-Cyc | .778 | .826 | .937 | .834 | .969 | .751 |
| TabED-Ord | .828 | .894 | .933 | .887 | .943 | .747 |
**pair-wise correlation similarity score**
| | Adult ↑ | Bank ↑ | Cardio ↑ | Churn ↑ | Mushroom ↑ | Beijing ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| CTGAN | .744 | .769 | .717 | .826 | .828 | .761 |
| TVAE | .669 | .772 | .687 | .808 | .919 | .618 |
| TabCD | .522 | .600 | .629 | .710 | .428 | .761 |
| TabED-Un | .653 | .661 | .828 | .851 | .825 | .735 |
| TabED-Grid | .583 | .768 | .829 | .764 | .842 | .842 |
| TabED-Cyc | .636 | .703 | .810 | .755 | .860 | .685 |
| TabED-Ord | .702 | .796 | .811 | .791 | .826 | .662 |
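For intuition on the single-column score above: to our understanding, SDMetrics scores a categorical column as one minus half the total variation distance between the real and synthetic frequencies (its `TVComplement` metric). A minimal re-implementation sketch with toy data (not the library's code):

```python
import numpy as np

def density_similarity(real, synth, n_cat):
    """Single-column density similarity for a categorical column:
    1 - TV distance between the empirical frequencies
    (1.0 = identical distributions, 0.0 = disjoint supports)."""
    p = np.bincount(real, minlength=n_cat) / len(real)
    q = np.bincount(synth, minlength=n_cat) / len(synth)
    return 1.0 - 0.5 * np.abs(p - q).sum()

real = np.array([0, 0, 1, 1, 2, 2, 2, 2])
synth = np.array([0, 0, 1, 1, 1, 2, 2, 2])
score = density_similarity(real, synth, n_cat=3)  # -> 0.875
```

The table scores above average such per-column similarities over all columns.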
> The authors said that their work is one of the first dealing with tabular data, I at least find this one dealing with EBM and tabular dataset: [...] Also, this article https://arxiv.org/abs/2405.15376 deals with generative and categorical variables for EBM.
>
Thank you for these interesting references! We missed the first reference in our literature review. Unfortunately, we are using different benchmark data sets, so a direct comparison is difficult. Furthermore, RBM architectures tend to be more constrained than deep NN architectures, which has the advantage of stabilised training at the cost of flexibility. This means that ED and the referenced work have slightly different advantages and drawbacks.
The second article is very interesting. It is concurrent work to ours and was published after the NeurIPS deadline, so we were not aware of it.
[1] Nijkamp et al. Learning Non-Convergent Non-Persistent Short-Run MCMC Toward Energy-Based Model
---
Rebuttal Comment 1.1:
Comment: I thank the author for the clarifications.
The only thing that remains unclear to me is how the authors ensure the convergence of the MCMC chains during sampling. Many systems, in particular EBMs, can have metastable states or long mixing times.
I understand that the authors use Algorithm 2 for sampling, but I'm wondering how the authors ensure that the samples correspond to the learned distribution.
---
Reply to Comment 1.1.1:
Comment: Thank you for this question. In this work, we did not employ quantitative MCMC diagnostics to assess the convergence of Markov chains as we were mostly concerned with learning the data distribution. We will work on additional MCMC diagnostics for the revision of our work.
> I'm wondering how the authors ensure that the samples correspond to the learned distribution.
Some energy-based models are trained with short-run MCMC, i.e. the samples during training are not at equilibrium. As a consequence, one also needs to employ non-convergent short-run MCMC at inference time to produce good samples [1], and in these settings one may be concerned about a mismatch between model and samples.
Our setting is different: Since we do not require MCMC at training time, the estimated energies are generally more accurate representations of the data distribution (with theoretical guarantees). In principle, running MCMC for longer will also produce more accurate samples from the model. For improved mode coverage, each sample is initialised from an independent noise sample and the Markov chain is simulated until we observe high-quality samples. We then evaluate the quality of the produced samples indirectly in various metrics:
- For tabular data, we generate synthetic data with 200 rounds of interleaved discrete/continuous updates, and assess the sample quality indirectly by showing that a random forest classifier trained on our synthetic samples performs comparably to one trained on the actual training data. (Table 1)
- For samples generated from lower-dimensional discrete densities, we compare the training data with the MCMC samples in the maximum mean discrepancy and use the MCMC samples to compute the negative log-likelihood of the data under the model. (Table 3 and 4)
- The negative log likelihood for discrete image data is computed with 300,000 steps of annealed importance sampling. Annealed importance sampling helps with the convergence of the MCMC chains. This is the same setup as in earlier work [2, 3], where it was noted that AIS for discrete image data converges after 30,000 steps. According to this heuristic we assume that the sampler has converged in our case as well. Similarly, we use the same number of sampling steps for the experiments on Ising models as [2, 3].
- In all experiments, we assume that a non-convergent sampler would produce sub-optimal results due to the assumed accuracy of the learned energy. In addition, the samples (and in some cases the learned distributions) can be visually assessed (Figures 2, 3, 10, 12), demonstrating close agreement between data, learned distribution, and MCMC samples. While we cannot rule out metastable samples that match the data distribution better than the model, we consider this unlikely across multiple baseline data sets, performance metrics, and dimensionalities.
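The annealed importance sampling estimate mentioned above can be illustrated on a state space small enough to have an exact normalising constant. The sketch below anneals from the uniform distribution to the target with one Metropolis move per temperature; it is illustrative only, not our experimental setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def ais_log_z(U_vals, n_temps=100, n_chains=500):
    """Annealed importance sampling estimate of log Z = log sum_x exp(-U(x))
    on an explicit state space: anneal exp(-beta * U) from beta = 0 (uniform)
    to beta = 1 (target), with one Metropolis move per temperature."""
    S = len(U_vals)
    betas = np.linspace(0.0, 1.0, n_temps)
    x = rng.integers(0, S, size=n_chains)
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += (b1 - b0) * (-U_vals[x])           # weight update at the current state
        prop = rng.integers(0, S, size=n_chains)    # symmetric proposal
        accept = np.log(rng.random(n_chains)) < b1 * (U_vals[x] - U_vals[prop])
        x = np.where(accept, prop, x)
    m = log_w.max()
    return np.log(S) + m + np.log(np.mean(np.exp(log_w - m)))

U_vals = np.array([0.0, 1.0, 2.0, 3.0])
log_z_exact = float(np.log(np.exp(-U_vals).sum()))
log_z_ais = float(ais_log_z(U_vals))
```

On this toy problem the AIS estimate agrees with the exact $\log Z$ to within Monte-Carlo error, which is the kind of convergence the step-count heuristic relies on.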
We will work on providing additional, more rigorous MCMC diagnostics for the revision of our work. We agree that this may be important when applying our method in generative downstream tasks. Please let us know if we left your question unanswered.
[1] Nijkamp et al. On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models, AAAI 2020
[2] Zhang et al. A Langevin-like Sampler for Discrete Distributions, ICML 2022
[3] Grathwohl et al. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions, ICML 2021
Strengths: Originality: Energy discrepancy is a relatively new approach. While the original paper proposed some extensions to discrete data, this paper goes into extending energy discrepancy to discrete data in much more depth and includes new mathematical and experimental analyses.
Clarity: Overall, the paper is well written.
Quality: I believe the paper is technically sound.
Significance: The authors show on toy examples that their method outcompetes contrastive divergence. The authors appear to generally outperform two methods proposed in 2019 along with contrastive divergence. While I have some minor concerns about these baselines, outperforming these baselines is at least demonstrating some empirical benefit of this approach.
Weaknesses: Clarity: The clarity can be improved a bit (see my questions below).
Significance: Despite demonstrating that the method can work empirically, I have some concerns with the overall significance. It seems that while the method works well on toy examples, the results are less impressive on real-world image modeling tasks. I am unfamiliar with the field of tabular data modeling and therefore cannot properly assess the significance of the results. Beyond contrastive divergence, the main baseline is a method from 2019 with 2,000 citations. Are there better baselines to compare against among these 2,000 citations?
Technical Quality: 4
Clarity: 3
Questions for Authors: In section 3 on lines 93-95 the authors describe two key criteria for defining useful perturbation processes. The first criterion is described as “the negative samples obtained through $q$ are informative for training the EBM when only finite amounts of data are available.” I struggled to understand precisely what this statement meant. What are examples of processes that are more and less informative?
I am confused about the connections to the heat equation, which is likely due to my own lack of understanding but may also indicate that the clarity could be better. My understanding is that we need to define a process that perturbs our data, $p(y | x)$. Such processes have been described in the ML literature, which the authors cite and can be solved through the matrix exponential. While normally this matrix exponentially may be hard to solve, since the noise process is applied independently across the dimensions, it should scale with O(S^3). It was unclear why small timesteps were introduced and why Euler integration was needed. Was the point that for some problems S^3 is too big and so for these problems we will restrict ourselves to small timesteps in order to avoid computing the matrix exponential? Overall, I was left confused about why we need to talk about heat equations at all and why we don’t just describe this as a CTMC with initial conditions? Is there something that the heat equation view is really buying us?
Lines 176-181 describe the subsection “Localization to random grid”. Related to my comment above about the lack of clarity regarding when the negative samples are “informative for training” the authors say that adding uniform noise to categorical variables “introduce too much noise to inform the EBM about the correlations in the data set.” Can the authors make this statement more precise in the text? I think I can intuitively see that if make random uniform perturbations at each dimension then you are sampling from a uniform distribution and this will be uninformative in some sense. However, I think this notion needs to be explicitly connected to the optimization objective / loss functional in order to make this clear. Furthermore, it is not clear why this isn’t solved by taking smaller timesteps so that only a few dimensions will on average be changed. Can the authors please clarify this?
Overall, I am confused about the choice of time parameter and I think this needs to be better written in the manuscript. Section 3 establishes that as $t$ goes to infinity, “the energy discrepancy loss converges to maximum likelihood estimation.” The authors describe why maximum likelihood may have statistical advantages for finite data sets but then immediately move on in Section 3.1 to small timescales. This seemed like an odd transition and immediately made me ask “why not just use large times since this is maximum likelihood?” I suspect the answer lays in Section 4.2 where it becomes apparent that the contrastive potential must be estimated with Monte-Carlo sampling. It seems that larger timescales induce higher entropy distributions that would require more MC samples to approximate the expectation on line 192?
For the related works, I felt the “Contrastive loss functions” paragraph needs more discussion. Energy discrepancies seems very closely related, if not exactly, to contrastive loss functions for training energy-based models. Can the authors please provide a more thorough comparison of these different methods?
Similarly, I did not see a discussion on pseudolikehood loss functions. For small timesteps, the loss function seems very closely related to pseudolikelihood estimation and it seems that when MC sampling must be used in this method to approximate an otherwise intractable integral, that the MC sampling can be seen as an MC approximation to pseudolikelihood?
The authors make a point of saying that ED is much more efficient that CD and point to timing experiments in the appendix. However, it appears that authors are only reporting timing for M=4 samples when in the paper M=32 is used. If I extrapolate and multiply the ED time by 8 (since 32 = 8*4) then ED is more expensive then all of these methods. Can the authors please clarify this? I suggest changing Table 6 to M=32 if that is what is used in the paper.
The biggest experimental win seems to come from the Tabular Dataset. I am not very familiar with this area so I have a limited ability to evaluate the significance here. While the results seem reasonable I have two questions: 1) since the baseline methods were published in 2019 are there more sensible baselines to compare with? I again emphasize that I am not requiring that the authors’ method is SOTA – it is okay if other methods beat their method. 2) Since these methods have mixed continuous and discrete data can the authors do a separate benchmark that only models the discrete columns? I think it would be helpful to tease apart whether the best strength of this method is in modeling mixed continuous-discrete data or also purely discrete data.
I was confused by the statement that method is sensitive to the assumption that the data distribution is positive on the whole space. Why is this more of an issue for ED than other EBM training techniques? Intuitively, it seems that you can always avoid the assumption that the data distribution is positive by just assuming that the energy in these regions is so high that the probability that you sample these regions is vanishingly low. Either way, can the authors point me to where this assumption of a low-dimensional manifold is investigated in the paper / SI?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: I would like a more honest discussion throughout the paper of the relative strengths and drawbacks of the method. For example, while it is true that this method is “MCMC-free”, it still requires Monte-Carlo estimation and the relative tradeoffs here are not adequately discussed. I would also like to see more comparisons of CD with large numbers of MCMC steps vs. ED with large numbers of samples.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your questions! Due to lack of space we focus on questions of most interest for all reviewers. Citations and responses to the rest can be found in the comment.
> Significance: [...] Are there better baselines to compare against [...]?
>
EBMs are appealing because they output the unnormalised data density explicitly, and allow more flexible modelling than e.g. normalising flows. This unlocks a unique set of downstream applications, see [1, 2, 3].
Given our interest in using EBMs as a tool in downstream tasks, we are not trying to claim SOTA compared to all generative models. For binary discrete image data, our method is competitive with other EBM training approaches like [4]. For tabular data we compare with [5] due to lack of EBM baselines. Prompted by your comment we found that a tabular DDPM gives a stronger baseline [6] and we will add this reference to our table.
**Questions**
> [...] What are examples of processes [q] that are more and less informative?
>
In principle, the negative samples can be drawn from an arbitrary noise distribution. If, for example, $q$ is independent of the data, the contrastive term becomes just the log-normalisation constant up to a constant, i.e.
$$
\log \sum_{ x_-\in X} q( y) \exp(-U_\theta( x_-)) = \log q( y) + \log \sum_{ x_-\in X} \exp(-U_\theta( x_-))
$$
which is an example of a **non-informative** $q$ distribution. In this case, ED becomes the log-likelihood in theory, but the normalisation constant $Z_\theta$ needs to be approximated from thousands of samples to achieve low variance.
An informative noising distribution $q$ achieves two things.
- $\sum_{ x_-\in X} q( y\vert x_-)\exp(-U_\theta( x_-))$ can be computed from a small number of samples with lower variance than the full normalisation $Z_\theta$ because the integrand is localised around $ y \sim q( y \vert x_+)$.
- similar to GAN training, the model converges prematurely if the negative samples $x_-$ are trivial to distinguish from the data $x_+$; an informative $q$ produces negative samples that remain challenging.
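The localisation argument can be made concrete with a small Monte-Carlo sketch. Assuming a symmetric perturbation $q$, negatives can be drawn as $x_- \sim q(\cdot \vert y)$ and the contrastive term estimated with a stable logsumexp. This is a toy illustration (bit-flip perturbation, toy energy), not our exact estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

def flip_one(x):
    """Symmetric 'smallest transition' perturbation: flip one random bit."""
    x = x.copy()
    i = rng.integers(len(x))
    x[i] = 1 - x[i]
    return x

def contrastive_potential_mc(U, y, perturb, M=32):
    """MC estimate of log sum_{x_-} q(y|x_-) exp(-U(x_-)): for a symmetric q,
    draw negatives x_- ~ q(.|y) and average exp(-U) via a stable logsumexp."""
    e = -np.array([U(perturb(y)) for _ in range(M)])
    m = e.max()
    return m + np.log(np.mean(np.exp(e - m)))

U = lambda x: float(np.sum(x))   # toy energy favouring the all-zero state
x_pos = np.zeros(8, dtype=int)   # data point
y = flip_one(x_pos)              # perturbed (noised) data point
loss_term = U(x_pos) + contrastive_potential_mc(U, y, flip_one)
```

Because the negatives are concentrated around $y$, a small $M$ suffices, whereas estimating the full $Z_\theta$ would need samples covering the whole space.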
> I am confused about the connections to the heat equation [...] It was unclear why small timesteps were introduced and why Euler integration was needed.
>
We don't use the Euler discretisation in practice and our method is simulation-free, i.e. for any time $t$ we can sample exactly from the transition density $q_t(b|a)$ using the closed form solution in Proposition 1. For the uniform perturbation, the exact solution is equivalent to equation (7) via the time rescaling $t' := \frac{1}{S}\left(1-e^{-St}\right)\in [0, 1/S)$ (see comments). We are going to replace equation (7) with the closed-form solution.
> [...] Is there something that the heat equation view is really buying us?
[...] “why not just use large times since this is maximum likelihood?”
>
Any CTMC could be used to define a noising process, but this leaves us with many choices for the rate function. In previous work using CTMC [7, 8], the structure on the discrete space is ignored and only the uniform and absorbing diffusion is discussed.
We choose the rate function as the graph Laplacian which defines the **smallest possible transitions** in the graph. For small $t$, the corrupted data points are then with highest probability the **nearest neighbours** of the data point in the graph.
However, a small corruption may miss global distribution properties like mixture weights [9]. Theorem 1 tells us that for large $t$, ED performs MLE, which captures these global properties at the cost of higher variance of the estimated loss. The heat equation allows us to trade off variance against the better statistical properties of the loss.
To support our intuition empirically, we include new experimental results in the attached PDF, Figure 1, showing that a large $t$ requires increasing $M$ to achieve accurate energy estimates.
> The authors make a point of saying that ED is much more efficient that CD and point to timing experiments in the appendix. However, it appears that authors are only reporting timing for M=4 samples when in the paper M=32 is used. [...]
>
$M=4$ is often sufficient to obtain good results, see table 8. For the main text of this paper we used $M=32$ to maximise our GPU usage. In fact, ED can be computed in parallel, while CD requires sequential sampling algorithms that cannot be parallelised. We provide the training time for $M=32$ below:
| | CD-1 | CD-5 | CD-10 | ED-Bern (M=32) | ED-Grid (M=32) |
| --- | --- | --- | --- | --- | --- |
| Per Epoch (s) | 29.1660 | 95.2178 | 167.5718 | 46.4317 | 44.0621 |
It shows that ED with $M=32$ is still faster than CD-5. We will change the corresponding table in the revision.
> [...] 1) since the baseline methods were published in 2019 are there more sensible baselines to compare with? [...] 2)
>
As mentioned earlier, there is a better baseline that uses a diffusion model [6] which outperforms our approach on the tabular data benchmark. We provide the experimental results for detailed comparison:
| | Adult ↑ | Bank ↑ | Cardio ↑ | Churn ↑ | Mushroom ↑ | Beijing ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| CTGAN | .861 | .774 | .787 | .792 | .781 | 1.01 |
| TabDDPM | .910 | .924 | .802 | .793 | 1.0 | .570 |
| TabED (Ours) | .853 | .845 | .796 | .814 | .985 | .978 |
TabED refers to the best results from our proposed methods. We remark that, while the DDPM outperforms the GAN and EBM, we are interested in tasks that cannot be addressed by diffusion models.
> I would like more honest discussion throughout the paper of the relative strength and drawbacks of their method. [...]
>
We will revise the paper keeping your comment in mind. In brief, the biggest strength of ED is its improved theoretical guarantees over CD and its robust behaviour when tuning an MCMC sampler is difficult (e.g. tabular data). The biggest drawback of ED is currently its scalability to data of higher dimension than MNIST.
---
Rebuttal 2:
Title: Complementary Answers and Information 1/2
Comment: Thank you for your thorough review and apologies that we could not fit all responses in the rebuttal.
We now want to respond to the questions left out in the rebuttal. We follow the chronological order of your review and the rebuttal.
> It was unclear why small timesteps were introduced and why Euler integration was needed.
>
We don't use the Euler discretisation in practice and our method is simulation-free. For the uniform perturbation, the exact solution is given by
$$
q_t(y \vert a) = \left(\frac{1}{S} + \frac{S-1}{S}e^{-St}\right) \delta(y, a) + \left(\frac{1}{S} - \frac{1}{S}e^{-St}\right) \sum_{k \neq a}\delta(y, k)
$$
This is equivalent to equation (7) via the time rescaling $t \leftarrow \frac{1}{S}\left(1-e^{-St}\right)\in [0, 1/S)$. We are going to replace equation (7) with the closed-form solution.
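Because the kernel above is available in closed form, a sample at any time $t$ can be drawn directly without simulating the chain. A small sketch (illustrative function names):

```python
import numpy as np

rng = np.random.default_rng(4)

def uniform_transition_probs(a, S, t):
    """Closed-form q_t(y | a) for the uniform perturbation: stay at a with
    probability 1/S + (S-1)/S * exp(-S t); move to each of the other
    S-1 states with probability 1/S - 1/S * exp(-S t)."""
    stay = 1.0 / S + (S - 1) / S * np.exp(-S * t)
    move = 1.0 / S - 1.0 / S * np.exp(-S * t)
    p = np.full(S, move)
    p[a] = stay
    return p

p = uniform_transition_probs(a=0, S=4, t=0.5)
y = int(rng.choice(4, p=p))   # exact, simulation-free sample at time t
```

At $t=0$ the kernel is a point mass at $a$, and as $t \to \infty$ it converges to the uniform distribution $1/S$.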
> [...] since the noise process is applied independently across the dimensions, it should scale with O(S^3). [...] Was the point that for some problems S^3 is too big [...]?
>
Thank you for this comment! We missed that one could diagonalise any graph Laplacian in $O(S^3)$ time before running the training, which extends the generality of our suggested method significantly. We will consider this for the revision of this work.
> [...] the authors say that adding uniform noise [...] “introduce too much noise to inform the EBM about the correlations in the data set.” Can the authors make this statement more precise in the text? [...] it is not clear why this isn’t solved by taking smaller timesteps [...]
>
The intuition is the same as in our earlier response on informative perturbations. We will revise the statement in the paper. The grid perturbation always changes exactly one entry of the data vector, which is the main difference from the uniform perturbation with small $t$. We observed empirically that this is sometimes beneficial, e.g. for the binary image data the grid perturbation outperforms the heat perturbation (a Bernoulli distribution) with a small time step. For tabular data, one can also condition the negative samples on the column that was perturbed, so that the parameter update is only informed by the change of a single column. We have not quantified the benefit of this analytically.
> For the related works, I felt the “Contrastive loss functions” paragraph needs more discussion. [...]
>
We will revise our discussion of related contrastive loss functions. Without giving proofs, we want to give a brief summary of the connections we see:
The following papers are based on the same functional as energy discrepancy:
- ED is equivalent to KL contractions [10]. However, [10] does not turn the KL contraction into a training loss, and does not explore the influence of the perturbing distribution.
- ED for Gaussian perturbation is equivalent to the diffusion recovery likelihood functional [11]. However, [11] uses a large number of noise scales, an MCMC scheme, no stabilisation, and only one negative sample to produce a training algorithm, so methodologically it sits between ED and CD training approaches.
- ED is equivalent to the concurrent work Noise Contrastive Divergence [12]. However, [12] approximates the loss with a variant of score matching, thus turning it into a different loss in practice.
- ED is strongly related to the concurrent work Contrastive Latent Energy Learning [13]. Again, the loss function implemented in practice by CLEL is very different as it only considers a single negative sample obtained from a latent code.
- If the perturbation satisfies the detailed balance relation, ED becomes equivalent to the contrastive divergence loss [9].
- As remarked by your review, pseudo-likelihood methods can be seen as a special case of ED.
The following works discuss a similar training objective as ED:
- The stabilised ED loss is equivalent to the prior contrastive estimation (PCE) bound [14] because both losses are based on approximations of KL divergences. However, PCE is not used to learn a probabilistic model but to find an optimal Bayesian experimental design, a very different learning task.
- Similarly, the stabilised ED is structurally similar to InfoNCE [15], a loss used in representation learning.
- The stabilised ED is related to the log loss [16]. However, the log-loss was introduced without connection to learning the EBM as a probabilistic model, introducing meaningful ways to obtain contrastive samples, or providing theoretical guarantees, and we could not find experimental results for this loss.
It should be noted that none of these references apply to discrete or mixed data like our work.
---
Rebuttal 3:
Title: Complementary Answers and Information 2/2
Comment: > [...]For small timesteps, the loss function seems very closely related to pseudolikelihood estimation [...]
>
Thank you for raising this point. It seems that pseudo-likelihood can be derived from energy discrepancy as follows: Define $q(\mathbf y \vert \mathbf x) = \frac{1}{d} \sum_{i=1}^d \delta(y_i, \ast) \prod_{j \neq i} \delta(y_j, x_j)$, which masks exactly one entry of the data vector. Then, for a perturbation that masked entry $i$, i.e. $\mathbf y = \mathbf x_{\neg i}$, energy discrepancy is given, up to an additive constant $-\log d$ that does not affect optimisation, by
$$
U_\theta(\mathbf x) + \log \sum_{\mathbf x'} q(\mathbf x_{\neg i} \vert \mathbf x') \exp(-U_\theta(\mathbf x')) = -\log \frac{\exp(-U_\theta(\mathbf x))}{\sum_{s \in \{1, 2, \dots, S\}}\exp(-U_\theta(x_1, \dots, x_i = s, \dots, x_d))}= -\log p_\theta(x_i \vert \mathbf x_{\neg i})
$$
Hence, this specific ED loss function is indeed an MC approximation of pseudo-likelihood. Energy discrepancy is appealing because it is more general and more tunable through the choice of $t$ and $M$. We will discuss this connection in our revised version.
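For intuition, the conditional in the display above can be computed explicitly when $S$ is small, by renormalising $\exp(-U_\theta)$ over the masked coordinate. A toy sketch (generic energy, illustrative names):

```python
import numpy as np

def neg_log_pseudolikelihood(U, x, i, S):
    """-log p_theta(x_i | x_{-i}): renormalise exp(-U) over the S possible
    values of coordinate i while keeping the other coordinates fixed."""
    energies = []
    for s in range(S):
        x_s = x.copy()
        x_s[i] = s
        energies.append(-U(x_s))
    e = np.array(energies)
    m = e.max()
    log_norm = m + np.log(np.exp(e - m).sum())  # stable logsumexp
    return float(-(-U(x) - log_norm))

U = lambda x: float(np.sum(x))   # toy energy on binary vectors
x = np.array([0, 1, 0])
val = neg_log_pseudolikelihood(U, x, i=1, S=2)  # = 1 + log(1 + e^{-1})
```

Summing this term over randomly sampled coordinates $i$ recovers the familiar pseudo-likelihood objective.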
> Since these methods have mixed continuous and discrete data can the authors do a separate benchmark that only models the discrete columns?
>
The performance on continuous data is compared in prior work [9]. In this work, we benchmark ED on various discrete data sets. For a separate benchmark on discrete/continuous variables in tabular data we are missing baselines that perform the same experiment, and an implementation of comparative experiments was not possible in the given time frame.
> I was confused by the statement that method is sensitive to the assumption that the data distribution is positive on the whole space. Why is this more of an issue for ED than other EBM training techniques? [...]
>
Other training techniques for EBMs are adaptive, i.e. the negative samples are created by attempting to self-sample from the model. If the sampler is well-tuned, this produces smoothed estimates of the data distribution (see e.g. Figures 2, 7 to see the smoothing effect compared to ED). Thus, these methods produce biased estimates which can be advantageous when training on image data, where the data support is small compared to the ambient space. ED produces more accurate estimates, but tends to oversaturate quickly when the diffusion produces uninformative negative samples.
> I would also like to see more comparisons of CD with large numbers of MCMC steps v.s. ED with large numbers of samples.
>
- Table 2 reports results for CD with 40 MCMC steps. CD with larger numbers of MCMC steps is rarely used due to the cost of training.
- Similarly, ED with a large number of samples would lose the key advantage of ED of being cheaper to compute than CD or MLE with an MC-approximated normalisation constant.
A major difficulty with increasing the number of MCMC steps in CD is that the sampler likely needs to be retuned. For all these reasons, we chose to compare CD with a typical number of MCMC steps (e.g. 40 steps) and ED with M = 32, i.e. maximising the parallelisation capabilities of our GPU.
Finally, here are the references mentioned in the rebuttal and in the following comments.
[1] Du et al. Compositional Visual Generation with Energy Based Models, NeurIPS 2020
[2] Glaser et al. Maximum Likelihood Learning of Unnormalized Models for Simulation-Based Inference
[3] Grathwohl et al. Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, ICLR 2020
[4] Grathwohl et al. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions, ICML 2021
[5] Xu et al. Modeling Tabular Data using Conditional GAN, NeurIPS 2019
[6] Kotelnikov et al. TabDDPM: Modelling Tabular Data with Diffusion Models, ICML 2023
[7] Campbell et al. A Continuous Time Framework for Discrete Denoising Models, NeurIPS 2022
[8] Lou et al. Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution, ICML 2024
[9] Schroeder et al. Energy Discrepancies: A Score-Independent Loss for Energy-Based Models, NeurIPS 2023
[10] Lyu, S. Unifying Non-Maximum Likelihood Learning Objectives with Minimum KL Contraction, NeurIPS 2011
[11] Gao et al. Learning energy-based models by diffusion recovery likelihood. ICLR 2020
[12] Luo et al. Training Energy-Based Models with Diffusion Contrastive Divergences
[13] Lee et al. Guiding Energy-Based Models via Contrastive Latent Variables, ICLR 2023
[14] Foster et al. A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments, AISTATS 2020
[15] van den Oord et al. Representation Learning with Contrastive Predictive Coding
[16] LeCun et al. A Tutorial on Energy-Based Learning
---
Rebuttal Comment 3.1:
Title: Reply to author rebuttal / comments
Comment: Thank you to the authors for their thorough reply. I have read through it, and the replies answer my questions and satisfy my main critiques. At this time I do not have any further questions.
---
Reply to Comment 3.1.1:
Comment: Thank you for your review. Let us know if any further questions about our responses come up. | Summary: The paper proposes a suite of methods for training energy-based models for discrete and mixed data using the Energy Discrepancy loss, a recently proposed method for training EBMs. Compared to contrastive divergence, it does not require MCMC sampling from the model distribution during training, improving training efficiency. This is done by simulating a diffusion process on the discrete states and effectively using those noisier samples as the negative samples. The paper introduces a connection between the new method and maximum likelihood estimation, showing that energy discrepancy as applied to discrete state spaces can converge to the negative log-likelihood. In experiments, the new method behaves favourably compared to contrastive divergence-based methods on synthetic data sets, on average better than baselines on real-world tabular data sets, and comparably to many competing methods generation on discrete image modelling. An application of the trained EBM on classification and improving uncertainty quantification compared to a direct classifier is also shown.
Strengths: - The paper proposes a relevant extension to recently published work. Especially Theorem 1 does not seem obvious, and the paper may open up the use of the Energy Discrepancy loss to a much wider variety of use-cases.
- The method is also quite simple, and seems simple to implement.
- The paper draws connections to recent work on discrete diffusion models, and proposes a variety of methods to estimate the energy discrepancy loss.
- The results are good compared to standard contrastive divergence based methods
- The paper is well written, and I found it easy enough to understand even without prior knowledge on the Energy Discrepancy method.
Weaknesses: - As noted in the limitations, the application to data such as images seems to be challenging as the noisy negative samples may not give very useful training signal in this case.
- Although the energy discrepancy method has already been proposed and published in previous work, I found the justification for the method slightly confusing while reading this paper. What is Theorem 1 exactly saying? (see questions) The loss also is, in practice, approximated with something slightly different than the proposed loss, which seems conceptually a bit confusing. However, this is not a major concern given that the base method has been proposed and published in previous work.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How should I interpret the left side of the equation in Theorem 1, and the fact that the right side approaches zero with large enough t? How does this link ED to maximum likelihood, exactly?
- What is Avg. Rank in Table 1?
Overall, the paper seems like a solid contribution in advancing the training of this branch of energy-based generative models. However, I was not aware of ED before reading this paper, and am not very up-to-date on the most recent work on energy-based models. As such, I tentatively give a weak accept.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Addressed adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and questions.
> Although the energy discrepancy method has already been proposed and published in previous work, I found the justification for the method slightly confusing while reading this paper. What is Theorem 1 exactly saying? (see questions)
>
> How should I interpret the left side of the equation in Theorem 1, and the fact that the right side approaches zero with large enough t? How does this link ED to maximum likelihood, exactly?
>
We want to address both your question and the weaknesses mentioned in your review. Theorem 1 is meant to guide the hyperparameter choice of $t$. Another way of stating Theorem 1 is as follows:
$$
\mathrm{ED}_{q_t}(p_\mathrm{data}, U_\theta) - \mathcal L_{\mathrm{MLE}}(\theta) - z_t = \mathcal O(e^{-\lambda t})
$$
Here, $z_t$ is a constant **independent** of $\theta$, so the optimisation landscapes of energy discrepancy estimation and maximum likelihood estimation in $\theta$ align at an exponential rate, except for a shift $z_t$ which does not affect the optimisation.
Practically speaking, this property implies that large time parameters yield losses with favourable statistical properties as the MLE has the lowest variance among all unbiased estimators. The most striking example of this is global mode awareness, i.e. as shown in prior work [1], energy discrepancy losses are capable of estimating the mixture weight distributions in multi-modal data sets while being more tractable than maximum likelihood estimation. However, this comes at the cost of higher variance of the loss function due to the sample approximation used to compute the loss function in practice.
Since arbitrarily large $t$ are impractical, the parameter needs to be tuned. For this reason we study the opposite limit $t\to 0$ in the following sections. Similar to the late stages of GAN training, our goal is that negative samples obtained from the noising process remain close to the data support to provide a good training signal. The infinitesimal analysis relates the properties of ED for $t\to 0$ to the geometry of the underlying discrete space. The perturbed samples are then, with highest probability, the nearest neighbours of the data point in the graph. This makes it possible to gain fine-grained control over the properties of the ED loss with a single time parameter.
While these results guide our understanding of the influence and tuning of $t$ in the algorithm, Theorem 1 is not needed to justify ED as a loss function. Instead, the justification was given in prior work and is reiterated in lines 78-81: One can show that after convergence, for a sufficiently rich family of neural networks and arbitrary amounts of data, $p_\theta(\mathbf x) = p_\mathrm{data}(\mathbf x)$ which holds for all $t$.
> The loss also is, in practice, approximated with something slightly different than the proposed loss, which seems conceptually a bit confusing. However, this is not a major concern given that the base method has been proposed and published in previous work.
>
The loss used in practice can be justified by observing that for any $w>0$:
$$
\lvert \mathcal L_{w, M, t}(\theta) - \mathrm{ED}_{q_t}(p_\mathrm{data}, U_\theta)\rvert \xrightarrow{M, N\to\infty} 0
$$
Thus, the estimated loss function is asymptotically unbiased, and empirically we observe that the loss performs similarly to our expectations from the theory. It should be noted that most EBM training approaches need some form of approximation and stabilisation. MCMC-based training of deep EBMs usually uses short-run MCMC, introducing biases into the derived parameter update, and uses a regulariser $\mathcal L_{\mathrm{reg}}(\theta) = U_\theta(\mathbf x_+^i)^2 + U_\theta(\mathbf x_-^i)^2$ which alters the desired MLE loss. In comparison, ED is easier to justify, and indeed we observe better performance in most applications.
To further support our theoretical analysis empirically, we conduct discrete density estimation experiments with different $w,M$ (see Figure 2 in the attached PDF for illustration). Despite using varying $w$, our approach can still learn a faithful energy landscape using a sufficiently large $M$, verifying the theoretical analysis that $\mathcal L_{w, M, t}(\theta) \rightarrow \mathrm{ED}_{q_t}(p_\mathrm{data}, U_\theta)$ when $M, N \rightarrow \infty$.
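To make the shape of the estimated loss concrete, here is a simplified sketch of a $w$-stabilised Monte Carlo estimator. This is our own illustration, not the paper's exact implementation: negatives are drawn by directly noising each data point with a user-supplied `perturb` callable (in the actual method they come from the forward noising process on the discrete space), and the quadratic toy energy in the usage example is an arbitrary choice.

```python
import numpy as np

def ed_loss(U, x_pos, perturb, M=16, w=1.0):
    """Monte Carlo sketch of the w-stabilised energy discrepancy loss:
    mean_i log( w/M + (1/M) sum_j exp(U(x_i) - U(x_ij^-)) ).

    U       : vectorised energy function, maps (N, d) states to (N,) energies
    x_pos   : (N, d) array of data ("positive") samples
    perturb : callable producing one noised copy of each positive sample
    """
    rng = np.random.default_rng(0)        # fixed seed so the sketch is reproducible
    N = x_pos.shape[0]
    gaps = np.empty((N, M))
    for j in range(M):
        x_neg = perturb(x_pos, rng)       # negative samples via the noising process
        gaps[:, j] = U(x_pos) - U(x_neg)  # energy gap per positive/negative pair
    # The w/M term stabilises the log; it vanishes as M grows, so the
    # estimator is asymptotically unbiased in the sense discussed above.
    return np.mean(np.log(w / M + np.exp(gaps).mean(axis=1)))
```

Changing $w$ only perturbs the estimated loss value, consistent with the asymptotic statement above.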
> What is Avg. Rank in Table 1?
>
We rank each of the 7 methods in Table 1 from 1 (best) to 7 according to their AUC scores. The average rank is a method's mean ranking across the data sets used.
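The ranking computation described above can be sketched in a few lines; the AUC scores below are made-up illustrative numbers, not values from Table 1.

```python
import numpy as np

# Hypothetical AUC scores: rows = methods, columns = data sets
# (illustrative numbers only, not taken from Table 1).
auc = np.array([
    [0.91, 0.85, 0.78],   # method A
    [0.89, 0.88, 0.80],   # method B
    [0.93, 0.80, 0.75],   # method C
])

# Per data set, rank methods by AUC (1 = best); the double argsort
# converts scores to ranks column by column.
ranks = np.argsort(np.argsort(-auc, axis=0), axis=0) + 1
avg_rank = ranks.mean(axis=1)   # mean rank of each method across data sets
```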
[1] Schroeder et al. Energy Discrepancies: A Score independent Loss for Energy-Based Models, NeurIPS 2023
---
Rebuttal Comment 1.1:
Title: Answer to rebuttal.
Comment: I thank the reviewers for the comprehensive explanations to the questions! I have no further concerns, and will keep my accepting score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and feedback for our work! | Summary: : The paper introduces a novel method for training energy-based models (EBMs) on discrete and mixed data using heat equations on structured spaces. This method employs the Energy Discrepancy (ED) loss function, which eliminates the need for Markov chain Monte Carlo (MCMC) by using graph-structured perturbations. The proposed approach is evaluated on several applications, including discrete density estimation, synthetic data generation, and calibrated classification, demonstrating significant improvements in training efficiency and model performance.
Strengths: The paper successfully extends the Energy Discrepancy method to discrete and mixed data with solid theoretical analysis, addressing the challenge of robust and fast sampling in these spaces. The designed experiments demonstrate the method's ability to accurately capture complex data structures and generate high-quality synthetic data, highlighting its practical applicability.
Weaknesses: 1. Despite the method's solid contributions and experimental design, the motivations behind each step and their presentations are not very clear, making it hard to follow. For instance, in Section 3.1, the paper discusses different structured and unstructured categorical values, introducing the four types {cyc, ord, unif, abs}. However, it is not clear why these specific structures are chosen. Are they meant to cover all categorical values comprehensively, or are they the most common in tabular data? Providing a clearer rationale would help readers understand the choices made.
2. The scalability of the proposed method in such scenarios is a significant concern. An analysis or discussion on how the method handles large categorical values would be beneficial. This could include potential modifications or considerations to ensure that the method remains efficient and practical when applied to datasets with large categorical variables. What’s more, I strongly recommend moving these algorithms from the appendix into the main body of the paper. This would make the paper easier to follow and more accessible to readers who need to understand the detailed workings of the method.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and your helpful comments.
> Despite the method's solid contributions and experimental design, the motivations behind each step and their presentations are not very clear, making it hard to follow. For instance, in Section 3.1, the paper discusses different structured and unstructured categorical values, introducing the four types {cyc, ord, unif, abs}. However, it is not clear why these specific structures are chosen. Are they meant to cover all categorical values comprehensively, or are they the most common in tabular data? Providing a clearer rationale would help readers understand the choices made.
>
Thank you for letting us know that the presentation is not clear in this point, so we would like to provide additional background on this topic: To create the Energy Discrepancy loss we have to introduce a noising process that produces contrastive negative samples which the model can learn from. In principle, the noising processes {unif, abs} already cover **all categorical values** and this is the approach taken for discrete diffusion models, see e.g.
[1] Campbell et al. A Continuous Time Framework for Discrete Denoising Models, NeurIPS 2022
[2] Kotelnikov et al. TabDDPM: Modelling Tabular Data with Diffusion Models, ICML 2023
[3] Lou et al. Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution, ICML 2024
In this work, we observe that many discrete types of data have additional structure such as seasonality/periodicity (e.g. month of year) or ordering/grading (e.g. age, RGB pixel values, salary) that give us more fine-grained control over the noising process than the diffusions in [1, 2, 3]. This generalises previous diffusions while remaining **simulation-free**, i.e. we can compute and sample from the noising distribution for any $t$ in closed form.
The intuition for respecting the structure of the underlying discrete space is that we seek negative samples that are close to the data support to provide a good training signal (This is similar to the discriminator training in GANs).
If the discrete space of interest has a more complicated graph structure, the process either needs to be simulated from eigendecompositions of the graph Laplacian, or one can resort to the simpler uniform or absorbing perturbation. Since none of our benchmark data sets required an involved graph structure, we stick to the more common ordinal or cyclical structures, which generalise previous noising processes [1, 2, 3].
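To make the simulation-free claim concrete, here is a minimal sketch (our own illustration, not the paper's code) of the closed-form noising distribution for a cyclic structure: taking the rate matrix to be the generator $L = A - D$ of a cycle graph, $q_t = \exp(tL)$ follows from a spectral decomposition, and for small $t$ the off-diagonal mass concentrates on the two nearest neighbours.

```python
import numpy as np

def cyclic_heat_kernel(S, t):
    """Transition matrix q_t = exp(t L) of the heat equation on a cycle
    graph with S states, with generator L = A - D (rows sum to 0)."""
    A = np.zeros((S, S))
    for i in range(S):
        A[i, (i - 1) % S] = A[i, (i + 1) % S] = 1.0   # cycle adjacency
    L = A - np.diag(A.sum(axis=1))                    # symmetric generator
    lam, V = np.linalg.eigh(L)                        # spectral decomposition
    return (V * np.exp(t * lam)) @ V.T                # exp(t L); rows sum to 1

# For small t, the perturbed state is most likely the original state, and
# the remaining mass concentrates on the two nearest neighbours in the cycle.
q = cyclic_heat_kernel(6, 0.1)
```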
> The scalability of the proposed method in such scenarios is a significant concern. An analysis or discussion on how the method handles large categorical values would be beneficial. This could include potential modifications or considerations to ensure that the method remains efficient and practical when applied to datasets with large categorical variables.
>
Thank you for your suggestions. Could you elaborate what you mean by large categorical values in case we misunderstand your question?
Scalability is indeed a valid concern for energy-based models in general. Most scalable **continuous** EBMs such as [4], which trains EBMs on ImageNet 32x32, have other problems, e.g. over-smoothing the estimated densities due to missing theoretical guarantees, and are limited to the continuous domain. This limits their applicability in some key downstream tasks like reliable out-of-distribution detection.
For discrete data sets, alternative training methods for EBMs produce comparable results to our method. Compared to MCMC based training methods like contrastive divergence [5], the energy discrepancy approach requires less tuning and runs faster since the loss can be computed in a single parallel pass through the network and does not rely on sequential steps. Furthermore, these works have yet to be extended to mixed state spaces, and many tabular data sets of interest have a comparably small number of features.
We are also interested in better scalability in future work. However, this will likely require a combination of EBMs with other generative modelling approaches such as a variational decoder model as in [6]. This method has already been established for the energy discrepancy training method. Secondly, the energy function may be learned with the energy discrepancy loss at various noise scales as in [7], thus improving training stability and learning a sampler alongside the EBM. Finally, EBMs may be an interesting way to modify a pre-trained base model as in [8]. While ED can likely be used to modify [6, 7, 8], this is not the main contribution of our work.
[4] Du et al. Implicit Generation and Generalization in Energy-Based Models, NeurIPS 2019
[5] Grathwohl et al. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions, ICML 2021
[6] Pang et al. Learning Latent Space Energy-Based Prior Model, NeurIPS 2020
[7] Gao et al. Learning Energy-Based Models by Diffusion Recovery Likelihood, ICLR 2021
[8] Deng et al. Residual Energy-Based Models for Text Generation, ICLR 2020
> What’s more, I strongly recommend moving these algorithms from the appendix into the main body of the paper. This would make the paper easier to follow and more accessible to readers who need to understand the detailed workings of the method.
>
Thank you for this suggestion. We will include the algorithms in the main text for the revision of this paper.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in rebuttal, and I decided to keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your review and feedback for our work! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive comments. First, we would like to summarise the strengths of the paper according to the reviewers.
The reviewers agree that our paper is a successful extension of energy discrepancy to discrete and mixed data where energy-based modelling is challenging.
> **9ZAB**: The paper successfully extends the Energy Discrepancy method to discrete and mixed data with solid theoretical analysis, addressing the challenge of robust and fast sampling in these space.
**qwnm**: The paper proposes a relevant extension to recently published work. Especially Theorem 1 does not seem obvious, and the paper may open up the use of the Energy Discrepancy loss to a much wider variety of use-cases.
**FxWc**: While the original paper proposed some extensions to discrete data, this paper goes into extending energy discrepancy to discrete data in much more depth and includes new mathematical and experimental analyses.
>
The reviewers generally consider our paper to be well-written and well-supported by experiments.
> **9ZAB**: The designed experiments demonstrate the method's ability to accurately capture complex data structures and generate high-quality synthetic data, highlighting its practical applicability.
**qwnm**: The results are good compared to standard contrastive divergence based methods. The paper is well written, and I found it easy enough to understand even without prior knowledge on the Energy Discrepancy method.
**FxWc**: Overall, the paper is well written. The authors appear to generally outperform two methods proposed in 2019 along with contrastive divergence. While I have some minor concerns about these baselines, outperforming these baselines is at least demonstrating some empirical benefit of this approach.
**t2RA**: The authors show how their method can be efficiently used on Tabular dataset. In particular, they apply to several dataset and show that in average the EBM trained with Energy-Discrepancy [...] outperform the concurrent approach.
>
We now want to address the biggest concerns and questions of the reviewers.
- The reviewers **9ZAB, qwnm, FxWc,** and **t2RA** were concerned about the scalability of our proposed method, particularly for image data. To address this concern we want to put our work into perspective of the research field.
Scalability is a valid concern for EBMs in general. Even in continuous spaces, EBMs struggle to scale up and outperform the sample quality of GANs and diffusion models. However, EBMs are appealing because they explicitly output an approximate data density, which is advantageous for downstream tasks such as out-of-distribution detection and calibration.
Most existing EBM training methods rely on contrastive divergence which over-smoothes the estimated densities. Our method is more reliable due to theoretical guarantees, requires little tuning, and displays competitive performance on binary image data. ED can be computed in a single parallel pass, unlike MCMC-based methods that need many sequential steps. Thus, ED is significantly cheaper and more parallelisable. Furthermore, MCMC-based approaches have yet to be extended to mixed-state spaces, and many tabular data sets of interest have a comparably small number of features.
Enhancing the scalability of EBMs will likely require a combination of EBMs with other tools such as a variational decoder model used to scale energy discrepancy in the continuous domain in prior work. The energy function could also be learned in a diffusion model style with the energy discrepancy loss at various noise scales, thus improving training stability and learning a sampler alongside the EBM. However, this was not the main concern of our work.
- The reviewers **9ZAB, qwnm,** and **FxWc** had questions about the importance of the noising process which we define as a heat equation on a graph. We want to summarise what guides the choices we are making.
In principle, the negative samples can be drawn from an arbitrary noise distribution. If, for example, $q$ is independent of the data, the contrastive term becomes just the log-normalisation constant up to a constant, i.e.
$$
\log \sum_{x_-\in X} q(y) \exp(-U_\theta(x_-)) = \log q(y) + \log \sum_{x_-\in X} \exp(-U_\theta(x_-))
$$
which is an example of a **non-informative** $q$ distribution. In this case, ED becomes the log likelihood loss in theory, but the normalisation constant $Z_\theta$ needs to be approximated from thousands of samples to be low-variance.
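The factorisation above for a data-independent $q$ can be verified numerically on a toy state space; the energies and noise distribution below are arbitrary choices of ours, used only to check the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
S = 10                            # toy discrete state space {0, ..., 9}
U = rng.normal(size=S)            # arbitrary energies U_theta(x)
q = rng.dirichlet(np.ones(S))     # data-independent noise distribution q(y)
y = 3                             # any observed noised state

# Contrastive term: log sum_x q(y) exp(-U_theta(x))
lhs = np.log(np.sum(q[y] * np.exp(-U)))
# Factorised form: log q(y) + log Z_theta
rhs = np.log(q[y]) + np.log(np.sum(np.exp(-U)))
```

Since $q(y)$ does not depend on $x_-$, the contrastive term reduces to the log-normalisation constant plus a model-independent shift, which is exactly why such a $q$ is non-informative.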
An informative noising distribution $q$ achieves two things.
- $\sum_{x_-\in X} q(y\vert x_-)\exp(-U_\theta(x_-))$ can be computed from a small number of samples with lower variance than the full normalisation $Z_\theta$ because the integrand is localised around $y \sim q(y \vert x_+)$.
- Similar to GAN training, the model converges prematurely if the negative samples $x_-$ are trivial to distinguish from data $x_+$.
If we select $q$ via an arbitrary CTMC, we have no guidance about what rate function to choose. In previous work using CTMCs the structure of the discrete space is ignored, and only the uniform and absorbing chains are discussed. We choose the rate function as the graph Laplacian, which defines the smallest possible transitions in the graph. For small $t$, the corrupted data points are then, with highest probability, the nearest neighbours of the data point in the graph. This guarantees that negative samples are different from data while staying as informative as possible.
However, a small corruption may miss global distribution properties like mixture weights. Theorem 1 states that for large $t$, the difference between the optimisation landscape of ED and MLE goes to zero. MLE has preferable statistical properties, but taking the large $t$ limit comes at the cost of higher variance of the estimated loss. Therefore, the heat equation provides a unified framework and allows us to tune the loss properties in terms of a single hyperparameter $t$.
Pdf: /pdf/bb3ab2d37a6212d5900ec19d3ad7f579a32c80c7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection | Accept (poster) | Summary: This paper first reveals the relationship between the quality of out-of-distribution (OOD) features and the prediction uncertainty of in-distribution (ID) data. Then, the paper introduces modulating factors to weight the ID loss and OOD loss, with the weights being related to the ID data prediction confidence. The experiments are carried out on standard datasets.
Strengths: 1. The analysis of the relationship between OOD feature quality and ID prediction confidence is well-reasoned; lower ID confidence indeed affects the accuracy of foreground-background separation.
2. Weighting the loss components is a straightforward approach, making it easier to understand.
3. The writing is clear and easy to comprehend.
Weaknesses: 1. Overall, the technical contribution of this paper is relatively incremental, primarily focusing on how to weight the two loss components.
2. The effectiveness of the proposed method is quite limited. For example, as shown in Table 2, the improvement in averaged AUROC under the 16-shot scenario is minimal, only around 0.3%, and there is even a slight decrease in results on ID-like data.
3. There is a lack of comparison with existing state-of-the-art (SOTA) methods. For instance, the results reported in this paper are not as good as those of NegLabel[1] (AUROC 94.21 > 93.37), which is a zero-shot method that does not require training and training samples. The results for ID-like method reported in this paper are also lower; the official paper reports 94.36 AUROC under 4-shot, while this paper reports 92.14 AUROC under 16-shot.
4. More exploration is needed regarding the settings of function $\phi$ and $\psi$ in Equation 4.
5. The statement in lines 158-159 is somewhat unclear. Should it be that inaccurate OOD features hinder the effective learning of better OOD detection?
[1] Jiang, Xue, et al. "Negative label guided ood detection with pretrained vision-language models." ICLR (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: See weaknesses
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
>**Q1:** Overall, the technical contribution of this paper is relatively incremental, primarily focusing on how to weight the two loss components.
>
Thanks for the valuable comments! We would like to re-clarify the **novelty and insights of our SCT** as follows.
Conceptually, **the motivation of SCT is to mitigate the problem of spurious OOD features in prompt-tuning based OOD detection methods.** Generally, these methods rely on the ID-irrelevant local context extracted by VLMs as the surrogate OOD features to perform regularization, **the quality of which is greatly affected by the foreground-background decomposition of VLMs.** As shown in Figure 1/4 in the submission, although VLMs can mask out some ID-related regions, **large portions of the extracted OOD features (shown as the colored patches of images) obviously belong to ID features.**
Empirically, **we find that the quality of extracted OOD features is significantly correlated with the uncertainty level of ID data.** As illustrated in the left panel of Figure 2 in the submission, the extracted OOD features become more inaccurate as the uncertainty increases. In the right panel of Figure 2, we train LoCoOp on multiple data groups with different uncertainty levels, and the results demonstrate that **the OOD detection performance of LoCoOp can be significantly impacted by the uncertainty level of ID data.** Therefore, to mitigate the issue of unreliable OOD features, we propose SCT to calibrate the influence of OOD regularization from different ID samples based on their uncertainty level.
>**Q2:** The effectiveness of the proposed method is quite limited. For example, as shown in Table 2, the improvement in averaged AUROC under the 16-shot scenario is minimal, only around 0.3%, and there is even a slight decrease in results on ID-like data.
>
Thanks for the comments! As shown in Table 1 in the submission, **the AUROC of prompt-tuning based methods is close to saturation,** with both exceeding 90%. However, the room for improvement in FPR95 is still very large, so these two metrics should not be treated equally. **The improvement of SCT on FPR95 (e.g. +5.95% for IDLike and +2.73% for LSN under the 16-shot setting) is significant**. Nevertheless, we will continue to pursue improvements in AUROC in future work!
>**Q3:** There is a lack of comparison with existing state-of-the-art (SOTA) methods. For instance, the results reported in this paper are not as good as those of NegLabel[1] (AUROC 94.21 > 93.37), which is a zero-shot method that does not require training and training samples. The results for ID-like method reported in this paper are also lower; the official paper reports 94.36 AUROC under 4-shot, while this paper reports 92.14 AUROC under 16-shot.
>
Thanks for the comment! **Zero-shot methods and prompt-tuning based methods are compatible with each other, further boosting the OOD detection performance.** We conduct experiments on the compatibility of NegLabel with SCT **in Table 5 in the attached PDF**, and the results show that SCT can be combined with NegLabel for better OOD detection.
Regarding the results for ID-like method, we strictly follow the official source code [1] and the hyperparameter settings of the official paper on a single A100 GPU. We will add this to our revised version.
>**Q4:** More exploration is needed regarding the settings of function $\phi$ and $\psi$ in Equation 4.
>
Thanks for the suggestions! We conduct experiments on more instantiations of modulation functions **in the following table**. The results demonstrate that all the instantiations show significant improvement over LoCoOp, which verifies the effectiveness of the SCT learning framework.
| method | $\phi$ | $\psi$ | FPR95 | AUROC | ID-ACC |
|----------|----------------------------------|--------------------------------|-------|-------|--------|
| LoCoOp | 1 | 1 | 29.47 | 93.10 | 71.43 |
| power-2 | $(1-p(y\|x))^2$ | $p(y\|x)^2$ | 27.41 | 93.14 | 71.42 |
| power-4 | $(1-p(y\|x))^4$ | $p(y\|x)^4$ | 27.10 | 93.21 | 71.49 |
| log | $1-\frac{\log(p(y\|x)+1)}{\log 2}$ | $\frac{\log(p(y\|x)+1)}{\log 2}$ | 27.06 | 93.20 | 71.39 |
| triangle | $\cos(\frac{\pi}{2}p(y\|x))$ | $\sin(\frac{\pi}{2}p(y\|x))$ | 27.34 | 93.16 | 71.53 |
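For reference, the four modulation pairs from the table can be written directly as functions of the confidence $p = p(y|x)$. The sketch below is our own; how $\phi$ and $\psi$ enter the loss follows Equation 4 of the paper, which we do not restate here. What all instantiations share is the endpoint behaviour: $\phi$ falls from 1 to 0 as confidence rises, while $\psi$ does the opposite.

```python
import numpy as np

# phi/psi pairs from the table, as functions of confidence p in [0, 1].
pairs = {
    "power-2":  (lambda p: (1 - p) ** 2,                   lambda p: p ** 2),
    "power-4":  (lambda p: (1 - p) ** 4,                   lambda p: p ** 4),
    "log":      (lambda p: 1 - np.log(p + 1) / np.log(2),  lambda p: np.log(p + 1) / np.log(2)),
    "triangle": (lambda p: np.cos(np.pi / 2 * p),          lambda p: np.sin(np.pi / 2 * p)),
}

# Evaluate each pair at a mid-range confidence for illustration.
values = {name: (phi(0.5), psi(0.5)) for name, (phi, psi) in pairs.items()}
```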
> **Q5:** The statement in lines 158-159 is somewhat unclear. Should it be that inaccurate OOD features hinder the effective learning of better OOD detection?
>
Thanks for the comments! What we mean by the original statement is that we aim to design a mechanism to mitigate the issue of unreliable extracted OOD features. We will make this statement more clear in our revision.
Reference: [1] Bai, Han, et al. "ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection." CVPR, 2024
---
Rebuttal Comment 1.1:
Title: [Invitation to rolling discussion] Need further clarification?
Comment: Thanks for your time and comments on our work. We have tried our best to address the concerns and provided detailed responses to all your comments and questions. Are there any unclear points that we should/could further clarify?
---
Rebuttal Comment 1.2:
Comment: Thanks for your responses. Some of my concerns have been addressed; however, I still have concerns about the incremental technical contribution and the limited improvement on AUROC. In my experiments, the FPR95 fluctuates greatly, while the AUROC results are relatively stable, so I consider the AUROC results more reliable. I will maintain my current score.
---
Reply to Comment 1.2.1:
Title: Thanks for your response!
Comment: Many thanks for your response and we will consider your suggestions in the revision. | Summary: This paper presents a novel few-shot approach to regularizing prompt tuning-based OOD detection methods called Self-Calibrated Tuning (SCT). SCT is specifically built to address the problems of incorrect OOD features being used in prompt tuning-based OOD detection methods. More specifically, by weighting regions of the image based on model confidence, SCT can better alleviate these issues in prompt tuning-based OOD detection methods. The resulting SCT method shows strong empirical improvements across a wide range of OOD detection methods.
Strengths: Strengths:
- The paper is well written and the authors provide a clear and concise motivation justifying the use of SCT.
- The author provides a timely analysis of the problem of incorrect OOD features extracted from ID data.
- SCT shows strong empirical performance across a wide range of traditional OOD detection methods and prompt tuning-based OOD detection methods.
- Additionally, given the nature of prompt tuning-based OOD detection methods, SCT can act in the more relevant few-shot setting.
Weaknesses: Weakness:
- A primary concern of the reviewer is the lack of evaluations against the more traditional CIFAR set of benchmarks for OOD detection.
- Additionally, the empirical performance gain of SCT (table 2) in combination with other prompt-tuning-based methods, seems minimal.
Technical Quality: 3
Clarity: 4
Questions for Authors: The reviewer would like to see some additional evaluations of SCT in the traditional CIFAR setting of OOD detection. The reviewer would also like to point out some small inconsistencies in the bolding for Table 2 (IDLike+SCT).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The author provides adequate discussions on any limitations and broader impacts. Additionally, the reviewer does not forsee any potential negative social impacts from the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
>**W1:** A primary concern of the reviewer is the lack of evaluations against the more traditional CIFAR set of benchmarks for OOD detection.
>
Thanks for the valuable suggestions! We conduct experiments on the **CIFAR benchmark in Table 4 in the attached PDF** and the results show that our SCT can still outperform the baselines under the CIFAR benchmark.
>**W2:** Additionally, the empirical performance gain of SCT (table 2) in combination with other prompt-tuning-based methods, seems minimal.
>
Thanks for the comments! As shown in Table 1 in the submission, the AUROC of prompt-tuning based methods is close to saturation, with both exceeding 90%. However, the room for improvement in FPR95 is still very large, so these two metrics should not be treated equally. **The improvement of SCT on FPR95 (e.g. +5.95% for IDLike and +2.73% for LSN under the 16-shot setting) is significant**. Nevertheless, we will continue to pursue improvements in AUROC in future work!
>**Q1:** The reviewer would also like to point out some small inconsistencies in the bolding for Table 2 (IDLike+SCT).
>
Thank you for pointing out this issue! We apologize for the carelessness; the bolded value should be 92.44 instead of 91.44. We will make the modification in the revision. | Summary: Based on the observation that CLIP under-calibration affects existing prompt-tuning-based methods' OOD regularization, i.e. samples with uncertain True-Class Probability (referred to as ID uncertainty in this paper) may provide false OOD features and harm the negative training used in existing methods, the author proposes a simple training strategy, Self-Calibrated Tuning (SCT), weighted by the ID uncertainty, which experimentally helps improve FPR95.
Strengths: - The author also observed and attempted to study the important CLIP calibration problem.
- The paper is relatively easy to understand overall.
Weaknesses: My main concerns about this work are that the work is relatively incremental and empirical; there is insufficient discussion of the pros and cons of related methods (including other paradigms); the experiments are not sufficiently rigorous or thoroughly analyzed; and the method's improvement on common benchmarks is rather one-sided. The details are as follows:
[Method]
- (Inaccurate motivation verifications) If I understand correctly, the misclassified ratio on the horizontal axis in the right panel of Fig. 2 refers to $p(pred\neq gt), pred=\arg\max_yp(y|x)$, while your ID uncertainty refers to something like True-Class Probability, the marginal softmax $p(y=gt|x)$. These are not identical, though there is a certain correlation: only within some ranges could TCP indicate accuracy [1]. Therefore, I think the author may have a biased understanding here, and the experimental results cannot fully reflect the motivation of the work: when "uncertain" ID samples are used as OOD regularization, some ID data are misdetected as OOD (FPR). If I have misunderstood, could the author clarify? Or perhaps additional, corrected experiments are needed?
- (No calibration verifications) The claim and results in the work show that SCT helps with CLIP calibration, but there is no visualization of the calibration after training to illustrate the point (e.g. could add the before and after calibration comparison like in Fig. 2).
- (Lack of discussion of weighted training; modulating function rationale) The idea of weighted loss is very direct and easy to think of. Previous work should also be mentioned and discussed. For example, [2] is based on the feature representation paradigm and uses activation scale as the ID-ness (similar to ID uncertainty in the context) indicator for weighted training to improve OOD detection. In comparison, I do not quite understand why $\phi$ must be monotonically decreasing w.r.t. $p$ (e.g. $1-p$) and not monotonically increasing (e.g. $p$), because the weighting method in [2] is monotonically increasing, and the result is also improved. Could the authors elaborate on this?
- (How about post-hoc CLIP calibration?) Usually, calibration is divided into two types: training-based and post-hoc [3] (calibration-related works are missing from the paper). The former is used in this paper. The latter could be explored in OOD feature extraction methods, e.g. by changing the rank operation (Eq. (6) & Fig. 3(d)). The authors may be lacking discussion in this respect.
[Experiments]
- (No much AUROC improvement) I understand that the method in this work is mainly to improve FPR95, but unilaterally only improving FPR95 does not seem to be comprehensive enough, because AUROC is also an equally important indicator and methods need to be proposed to improve it.
- (Lack of CIFAR results) Although the comparison method LoCoOp has not been evaluated on CIFAR, CIFAR is indeed another important benchmark in the field of OOD detection, and I think it is necessary to supplement it.
- (Discussions with simpler yet more effective pre-trained features + post-hoc?) I just would like to know what the author thinks about the (potential) advantages of the prompt-tuning-based method studied in the paper compared to the post-hoc method. After all, post-hoc does not require additional training and uses the basic ResNet backbone; the FPR95 and AUROC on the main task of Tab. 1 on ImageNet-1k have reached **20.05** and **95.71** respectively, which are much better than the results reported in the paper (26.47, 93.37).
- Could the authors clarify on what validation set are the hyperparameters tuned?
- (Interpretations of the ablations.) Figure 3(b) shows that the results of selecting other regularization functions are very different, and the paper (L294-298) does not provide any analysis. I am curious how the authors would interpret these ablation results. Similarly, the quality of OOD features extracted by different extraction methods also varies greatly, which seems very empirical (Fig. 3(d)).
- Table 1 is suggested to include results of combining more newer post-hoc methods (e.g. ASH (Djurisic et al., 2022), Scale [2]) and fine-tuned methods, which will give readers a more comprehensive sense.
[Presentation]
- The paragraph introducing the OOD features (L189) should be moved forward, or at least before the first reference to Fig. 1, to give readers clearer background.
- Why is the left panel in Fig. 2 not arranged in ascending order of softmax output? The arrangement of 0.02, 0.89, 0.04, and 0.67 affects reading. What is it trying to say? It would be better to display the classes and images together for clarity.
References:
[1] Corbière, Charles, et al. "Addressing failure prediction by learning model confidence." NeurIPS, 2019.
[2] Xu, Kai, et al. "Scaling for Training-Time and Post-hoc Out-of-distribution Detection Enhancement." ICLR, 2024.
[3] Guo, Chuan, et al. "On calibration of modern neural networks." ICML, 2017.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitation regarding the lack of theoretical analysis (L571-574) had better be included in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **W1:** Inaccurate motivation verifications
Thanks for the constructive comments! Although true-class probability (TCP) and accuracy are not identical, **classification accuracy could indicate TCP to a certain degree on the large-scale ImageNet-1k dataset.** Since ImageNet-1k has a large number of classes and CLIP models are prompt-tuned with one-hot labels, the prediction probability of CLIP can be overconfident, **which means that CLIP models normally output a high maximum softmax probability $\max_y p(y|x)$.** Therefore, when models correctly classify samples, TCP is likely to be very high. Nevertheless, we acknowledge it has some non-accurate parts.
**To make our motivation verification clearer and more accurate**, we conduct a new experiment on the correlation between data uncertainty and OOD detection performance. Specifically, we calculate the TCP of all the training samples in a 64-shot set using a CLIP model prompt-tuned with a 4-shot training set which contains no overlapping samples with the 64-shot set. **We choose the data with the lowest and highest TCP for every ID class to generate two data groups with different uncertainty levels respectively,** and train LoCoOp on these two data groups. **As shown in Table 1 in the attached PDF**, the OOD detection performance of LoCoOp is significantly impacted by the data uncertainty level, which is consistent with Figure 2 in the submission.
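The low-/high-TCP grouping described in the new experiment can be sketched as follows (a hypothetical illustration with toy numbers; function names are ours, not the paper's):

```python
import numpy as np

def true_class_probability(probs, labels):
    """TCP: the softmax probability assigned to the ground-truth class."""
    return probs[np.arange(len(labels)), labels]

def split_by_tcp(probs, labels, k):
    """Indices of the k lowest-TCP (most uncertain) and k highest-TCP samples."""
    order = np.argsort(true_class_probability(probs, labels))
    return order[:k], order[-k:]

# toy example: 4 samples over 3 classes
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.2, 0.5, 0.3]])
labels = np.array([0, 1, 2, 0])
low, high = split_by_tcp(probs, labels, k=2)  # low-TCP group: samples 3 and 2
```

In the actual experiment the probabilities would come from a CLIP model prompt-tuned on a disjoint few-shot set, and the split is done per ID class.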
> **W2:** No calibration verifications
Thanks for the valuable suggestions! We add the figures of calibration comparison **as Figure 1 in the attached PDF.** The illustrations show that our SCT can significantly help with CLIP calibration for more accurate OOD feature extraction. The extracted OOD features (shown as grey patches of images) of SCT models are significantly more accurate and contain less ID-relevant regions than LoCoOp models.
> **W3,W4**
We leave the answers in General Response.
> **W5:** No much AUROC improvement
Thanks for the comments! As shown in Table 1 in the submission, **the room for improving AUROC** of prompt-tuning-based methods **is close to saturation**, with both exceeding 90. However, the room for improving FPR95 is still very large, so these two metrics should not be treated equally. We will continue to promote the improvement of AUROC in future work!
>**W6:** (Lack of CIFAR results)
Thanks for the valuable suggestions! We conduct experiments on the CIFAR benchmark **in Table 4 of the attached PDF**, and the results show that our SCT can still outperform the baselines under the CIFAR benchmark.
> **W7:** Discussions with pre-trained features + post-hoc
Thanks for the comment! First, **prompt-tuning-based methods can leverage the generalization ability** of VLMs to better fit the domains of downstream tasks with relatively low computational cost. Secondly, post-hoc methods need to be built on a well-trained model, whose capacity greatly affects the OOD detection performance. Thirdly, **post-hoc methods and prompt-tuning-based methods are compatible with each other**, further boosting OOD detection performance. We conduct experiments on the compatibility of an advanced post-hoc method, NegLabel [1], with SCT **in Table 5 in the attached PDF**. The results show that SCT can be combined with post-hoc methods for better OOD detection.
> **W8:** Validation set
Thanks for the question! We tune the hyperparameter $\lambda$ on the dedicated OOD validation set in the OpenOOD v1.5 benchmark [2]. The other hyperparameters are chosen following previous works.
> **W9:** About ablations.
Thanks for the constructive question!
In the ablation study of Figure 3(b), we follow OE [3] and Energy-OE [4] to implement prompt-tuning-based OOD detection with MSP and Energy regularization functions, respectively. **The motivation for conducting this experiment is to verify that SCT can outperform LoCoOp under different regularization functions.** Specifically, the energy regularization **requires tuning the two energy-threshold hyperparameters $m_{in}$ and $m_{out}$, limiting its advantage over other regularizations.** As shown in Fig. 3(b), we conjecture that directly forcing the probability distribution of OOD features towards the uniform distribution (MSP) performs worse than entropy maximization under the prompt-tuning setting.
Regarding OOD feature extraction methods, the probability-based and entropy-based methods both have a threshold hyperparameter to discriminate between ID and OOD features. The sensitivity of the different methods to their hyperparameters may be the reason behind the different OOD detection performance. **For the entropy-based method, the performance is poor since it is challenging to determine the appropriate threshold** [5].
>**W10:** About Table 1
Thanks for the suggestion! We will conduct new experiments with newer post-hoc and fine-tuned methods on CLIP models in our revision. **Since ASH and Scale cannot be applied to CLIP, as analyzed in W3**, we will include their results on conventional CNN models in the appendix for a fair comparison.
>**W11, W12, W13:** About presentation and limitations.
Thanks for the suggestion! The original image arrangement was designed to compare neighboring images with different uncertainty levels. We will make the modifications in our manuscript.
References:
[1] Jiang, Liu, et al. "Negative Label Guided OOD Detection with Pretrained Vision-Language Models." ICLR, 2024.
[2] Zhang, Jingyang, Jingkang Yang, et al. "OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection."
[3] Hendrycks, Dan, et al. "Deep Anomaly Detection with Outlier Exposure." ICLR, 2019.
[4] Liu, Weitang, et al. "Energy-based Out-of-distribution Detection." NeurIPS, 2020.
[5] Miyai, Yu, et al. "LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning." NeurIPS, 2023.
---
Rebuttal Comment 1.1:
Title: [Invitation to rolling discussion] Need further clarification?
Comment: Thanks for your time and comments on our work. We have tried our best to address the concerns and provided detailed responses to all your comments and questions. Are there any unclear points that we should/could further clarify?
---
Rebuttal 2:
Comment: I have carefully read the reviews of the reviewers and the author's detailed rebuttal, and I am sincerely grateful. In particular, the author corrected the experimental verification of TCP motivation and provided the benchmark result of CIFAR. However, in general, I am still worried about:
- If we refer to related methods such as feature representation-based ones, 90 AUROC is not saturated, and their FPR is also relatively low (~20), so the AUROC performance should be improved further.
- The method is a bit incremental and empirical, lacking theory, as the author also acknowledges. If I understand correctly, the author wants to achieve a balance and relative weighting of the OOD and ID terms via the modulating function. Even so, the ablation OOD/ID ratios in the last three rows of Tab. 4 all increase monotonically with p, so either the hyperparameters may not be tuned well, or a deeper understanding of the exact reasons that matter is needed.
- For calibration verification, in addition to the recommended qualitative OOD region visualization comparison, you can also consider adding quantitative metrics such as ECE [4].
(Minor)
- Post-hoc calibration could be studied more comprehensively; choosing different K is heuristic and not a very good calibration strategy. The authors could consider temperature scaling [4], etc.
Based on the above reasons and careful consideration, I tend to keep the original score.
[4] A Closer Look at the Robustness of Contrastive Language-Image Pre-Training (CLIP)
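For reference, the quantitative ECE metric suggested above can be sketched in a few lines (a minimal illustration assuming equal-width bins; not code from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: bin-frequency-weighted |accuracy - mean confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```

Computed before and after SCT training, a drop in this value would quantify the calibration gain alongside the qualitative OOD-region visualizations.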
---
Rebuttal Comment 2.1:
Title: Thanks for your constructive suggestions!
Comment: Thank you very much for all the constructive feedback after reading our response! We will make sure to incorporate all suggestions in the revision. | Summary: In response to challenges in OOD detection using CLIP-based methods, this paper introduces Self-Calibrated Tuning (SCT), a novel framework that addresses issues with unreliable OOD features extracted from ID data. SCT dynamically adjusts the influence of OOD regularization during model training based on the prediction uncertainty of ID samples. By introducing modulating factors into the learning objective, SCT directs the model's attention more effectively towards classification tasks, especially when training with low-confidence data. This adaptive approach improves the calibration of OOD features extracted from high-confidence ID data, enhancing the overall OOD detection performance of prompt tuning methods. Empirical evaluations on ImageNet-1k demonstrate SCT's effectiveness.
Strengths: 1. This paper is well-motivated and well-written. In particular, authors propose to adaptively adjust the importance of OOD features and introduce SCT, which are motivated by the following finding: performance of prompt tuning based methods is significantly affected by the uncertainty of the given ID data.
2. Authors have a comprehensive review of the whole research literature.
3. Authors conduct a large amount of experiments and the experimental results demonstrate the effectiveness of SCT on both official benchmarks and hard OOD detection tasks.
4. In summary, I think SCT could become a great contribution towards OOD detection community.
Weaknesses: None in particular
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. My concern is mainly about the computational cost and training cost of SCT, since it involves operations on dense/local features.
2. My second concern is about the rationality of using pre-trained models (CLIP, etc.) to perform OOD detection tasks, because the concepts in both the ID and OOD datasets were probably seen during the pre-training stage. I want to know the authors' opinions on the benchmarking and research paradigm.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **Q1:** My concern is mainly about the computational cost and training cost of SCT, since it involves operations on dense/local features.
>
Thanks for your valuable question!
First, SCT **doesn't incur any extra computational cost over LoCoOp due to its simple design.** Technically, SCT introduces modulating factors on the two components of the original learning objective. The modulating factors (instantiated as $1-p(y|x)$ for $\phi$ and $p(y|x)$ for $\psi$ in Equation (4) in the submission) only involve computing the prediction probability of the ground-truth class, $p(y|x)$, **which can be reused after the original forward pass of the CLIP model.**
Secondly, **the operations on local features involved in LoCoOp and SCT are also relatively low-cost in terms of computation.** The local features are generated by the forward pass of the vision encoder of CLIP, which brings no additional computational cost compared to regular training. Regarding the extraction of OOD features, we compute the similarity between local features and the text features of all ID classes, and we identify regions that do not include their ground-truth class in the top-K predicted classes as ID-irrelevant regions. Empirically, we evaluate the time and memory consumption of SCT compared with other baselines **in Table 1, and the results show that SCT is relatively compute-efficient.** The evaluation is conducted on a single A100 GPU with a batch size of 32.
**Table 1.** Evaluation of computational cost of SCT and baselines.
| Method | Time per iteration (s) | GPU Memory (MiB) | FPR95 | AUROC | ID-ACC |
| --- | --- | --- | --- | --- | --- |
| CoOp | 0.70 | 21140 | 35.09 | 91.99 | 71.93 |
| LoCoOp | 0.96 | 23036 | 29.47 | 93.10 | 71.43 |
| SCT | 0.96 | 23036 | 26.47 | 93.37 | 71.77 |
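The top-K OOD-feature extraction described above can be sketched as follows (a hypothetical illustration; shapes, names, and the value of K are our assumptions, not the paper's):

```python
import numpy as np

def ood_region_mask(local_feats, text_feats, gt_class, top_k):
    """Flag local regions whose ground-truth class is absent from the top-K
    classes ranked by region-to-text cosine similarity (LoCoOp-style
    OOD feature extraction, as described in the reply above)."""
    local = local_feats / np.linalg.norm(local_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    sims = local @ txt.T                      # (num_regions, num_classes)
    topk = np.argsort(-sims, axis=-1)[:, :top_k]
    return ~(topk == gt_class).any(axis=-1)   # True = ID-irrelevant region
```

Since the region-to-text similarities are already produced by the regular forward pass, this masking adds only a sort over class scores per region.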
> **Q2:** My second concern is about the rationality of using pre-trained models (CLIPs .etc) to perform OOD detection tasks, because the concepts in both ID and OOD datasets are probably seen during pre-training stage. I want to know the authors' opinions towards the benchmarking and research paradigm.
>
**A2:**
Thanks for your valuable comments! The definition of VLM-based OOD detection differs significantly from that of conventional OOD detection. VLM-based OOD detection aims to **detect samples that do not belong to any ID class text designated by the downstream task [1].** Therefore, the current benchmarks, such as the large-scale ImageNet-1k benchmark, still apply to VLM-based OOD detection as long as they satisfy the rule that ID and OOD concepts don't overlap. **For the better development of this field, future works should be focused on building benchmarks based on realistic datasets and scenarios.**
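As background for the zero-shot VLM-based detection discussed above, here is a minimal MSP-style scoring sketch (function name and temperature are our assumptions, not from the paper):

```python
import numpy as np

def msp_id_score(image_feats, text_feats, temperature=0.01):
    """MSP-style score for a CLIP-like model: cosine similarity between
    normalized image and ID-class text features, softmax over ID classes,
    then the max probability. Low values suggest the sample matches
    no ID class text designated by the downstream task."""
    img = image_feats / np.linalg.norm(image_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    logits = (img @ txt.T) / temperature
    logits -= logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)
```

A sample whose image feature aligns with one ID class text scores high; one equally (dis)similar to all class texts scores low and is flagged as OOD.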
References:
[1] Miyai, Yang, et al. "Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey." 2024. | Rebuttal 1:
Rebuttal: # General Response
We appreciate all the reviewers for their thoughtful comments and suggestions on our paper.
We are very glad to see that the reviewers find the problem we focus on **important** (R1,R2,R3,R4) within OOD detection research, our method **simple but adaptable** (R1,R2,R4) to various other techniques, and the experiments **good**, **comprehensive**, and demonstrating the **general effectiveness** of our SCT (R2,R4). We are also pleased that the reviewers find our writing **very clear** and **easy to understand** (R2,R3,R4,R5).
We have tried our best to address the reviewers' comments and concerns in **individual responses to each reviewer** with comprehensive experimental justification. The reviews allowed us to improve our draft and **the contents added** in the revised version and **the attached PDF** are summarized below:
**From Reviewer wD2g**
- Clarify the novelty and insights of SCT (see Figures 1 and 2 in the original draft)
- Explain and compare the difference between SCT and hard sample mining (see Table 4 in the original draft)
**From Reviewer 34fm**
- Conduct evaluation on the computational cost of SCT
- Discuss the rationale for utilizing pre-trained models for OOD detection.
**From Reviewer fpY7**
- Conduct experiments for more accurate motivation verifications. (see Table 1 in PDF)
- Supplement illustrations of calibration gains of SCT. (see Figure 1 in PDF)
- Discuss and compare the difference between SCT and other weighted training methods. (see Table 2 in PDF)
- Conduct experiments on post-hoc CLIP calibration (see Table 3 in PDF)
- Explain the performance gain of SCT on AUROC.
- Conduct experiments on the CIFAR benchmarks. (see Table 4 in PDF)
- Conduct experiments on the compatibility of SCT and advanced zero-shot method. (see Table 5 in PDF)
- Provide more analysis on the ablation study results. (see Figure 4 in the original submission)
**From Reviewer pUkR**
- Conduct experiments on the CIFAR benchmarks. (see Table 4 in PDF)
- Explain the performance gain of SCT in combination with other baselines.
- Correct the inconsistencies of experiment data in the original submission.
**From Reviewer pBeU**
- Clarify the novelty and insights of SCT (see Figures 1 and 2 in the original draft)
- Explain the performance gain of SCT in combination with other baselines.
- Conduct experiments on the compatibility of SCT and advanced zero-shot method. (see Table 5 in PDF)
- Conduct more explorations on the function $\phi$ and $\psi$.
- Clarify some unclear statements in the original submission.
**We appreciate your comments and time!** We have tried our best to address your concerns and revised the paper following the suggestions. **Would you mind checking it and confirming if you have further questions?**
----
# Some rest answers:
## For reviewer fpY7:
> **W3:** Lack of discussion of weighted training; modulating function rationale
Thank you for recommending this work! We will clarify the difference between SCT and ISH as follows.
Conceptually, **the activation scale factor**, denoted as $Q/Q_p$ in the official paper, is derived as the quotient of the sum of all activations and the sum of un-pruned activations, which **has no direct mathematical correlation with the true-class probability of VLMs.** Specifically, VLMs like CLIP compute prediction probabilities based on the cosine similarity between image and text features. **The computation of cosine similarity involves feature normalization,** which naturally eliminates the effect of activation scale. **Furthermore, we compute the Pearson correlation coefficient** between the activation scale factors and $p(y|x)$ on the ImageNet-1k validation set using CLIP, and the result is -0.05 with a p-value of 0.0002, showing that these two variables **have no significant linear correlation**.
Technically, **ISH performs reweighting directly on the samples** based on their activation scale factor. Whereas, **SCT adaptively adjusts the importance between the two components** of the original learning objectives for every single sample. Data with high uncertainty are not directly down-weighted but are utilized more for OOD regularization in SCT.
Empirically, we conduct an experiment to make $\phi$ monotonically increasing with respect to $p(y|x)$ **in Table 2 in the attached PDF** and results show that the performance of monotonically increasing $\phi$ is much worse than monotonically decreasing $\phi$.
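The Pearson check mentioned in the reply can be reproduced with a few lines (the -0.05 value is the authors'; this sketch only shows the computation):

```python
import numpy as np

def pearson_r(x, y):
    """Plain Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))
```

In the reply, `x` would be the per-sample activation-scale factors and `y` the per-sample $p(y|x)$ over the ImageNet-1k validation set.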
> **W4:** Post-hoc CLIP calibration
Thanks for the valuable suggestions! We conduct experiments on the effect of different $K$ in the rank operation **in Table 3 in the attached PDF**. The results demonstrate that post-hoc CLIP calibration shows less effective performance than training-based methods. We will include more calibration related works in our revision.
Pdf: /pdf/300e2c73bf236d5d6c6679c35675bb7b6e85fd64.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper focuses open-set detection method based on CLIP. The authors propose an additional weighting mechanism based on the LoCoOp method to alleviate the problem that the outlier related regions extracted by the LoCoOp method are not trustworthy in some cases.
Strengths: Outlier detection with VLM is an interesting research direction.
Weaknesses: The contribution over LoCoOp is incremental. The only difference is an extra reweighting term based on the current prediction score, and the reweighting mechanism is purely heuristic - for example, $1-p(y|x)$ for $L_{ce}$ implicitly enforces hard sample mining.
Minor:
The intuition in Figure 1/4 is not clear to me. The shown examples validate that LoCoOp can detect and mask-out the inlier-related regions well. Also, the GT label should be annotated.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please clarify the novelty and new insights.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your time devoted to reviewing this paper and your constructive suggestions. Here are our detailed replies to your questions.
> **W1,Q1:** The contribution over LoCoOp is incremental. The only difference is an extra reweighting term based on the current prediction score. And the reweighting mechanism is purely based on heuristics - for example, $1-p(y|x)$ for $L_{ce}$ implicitly enforce hard sample mining. Please clarify the novelty and new insights.
Thanks for the valuable comments! We would like to **re-clarify the novelty and insights of our SCT** as follows.
Conceptually, the motivation of SCT is to **mitigate the problem of unreliable OOD features** in prompt-tuning based OOD detection methods. Generally, these methods rely on the ID-irrelevant local context extracted by VLMs as the surrogate OOD features to perform regularization, **the quality of which is greatly affected by the inaccurate foreground-background decomposition of VLMs**. As shown in Figure 1/4 in the submission, although VLMs can mask out some ID-related regions (shown as the grey patches of images), **large portions of the extracted OOD features (shown as the colored patches of images) obviously belong to ID features**.
Empirically, we find that the quality of extracted OOD features is significantly correlated with the uncertainty level of ID data. **As illustrated in the left panel of Figure 2** in the submission, the extracted OOD features become more inaccurate as the uncertainty increases. In the right panel of Figure 2, we train LoCoOp on multiple data groups with different uncertainty levels, and **the results demonstrate that the OOD detection performance of LoCoOp can be significantly impacted by the uncertainty level of ID data.** Therefore, to mitigate the issue of unreliable OOD features, we propose SCT to calibrate the influence of OOD regularization from different ID samples based on their uncertainty level.
Technically, despite its simple design, SCT is significantly different from hard sample mining. The latter **conducts reweighting directly on the samples** based on classification difficulty during training. The former **adaptively adjusts the importance between the two components of the original learning objective** for every single sample. Data with high uncertainty are directly down-weighted in hard sample mining, while they are utilized more for OOD regularization in SCT. **As shown in Table 4 in the submission**, under 16-shot ID data, the OOD detection performance of simply assigning $1-p(y|x)$ to $L_{ce}$ (denoted as $\phi$ $\checkmark$ and $\psi$ $\times$) is significantly inferior to SCT (denoted as $\phi$ $\checkmark$ and $\psi$ $\checkmark$), demonstrating the difference between SCT and hard sample mining.
| $\phi$ | $\psi$ | FPR95 | AUROC | ID-ACC |
| --- | --- | --- | --- | --- |
| $\times$ | $\times$ | 29.47 | 93.10 | 71.43 |
| $\checkmark$ | $\times$ | 29.30 | 92.66 | 71.50 |
| $\times$ | $\checkmark$ | 28.94 | 92.62 | 71.90 |
| $\checkmark$ | $\checkmark$ | 26.47 | 93.37 | 71.77 |
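A minimal sketch of the $\phi$/$\psi$-modulated objective and its ablation variants discussed above (a hypothetical illustration; the $\lambda$ value and function names are ours, not the paper's Equation (4)):

```python
import numpy as np

def sct_objective(p_true, ce_loss, ood_reg, lam=0.25, use_phi=True, use_psi=True):
    """Per-sample modulated objective: phi(p) = 1 - p(y|x) weights the
    classification term, psi(p) = p(y|x) weights the OOD-regularization
    term. Disabling both factors recovers the unweighted LoCoOp-style loss."""
    p_true = np.asarray(p_true, dtype=float)
    phi = (1.0 - p_true) if use_phi else np.ones_like(p_true)
    psi = p_true if use_psi else np.ones_like(p_true)
    return float(np.mean(phi * np.asarray(ce_loss) + lam * psi * np.asarray(ood_reg)))
```

The four rows of the ablation table correspond to the four `use_phi`/`use_psi` combinations: a confident sample (high $p(y|x)$) contributes mostly OOD regularization, while an uncertain one contributes mostly classification.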
> **W2:** The intuition in Figure 1/4 is not clear to me. The shown examples validate that LoCoOp can detect and mask-out the inlier-related regions well. Also, the GT label should be annotated.
>
Thanks for the suggestions! As shown in Figure 1/4, **although VLMs can mask out some ID-related regions (shown as the gray patches of images), large portions of the extracted OOD features (shown as the colored patches of images) obviously belong to ID features.** We will make the captions and figures clearer as suggested in our revised version.
---
Rebuttal Comment 1.1:
Title: [Invitation to rolling discussion] Need further clarification?
Comment: Thanks for your time and comments on our work. We have tried our best to address the concerns and provided detailed responses to all your comments and questions. Are there any unclear points that we should/could further clarify?
---
Rebuttal Comment 1.2:
Comment: Many thanks for the response. I carefully checked the author's response and revisited the relevant parts of the paper. I would like to first note that *I'm not very familiar with the relevant area and the evaluation standards of related works*; against this background, the proposed method is *slightly* insufficient for me in terms of technical novelty and empirical contribution. The author may either:
- empirically conduct more experiments to show solid improvement. As I noticed in the paper and the updated table here, the difference is not significant. If current benchmarks tend to saturate, the authors may switch to other, more challenging datasets.
- carry out some theoretical analysis. The current method is mainly built on heuristics; for example, the general uncertainty-based idea could also yield variants other than the one proposed - why and how would the proposed idea be optimal?
I sincerely hope this can help make the submission better.
---
Reply to Comment 1.2.1:
Title: Thanks for your response!
Comment: Many thanks for your feedback and we will consider your suggestions in the revision. | null | null | null | null | null | null |
Training Dynamics of Transformers to Recognize Word Co-occurrence via Gradient Flow Analysis | Accept (poster) | Summary: This paper investigates the training dynamics of a single-layer transformer followed by a single MLP layer on a synthetic binary classification task, where the objective is to identify the co-occurrence of two specific tokens in the input sequence. They analyze the gradient flow dynamics for the case that all the attention parameters (key, query, value) and the linear layer all trainable and show that the model can achieve low loss despite the non-convexity of the objective. They identify two phases in the training, 1) the MLP aligns with the two target tokens at the start of the training and the model learns to classify all the samples correctly, 2) the attention and MLP parameters update to increase the classification margin and drive the loss to zero. They also run a small scale numerical experiment in their synthetic setup tp confirm their analysis.
Strengths: The paper makes no restricting assumptions on the weights of the transformer model and performs the analysis on the joint optimization of all the parameters.
Although the paper and its proof are notation-heavy, the authors have broken down the complexity of the proof and notation in the main body to clarify the steps needed to prove the results.
Weaknesses: There are some restrictive assumptions on the synthetic data model: The vocabulary size $d$ is considered to be larger than the number of training tokens, which is not the case in realistic setups. Thus, some tokens are not visited at training time. Also, they assume that, apart from the two target tokens, the remaining tokens appear at most once in the training set.
The proof outline in the main body helps in understanding the high-level steps involved. However, it could still benefit from additional clarifications on some intermediate steps. For instance, in phases 1 and 2, it's mentioned how the alignment of the MLP with the target tokens $G^{(t)}(\mu_{1,2})$ behaves during training. However, it's not clear how this connects to the evolution of the attention scores in phase 2 as stated in Lemma 4.7.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As far as I understand, in your synthetic task, in the first phase of training, effectively only the MLP weights are learning the task. That is, the model can achieve 100% accuracy only by aligning the MLP weights with the relevant tokens $\mu_1,\mu_2$. So, the attention layer is not needed for identifying the co-occurrence of the tokens in this setup?
2. Can you also report validation and accuracy plots in your synthetic experiments? Does the validation loss decay at the same rate as the training loss as stated in Thm 3.3?
3. Regarding the proof sketch:
a) The alignment of parameters with the target tokens is discussed in the main body. Can you also clarify how the gradients related to irrelevant tokens evolve? In particular, regarding the tokens that do not appear in the training set (since $nL\leq d$), does the model learn not to attend to them at test time?
b) I find it confusing that the softmax output remains close to $1/L$ long into training (line 320) and assigns uniform attention to all tokens in the sequence. Does this statement hold for all training samples? If yes, then how does the model learn to attend to the target tokens?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer very much for your time and effort in providing helpful review comments.
**Comment:**
There are some restrictive assumptions on the synthetic data model: The vocabulary set $d$ is considered to be larger than the number of training tokens, which is not the case in realistic setups. Thus, some tokens are not visited at training time. Also, they assume, apart from the two target tokens, the remaining tokens appear at maximum once in the training set.
**Response:** Thanks for the question. In fact, we can relax both assumptions by letting the remaining tokens be uniformly randomly sampled from the vocabulary; our proof will still be valid. More specifically, our proof holds as long as the number of occurrences of each individual random token is much less than that of the signals. For example, if we uniformly sample the irrelevant tokens from the vocabulary, then in expectation each irrelevant token will appear only $\frac{nL}{d}$ times in the training set, which is much less than $n$ (the number of times each signal appears in the training set) if $d \gg L$.
**Comment:**
In phases 1 and 2, it's mentioned how the alignment of the MLP with the target tokens $G^{(t)}(\mu_{1,2})$ behaves during training. However, it's not clear how this connects to the evolution of the attention scores in phase 2 as stated in Lemma 4.7.
**Response:**
This can be seen from the update of the score in the gradient flow dynamical system which is in Appendix C. Take the score between $\mu_1$ and $\mu_2$ for example, rearranging the terms, we have
\begin{align*}
&\frac{\partial \mu_1^\top W_K^{(t) \top} W_Q^{(t)} \mu_2}{\partial t} \\\\
&= \frac{1}{n\sqrt{m}} \sum_{i=1}^n g_i^{(t)} y_i \sum_{l=1}^L \mu_1^\top W_K^{(t) \top} K^{(t,i)} \cdot \textnormal{diag}\left( G^{(t)}(X^{(i)}) - (G^{(t)}(X^{(i)}))^\top p_l^{(t,i)} \right) p_l^{(t,i)} x_l^{(i) \top} \mu_2 \\\\
&+ \frac{1}{n\sqrt{m}} \sum_{i=1}^n g_i^{(t)} y_i \sum_{l=1}^L \mu_2^\top W_Q^{(t) \top} q_l^{(t,i)} p_l^{(t,i) \top} \cdot \textnormal{diag}\left( G^{(t)}(X^{(i)}) - (G^{(t)}(X^{(i)}))^\top p_l^{(t,i)} \right) X^{(i) \top} \mu_1
\end{align*}
Since $G^{(t)}(\mu_1) = \Theta(1)$ after phase 1, we have $G^{(t)}(\mu_1) - (G^{(t)}(X^{(i)}))^\top p_l^{(t,i)} = \Theta(1)$. We can use this to show that $\frac{\partial \mu_1^\top W_K^{(t) \top} W_Q^{(t)} \mu_2}{\partial t}$ is positive in Lemma 4.7 and thus the score between $\mu_1$ and $\mu_2$ is increasing.
We will add these clarifications in our proof outline. We will also try to enrich our proof outline by adding clarifications on some intermediate steps.
**Comment:**
As far as I understand, in your synthetic task, in the first phase of training, effectively only the MLP weights are learning the task. That is, the model can achieve 100\% accuracy only by aligning the MLP weights with the relevant tokens $\mu_1,\mu_2$. So, the attention layer is not needed for identifying the co-occurrence of the tokens in this setup?
**Response:** Although the MLP can achieve full accuracy, the MLP layer alone after phase 1 does not achieve a good classification margin, i.e., the loss is not yet vanishing, and such a model can easily misclassify when the data are noisy. The important role that the attention layer plays in phase 2 is to enlarge the classification margin together with the MLP so that the loss approaches zero and the trained model generalizes well.
**Comment:**
Can you also report validation and accuracy plots in your synthetic experiments? Does the validation loss decay at the same rate as the training loss as stated in Thm 3.3?
**Response:** In the attached pdf file, Fig. 2(a) provides the accuracy plot, where both the training and test errors become zero after a certain number of training steps. Fig. 2(b) provides the validation (i.e., test) and training losses, and they decay at similar rates.
**Comment:**
The alignment of parameters with the target tokens is discussed in the main body. Can you also clarify how the gradients related to irrelevant tokens evolve? In particular, regarding the tokens that do not appear in the training set (since $nL\leq d$), does the model learn not to attend to them at test time?
**Response:** The alignment of the random tokens is much smaller than that of the signals, and it remains small and bounded throughout the training process since these tokens do not occur as often in the training set.
In our proof, we handle the random tokens all together, whether or not they occur in the training set.
Their effect can be upper bounded during training (Theorem E.15 for phase 1 and Appendix F.5 for phase 2).
So the answer to your question is yes -- the model does learn not to attend to the tokens not in the training set at test time.
**Comment:**
I find it confusing that the softmax output remains close to $1/L$ long in the training (line 320) and assigns uniform attention to all tokens in the sequence. Does this statement hold for all training samples? If yes, then how does the model learn to attend to the target tokens?
**Response:**
Sorry about the confusion. In line 320, we meant to explain how we prove convergence. In fact, there are two steps to showing convergence: (1) show that the training loss can decrease while the softmax outputs remain close to $1/L$, which is what we explain in the proof outline; (2) show that even with the deviation of the softmax output from $1/L$, the loss value will still decrease, which is not explained in the proof outline (we will add this part to the proof outline to avoid confusion). The outputs of the softmax attention indeed change, as stated in Lemma 4.7.
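[Editorial illustration, not part of the original rebuttal.] The near-uniform-softmax claim is easy to check numerically: when all attention scores are small, every coordinate of the softmax output is close to $1/L$. A minimal sketch with hypothetical score values:

```python
import math

def softmax(z):
    """Plain softmax over a list of scores."""
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

L = 8
scores = [0.01 * i for i in range(L)]        # small early-training attention scores
p = softmax(scores)
deviation = max(abs(pi - 1 / L) for pi in p)
print(deviation)                             # tiny: attention is near-uniform
```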
We thank the reviewer again for your comments. We hope that our responses resolved your concerns. If so, we wonder if the reviewer could kindly increase your score. Certainly, we are more than happy to answer your further questions.
---
Rebuttal Comment 1.1:
Title: A gentle reminder
Comment: Dear Reviewer cFaT,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors | Summary: This paper studies the training dynamics of a single hidden layer transformer network (self-attention + linear MLP) trained on a binary word cooccurrence task. Specifically, given a data matrix $X \in R^{d \times L}$ representing L "words" (each column of X is a word vector of dimension d), the model must output +1 if words 1 and 2 both occur in X, and -1 otherwise. The paper shows that a transformer layer is able to learn this task, and that the training occurs in two stages: First, the linear MLP layer learns to classify data points correctly by positively aligning with the embeddings for words 1 and 2 (but without making large changes to attention matrices). Second, it drives the loss down further by using the attention matrices to positively correlate q,k,v for words 1 and 2, and anti-correlate the q,k,v for a common word (denoted word "3" in the paper) relative to words 1,2. After these phases, both the training and generalization losses go to zero (as long as embedding dimension is large enough).
Overall, I found the results interesting and insightful, though not very surprising, and the practical implications of these results were not very clear to me. Thus, I currently recommend weak accept. Importantly, my primary research area is not learning theory, so my knowledge of the related work is relatively limited, and thus my review confidence is relatively low.
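[Editorial illustration, not part of the original review.] The labeling rule in this summary can be sketched in a few lines; this is a hypothetical reconstruction (one-hot columns with $d=64$, $L=8$ standing in for the word embeddings), not the paper's actual data-generation code:

```python
import numpy as np

def make_sample(d=64, L=8, p_pos=0.5, rng=None):
    """One synthetic sample: a d x L matrix X of one-hot word columns.

    Label is +1 iff target words 0 AND 1 (the paper's words "1" and "2")
    both occur in the sequence; otherwise -1.
    """
    rng = rng or np.random.default_rng()
    if rng.random() < p_pos:                       # positive: both targets present
        words = [0, 1] + list(rng.integers(2, d, size=L - 2))
    else:                                          # negative: at most one target
        lead = [int(rng.integers(0, 2))] if rng.random() < 0.5 else []
        words = lead + list(rng.integers(2, d, size=L - len(lead)))
    rng.shuffle(words)
    X = np.eye(d)[:, words]                        # columns are one-hot embeddings
    y = 1 if (0 in words and 1 in words) else -1
    return X, y

X, y = make_sample(rng=np.random.default_rng(0))
print(X.shape, y)
```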
Strengths: - It is interesting to see that the training dynamics for this word cooccurrence task can be analyzed rigorously, with relatively few assumptions.
- The theoretical results are validated with a nice synthetic experiments, that demonstrates that the two phases predicted by the theory do occur in practice.
Weaknesses: - This word cooccurrence task is very simple, and thus it is not surprising that a single transformer layer can easily learn it.
- Only full gradients are considered, whereas transformers are typically trained with mini-batch Adam(W).
Technical Quality: 3
Clarity: 3
Questions for Authors: - What other tasks (beyond word cooccurrence) do you think could be analyzed with this methodology?
- What are the implications of this result to more complex/realistic tasks, like next token prediction?
- If mini-batch Adam is used during training, do the two phases still occur?
- Can you add more details about the experimental setup to the main paper?
- Can you add more discussion about the automatic balancing of gradients, and its significance, in a more central part of the text (e.g., section 3, not 4)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitations have been discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer very much for your time and effort in providing helpful review comments.
**Comment:**
This word co-occurrence task is very simple, and thus it is not surprising that a single transformer layer can easily learn it.
**Response:**
We agree that this is a simple task from a representational perspective.
However, our primary goal here is to use this setting as an **analytically tractable** case and develop theoretical techniques to understand the *training dynamics* of transformers, which is a highly non-trivial task considering the complicated nature of self-attention and transformers.
**Comment:**
Only full gradients are considered, whereas transformers are typically trained with mini-batch Adam(W).
**Response:**
Regarding full gradients: since we are using gradient flow, it is more convenient to analyze the full gradients. We can indeed use gradient descent and relax the full-gradient assumption to stochastic gradients; however, this would create some unnecessary complications and would not change the essential nature of the two-phase training dynamics of transformers.
Regarding Adam or its variant, it is indeed an interesting direction. However, its analysis would be more complicated due to the adaptive momentum terms in the iterative update. The solution that transformers converge to can also have different implicit bias from SGD. Thus, we will leave this as future work.
**Comment:**
What other tasks (beyond word co-occurrence) do you think could be analyzed with this methodology?
**Response:**
Thank you for asking.
Our proof techniques open up a broad range of settings to analyze, such as NLP tasks with one-hot embeddings. For example, one popular way of modeling natural language is to model language sequences by Markov chains. Our techniques can be used to study the training dynamics when the transformer model is trained to predict the next token given the previous tokens in the sequence.
**Comment:**
What are the implications of this result to more complex/realistic tasks, like next token prediction?
**Response:** Our work could have some implications applicable to more complex tasks including next-token prediction. (i) The training can experience two or more training phases, where each phase captures one salient change of certain attention scores or MLP weights. (ii) The gradient of well-designed loss function can facilitate classification and enlarge the margin by driving attention scores between query and relevant key tokens to change properly during the training process.
**Comment:**
If mini-batch Adam is used during training, do the two phases still occur?
**Response:**
We expect the training phases of Adam to exhibit some differences from the gradient descent considered in our work. From a generic nonconvex optimization perspective, Xie \& Li (2024) showed that Adam can be viewed as normalized gradient descent in $\ell_\infty$ geometry, whereas gradient descent operates in $\ell_2$ geometry.
This is due to the coordinate-wise normalization operation occurring in Adam but not in GD. We expect that the loss of transformers trained by Adam will largely follow such an observation as well.
Xie, Shuo, and Zhiyuan Li. "Implicit Bias of AdamW: $\ell_\infty $-Norm Constrained Optimization." Forty-first International Conference on Machine Learning.
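[Editorial illustration, not part of the original rebuttal or the cited paper.] The $\ell_\infty$ view can be made concrete: with both momentum terms set to zero and vanishing $\epsilon$, Adam degenerates to sign gradient descent, whose step has the same magnitude in every nonzero coordinate regardless of gradient scale, unlike plain GD:

```python
def gd_step(grad, lr=0.1):
    """Plain gradient-descent step: scales with each coordinate's magnitude."""
    return [-lr * g for g in grad]

def sign_gd_step(grad, lr=0.1):
    """Adam with beta1 = beta2 = 0 and eps -> 0 reduces to sign-GD:
    every nonzero coordinate moves by exactly lr (unit l_inf step)."""
    return [-lr * (1 if g > 0 else -1 if g < 0 else 0) for g in grad]

grad = [10.0, 0.001]
print(gd_step(grad))       # dominated by the large coordinate (l_2 geometry)
print(sign_gd_step(grad))  # both coordinates move by lr (l_inf geometry)
```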
**Comment:**
Can you add more details about the experimental setup to the main paper?
**Response:** Thanks for the suggestion. We will make more space in the main paper and add the experimental setup.
**Comment:**
Can you add more discussion about the automatic balancing of gradients, and its significance, in a more central part of the text (e.g., section 3, not 4)?
**Response:**
Thank you for the suggestion. We will make the change as the reviewer suggested.
We thank the reviewer again for your comments. We hope that our responses resolved your concerns. If so, we wonder if the reviewer could kindly consider increasing your score. Certainly, we are more than happy to answer your further questions.
---
Rebuttal Comment 1.1:
Title: A gentle reminder
Comment: Dear Reviewer ritc,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thank you very much for your response. I keep my score unchanged, due to what I perceive to be the limited impact/scope of the work.
Regarding my question: "If mini-batch Adam is used during training, do the two phases still occur?" ---> Could you add an experiment to check this?
---
Reply to Comment 1.2.1:
Comment: Thank you very much for your feedback.
As we mentioned in our rebuttal, we expect the training phases of Adam will exhibit some differences from gradient descent considered in our work.
This is due to the coordinate-wise normalization operation uniquely occurring in Adam but not in GD.
Our experiments confirmed our thoughts.
We provide the experiment results on AdamW with default parameter setup below, where we show the changes in $G(\mu_1)$, which is the MLP alignment with the signal $\mu_1$, and the attention score when the query is $\mu_1$ and the key is $\mu_2$.
All the values are rounded to one decimal place.
As you can see from the experiment results below, both the MLP and the score change dramatically in the first 50 steps and then the changes slow down.
This behavior is thus very different from gradient descent, where only the MLP changes rapidly in the initial training and then both the MLP and the score jointly change substantially.
| Step | 0 | 50 | 100 | 150 | 200 | 250 |
| --- | --- | --- | --- | --- | --- | --- |
| $G(\mu_1)$ | 0.2 | 50.8 | 56.8 | 59.7 | 62.0 | 64.0 |
| Score $(\times 10^{-1})$ | -0.8 | 3.0 | 3.5 | 3.8 | 4.1 | 4.3 |
We further emphasize that the main contribution of this paper lies in the development of the **new techniques** for analyzing the training dynamics of transformers, especially the joint optimization analysis of all transformer parameters, as Reviewer cFaT pointed out. We expect that these mathematical techniques can be generalized to more complicated transformer architectures in the future.
We thank the reviewer for the time and efforts.
We hope that our response answers your question. | Summary: This article delves into the gradient flow dynamics for detecting word co-occurrence, demonstrating that the gradient flow approach can achieve minimal loss. The training process commences with random initialization and can be delineated into two distinct phases.
Strengths: - This article noticed an interesting phase transition during training in this special setting and demonstrates it with solid calculation and experiments.
- A new property of gradient flow is noticed, which, together with the analysis of the softmax, helps prove near-minimum training loss.
Weaknesses: The setting of empirical experiments is also simple and ideal and readers may have no idea if this is a general phenomenon during training for detecting word co-occurrence.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In line 151 and 152, it is confusing why concentration theorems lead to the specific probability in (i) of Assumption 2.3.
- Lack of explanation for $\langle w_{j_1}^{(t)},w_{j_2}^{(t)} \rangle$ in line 194.
- It is not obvious why "the samples with only one target signal may be classified incorrectly as co-occurrence" in line 282.
- The notation in line 169 is somewhat misleading.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The semantic mechanism this article focuses on is not general enough in NLP, and the results cannot provide guidance useful enough to direct the training process of Transformers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer very much for your time and effort in providing helpful review comments.
**Comment:**
The setting of empirical experiments is also simple and ideal and readers may have no idea if this is a general phenomenon during training for detecting word co-occurrence.
**Response:** We clarify that the experiment presented in the paper is used to guide and validate our theoretical analysis, and hence it is designed to match our theoretical setting. Our goal here is to understand the training dynamics of transformers in an **analytically tractable** setting so that it can help us understand the mechanism behind self-attention and transformers.
Regarding the reviewer's question on the generality of the phenomenon, we attached a pdf file that contains a preliminary experiment that we worked out. Figure 1 shows that for more general settings with multi-layer transformers, our main theoretical findings still hold: (i) the loss converges to the global minimum (i.e., with zero value) despite being highly nonconvex; (ii) the training process achieves correct detection with zero classification error as seen in phase 1 of our characterization, and the loss value continues to decrease to zero due to enlargement of the classification margin as seen in phase 2 of our characterization. We will continue to work on more experiments.
We also point out that in multi-layer transformers, it is hard to interpret the meaning of the softmax attention in the middle layers since, unlike in the one-layer case, the inputs to the middle layers change from iteration to iteration. Thus, it is hard to find a fixed subspace to project onto and study its changes.
**Comment:**
In line 151 and 152, it is confusing why concentration theorems lead to the specific probability in (i) of Assumption 2.3.
**Response:** Thanks for the question. Such an assumption helps simplify our notation throughout the paper so that our main results can take a simpler form and it is easier for readers to digest the main insight of the results.
Those specific ratios can be satisfied with high probability if the size of the training set $n$ is large enough. This can be achieved by applying Hoeffding's inequality as follows. Take $I_1$ as an example. We have $\mathbb{P}\Big[|\frac{1}{n} \sum_{i=1}^n \mathbb{I}(X^{(i)} \in I_1) - \frac{1}{2}| \geq \sqrt{\frac{\log(2/\delta)}{n}}\Big] \leq \delta$.
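[Editorial illustration, not part of the original rebuttal.] The Hoeffding argument above is easy to check numerically; a minimal sketch comparing the empirical deviation of the fraction of samples in $I_1$ from $1/2$ against the stated bound:

```python
import math
import random

def empirical_fraction(n, p=0.5, seed=0):
    """Fraction of n i.i.d. Bernoulli(p) samples landing in the event."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

n, delta = 100_000, 0.01
bound = math.sqrt(math.log(2 / delta) / n)   # Hoeffding deviation bound
frac = empirical_fraction(n)
print(abs(frac - 0.5), "<=", bound)          # holds with prob. >= 1 - delta
```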
**Comment:**
Lack of explanation for $\langle w_{j_1}^{(t)},w_{j_2}^{(t)} \rangle$ in line 194.
**Response:** Thanks!
This is the inner product of the neuron weights between $j_1$-th and $j_2$-th neuron.
This term appears in calculating the update of $\nu^\top W_V^{(t)\top} W_V^{(t)} \mu$ and is needed to make the dynamical system complete. We will add this explanation to the revision.
**Comment:**
It is not obvious why "the samples with only one target signal may be classified incorrectly as co-occurrence" in line 282.
**Response:**
This is because during the beginning of training, the linear MLP layer will positively align with $\mu_1, \mu_2$, i.e., $G^{(t)}(\mu_1), G^{(t)}(\mu_2) > 0$, while for the common token $\mu_3$, we have $G^{(t)}(\mu_3) \approx 0$.
At the same time, the linear MLP will also output something near zero for random tokens.
Thus, for those samples $X$ with only $\mu_1$ or $\mu_2$ (i.e., $X \in I_2 \cup I_3$), we have $f^{(t)}(X) > 0$, which is incorrect as the samples in $I_2$ and $I_3$ have label $-1$.
**Comment:**
The notation in line 169 is somewhat misleading.
**Response:**
$p_{q\leftarrow \nu, k \leftarrow \mu}^{(i)}$ is the output of the softmax attention given input $X^{(i)}$ when the query is $\nu$ and key is $\mu$.
In addition, $l(i,\mu)$ denotes the index in $X^{(i)}$ such that $X^{(i)}_{l(i,\mu)} = \mu$. We will add this explanation to the revision.
We thank the reviewer again for your comments. We hope that our responses resolved your concerns. If so, we wonder if the reviewer could kindly consider increasing your score. Certainly, we are more than happy to answer your further questions.
---
Rebuttal Comment 1.1:
Title: A gentle reminder
Comment: Dear Reviewer G3j5,
We've taken your initial feedback into careful consideration in our response. Could you please check whether our responses have properly addressed your concerns? If so, could you please kindly consider increasing your initial score accordingly? Certainly, we are more than happy to answer your further questions.
Thank you for your time and effort in reviewing our work!
Best Regards,
Authors
---
Rebuttal 2:
Comment: Thank you for your clarifications. Specifically, I have noticed the new experiment. However, these results remain preliminary and do not verify that the training dynamics of weights in this work can be generalized to broader cases. For example, the dynamics "first learning MLP and then learning Attention" cannot be adequately captured by the loss or accuracy presented.
I will maintain my score.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer very much for the highly inspiring question.
To clarify the reviewer's concern, we note that the attention quantities that we study in the one-layer case are the projections of attention onto the input signals. In multi-layer transformers, the inputs to the middle layers change from iteration to iteration, and hence it is hard to find a fixed subspace to project onto. Consequently, it is hard to find a natural and meaningful projection of the softmax attention in the middle layers to illustrate the phase change. Similarly, the inputs to the middle MLP layers also change, which makes it hard to find a fixed space for projection.
This said, we still expect that the phase-wise learning (first MLP, then attention) can occur for a broader class of architectures. One feasible setting to demonstrate this is a one-layer **multi-headed** transformer, which generalizes our original architecture and still maintains a meaningful projection of the softmax attention onto the fixed input space. We conducted additional experiments for a one-layer 3-headed transformer. In the table below, we provide the values $G(\mu_1)$ of the MLP alignment with $\mu_1$ and the attention scores when the query is $\mu_1$ and the key is $\mu_2$. The experiment results are all rounded to one decimal place for better illustration. Take the first head as an example: the result clearly indicates a two-phase training process: (i) in phase 1 (during the first 250 steps), the MLP layer $G(\mu_1)$ grows quickly whereas the change in the attention score is very small compared to its later changes; and (ii) in phase 2 (after 250 steps), both the MLP and the attention score change substantially.
| | Step | 0 | 250 | 500 | 750 | 1000 | 1250 | 1500 | 1750 |
|---|---|---|---|---|---|---|---|---|---|
| Head No.1 | score ($\times 10^{-2}$) |7.7 | 7.8 | 8.2 | 8.6 | 9.0 | 9.3 | 9.5 | 9.8 |
| | $G(\mu_1)$ | 0.8 | 4.8 | 8.0 | 10.7 | 13.0 | 14.8 |16.4 | 17.6 |
| Head No.2 | score ($\times 10^{-2}$) |2.1 | 2.2 | 2.6 | 3.0 | 3.3 | 3.6 | 3.9 | 4.1 |
| | $G(\mu_1)$ | -0.1 | 2.5 | 4.6 | 6.6 | 8.3 | 9.7 |10.9 | 11.9 |
| Head No.3 | score ($\times 10^{-2}$) |-12.4 | -12.2 | -11.7 | -11.1 | -10.6 | -10.2 | -9.9 | -9.6 |
| | $G(\mu_1)$ | 0.7 | 4.6 | 7.6 | 10.2 | 12.4 | 14.2 | 15.6 | 16.8 |
We further emphasize that the main contribution of this paper lies in the development of the **new techniques** for analyzing the training dynamics of transformers, especially the joint optimization analysis of all transformer parameters, as Reviewer cFaT pointed out. We expect that these mathematical techniques can be generalized to more complicated transformer architectures in the future.
We thank the reviewer again for your time and efforts. If our response resolved your concerns, could you please kindly consider increasing your score? We are also more than happy to answer your further questions.
---
Reply to Comment 2.1.1:
Title: A gentle reminder
Comment: Dear Reviewer G3j5,
It occurred to us that we uploaded our response earlier and OpenReview didn't send out a notification.
This message serves as a gentle reminder for the reviewer to look at our most recent response.
We once again appreciate the reviewer's time and effort.
If our response resolved your concerns, could you please kindly consider increasing your current score accordingly?
We are happy to hear your response.
Best,
Authors | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
Please find all the mentioned additional experiment results in rebuttals (to Reviewer G3j5 and Reviewer cFaT) in the attached pdf.
Thank you.
Best,
Authors
Pdf: /pdf/f47595ec6729d6dec536104ee266e733d15692e3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unified Generative and Discriminative Training for Multi-modal Large Language Models | Accept (poster) | Summary: This paper proposes a novel learning paradigm to learn MLLMs based on interleaved image-text corpora. It introduces a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM's hidden state. This work applies the dynamic time warping framework to calculate the semantic similarity between different image-text sequences. Then, a discriminative loss is applied to sequence similarity matrices calculated based on raw inputs and MLLM hidden states. The framework can also leverage the capabilities of multiple vision and language encoders to more accurately calculate the similarity matrices. Experiment results show that the new learning paradigm demonstrates good performance on basic multimodal comprehension benchmarks, the complicated multimodal comprehension benchmark DEMON, cross-modal information retrieval, and retrieval-augmented generation.
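[Editorial illustration, not part of the original review.] Dynamic time warping, mentioned in the summary, aligns two variable-length sequences via the classic cumulative-cost recursion. A minimal scalar-sequence sketch (the paper applies the same idea to image-text embedding sequences with a semantic distance, which is not reproduced here):

```python
import math

def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic-time-warping distance via the cumulative-cost recursion."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0: warping absorbs the repeated 2
```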
Strengths: 1. This paper is well-written and easy to follow.
2. This paper proposes a novel learning paradigm based on interleaved image-text corpora.
Weaknesses: 1. This paper did not discuss the impact of including interleaved image-text pairs in MLLM learning. For example, how will it affect the performance on basic visual-language benchmarks (Table 1) and image-text retrieval? Will there be any negative effects?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can Sugar better leverage multi-modal in-context examples or better understand interleaved image-text content? Is there any evaluation of that?
2. What exactly is the amount of interleaved image-text sequences (from MMC4) and image-text pairs (from other datasets) used to train Sugar?
3. What is the context window size of Sugar?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your comprehensive comments and constructive advice. We will explain your concern as follows.
> **Q1:** This paper did not discuss the impact of including interleaved image-text pairs in MLLM learning. For example, how will it affect the performance on basic visual-language benchmarks (Table 1) and image-text retrieval? Will there be any negative effects?
**A1:** Thank you for your insightful question. We conducted additional ablation experiments (**Table F of the Rebuttal PDF**) by gradually increasing the MMC4 sampling ratio in basic visual-language benchmarks (Table 1) and image-text retrieval. The results demonstrated the following effects of interleaved image-text pairs on MLLM learning.
**(1.1)** MMC4 does not directly contribute to performance improvement on basic visual-language benchmarks. This is because its long sequences undermine the model's interleaved ability (e.g., VQA dropped from 79.9 to 60.5 when the MMC4 ratio increased). Additionally, without Sugar, MMC4 is unable to assist in image-text retrieval.
**(1.2)** (a) Delightfully, using the synergistic framework in Sugar can significantly enhance the interleaved-text capability while maximally retaining the basic visual-language capabilities as the MMC4 data increase. Moreover, it facilitates certain tasks that require distinguishing between multiple images or texts, such as POPE (85.5 to 86.4) and multi-image VQA (from 38.4 to 39.5). (b) Only Sugar can simultaneously achieve image-text retrieval. With the increase in MMC4 data, it can significantly improve its performance on complex retrieval tasks (increasing from 14.4 to 23.2) while maintaining competitive performance on general retrieval tasks.
*Please refer to Table F of the Rebuttal PDF for the results; the table below is the same as in the PDF.*
> **Q2:** Can Sugar better leverage the multi-modal in-context examples or better understand interleaved image-text content, is there any evaluation for that?
**A2:** Thank you for your constructive suggestions. Following your recommendations, we have conducted additional experiments on 5 benchmarks across 3 task types to further evaluate the effectiveness of our method in **Table E**.
To effectively answer **multi-image QA**, the model must clearly distinguish between different images and understand their common global information. In **in-context interaction**, the model must fully comprehend the preceding image-text dialogue to provide reasonable responses. For **visual prompt comprehension**, the model needs to meticulously identify visual cues in the images to answer questions. Our model achieved the best results across all three task types, especially in VIST-SIS, which involves 5 rounds of interleaved image-text dialogue.
*Table E Comparison with baseline in 5 challenging tasks*
| | **Multi-image VQA** | | **In-context Interaction** | | **Visual Prompt** |
| ------------ | ------------------- | -------- | -------------------------- | -------- | ----------------- |
| | SEED | Mantis | Visual Dialog | VIST-SIS | BLINK |
| LLaVA-1.5-7B | 58.6 | 31.3 | 53.7 | 14.1 | 37.1 |
| VILA-7B | 61.1 | 38.4 | 58.5 | 17.4 | 39.2 |
| Sugar | **63.6** | **41.0** | **62.5** | **28.2** | **42.2** |
We also present the results in Table E of the Rebuttal PDF.
> **Q3:** What is exactly the amount of interleaved image-text sequences (from MMC4) and image-text pairs (from other datasets) used to train Sugar.
**A3:** Thank you for your question. We apologize if our manuscript gave the impression that the training recipe for Sugar was unclear. The specific data usage and comparisons are detailed in **Table A**. We did **not** use any additional data beyond what the baselines used.
We utilized a **limited dataset** in comparison to VILA and LLaVA-1.5. Nonetheless, we achieved competitive outcomes and acquired additional retrieval and retrieval-augmented capabilities. By employing an equivalent data volume and integrating the 300K SFT data used by VILA, we observed improvements in 12 multi-modal comprehension capabilities (**Table B,C**), showcasing the model's scalability.
> **Q4:** What is the context window size of Sugar?
**A4:** Thank you for your insightful query! Due to the context length limitation of Vicuna-7B, which is **4096** tokens, Sugar cannot handle very long multimedia documents. However, we believe that the mutual enhancement of generative and discriminative capabilities demonstrated by Sugar, as well as the exploration of integrating retrieval and generation within a single model, are beneficial. Your question is valuable and will guide our future improvements and research directions.
Thank you once again for your recognition and constructive suggestions, which have been instrumental in enhancing the quality of our research!
---
Rebuttal 2:
Title: Table B
Comment: Table B: Comparison with baseline on 11 visual-language benchmark with equitable data volume.
| | VQA | GQA | VizWiz | SQAI | VQAT | POPE | MMEP | MMEC | MMB | LLaVAWd | MM-Vet |
| ------------------- | --------- | ---------- | -------- | -------- | -------- | -------- | ---------- | --------- | -------- | -------- | -------- |
| LLaVA-1.5-7B | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | – | 64.3 | 49.0 | 30.5 |
| VILA-7B | 79.9 | 62.3 | 57.8 | 68.2 | 64.4 | 85.5 | 1533.0 | 296.1 | 68.9 | 70.0 | 34.9 |
| Sugar | 76.0 | 58.7 | 60.4 | 69.4 | 57.5 | 86.6 | 1550.8 | 300.0 | 64.9 | 75.6 | 31.3 |
| with equitable data volume | **80.2** | **63.1** | **61.0** | **72.1** | **65.1** | **87.2** | **1550.0** | **309.0** | **69.3** | **76.4** | **36.8** |
*We also present the results in Table B of the Rebuttal PDF.*
---
Rebuttal 3:
Title: Table E
Comment: Table E: Comparison with baseline in 5 challenging tasks.
| | **Multi-image VQA** | | **In-context Interaction** | | **Visual Prompt** |
| ------------------- | ------------------- | -------- | -------------------------- | -------- | ------------------ |
| | SEED | Mantis | Visual Dialog | VIST-SIS | BLINK |
| LLaVA-1.5-7B | 58.6 | 31.3 | 53.7 | 14.1 | 37.1 |
| VILA-7B | 61.1 | 38.4 | 58.5 | 17.4 | 39.2 |
| Sugar | **63.6** | **41.0** | **62.5** | **28.2** | **42.2** |
*We also present the results in Table E of the Rebuttal PDF.*
---
Rebuttal 4:
Title: Table F
Comment: Table F: Ablation study on the effects of MMC4. We evaluated basic visual-language benchmarks, including VQA and GQA. For image-text retrieval, we reported MSCOCO R@5 for both text-to-image(t2i) and image-to-text (i2t). Additionally, we included Mantis for multi-image VQA and VIST R@5 for complex retrieval tasks.
| | | basic visual-language benchmarks | | | | | | | image-text retrieval | | multi-image VQA | complex retrieval |
| --------------------------- | ----------------- | -------------------------------- | ---- | ------ | ---- | ---- | ---- | ---- | -------------------- | -------------- | --------------- | ----------------- |
| | MMC4 sample ratio | VQA | GQA | VizWiz | SQAI | VQAT | POPE | MMB | t2i | i2t | Mantis | VIST |
| VILA-7B | | 79.9 | 62.3 | 57.8 | 68.2 | 64.4 | 85.5 | 68.9 | / | / | 38.4 | / |
| + directly fine-tune | 25% | 72.3 | 59.9 | 53.5 | 64.1 | 58.7 | 84.8 | 63.4 | / | / | 38.5 | / |
| | 50% | 67.8 | 54.7 | 49.1 | 58.9 | 51.8 | 82.8 | 59.7 | / | / | 38.7 | / |
| | 75% | 60.5 | 49.1 | 39.3 | 43.4 | 43.2 | 81.7 | 53.2 | / | / | 38.2 | / |
| + fine-tune with our method | 25% | 78.7 | 60.2 | 60.6 | 70.1 | 62.1 | 85.9 | 66.1 | 40.3 | 36.2 | 38.7 | 14.4 |
| | 50% | 75.9 | 57.9 | 59.2 | 65.9 | 57.0 | 86.4 | 62.3 | 40.7 | 36.3 | 39.0 | 19.1 |
| | 75% | 73.7 | 55.1 | 57.1 | 62.3 | 51.7 | 85.7 | 59.7 | 40.2 | 36.0 | 39.5 | 23.2 |
*We also present the results in Table F of the Rebuttal PDF.*
---
Rebuttal 5:
Title: Looking Forward to Your Reply
Comment: Dear Reviewer DDMF,
Thank you for the time and effort you have dedicated to reviewing our submission. We hope we have addressed the concerns raised in your initial reviews and eagerly await your thoughts and further guidance to refine our work. As the author-reviewer discussion period for NeurIPS 2024 will be over soon, please let us know if you require any additional information or clarification from our end. We are open to engage in further discussions to enhance our submission. Thank you!
---
Rebuttal Comment 5.1:
Comment: My concerns are well resolved in the rebuttal, and I would like to raise my rating to 6.
---
Reply to Comment 5.1.1:
Comment: Thank you for your support of our work. Your valuable feedback has made our work better! | Summary: The paper addresses the limitations of Vision-Language Models (VLMs) by proposing a unified approach that combines generative and discriminative training paradigms. This new method leverages interleaved image-text sequences and introduces a structure-induced training strategy. It aims to enhance the MLLM's ability to capture global semantics and fine-grained details, effectively balancing generative and discriminative tasks. The approach uses dynamic sequence alignment within the Dynamic Time Warping framework and integrates a novel kernel for fine-grained semantic differentiation. Extensive experiments demonstrate that this method achieves state-of-the-art results in various generative and discriminative tasks.
Strengths: - The paper introduces a novel method that successfully integrates generative and discriminative training paradigms, addressing the weaknesses inherent in each when used independently.
- The authors clearly articulate the challenges faced by existing VLMs and provide a well-defined solution.
Weaknesses: - While the paper shows impressive results, there is limited discussion on the potential limitations and areas where the model might underperform.
- The paper primarily focuses on specific benchmarks. It would be beneficial to discuss how well the proposed method generalizes to other types of vision-language tasks not covered in the experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide more detailed ablation studies to understand the contribution of each component of the proposed method, such as the dynamic sequence alignment and the GAK?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Considering the connection with real-world scenarios is necessary. The authors need to discuss: what requirements might the tasks introduced in the paper have in real-world scenarios?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the valuable comments and we will explain your concern as follows.
> **Q1:** While the paper shows impressive results, there is limited discussion on the potential limitations and areas where the model might underperform.
**A1**: Thank you very much for your valuable question! Our potential limitations are as follows:
**(1.1)** Firstly, the context length limitation of Vicuna, capped at 4096 tokens, poses a significant challenge. This restricts Sugar's ability to handle very long multimedia documents.
**(1.2)** Moreover, real-world applications often present varied and dynamic data, which might not align perfectly with the model's training data. This discrepancy could lead to suboptimal performance in diverse and unforeseen contexts.
**(1.3)** Another limitation of our model lies in its reliance on the existing Vicuna and CLIP encoders, which are suboptimal and somewhat limit comprehension ability *[Karamcheti et al., 2024]*; for example, some hallucinations occur.
*[Karamcheti et al., 2024]* Prismatic VLMS: Investigating the Design Space of Visually-Conditioned Language Models. ICML, 2024.
> **Q2:** The paper primarily focuses on specific benchmarks. It would be beneficial to discuss how well the proposed method generalizes to other types of vision-language tasks not covered in the experiments.
**A2:** Thank you for your suggestion. In the original manuscript, we primarily used the single-image QA benchmarks from LLaVA-1.5 to evaluate our model's multi-modal comprehension capabilities. To provide a more comprehensive evaluation of our model, we have supplemented our method with 5 benchmarks in **Table E of the Rebuttal PDF**.
To answer **multi-image QA**, the model must clearly distinguish between different images and understand their common global information. In **in-context interaction**, the model must fully comprehend the preceding image-text dialogue to provide reasonable responses (Visual Dialog and VIST-SIS involve 10 and 5 rounds of dialogue, respectively). For **visual prompt comprehension**, the model must meticulously identify visual cues in the images to answer questions. Our model achieved the best results across all three task types.
> **Q3:** Can you provide more detailed ablation studies to understand the contribution of each component of the proposed method, such as the dynamic sequence alignment and the GAK?
**A3:** To test the effectiveness of our method, we introduced two variants of GAK: GMin and GMax, and examined the impact of the hyperparameter $\delta$. To avoid interference from additional data, we explored the impact of different parameters without using any extra task-specific data. We tested two variants of the kernel function:
GMin kernel: $K_{GMin}(\mathbf{x}, \mathbf{y}) = \min_{\pi \in \mathcal{A}(\mathbf{x}, \mathbf{y})} \prod_{i=1}^{|\pi|} e^{-\phi_\sigma} \in (0, 1]$,
GMax kernel: $K_{GMax}(\mathbf{x}, \mathbf{y}) = \max_{\pi \in \mathcal{A}(\mathbf{x}, \mathbf{y})} \prod_{i=1}^{|\pi|} e^{-\phi_\sigma} \in (0, 1]$.
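To make the two kernel variants above concrete, here is a minimal, self-contained sketch (not the authors' implementation). It assumes a simple squared-distance cost for $\phi_\sigma$, which the rebuttal does not fully specify, and exploits the fact that a product of $e^{-\phi}$ terms along an alignment path $\pi$ equals $e^{-\sum \phi}$, so the max (resp. min) of the product corresponds to the min (resp. max) total path cost:

```python
import numpy as np

def alignment_kernel(x, y, sigma=1.0, mode="max"):
    """Sketch of a GMax/GMin-style alignment kernel over two 1-D sequences.

    mode="max": kernel of the best alignment path (min total cost).
    mode="min": kernel of the worst alignment path (max total cost).
    Returns a value in (0, 1]; identical sequences give 1.0 under "max".
    """
    n, m = len(x), len(y)
    # Maximizing the product e^{-sum(phi)} means minimizing the summed cost.
    best = min if mode == "max" else max
    D = np.full((n + 1, m + 1), np.inf if mode == "max" else -np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Assumed Gaussian-style per-step cost phi_sigma (an assumption,
            # not the paper's exact definition).
            phi = (x[i - 1] - y[j - 1]) ** 2 / (2 * sigma ** 2)
            D[i, j] = phi + best(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(np.exp(-D[n, m]))
```

Under this sketch, `alignment_kernel(s, s, mode="max")` is 1.0 for any sequence `s`, while the GMin variant of the same pair is strictly smaller whenever any alignment path incurs nonzero cost.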
Based on the experimental results in **Table G**, we found that:
1. GAK is relatively stable and not very sensitive to the hyperparameter $\delta$.
2. GAK generally performs better compared to GMin and GMax.
3. Tasks requiring global information, such as VIST and KGQA, and those needing detailed semantics, like Winoground, are the most sensitive to changes in the method.
> **Q4:** Considering the connection with real-world scenarios is necessary. The authors need to discuss: what requirements might the tasks introduced in the paper have in real-world scenarios?
**A4:** Thank you for your insightful question. Below, we outline two valuable real-world applications:
**(4.1) Complex information retrieval.** Real-world information retrieval requires models to search through **interleaved multi-modal sequences** , as articles, websites, and books are often composed of extensive interwoven image and text sequences. Current retrievers like CLIP and Sentence BERT are designed to handle single images or texts and struggle with complex interleaved retrieval tasks. In Section 4.4, we tested complex retrieval scenarios and achieved promising results. Adapting to more complex scenarios is an important direction for future research.
**(4.2) Complex comprehension tasks**, such as the 5 new tasks mentioned in Q2, which respectively require capturing global semantics and detailed semantic differentiation.
**(4.3) Knowledge retrieval for answering questions.** In reality, many questions require external knowledge for accurate answers. For tasks such as FVQA and WebQA, traditional methods first require the retriever to find relevant knowledge in a knowledge base, and then the generator to formulate the answer. The final performance, therefore, depends on both the retriever and generator models, highlighting **compatibility issues** and **sub-optimal issues** between them. Based on the **Table D**, we can observe the following:
**(4.3.1)** MLLM can answer a small portion of FVQA questions using internal knowledge, but it still requires some knowledge from a retriever for accurate responses.
**(4.3.2)** The impact of retrieval strategies on the results is inconsistent. For instance, text-based retrieval strategies often outperform image-based ones in FVQA, whereas in WebQA, image-based retrieval is more effective.
**(4.3.3)** There are also compatibility issues between retriever and models. For example, in WebQA, VILA is more sensitive to retrieval strategy, exhibiting fluctuations 3.4 times greater than those of LLaVA-1.5.
**(4.3.4)** Our integrated retriever and generator model does not require additional retrievers, thereby avoiding the optimization and selection issues mentioned above.
We hope this clarifies your concerns. We are committed to thoroughly incorporating your suggestions in the next version of the paper. Thank you once again for your excellent feedback.
---
Rebuttal 2:
Title: Table B
Comment: Table B: Comparison with baseline on 11 visual-language benchmark with equitable data volume.
| | VQA | GQA | VizWiz | SQAI | VQAT | POPE | MMEP | MMEC | MMB | LLaVAWd | MM-Vet |
| ------------------- | --------- | ---------- | -------- | -------- | -------- | -------- | ---------- | --------- | -------- | -------- | -------- |
| LLaVA-1.5-7B | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | – | 64.3 | 49.0 | 30.5 |
| VILA-7B | 79.9 | 62.3 | 57.8 | 68.2 | 64.4 | 85.5 | 1533.0 | 296.1 | 68.9 | 70.0 | 34.9 |
| Sugar | 76.0 | 58.7 | 60.4 | 69.4 | 57.5 | 86.6 | 1550.8 | 300.0 | 64.9 | 75.6 | 31.3 |
| with equitable data volume | **80.2** | **63.1** | **61.0** | **72.1** | **65.1** | **87.2** | **1550.0** | **309.0** | **69.3** | **76.4** | **36.8** |
*We also present the results in Table B of the Rebuttal PDF.*
---
Rebuttal 3:
Title: Table C
Comment: Table C: DEMON benchmark with equitable data volume.
| | MMD | VST | VRI | MMC | KGQA | TRQA | MMR |
| -------------------------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| LLaVA-1.5-7B | 37.5 | 25.2 | 25.9 | 22.2 | 48.6 | 44.9 | 50.3 |
| VILA-7B | 47.8 | 25.8 | 13.2 | 17.2 | 60.1 | 42.1 | 50.5 |
| Sugar | 51.8 | 34.3 | 32.3 | 16.8 | 64.4 | 65.9 | 51.7 |
| with equitable data volume | **54.1** | **34.7** | **34.2** | **23.1** | **65.0** | **68.1** | **53.2** |
*We also present the results in Table C of the Rebuttal PDF.*
---
Rebuttal 4:
Title: Table D
Comment: Table D: Knowledge-based VQA.
| | FVQA | WebQA |
| ----------------------------- | -------- | -------- |
| LLaVA-1.5-7B | 5.9 | / |
| LLaVA-1.5-7B + CLIP image | 6.8 | 81.8 |
| LLaVA-1.5-7B + CLIP text | 7.1 | 79.2 |
| LLaVA-1.5-7B + CLIP (average) | 7.9 | / |
| VILA-7B | 6.4 | / |
| VILA-7B + CLIP image | 9.0 | 80.0 |
| VILA-7B + CLIP text | 10.2 | 71.2 |
| VILA-7B + CLIP (average) | 11.0 | / |
| **Sugar** | **6.5** | / |
| **Sugar + rag** | **20.7** | **88.7** |
*We also present the results in Table D of the Rebuttal PDF.*
---
Rebuttal 5:
Title: Table E
Comment: Table E: Comparison with baseline in 5 challenging tasks.
| | **Multi-image VQA** | | **In-context Interaction** | | **Visual Prompt** |
| ------------------- | ------------------- | -------- | -------------------------- | -------- | ------------------ |
| | SEED | Mantis | Visual Dialog | VIST-SIS | BLINK |
| LLaVA-1.5-7B | 58.6 | 31.3 | 53.7 | 14.1 | 37.1 |
| VILA-7B | 61.1 | 38.4 | 58.5 | 17.4 | 39.2 |
| Sugar | **63.6** | **41.0** | **62.5** | **28.2** | **42.2** |
*We also present the results in Table E of the Rebuttal PDF.*
---
Rebuttal 6:
Title: Awaiting Your Feedback
Comment: Dear Reviewer Lcdg,
Thank you once again for reviewing our submission. We are deeply grateful for your recognition of the value of our work, particularly our proposal of the unified approach that combines generative and discriminative training paradigms to address the challenges of global semantic understanding and detailed semantic capture in existing VLMs. In response to your thoughtful suggestions regarding performance across various vision-language task types and real-world applications, we have conducted additional experiments on 5 more challenging benchmarks, as well as a knowledge-based VQA task of practical significance. These efforts further demonstrate the effectiveness and practical value of our method.
Over the past few days, we have carefully addressed the concerns of other reviewers. They have kindly raised their scores and agreed that our approach not only addresses the limitations of VLMs and enhances comprehension capabilities, but also offers emergent insights into complex retrieval tasks and the unified nature of generative and retrieval abilities, which could inspire future work.
As the author-reviewer discussion period for NeurIPS 2024 is drawing to a close, please feel free to let us know if you have any remaining concerns or questions. We greatly appreciate your feedback and look forward to any further insights you may have. | Summary: This paper proposed a method for unifying generative training and discriminative training of multi-modal LLMs. Generative training mainly uses auto-regressive formulation while discriminative training mainly performs contrastive representation matching. The goal of this paper is trying to use discriminative training to improve multi-modal LLMs.
The paper unifies generative training and discriminative training by introducing a Dynamic Sequence Alignment module which aligns similar text and image data on the hidden states of a multi-modal LLM. In addition, Detailed Semantics Modeling is proposed to effectively distinguish detailed semantics.
The paper conducts evaluation on a wide range of benchmarks.
Strengths: The motivation is clear and the paper is easy to follow.
The concept of unifying generative training and discriminative training is interesting.
Weaknesses: It's unclear what is dynamic time warping framework.
The performance improvement of the proposed method sugar is not significant. Compared with some baselines, such as VILA and LLaVA-1.5, Sugar performs worse than them on many tasks, as shown in Table 1. This raises concerns about the effectiveness of the proposed method.
This could be meaningless to align a visual token and a text token in an MLLM model since the LLM is trained with next-token prediction instead of contrastive learning like CLIP. The current token is conditioned on previous tokens. I can't think of a reasonable explanation for this mechanism. It **could be** meaningful to align visual tokens. In addition, the experiment results also suggest that this method is not effective as expected.
What is the evaluation protocol? Does Sugar train on each benchmark first then evaluate or directly zero-shot evaluation? In the former case, will Sugar lose generative ability after training with discriminative task data?
Technical Quality: 2
Clarity: 2
Questions for Authors: What is the training recipe of the proposed method?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your constructive and insightful comments. We will explain your concerns point by point.
First, we must clarify that we only used **a very limited amount of data** (see **Table A of the Rebuttal PDF**). Despite this, we have effectively integrated the generative and discriminative paradigms, alleviating the challenges that remain in generative paradigms and collaterally enabling our model to tackle many other tasks in one model, such as complex retrieval tasks (Section 4.4), multi-modal retrieval-augmented generation (Section 4.5), and knowledge-based VQA.
> **Q1**: It's unclear what is dynamic time warping framework.
**A1:** Thank you for your question! We apologize for not clearly explaining our method and the dynamic time warping framework. Below, we will elaborate on DTW and our contributions.
**(1.1)** The Dynamic Time Warping (DTW) framework is a dynamic programming-based method for calculating the similarity of unequal length sequences, commonly used in text information retrieval *[Müller M., 2007]* *[Cuturi M, Blondel M., 2017]* and sequence matching problems *[Cao et al., 2020]*.
**(1.2)** To enable the current separated generative and discriminative paradigms to achieve synergistic gains, we primarily face two challenges: **(a)** comprehensively capturing the global semantics for interleaved sequences of unequal length; **(b)** keenly differentiating the detailed semantics.
**(1.3)** We explicitly designed two corresponding modules in Sugar to tackle these challenges: **(a)** formulate the relationship between interleaved sequences as a dynamic sequence alignment problem within the DTW framework; **(b)** integrate a novel kernel into the framework to leverage the strengths of various discriminative pre-trained models. The ablation experiments validate the effectiveness of our approach (Section 4.5 and Table G).
*[Müller, 2007]* Dynamic Time Warping. Information Retrieval for Music and Motion.
*[Cuturi & Blondel, 2017]* Soft-DTW: A Differentiable Loss Function for Time-Series. ICML 2017.
*[Cao et al., 2020]* Few-Shot Video Classification via Temporal Alignment. CVPR 2020.
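For readers unfamiliar with DTW, the classic recurrence can be illustrated with a short, self-contained sketch (a textbook formulation, not the paper's implementation): the cumulative cost `D[i, j]` extends the cheapest of the three neighboring alignments, which lets sequences of unequal length be compared.

```python
import numpy as np

def dtw_distance(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping distance between two sequences
    of possibly unequal length, via dynamic programming."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(x[i - 1], y[j - 1])
            # Extend the cheapest of: insertion, deletion, or match step.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

For example, `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0.0, since the repeated `2` can be absorbed by the warping path even though the sequences differ in length.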
> **Q2**: The performance improvement of the proposed method Sugar is not significant. Compared with some baselines, such as VILA and LLaVA-1.5, Sugar performs worse than them on many tasks, as shown in Table 1. This raises concerns about the effectiveness of the proposed method.
**A2:** Thank you for raising an important concern. We will address your question from the following three aspects:
**(2.1)** We used a very **limited amount of data** compared to VILA. Despite this, we achieved competitive results and gained additional retrieval and retrieval-augmented capabilities. By using an equitable data volume (incorporating 300K SFT data used by VILA), we observed enhancements in 11 comprehension benchmarks of Table 1 (**Table B,C**), demonstrating our model's scalability.
**(2.2)** In the original manuscript Table 1, we primarily used the single-image QA benchmarks. To provide a more comprehensive assessment of our model, we have supplemented our method with 5 new, **more challenging** benchmarks, showcasing our superiority (see **Table E**). To answer **multi-image QA**, the model must clearly distinguish between different images and understand their shared global information. For **in-context interaction**, it must fully comprehend the preceding multiple image-text dialogue to provide reasonable responses. For **visual prompt comprehension**, the model needs to accurately identify visual cues in the images to answer questions. Our model achieved the best results across all three task types.
> **Q3**: This could be meaningless to align a visual token and a text token in an MLLM model since the LLM is trained with next-token prediction instead ... (omitted)
**A3:** We apologize if our manuscript gave the impression that the method of Sugar was unclear.
**(3.1)** Simply using contrastive learning methods like CLIP can only calculate the similarity between single images and texts. However, we **consider the interleaved image-text sequence as the general format** of input samples, which means different sequences contain unequal numbers of tokens. If we simply used contrastive learning like CLIP, the similarity of interleaved sequences would lose global information.
**(3.2)** Hence, we select the final token of a sequence to represent this input interleaved sequence. As you mentioned, *'the current token is conditioned on previous tokens*' , the final token can integrate global information from the preceding content. In this way, we can modulate the hidden states of the MLLM by leveraging the semantic relationships between interleaved input sequences, thereby ensuring consistency between hidden states and encouraging the MLLM to capture the global semantics. Additionally, we introduced a novel kernel to provide more detailed signals, enhancing the ability to differentiate fine-grained semantics.
**(3.3)** The effectiveness of our method is demonstrated through experiments on 11 comprehension benchmarks and on other tasks that VILA cannot perform (**Tables B, C, D, E**).
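The last-token pooling described in (3.2) can be sketched as follows (a hypothetical illustration, not Sugar's actual code): assuming `hidden_states` is a (tokens × dim) array produced by the MLLM for one interleaved sequence, the final token's hidden state serves as the sequence embedding, and cosine similarity between such embeddings provides the discriminative signal.

```python
import numpy as np

def sequence_embedding(hidden_states):
    """Summarize an interleaved image-text sequence by its final token's
    hidden state (which conditions on all preceding tokens), L2-normalized."""
    last = hidden_states[-1]
    return last / np.linalg.norm(last)

def contrastive_similarity(seq_a, seq_b):
    # Cosine similarity between the last-token embeddings of two sequences.
    return float(sequence_embedding(seq_a) @ sequence_embedding(seq_b))
```

In this sketch, two sequences with identical final hidden states score 1.0, regardless of how many tokens each sequence contains.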
> **Q4:** What is the evaluation protocol? Does Sugar train on each benchmark first then evaluate or directly zero-shot evaluation? In the former case, will Sugar lose generative ability after training with discriminative task data?
**A4:** All the experiments in our paper are **zero-shot**. We have merely introduced a structure-induced training strategy within the generative paradigm and subsequently conducted zero-shot evaluations for each task.
> **Q5:** What is the training recipe of the proposed method?
**A5:** We apologize if our manuscript suggested that Sugar's training method was unclear. The specific data we used is detailed in **Table A of the Rebuttal PDF**.
We hope the above discussion addresses your concerns. The discussion remains open, and we always welcome your further feedback.
---
Rebuttal 2:
Title: Table B
Comment: Table B: Comparison with baseline on 11 visual-language benchmark with equitable data volume.
| | VQA | GQA | VizWiz | SQAI | VQAT | POPE | MMEP | MMEC | MMB | LLaVAWd | MM-Vet |
| ------------------- | --------- | ---------- | -------- | -------- | -------- | -------- | ---------- | --------- | -------- | -------- | -------- |
| LLaVA-1.5-7B | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 1510.7 | – | 64.3 | 49.0 | 30.5 |
| VILA-7B | 79.9 | 62.3 | 57.8 | 68.2 | 64.4 | 85.5 | 1533.0 | 296.1 | 68.9 | 70.0 | 34.9 |
| Sugar | 76.0 | 58.7 | 60.4 | 69.4 | 57.5 | 86.6 | 1550.8 | 300.0 | 64.9 | 75.6 | 31.3 |
| with equitable data volume | **80.2** | **63.1** | **61.0** | **72.1** | **65.1** | **87.2** | **1550.0** | **309.0** | **69.3** | **76.4** | **36.8** |
*We also present the results in Table B of the Rebuttal PDF.*
---
Rebuttal 3:
Title: Table D
Comment: Table D: Knowledge-based VQA.
| | FVQA | WebQA |
| ----------------------------- | -------- | -------- |
| LLaVA-1.5-7B | 5.9 | / |
| LLaVA-1.5-7B + CLIP image | 6.8 | 81.8 |
| LLaVA-1.5-7B + CLIP text | 7.1 | 79.2 |
| LLaVA-1.5-7B + CLIP (average) | 7.9 | / |
| VILA-7B | 6.4 | / |
| VILA-7B + CLIP image | 9.0 | 80.0 |
| VILA-7B + CLIP text | 10.2 | 71.2 |
| VILA-7B + CLIP (average) | 11.0 | / |
| **Sugar** | **6.5** | / |
| **Sugar + rag** | **20.7** | **88.7** |
*We also present the results in Table D of the Rebuttal PDF.*
---
Rebuttal 4:
Title: Table E
Comment: Table E: Comparison with baseline in 5 challenging tasks.
| | **Multi-image VQA** | | **In-context Interaction** | | **Visual Prompt** |
| ------------------- | ------------------- | -------- | -------------------------- | -------- | ------------------ |
| | SEED | Mantis | Visual Dialog | VIST-SIS | BLINK |
| LLaVA-1.5-7B | 58.6 | 31.3 | 53.7 | 14.1 | 37.1 |
| VILA-7B | 61.1 | 38.4 | 58.5 | 17.4 | 39.2 |
| Sugar | **63.6** | **41.0** | **62.5** | **28.2** | **42.2** |
*We also present the results in Table E of the Rebuttal PDF.*
---
Rebuttal 5:
Title: Looking Forward to Your Reply
Comment: Dear Reviewer 4VP3,
Thank you for the time and effort you have dedicated to reviewing our submission. We hope we have addressed the concerns raised in your initial reviews and eagerly await your thoughts and further guidance to refine our work. As the author-reviewer discussion period for NeurIPS 2024 is halfway through, please let us know if you require any additional information or clarification from our end. We are open to engaging in further discussions to enhance our submission.
---
Rebuttal 6:
Title: Awaiting Your Feedback
Comment: Dear Reviewer 4VP3,
Thank you again for reviewing our submission. As the author-reviewer discussion period for NeurIPS 2024 is nearly over, please let us know if any further information or clarification is needed. We are ready to engage in any further discussions with you!
Looking forward to your further feedback!
---
Rebuttal Comment 6.1:
Title: Response to rebuttal
Comment: Thanks for authors' rebuttal, I think they have addressed most of my concerns, so I will raise my score.
---
Reply to Comment 6.1.1:
Title: Official Comment by Authors
Comment: Thank you for increasing the score. Your valuable suggestions greatly contribute to the quality of our manuscript. Thank you again for your precious time and valuable suggestions! | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their insightful and valuable comments!
We thank all the reviewers for agreeing that this paper presents a very interesting idea of **addressing the limitations of the original generative paradigm** in comprehensively capturing global information and keenly discerning fine-grained semantic details through the introduction of discriminative supervision. Collaterally, our method leverages the strengths of MLLMs to handle **complex information retrieval** tasks that cannot be effectively solved by discriminative models such as CLIP. Our paradigm also realizes **retrieval-augmented generation** and addresses **knowledge-based** VQA problems within a **single model**, showing great promise for future work.
Overall, we are encouraged that they find that:
- The motivation and novelty for Sugar are clear, reasonable, and meaningful. (all Reviewers)
- Clearly articulate the challenges faced by existing MLLMs and provide a well-defined solution. (all Reviewers)
- The experiments are thorough, clearly showing that the proposed framework can excel at both comprehension and discrimination tasks. (Reviewers Lcdg and DDMF)
- The effect of combining our retriever and generator is interesting, which can inspire fellow works (Reviewer Lcdg).
To address the concerns raised by the reviewers, overall we have conducted several additional experiments and analyses:
- We further validated Sugar's effectiveness using more data equivalent to the baseline, as shown in **Tables B and C of the Rebuttal PDF**.
- We present the performance across 5 tasks of 3 new types (in-context, multi-image, visual prompt comprehension) to more comprehensively evaluate our model's capabilities in **Table E of the Rebuttal PDF**.
- We conducted more detailed dataset and method ablations to verify the stability of Sugar and the effectiveness of each of its components in **Table G of the Rebuttal PDF**.
- We validated the 2 tasks requiring external knowledge in **Table D of the Rebuttal PDF** to demonstrate the benefit of combining retrieval and comprehension abilities in a single model, thereby avoiding compatibility issues and sub-optimal performance.
Next, we address each reviewer's detailed concerns point by point. We sincerely thank all reviewers for their recognition of our work and their valuable suggestions! Discussions are always open. Thank you!
Pdf: /pdf/620830ef8f357a460ceb934d6264259a9b3fe395.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Discovery of the Hidden World with Large Language Models | Accept (poster) | Summary: This paper presents Causal representatiOn AssistanT (COAT), which introduces large language models (LLMs) to bridge the gap between unstructured observations and the well-defined variables that causal discovery requires. LLMs are trained on massive observations of the world and have shown great capability in extracting key information from unstructured data, so employing LLMs to propose useful high-level factors and craft their measurements is natural. COAT also uses causal discovery (CD) algorithms to find causal relations among the identified variables and provides feedback to LLMs to iteratively refine the proposed factors. This mutual benefit enhances both LLMs and CD algorithms.
Strengths: 1. Interesting topic: employing LLMs to propose useful high-level representations for causal discovery.
2. Develops two benchmarks for causal discovery from unstructured data: AppleGastronome and Neuropathic.
3. Derives the first metrics that measure the causal representation learning capabilities of various LLMs.
Weaknesses: 1. ‘We will release an anonymous link during the discussion period.’ I will consider raising my score if the code is reasonable.
2. The contribution of LLM in COAT is a little small. I assume LLM is used as a representation tool to learn the conceptual level attributes, including iterative refining. The causal structural learning can still be considered as the downstream task.
3. COAT will inherit the shortcomings of downstream causal structure learning algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can we ensure that LLM does not introduce erroneous prior knowledge for reasoning?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > W1 ‘We will release an anonymous link during the discussion period.’ I will consider raising my score if the code is reasonable.
We send an anonymized link of code to the AC in a separate comment, as we are not allowed to include any links to external pages in the responses this year.
> W2 The contribution of LLM in COAT.
LLM is extensively involved in the factor proposal phase as a representation assistant:
- In the factor proposal phase, LLMs are extensively leveraged to identify useful high-level factors from the given samples. The pre-trained knowledge and the reasoning ability of LLMs are heavily involved during this phase. Although COAT reduces the reliance on LLMs, **the final results still depend on the capabilities of LLMs**, as shown in our theories and experiments.
- Moreover, when an external interface to annotate the factors is not available, LLMs are leveraged to parse the data. Without LLMs, we could not readily obtain tabular data for further causal structure learning.
- For the causal feedback module, it is still crucial that LLMs properly distinguish the already-verified factors from other potential factors that may affect the partitioning of the provided groups.
Although the causal structure learning module can be separated in the current design of COAT, it plays an important role in providing various forms of feedback, such as the existence of a hidden confounder, when extending COAT to more challenging scenarios.
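To make the interplay of the modules above concrete, one COAT round could be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: `llm_propose`, `annotate`, and `ci_test` are stand-ins for the two LLM calls and the conditional independence test.

```python
def coat_iteration(llm_propose, annotate, ci_test, samples, y, verified):
    """One COAT round: propose -> annotate -> verify (hypothetical sketch).

    `llm_propose` and `annotate` stand in for the two LLM calls;
    `ci_test(col, y, conds)` returns True when col is independent of y
    given the columns in `conds`.
    """
    # 1. Factor proposal: the LLM reads raw samples (plus feedback built
    #    from the already-verified factors) and names candidate factors.
    candidates = llm_propose(samples, verified)
    # 2. Annotation: a second LLM pass assigns each factor a value per
    #    sample, turning unstructured data into tabular columns.
    columns = {f: annotate(f, samples) for f in candidates}
    # 3. Verification: keep a factor only if it is NOT conditionally
    #    independent of Y given the factors verified so far (Eq. 6).
    out = dict(verified)
    for f, col in columns.items():
        if not ci_test(col, y, list(out.values())):
            out[f] = col
    return out
```

With mock stand-ins (e.g. a trivial `ci_test` that declares independence unless a column equals `y`), only factors carrying information about the target survive the round.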
> W3 COAT will inherit the shortcomings of downstream causal structure learning algorithms.
It is common for rigorous causal discovery methods to have certain assumptions and requirements on the data, while COAT may relax these requirements:
- **Well-specified variables**: Classic causal discovery methods work on tabular data whose columns are meaningful variables. COAT extends their scope to unstructured data by utilizing the power of LLMs.
- **Assumptions on the data and population distribution**: Assumptions are inevitable for any causal identifiability guarantee. COAT's factor proposal phase is disentangled from causal discovery, so its identifiability can hold even when the assumptions of the chosen causal discovery method do not. For instance, the Apple Gastronome benchmark does not satisfy the linear non-Gaussian assumption, yet COAT still succeeds at the factor proposal task when paired with LiNGAM (Appendix F).
> Q1 How can we ensure that LLM does not introduce erroneous prior knowledge for reasoning?
Thanks for the good question. This question shares exactly the same spirit as the motivation of the paper. All factors proposed by LLM will be checked on data in the factor proposal phase. LLM can propose irrelevant factors (as witnessed on the *META* baseline), but they will be filtered out in each COAT iteration (as witnessed on the *DATA* baseline, the single COAT iteration, and *COAT* itself).
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed reply. I am sorry to hear that there is no link at this stage. I would like to keep my score as it is and will keep an eye on other reviewers' comments.
---
Rebuttal 2:
Title: There is a link, but NeurIPS does not allow us to share to Reviewers but only to AC this year
Comment: Dear Reviewer PF9K,
Thank you for your prompt reply. We need to clarify that, **there is a link shared in a comment to Area Chair iWif, since NeurIPS does not allow us to share any links to external pages with Reviewers**:
>4. All the texts you post (rebuttal, discussion and PDF) should not contain any links to external pages. Furthermore, they should not contain any identifying information that may violate the double-blind reviewing policy. If you were asked by the reviewers to provide code, please send an anonymized link to the AC in a separate comment (make sure the code itself and all related files and file names are also completely anonymized).
If Reviewer PF9K would like to know any details in our code, we are more than happy to provide any details. We could also just copy the codes here if needed. **Please kindly understand that it is because of the NeurIPS regulations instead of the authors' intention that we could not provide a link directly in the responses**.
Meanwhile, we have sent another email to Program Chairs, to inquire about the case. Please kindly let us know if you feel there is any way that could resolve your concern about the link. Since the code has already been packed up and shared in an anonymous github link, we would like to provide any information for it so long as it is allowed by the NeurIPS regulations!
---
Rebuttal 3:
Title: Link could be shared to Reviewers via AC
Comment: Dear Reviewer PF9K,
We just received the reply from the Program Chairs:
> A possible solution might be sharing an anonymized link with the AC that can be passed further on to the reviewer of interest.
We will communicate and ask the Area Chair iWif to kindly help pass the link to you. Please let us know if you need any further information about the codes. Thank you so much!
---
Rebuttal Comment 3.1:
Title: I did not get this message but I assume you received it and thus will pass the link only to the reviewer
Comment: I did not get this message but I assume you received it and thus will pass the link only to the Reviewer PF9K
https://anonymous.4open.science/r/CausalCOAT-D9CD
All the best
---
Reply to Comment 3.1.1:
Title: Thank you for passing the link
Comment: Dear Area Chair iWif,
Thank you for your prompt help with the link. Yes, the message was sent in reply by Program Chairs to our inquiry email.
Best regards,
The Authors of Paper6701 | Summary: The paper tackles the problem of discovering relevant features for recovering the underlying causal graph in the absence and/or in lieu of a human domain expert. The proposed method, COAT, first queries an LLM through a prompt elucidating the task (e.g., discovering relevant features that affect a product review using a few text reviews); then the proposed variables are fed into another LLM that assigns a value to each of these variables, thus outputting structured/tabular data that can be used for causal discovery. Finally, the tabular data is used in conjunction with a traditional causal discovery algorithm (FCI and LiNGAM in this case) to retrieve a causal graph with respect to a target variable (e.g., review score) using the proposed variables. The process repeats until the proposed variables form a Markov blanket for the target variable w.r.t. the raw unstructured input data (e.g., the text reviews), progressively expanding the Markov blanket in each iteration. Additionally, the LLM can receive feedback at the end of each iteration in the form of samples that the proposed variables cannot sufficiently explain. In particular, the authors propose clustering the samples w.r.t. the latent variables induced by the LLM and picking the samples in the cluster with the largest conditional entropy.
Initial theoretical analysis of the proposed method implies that it is able to identify the Markov blanket for a target variable using the proposed variables, given that enough iterations of COAT are performed.
The authors evaluate COAT empirically over two synthetic datasets and three real-world datasets. They compare COAT against two simple baselines 1) factors being directly proposed by the LLM based on the prompt without further iterations 2) factors being proposed by LLM when queried using both the prompt and some samples of raw observations. The second baseline is essentially COAT without the LLM receiving any feedback after each iteration. The experiments are conducted using 10 different LLMs and primarily one causal discovery algorithm (FCI), with additional experiments on one dataset using LiNGAM. Additionally, the paper proposes two novel metrics for quantitatively assessing the performance of LLMs for feature proposals to be used for causal discovery.
Update: I moved my rating up in the hope that the authors will add the experiments they promised to the final version. We have no way of enforcing it, but hopefully the authors will follow up on their promise.
Strengths: The paper addresses the important problem of causal discovery and employs an effective two pronged approach involving LLMs and traditional causal discovery algorithms. This approach leverages the strengths of both the LLMs and the causal discovery algorithms i.e, ability to respond to complex prompts and unstructured data with high-level and possibly noisy information, and robust causal discovery with strong theoretical guarantees although requiring strong assumptions on the faithfulness of the data and causal mechanisms, respectively. Overall, I believe this is a promising direction wherein the two components complement each other effectively.
The empirical evaluation is sufficient in terms of the large number of LLMs considered and the moderate amount of datasets evaluated. The results, based on the chosen metrics, sufficiently demonstrate the effectiveness of the proposed method over the simple baselines.
Finally, the paper is well-written and clearly explains the steps involved in each iteration. The further explanations provided in the appendix also aid in this.
Weaknesses: The theoretical aspects of the proposed algorithm are exaggerated in the introduction. Given the strong assumptions of “sufficiently powerful” LLMs, “sufficiently diverse” examples and further assumptions pertaining to the chosen causal discovery method, the propositions, while appreciated, are rather straightforward. In particular, it would be far more interesting to theoretically analyse the impact of modules involving the LLMs themselves, such as the chosen prompt template, quality of factor annotations and responsiveness of LLMs to feedback regarding causal discovery, even though some of these are evaluated empirically. Also, an analysis on the rate of convergence of COAT would be beneficial.
Secondly, while the modularity of the proposed approach facilitates utilising a cross product of LLMs, causal discovery methods and feedback mechanisms, it also necessitates extensive ablation studies. The paper would be strengthened by a thorough ablation of the initial prompts and feedback: in particular, a discussion and ablation of the chosen prompt template and its effect on the proposed factors, or lack thereof, is needed, since a robust template would allow more seamless adoption of the proposed method. Finally, the chosen baselines are far too simple to make any strong claims on the effectiveness of COAT. Comparing against some of the methods covered in the related work section would help bolster this claim.
Technical Quality: 2
Clarity: 2
Questions for Authors: The paper addresses an important and timely problem and proposes a simple and intuitive solution, leveraging the strengths of LLMs and traditional causal discovery methods. While the experiments demonstrate the effectiveness of the proposed method over two simple baselines, stronger baselines and more ablations on prompts and factor annotations would strengthen this claim. Theoretical analysis is limited to the well-studied causal discovery aspect of the pipeline while making strong assumptions on the powerfulness of the LLMs, diversity and faithfulness of the raw observational data, and the number of iterations being sufficiently large, seems rather unsurprising.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See the weakness and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful and constructive comments on our work. We hope our response can sufficiently address your concerns.
> W1.1 "strong assumptions" of “sufficiently powerful” LLMs...
**Thank you for pointing out these potentially confusing words. We revised the paper to clarify that "sufficiently powerful" and "sufficiently diverse" are not part of our assumptions.** These words appear in Sec 3.3, where they are used to give an intuitive and concrete description of the COAT algorithm.
**Our theoretical results in Sec 3.4 do not rely on those assumptions.**
- Proposition 3.1 shows why we expect new factors to satisfy Eq 6. The test of Eq 6 is a conditional independence test. **The assumption behind this is the faithfulness condition**, i.e., conditional independence reflects d-separation in the causal graph.
- Proposition 3.3 concretely characterizes the impact of the LLM's ability on factor identification; it requires $p>0$ and $C_\Psi>0$ so that the bound in Eq 9 is finite.
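As a concrete illustration of the conditional independence test mentioned above, a minimal Fisher-z partial-correlation test (an assumed, illustrative sketch for roughly linear-Gaussian data, not the paper's implementation) might look like:

```python
import math
import numpy as np

def fisher_z_ci_test(x, y, z_cols):
    """Test X independent of Y given Z via partial correlation.

    Returns True when conditional independence is NOT rejected at the 5%
    level. Illustrative only: assumes roughly linear-Gaussian relations.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    if z_cols:
        # Residualize x and y on the conditioning set (plus an intercept).
        Z = np.column_stack([np.ones(n)] + [np.asarray(c, float) for c in z_cols])
        x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    r = float(np.corrcoef(x, y)[0, 1])
    r = max(min(r, 0.999999), -0.999999)
    z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher z-transform
    # Under H0, z is approximately N(0, 1 / (n - |Z| - 3)).
    stat = abs(z) * math.sqrt(max(n - len(z_cols) - 3, 1))
    return stat < 1.96  # two-sided 5% critical value
```

For example, two orthogonal signals pass the test while a linear copy of one of them fails it.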
> W1.2 Impacts of modules.
**The impacts of each module in COAT can be characterized by our theoretical framework.** Let $Y$ be the target variable. Before entering a given COAT iteration, $k$ factors *(denoted by $h_{[k]}(X)$ as defined on page 6 line 202)* have been proposed and verified. In this iteration, the LLM receives feedback constructed from those factors, and then *we expect the LLM to propose a new factor $w_{k+1}$ such that $Y \perp w_{k+1}(X)\mid h_{[k]}(X)$ does not hold*. **In definition 3.2 (page 6 line 209), we define two measures to quantify the capabilities of LLMs in the COAT framework**:
- **Perception Score** $p$: the probability that the LLM proposes a new factor. This can be seen as a measure of the LLM's responsiveness to the given prompts and the feedback.
- **Capacity Score** $C_\Psi$: the decreasing ratio of the conditional mutual information, as described in Eq 8 on page 6. This can be seen as a **measure of the quality of the factors proposed by the LLM**.
At each iteration, we prompt LLM to propose factors multiple times so the two measures can be estimated, as shown in Table 6 (page 25, appendix). With the help of these two metrics, **we then theoretically analyze the impact of the LLM-involved modules on the number of COAT rounds**, as shown by Eq 9 in proposition 3.3 (page 6).
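A hypothetical sketch of how the two measures could be estimated from such repeated trials (the `trials` bookkeeping format is our illustrative assumption, not the authors' exact estimator):

```python
def estimate_scores(trials):
    """Estimate the two COAT capability measures from repeated prompting.

    `trials` is a list of (proposed_useful_factor, cmi_before, cmi_after)
    tuples collected at one iteration.
    """
    # Perception score p: how often the LLM responds with a factor that
    # breaks Y independent of w(X) given h(X).
    successes = [t for t in trials if t[0]]
    p = len(successes) / len(trials)
    if not successes:
        return p, 0.0
    # Capacity score C_Psi: average relative drop of the conditional mutual
    # information I(Y; X | h(X)) achieved by a successful proposal (Eq. 8).
    c_psi = 1.0 - sum(after / before for _, before, after in successes) / len(successes)
    return p, c_psi
```

For instance, two successes out of three trials that each halve the conditional mutual information would give $p = 2/3$ and $C_\Psi = 0.5$.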
> W1.3 Convergence rate of COAT.
**As suggested by the reviewer, we are happy to further strengthen proposition 3.3 with a result on the rate of convergence** (the proof is similar): *with probability at least $1-\delta$, the following inequality holds*:
$$
\frac{I(Y;X\mid h_{\le t}(X))}{I(Y;X)} \le \left(\frac{1}{1-C_\Psi}\right)^{-tp-z_{\delta}\sqrt{tp(1-p)}}
$$
That is, under the setting in proposition 3.3, with both $C_\Psi$ and $p$ being positive, COAT would converge exponentially with its number of rounds.
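For completeness, a proof sketch of this rate, assuming each round independently yields a useful factor with probability $p$ and writing $z_\delta = \Phi^{-1}(\delta) < 0$ for the standard-normal $\delta$-quantile:

```latex
\text{Let } S_t \sim \mathrm{Bin}(t, p) \text{ count the successful proposals after } t \text{ rounds.} \\
\text{Each success shrinks the conditional mutual information by at most } (1 - C_\Psi) \text{ (Eq. 8):} \\
I\bigl(Y; X \mid h_{\le t}(X)\bigr) \le (1 - C_\Psi)^{S_t}\, I(Y; X). \\
\text{By the normal approximation to the binomial, with probability at least } 1 - \delta, \\
S_t \ge tp + z_\delta \sqrt{tp(1-p)}, \\
\text{so } \frac{I(Y; X \mid h_{\le t}(X))}{I(Y; X)}
  \le (1 - C_\Psi)^{\,tp + z_\delta \sqrt{tp(1-p)}}
  = \left(\frac{1}{1 - C_\Psi}\right)^{-tp - z_\delta \sqrt{tp(1-p)}}.
```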
> W2.1 Ablation of the prompt templates.
We conduct an ablation study with a different prompt template following [1]:
- We put the task description (also the format instructions) in the beginning *[System]* part; and we put samples in the last *[Data]* part of the prompt.
- The *markdown grammar* is replaced by brackets to represent headings, like `[System]`, `[Data]`, and `[Groups with Y=1]` ...
- 3 COAT iterations are performed, which is aligned with the original experimental setup.
COAT with changed prompt template:
| | MB | NMB | OT | Recall | Precision | F1 |
|----------------|:--:|:---:|:--:|:------:|:---------:|:-----:|
| GPT-4 | 4 | 0 | 0 | 0.80 | 1.00 | 0.89 |
| GPT-3.5-Turbo | 4 | 0 | 0 | 0.80 | 1.00 | 0.89 |
| Mistral-Medium | 3 | 0 | 0 | 0.60 | 1.00 | 0.75 |
We observe that COAT is robust to the choice of templates, rejects unexpected factors (zero *NMB* and *OT*), and keeps a high precision.
[1] Judging llm-as-a-judge with mt-bench and chatbot arena, NeurIPS'23.
> W2.2 Baselines are too simple.
We need to clarify that, to the best of our knowledge, the *META* baseline is already a strong baseline, as shown by the extensive empirical evidence on LLM-based methods in causality-related tasks [2]. If you happen to know a stronger baseline, please let us know.
Meanwhile, **we additionally construct a stronger baseline with CoT** based on *DATA*, where the LLM is prompted to "Think step by step to consider factors", and to output these factors in the same format as other methods.
| LLM | method | MB | NMB | OT | Recall | Precision | F1 |
|----------------|------------------|:-----------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| GPT-4 | CoT baseline | 4.33 ± 0.58 | 0.83 ± 0.29 | 0.17 ± 0.29 | 0.87 ± 0.12 | 0.81 ± 0.02 | 0.84 ± 0.06 |
| | COAT | 4.00 ± 0.82 | 0.33 ± 0.47 | 0.00 ± 0.00 | 0.80 ± 0.16 | 0.93 ± 0.09 | 0.85 ± 0.11 |
| GPT-3.5 | CoT baseline | 5.00 ± 0.00 | 1.00 ± 0.00 | 1.33 ± 0.58 | 1.00 ± 0.00 | 0.68 ± 0.05 | 0.81 ± 0.04 |
| | COAT | 3.67 ± 0.47 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.73 ± 0.09 | 1.00 ± 0.00 | 0.84 ± 0.07 |
| Mistral-Medium | CoT baseline | 4.33 ± 0.58 | 1.00 ± 0.00 | 0.67 ± 0.58 | 0.87 ± 0.12 | 0.73 ± 0.07 | 0.79 ± 0.05 |
| | COAT | 4.67 ± 0.47 | 0.00 ± 0.00 | 0.00 ± 0.00 | 0.93 ± 0.09 | 1.00 ± 0.00 | 0.96 ± 0.05 |
It shows that LLMs with CoT can:
- be aware of high-level factors behind data (lower OT than *META*);
- still struggle to distinguish the desired factors in Markov Blanket (higher *NMB* than *COAT*).
[2] Causal reasoning and large language models: Opening a new frontier for causality, arXiv'23.
---
Rebuttal 2:
Title: Responses to other questions from Reviewer 1cJQ
Comment: We give the responses to the other questions in the follow-up comment due to the character limits.
> W1.4 discussions on other suggested aspects.
We further improve the paper with the following **additional discussions on the impact of other suggested aspects**:
- **Quality of factor annotations**. The annotation could introduce an additional "error term" on the true factor values, as one can observe in Figure 4 (a) (b). *The key point here is to ensure the distribution of data from annotation not violating the assumptions of causal discovery methods.* For example, the "error terms" should be independent; otherwise, the faithfulness assumption required by FCI will be violated. *In practice, LLMs could also use tools like API to acquire data from external resources* (we try this way on the Neuropathic Benchmark in section 5.2 and also in the ENSO case study in Appendix J).
- **Prompt template**. Constraints on the prompt include the LLM's instruction-following ability and the length of the context window. Including more instruction, more data samples, or more background knowledge may improve the $p$ and $C_\Psi$, but would also be more challenging for LLM to handle. In practice, decomposing prompts into multiple simpler sub-tasks could alleviate this issue.
> Q1 The paper addresses an important and timely problem and proposes a simple and intuitive solution, leveraging the strengths of LLMs and traditional causal discovery methods. While the experiments demonstrate the effectiveness of the proposed method over two simple baselines, stronger baselines and more ablations on prompts and factor annotations would strengthen this claim. Theoretical analysis is limited to the well-studied causal discovery aspect of the pipeline while making strong assumptions on the powerfulness of the LLMs, diversity, and faithfulness of the raw observational data, and the number of iterations being sufficiently large, seems rather unsurprising.
Here we make an overall clarification:
- This paper considers a novel task to reliably utilize the LLM's ability. In particular, it introduces COAT to propose high-level factors behind unstructured data that form a Markov blanket of target variables.
- **Theoretically**, we show COAT can provably identify a Markov blanket with sufficient iterations. Our theoretical results do not assume a sufficiently strong LLM or diverse data; rather, we propose two new metrics to characterize the capability of LLMs in identifying useful high-level factors. Then, we establish the theoretical guarantee, including the convergence rate, based on the developed metrics. **We need to clarify that these results go beyond the existing causal discovery literature.**
- For **empirical evaluation**, we compare the performance of COAT with the state-of-the-art baseline *META* in the literature, along with **two additionally constructed stronger baselines** *DATA* and *DATA-CoT*. The experimental results demonstrate the superiority of COAT.
- We also conduct **extensive ablation studies** to verify the robustness of COAT to different prompt templates, LLMs, and causal discovery algorithms.
---
Rebuttal 3:
Title: Dear 1cJQ
Comment: Dear 1cJQ,
thank you so much for reading the paper and sharing your opinion. I went through your review and I think it would be very beneficial for the authors and their paper if you could suggest how to tackle the issue emerging from the following sentences of yours.
"Finally, the chosen baselines are far too simple to make any strong claims on the effectiveness of COAT. Comparing against some of the methods covered in the related work section would help bolster this claim."
In particular, I kindly ask you to share which more appropriate baselines could be taken into account to improve the quality of the evaluation, and which methods covered in the related work are the most appropriate ones.
I think the authors will greatly benefit from your insights.
All the best
---
Rebuttal Comment 3.1:
Title: Thanks for the additional baselines
Comment: I quite like the additional baselines - DATA and DATA-CoT. These were in the lines of the stronger baselines that I had requested in my review.
---
Reply to Comment 3.1.1:
Title: Thank you for acknowledging our additional baseline
Comment: Dear Reviewer 1cJQ,
Thank you for acknowledging our additional baseline. Please kindly let us know if our responses address your remaining concerns, too. We would sincerely appreciate it if you could jointly take our responses into your evaluation of our work.
Best regards,
The Authors of Paper6701 | Summary: This work proposes COAT (Causal representation AssistanT), a novel framework to leverage LLMs to assist with causal discovery from unstructured data. COAT aims to combine the advantages of LLMs and causal discovery algorithms. To do so, COAT employs LLMs to identify high-level variables and parse unstructured data into structured data. On the other hand, causal discovery algorithms read the parsed data to identify causal relations. To improve the reliability of the results, COAT also constructs feedback from the causal discovery results to iteratively improve the high-level variable identification. The authors conduct extensive case studies ranging from synthetic data to realistic data, and find COAT effectively helps with discovering meaningful causal structures that well explain the target variable.
Strengths: 1. This work identifies a crucial and timely problem for how to advance the causal tasks including causal learning and reasoning with foundation models likes LLMs;
2. COAT is novel, interesting and well-motivated. The authors also provide theoretical discussion to justify its soundness;
3. COAT is model-agnostic and robust to the choice of LLMs, and input data modalities;
4. The authors construct several benchmarks, present comprehensive case studies, and conduct extensive experiments to verify their claims. The improvements over direct prompting LLMs are significant.
Weaknesses: 1. The authors should provide more comparisons with advanced prompting techniques such as CoT.
2. More discussions should be provided on the hyperparameters used in COAT, such as the group size in feedback.
3. Model names are inconsistent. The name in Fig 4(c) is not the same with other names.
4. GPT-4 reasoning in Fig 7(c) is unclear in meaning.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to "Weaknesses".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: No concerns regarding limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your support and constructive comments on our work. We hope our response can sufficiently address your concerns.
> W1 More comparisons with advanced prompting techniques such as CoT.
**We construct a CoT baseline** based on *DATA*, where the LLM is prompted to "Think step by step to consider factors", and to output the desired factors in the same format as other methods.
| LLM | method | MB | NMB | OT | Recall | Precision | F1 |
|----------------|------------------|:-----------------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| GPT-4 | CoT baseline | 4.33 $\pm$ 0.58 | 0.83 $\pm$ 0.29 | 0.17 $\pm$ 0.29 | 0.87 $\pm$ 0.12 | 0.81 $\pm$ 0.02 | 0.84 $\pm$ 0.06 |
| | COAT | 4.00 $\pm$ 0.82 | 0.33 $\pm$ 0.47 | 0.00 $\pm$ 0.00 | 0.80 $\pm$ 0.16 | 0.93 $\pm$ 0.09 | 0.85 $\pm$ 0.11 |
| GPT-3.5 | CoT baseline | 5.00 $\pm$ 0.00 | 1.00 $\pm$ 0.00 | 1.33 $\pm$ 0.58 | 1.00 $\pm$ 0.00 | 0.68 $\pm$ 0.05 | 0.81 $\pm$ 0.04 |
| | COAT | 3.67 $\pm$ 0.47 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.73 $\pm$ 0.09 | 1.00 $\pm$ 0.00 | 0.84 $\pm$ 0.07 |
| Mistral-Medium | CoT baseline | 4.33 $\pm$ 0.58 | 1.00 $\pm$ 0.00 | 0.67 $\pm$ 0.58 | 0.87 $\pm$ 0.12 | 0.73 $\pm$ 0.07 | 0.79 $\pm$ 0.05 |
| | COAT | 4.67 $\pm$ 0.47 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.93 $\pm$ 0.09 | 1.00 $\pm$ 0.00 | 0.96 $\pm$ 0.05 |
It shows that LLMs with CoT can:
- be aware of high-level factors behind data (lower OT than *META*);
- still struggle to distinguish the desired factors in Markov Blanket (higher *NMB* than *COAT*).
> W2 More discussions should be provided on the hyperparameters used in COAT, such as the group size in feedback.
Thanks for the suggestion. We have revised the manuscript to include a discussion on hyperparameters:
- **group size in prompt**: In the COAT prompt, several samples are given and grouped by the values of the target variable. The samples in each group are randomly selected, with a fixed number per group. Empirically, we keep it at 3 throughout all experiments (sometimes fewer if there are not enough samples). In practice, it is mainly constrained by the LLM's context length.
- **the number of clusters**: When constructing feedback, we first use clustering to separate the dataset and then find the cluster where the target variable is not explained well by current factors (This is a heuristic for the problem in line 191). Empirically, we set the number of clusters to be one plus the number of current factors.
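As an illustration, the cluster-selection heuristic above could be sketched as follows (a simplified stand-in: each cluster is represented only by the target labels of its samples, and the cluster with the highest empirical entropy of the target is returned as feedback material):

```python
from collections import Counter
from math import log2

def pick_feedback_cluster(cluster_labels):
    """Pick the cluster whose target variable is least explained.

    `cluster_labels[i]` holds the target values y of the samples assigned
    to cluster i (e.g. by clustering on the current factor values); the
    cluster with the highest empirical entropy of y is selected.
    """
    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())
    return max(range(len(cluster_labels)), key=lambda i: entropy(cluster_labels[i]))
```

For example, a pure cluster (all samples share one label) is never chosen over a maximally mixed one, since the mixed cluster is where the current factors fail to explain the target.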
Furthermore, we conduct ablation studies of COAT with GPT-4 using different hyperparameters:
| Method | cluster size | group size | MB | NMB | OT | Recall | Precision | F1 |
|--------|:---------------:|:----------:|:---------------:|:-----------:|:-----------:|:---------------:|:-----------:|:---------------:|
| META | - | - | 2.67 $\pm$ 0.94 | 0.67 $\pm$ 0.47 | 2.33 $\pm$ 0.47 | 0.53 $\pm$ 0.19 | 0.46 $\pm$ 0.08 | 0.49 $\pm$ 0.13 |
| DATA | - | 3 | 3.00 $\pm$ 0.00 | 0.33 $\pm$ 0.47 | 0.00 $\pm$ 0.00 | 0.60 $\pm$ 0.00 | 0.92 $\pm$ 0.12 | 0.72 $\pm$ 0.04 |
| COAT | len(factor) + 1 | 3 | 4.00 $\pm$ 0.82 | 0.33 $\pm$ 0.47 | 0.00 $\pm$ 0.00 | 0.80 $\pm$ 0.16 | 0.93 $\pm$ 0.09 | 0.85 $\pm$ 0.11 |
| COAT | len(factor) + 1 | 1 | 4.67 $\pm$ 0.58 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.93 $\pm$ 0.12 | 1.00 $\pm$ 0.00 | 0.96 $\pm$ 0.06 |
| COAT | 2 | 3 | 3.67 $\pm$ 1.53 | 0.00 $\pm$ 0.00 | 0.00 $\pm$ 0.00 | 0.73 $\pm$ 0.31 | 1.00 $\pm$ 0.00 | 0.82 $\pm$ 0.22 |
One can observe that COAT is not sensitive to these hyperparameters and performs robustly better than the baselines under different hyperparameter setups.
> W3 Model names are inconsistent. The name in Fig 4(c) is not the same as other names.
We have fixed them in the revised manuscript.
> W4 GPT-4 reasoning in Fig 7(c) is unclear in meaning.
Fig 7(c) shows the result based on directly prompting LLM to reason for the causal relations among given factors. We add explanations in the figure caption now.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: The authors have adequately addressed my concerns. I would like to thank the authors, and I hope the comparison to CoT could be included in future revisions.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Dear Reviewer szEm,
We are happy to learn that our rebuttal addressed your concerns. Please feel assured that we will integrate all the promised revisions in the future version of this work (we have already done so for most of the required revisions)! | Summary: This paper combines the power of LLMs with that of causal discovery by proposing a Causal representatiOn AssistanT (COAT) approach. Specifically, it considers datasets with textual descriptions, and tries to identify the Markov blanket with respect to a target variable (such as customer ratings and medical diagnosis). The key contribution is discovery of the causal factors through a pipeline that uses both LLMs and a causal discovery algorithm.
Strengths: I find this an interesting and practical paper that combines the advantages of LLMs – such as the vast amount of knowledge that they encode – and that of causal discovery approaches. The ideas around combination are generally simple but novel, and I believe the approach could potentially be valuable in a suite of applications, although the extent of the value is unclear from the paper.
Weaknesses: A major limitation of the work is the empirical evaluation, even though it comes across on the surface as being extensive. I sympathize with the authors about benchmarks for causal discovery, but it seems they have used GPT-4 to generate the textual description of the data, and then used LLMs in their COAT procedure. This is clearly a synthetic dataset that can be problematic. Even the “realistic” benchmarks do not come across as sufficiently realistic, based on my understanding and the lack of details in the main paper.
I don’t understand why key aspects of the evaluation were moved to the appendix. I find it impossible to fully evaluate the work based solely on the contents of the main paper. I understand the need to make space and to move things to the appendix, but it’s never suitable to move key aspects such as the description of the benchmarks and the key results that show value of the work. This has impacted my assessment of this work and I have had to decrease my score because of the authors’ choices around appendix content.
A related weakness is the lack of any attempt at describing limitations, of which there are clearly many.
Technical Quality: 2
Clarity: 2
Questions for Authors: Could the authors share more about the scope of the work? Are there some other restrictions on the problem setting, besides needing a discrete label y? My assessment is that there is a gap between the scope mentioned in the problem setting and what is described in the experiments, which seems more restricted. Perhaps the authors can clarify.
Identifiability is mentioned loosely on page 2, with some technical references, but seems to have been used in an imprecise way here. The connection here seems tenuous at best.
There is a comment on pg. 3: “Note that the target variable Y serves as a guider, and no specific relation between x and Y is assumed.” What is a guider? And what do you mean no relation between x and Y is assumed? I thought the entire point is to do causal discovery: x is a function of z, and Y is a function of z. This line seems incorrect.
Section 3.2 would be much easier to follow with some illustrative examples. The content in Fig. 2 is too abstract to be really useful. I think the authors missed a trick here.
The meaning of C and p are unclear to me from what is described on pg. 6. How does one assess the significance of Proposition 3.3?
The details about benchmarks are incredibly important, and it should be easy for anyone to understand at least a high-level sense of a benchmark – basic things like the number of data points, for instance. Please fix Section 4 accordingly.
What is OT in Section 4.2? Is it the same as OT in the next section? Define MB, NMB, OT somewhere. I don’t see them mentioned anywhere clearly, although I understand from context that MB means Markov blanket.
Are Table 3 and Fig. 5 in the Appendix? If so, then mention that.
I’m not convinced that the “realistic” benchmarks are realistic. It’s too bad I can’t gauge this from the main text.
Please add a detailed limitations section. Mention all the limitations around evaluation in particular, as well as the significant risks of relying on LLMs for causal discovery.
Minor comments: line 181: “an potential” should be “a potential”; line 208: “Ability” should be “ability”; lines 211 and 212: seems there is a grammatical error here; line 216: what are “shared notations”?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors have not suitably described limitations. I consider this a major weakness; please see my previous comments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the detailed and insightful comments on our work. We hope our response can sufficiently address your concerns.
> W1. Reality of benchmarks
**The choice of synthetic and realistic benchmarks reflects the evaluation purpose**. Since we usually do not have access to the ground-truth causal graph of realistic benchmarks, we need to synthesize benchmarks to effectively evaluate the performance of COAT.
Meanwhile, **we do evaluate COAT on realistic data**, for which we have revised our manuscript to make it clearer:
- Brain Tumor: This is an open-sourced dataset containing MRI images for brain tumor classification.
- Stock News: This dataset contains the closing price of a company from 2006 to 2009 and 804 news summaries (from The New York Times).
- ENSO: This dataset contains high-dimensional information about Earth’s atmosphere with fine-grained time and space coverage from the 19th century to the early 21st century. This is a popular real-world dataset in climate science.
> W2 Evaluation details.
Thank you for the suggestion! We have revised the paper to include more details of the evaluation in Sec 4.1, and please kindly let us know if you feel any additional revision could further improve the clarity:
- Purpose: We evaluate whether COAT can propose and identify a set of high-level factors belonging to the Markov Blanket of the target variable Y.
- Construction: We prepare different high-level factors: 3 parents of Y, 1 child of Y, and 1 spouse of Y. These factors form a Markov blanket of Y. In addition, we also prepare one "disturbing" factor related to Y but not a part of this blanket.
- Expectation: A good method is expected to propose the 5 high-level factors (up to semantic meaning) and exclude the "disturbing" factor.
The key findings from the experiments are summarized below:
- COAT is more resistant to the "disturbing" factor, which is supported by the lower *NMB* column (number of factors outside the Markov blanket) in both Table 1 (page 8) and Table 5 (page 24).
- COAT filters out irrelevant factors from LLMs' prior knowledge that are not reflected by the data, which is supported by the lower *OT* column (number of other irrelevant factors).
- COAT robustly encourages the LLM to find more expected factors through the feedback, which is supported by the higher *MB* column (number of factors in the Markov blanket).
> W3 Lack of limitation discussion.
We indeed provided a discussion on limitations in the future work section on line 629, page 17. We revised the section title to **Limitations and Future Directions** to make it clearer.
> Q1 Scope of the work.
The problem setting in Sec 3.1 establishes the objective of this work: reliably leveraging an LLM to identify the underlying high-level factors in the Markov Blanket of a given target variable.
The inputs are merely the target variable and the unstructured data, which can be either text or images. After identifying a set of candidate high-level factors, the values of those factors can be either obtained by LLM annotations or by some external tools.
Meanwhile, identifiable causal discovery also requires certain assumptions about the data:
- **Faithfulness**. The empirical distribution of the data reflects the actual data-generating process.
- **No selection bias**. Otherwise, the faithfulness condition would be violated.
- **Sufficient sample size**. Our method involves statistical tests, so larger sample sizes give more reliable results.
> Q2 Use of identifiability mentioned on page 2.
The references here are classic books and survey papers about causal discovery methods with rigorous theoretical guarantees of identifiability. **We use them to show the important role of identifiability**, which is lacking in the current literature on using LLMs for causality-related tasks. The discussion in lines 33 to 38 gives rise to our main research question: How can LLMs **reliably** assist in revealing the causal mechanisms behind the real world?
> Q3 Description of problem setting.
The meanings of the sentence are:
- Why Y serves as a `guider`: Only a subset of the hidden high-level factors belongs to the Markov blanket of Y and correlates with Y. Therefore, one may use Y to guide the identification of the desired high-level factors.
- The `relation` actually means causal relations. In a Markov Blanket, there are three types of causal relations: Parents, Children, and Spouses. In the second type, some elements in z could be functions of Y.
We have revised the sentence to "Note that the target variable Y serves as a guider, and no prior assumption about the relation between x and Y is made" to avoid any potential misunderstandings.
> Q5 The meaning of C and p, and the significance of Proposition 3.3?
We define the two concepts to formalize the LLMs' ability to propose useful factors:
- **Perception Score** $p$: the probability that the LLM proposes such a new factor. This can be seen as a measure of the LLM's responsiveness to feedback.
- **Capacity Score** $C_\Psi$: the decreasing ratio of the conditional mutual information, as described in Eq 8. This can be seen as a **measure of the quality of the proposed factors**.
The significance of Proposition 3.3:
- It shows that COAT can provably find a set of high-level factors in the Markov blanket of Y with sufficient rounds if $p>0$ and $C_\Psi >0$.
- It also characterizes the influence of LLM's ability ($p$ and $C_\Psi$) on the efficiency of COAT by Eq 9.
> Q6 The details about benchmarks.
Thanks for the suggestion. These details (initially in appendix G) are added to the main paper now. A related discussion can be found in our response to W1.
> Q10 & L1 Limitations.
Now the limitation part (initially a part of Appendix B) is a standalone section. We also add discussions about *faithfulness*, *selection bias*, and *sample size*, as we responded. Note that COAT relies on the LLM for factor proposal, whose risk can be controlled given the faithfulness condition, as implied by our theoretical results.
---
Rebuttal 2:
Title: Responses to other questions from Reviewer AG9s
Comment: We give the responses to the other questions in the follow-up comment due to the character limits.
> Q4 Illustrative examples for Sec 3.2 and Fig. 2.
We revised Sec 3.2 and Fig. 2 with more illustrative examples:
- *Factor proposal*. After observing customers' comments on apples with different ratings, the LLM may propose that sweetness is a possible factor, together with a criterion for assigning values: e.g., 1 if the comment describes the apple as sweet, -1 if sour, and 0 if sweetness is not mentioned in the text. The LLM can also propose other factors, like the color of the apple.
- *Factor parsing*. Then, another LLM goes through all comments to assign values for each factor. Therefore we get tabular data with rows for comments and columns for *sweetness* and *color* factors.
- *Verification*. We use the tabular data to check these new factors. We find that *color* is conditionally independent of the rating given the existing factors in the current representation (line 202; currently empty), while *sweetness* is not. So we add *sweetness* to the current representation.
- *Feedback*. Following Sec 3.3, we find a subset of comments where the current representation cannot explain Y well. We pass these comments to step 1 to continue the next iteration.
The revised Fig.2 is included in the pdf.
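To make the verification step in the example above concrete: when the current representation is still empty, the check reduces to a plain (unconditional) independence test between each proposed factor and the rating. Below is a minimal sketch with hypothetical toy data for 200 parsed comments; the test choice (chi-square) and all values are our illustration, not the paper's implementation:

```python
import numpy as np
from scipy.stats import chi2_contingency

def independence_p_value(a, b):
    """Chi-square independence test between two discrete factor columns."""
    rows = {v: i for i, v in enumerate(sorted(set(a)))}
    cols = {v: i for i, v in enumerate(sorted(set(b)))}
    table = np.zeros((len(rows), len(cols)))
    for x, y in zip(a, b):
        table[rows[x], cols[y]] += 1
    _, p, _, _ = chi2_contingency(table)
    return p

# Hypothetical parsed comments: sweetness nearly determines the rating,
# while color is constructed to be exactly independent of it.
sweetness = [1] * 80 + [-1] * 80 + [0] * 40
rating    = [1] * 80 + [0] * 80 + [1] * 20 + [0] * 20
color     = [i % 2 for i in range(200)]

p_sweet = independence_p_value(sweetness, rating)  # very small: keep the factor
p_color = independence_p_value(color, rating)      # large: discard the factor
```

With this toy data, *sweetness* would pass the check and be added to the representation, while *color* would be rejected; in later iterations the test would condition on the factors already accepted.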
> Q7 Definition of MB, NMB, OT.
Formal definitions are now given in section 4 as follows:
- *MB* denotes the desired factors forming the Markov Blanket of Y.
- *NMB* denotes undesired factors that are relevant to the data but not in *MB*.
- *OT* denotes unexpected factors irrelevant to the data.
> Q8 Are Table 3 and Fig. 5 in the Appendix? If so, then mention that.
Table 3 is in the appendix, which is now mentioned in the main paper. Fig.5 is on page 8.
> Q9 I’m not convinced that the “realistic” benchmarks are realistic. It’s too bad I can’t gauge this from the main text.
Thanks for the important suggestion. These details from the appendix now also appear in the main paper, as described in our response to W1.
> Q11 Minor comments.
Thanks for pointing out the typos. They are fixed now.
*Shared notations* means: *the notations (like $p$ and $C_\Psi$) used here are defined in Definition 3.2*.
---
Rebuttal Comment 2.1:
Title: Thanks for clarifications
Comment: I thank the authors for their detailed response and hope they will revise their exposition in various sections of the manuscript and move some content to the main file. I will remain open to re-assessing the work based on the rebuttal and further discussion.
---
Reply to Comment 2.1.1:
Title: Thank you and the list of revisions made according to your suggestions
Comment: Dear Reviewer AG9s,
Thank you for considering re-assessing our work! We have revised our manuscript according to your suggestions and promise that all of your suggestions will be reflected in our final manuscript.
Regarding your suggestions for **exposition**, we have revised our manuscript as follows:
- On page 2, lines 37 and 38, we replaced ", which is the central concept of causality research [3,4,5]" with a more clear sentence to show our purpose: "which plays an important role in classic causal discovery literature [3,4,5]" (as the response to `Q2`).
- In Sec 3.1, when stating the problem definition, we included a clearer description of the scope of this work and respective pointers about other assumptions needed (as the response to `Q1`).
- In line 115, we replaced the last sentence containing "guider" and "relation" with a clearer one: "Note that the target variable Y serves as a guider, and no prior assumption about the relation between x and Y is made" (as the response to `Q3`).
- In Sec 3.2, we included the illustrative examples as stated in the initial response. We also improved Fig.2 with the [uploaded pdf](https://openreview.net/attachment?id=5nfJcSZHxp&name=pdf) (as the response to `Q4`).
- In Sec 3.4, after Definition 3.2, we included *the motivation and intuitive interpretation about the two metrics ($p$ for LLM responsiveness and $C_\Psi$ for quality of factors)* (as the response to `Q5`);
- In Sec 3.4, we also *explicitly interpreted Proposition 3.3*: Intuitively, Proposition 3.3 also characterizes the influence of prompt templates, the LLM responsiveness, and the quality of factors on the performance of COAT via the two proposed measures: ... (as the response to `Q5`);
- In Sec 4, before Sec 4.1, we included one sentence to state the experiment's purpose: "We evaluate whether COAT can propose and identify a set of high-level factors belonging to the Markov Blanket of the target variable Y." (as the response to `W2`).
- In Sec 4, after the 'Benchmark Construction' paragraph, we included one sentence to state the expected result of an ideal method: "A good method is expected to propose the five high-level factors (up to semantic meaning) and exclude the "disturbing" factor." (as the response to `W2`).
- In Sec 4, at the end of Sec 4.1, we included the definitions of MB, NMB, and OT. (as the response to `Q7`).
- In line 285, we replace "Table 3 and Fig. 5" with "Table 3 (in Appendix E.1) and Fig. 5 (on page 8)" to make it more convenient. (as the response to `Q8`).
- In Appendix B, we separated the discussion around line 629 into an individual 'Limitation' section. We also added discussions about faithfulness, selection bias, and sample size. (as the response to `W3`, `Q10`, and `L1`)
- Minor typos are fixed. (as the response to `Q11`).
Regarding your suggestions for **moving some content to the main file**, we have revised our manuscript as follows:
- In Sec 4.1, at the beginning of the 'Benchmark Construction' paragraph, we also included one sentence to show the concrete details: "We prepare different high-level factors: 3 parents of Y, 1 child of Y, and 1 spouse of Y. These factors form a Markov blanket of Y. In addition, we also prepare one "disturbing" factor related to Y but not a part of this blanket" (as the response to `W2` and `Q6`).
- In Sec 4.1, at the end of the 'Benchmark Construction' paragraph, we also moved the sample size of the Apple Gastronome to the main file: "we generated 200 samples for LLMs' analysis and annotation." (as the response to `Q6`).
- In Sec 5.1, at the end of the 'Benchmark Construction' paragraph, we also moved the sample size of the Neuropathic to the main file: "We generated 100 samples for LLMs' analysis; since the number of possible factors is finite, we generate 1000 tabular data for CI tests." (as the response to `Q6`).
- At the beginning of Sec 4.2, we included the summary of key findings from the full experiment results (as given in Table 5 in Appendix E.4) in the main file (as the response to `W2`).
- At the beginning of Sec 5.1, we moved short introductions about the three realistic datasets to the main file. (as the response to `W1`, `Q6`, and `Q9`).
Please kindly let us know if you feel any additional revisions could further help improve the clarity and the exposition of our work. We sincerely appreciate and are looking forward to your re-evaluation combining our rebuttal and the discussion. Thank you again for your time and constructive suggestions! | Rebuttal 1:
Rebuttal: Dear Reviewers,
Thank you for your time and constructive comments on our work. To summarize, all reviewers agree **the paper's proposal to reliably advance rigorous causal discovery methods with the advantages of foundation models like LLMs is novel and valuable** (AG9s, szEm, 1cJQ, PF9K). The method is sufficiently evaluated on constructed benchmarks (szEm, 1cJQ, PF9K). The soundness is justified by the provided theoretical results (szEm, 1cJQ). The paper also develops novel metrics to measure the LLMs' ability to propose desired factors (1cJQ, PF9K). Comprehensive case studies are presented on three realistic datasets (szEm, 1cJQ).
We believe all of the reviewers' concerns can be addressed. In the following, we brief our responses to the main concerns and suggestions raised in the review:
- **The guarantee on factor proposal** (AG9s, 1cJQ, PF9K)
- In this paper, we show COAT can provably identify a Markov Blanket of a given target variable. In section 3.4, we propose two new metrics ($p$ and $C_\Psi$) to characterize the capability of LLMs in identifying useful high-level factors and also establish the corresponding guarantee.
- We further discuss the intuition behind $p$ and $C_\Psi$, and also give the convergence rate of COAT in the response to Reviewer [1cJQ](https://openreview.net/forum?id=w50ICQC6QJ&noteId=jw35g23fxl).
- **Baselines and ablation study** (szEm, 1cJQ)
- We construct an additional stronger baseline with CoT prompting to enhance the evaluation. We present the key results and interpretation in the responses to Reviewers [szEm](https://openreview.net/forum?id=w50ICQC6QJ&noteId=6vJZbvuIRV) and [1cJQ](https://openreview.net/forum?id=w50ICQC6QJ&noteId=jw35g23fxl).
- We provide empirical evidence to show COAT is not sensitive to hyperparameters like group size or cluster size in the response to Reviewer [szEm](https://openreview.net/forum?id=w50ICQC6QJ&noteId=6vJZbvuIRV).
- We also provide empirical evidence to show COAT is not sensitive to the choice of prompt templates in the response to Reviewer [1cJQ](https://openreview.net/forum?id=w50ICQC6QJ&noteId=jw35g23fxl).
We also provided an anonymous link to our sample code for reproducing the results in our paper to the Area Chair, according to the NeurIPS requirements.
Please let us know if there are any other concerns, and we are happy to discuss them. We would appreciate it if you could take our responses into consideration when making the final evaluation of our work.
Sincerely,
Authors.
Pdf: /pdf/70c19cb1cba6ca6fe669f212c360ef6ad4e4001f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can Language Models Learn to Skip Steps? | Accept (poster) | Summary: The paper explores the ability of language models to skip steps in their reasoning processes. The authors introduce a controlled framework to stimulate step-skipping behavior by iteratively refining models to generate shorter and accurate reasoning paths. The study demonstrates that models can develop this ability under guidance, leading to increased efficiency without sacrificing accuracy. The paper presents empirical results showing enhanced generalization capabilities in out-of-domain scenarios after fine-tuning on expanded datasets that include both complete and skipped reasoning sequences.
Strengths: - The empirical results are robust in three domains, showing benefits in efficiency of the proposed method.
- The paper is clearly written and well-organized, making it easy to follow the authors' methodology and findings.
Weaknesses: - Only one backbone model is considered. Experiments across model families and model sizes should be considered to show the generalization ability of the proposed methods.
- The OOD test is actually a harder in-domain test. I am curious about the cross-domain effect of the proposed method. For example, what is the effect of training on "Analog of Algebra" and testing on "Multi-digit Addition", given that the skipping ability should be a general ability across different domains?
- In methodology: "We begin with a training dataset D0, which contains detailed full-step reasoning answers to the questions." -> How is the full-step reasoning data created?
Technical Quality: 2
Clarity: 3
Questions for Authors: see weakness
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable and constructive feedback. We provide specific responses and clarifications as follows.
**[W1]**
Thank you for the suggestion. We acknowledge the importance of testing our method across different models. Following your advice, we performed additional experiments on Phi-3-mini for the Analog of Algebra task. These experiments confirm that our method is generalizable to other model series. We will include the results across all tasks in the revision.
| | In-domain | | OOD-easy | | OOD-hard | |
|----------|--------------|------------|------------|------------|------------|------------|
| | Acc | Avg steps | Acc | Avg steps | Acc | Avg steps |
| Cold Start | 99.5 | 3.18 | 96.86 | 6.16 | 1.67 | 9.83 |
| Iter 1 | 99.7 | 3.18 | 99.22 | 6.13 | 0.95 | 10.61 |
| Iter 2 | 99.9 | 3.11 | 100 | 6.06 | 5.48 | 9.06 |
| Iter 3 | 99.9 | 2.9 | 99.6 | 5.79 | 6.67 | 7.74 |
| Iter 4 | 99.9 | 2.64 | 99.41 | 5.36 | 9.05 | 7.46 |
| Iter 5 | 99.7 | 2.43 | 98.82 | 5.23 | 10.24 | 7.77 |
*Table: Phi-3-mini on Analog of Algebra.*
**[W2]**
Thank you for your valuable suggestion! We acknowledge the importance of evaluating the generalizability of our method. We are actively working on additional experiments and will provide an update on the results once the experiments are completed.
**[W3]**
For the three datasets used in our work, we use heuristic rules to automatically generate the reasoning processes. For example, in the Analog of Algebra dataset, we first create full-step reasoning data using standard algebra rules on operators. Then, we replace the variables with analog symbols. We also plan to release the code for creating the dataset in the future.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. Given the authors' response, W2 has not been resolved. In addition, I am curious about how the authors ensure the quality of the auto-generated dataset. Is there any human evaluation of that?
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback! We address the remaining concerns as follows.
### **[W2] Generalization across different domains**
We appreciate the insightful questions regarding the generalization of step-skipping ability across different tasks. To address these concerns, we conduct a controlled experiment. Specifically, we sample 2000 training examples from each dataset, along with 1600 step-skipping answers where exactly one step is successfully skipped, drawn from these 2000 questions in Iteration 5. This approach ensures that all three tasks have an equal amount of full-step and step-skipping data.
We focus on evaluating a model that requires the number of steps as input. During testing, we instruct the model to solve each problem in $n−1$ steps, where $n$ is the full step count for that problem. This setting allows us to directly assess the step-skipping behavior by measuring accuracy under this evaluation setting. If the accuracy increases under this setting, it indicates improved step-skipping ability.
We utilize the Phi-3-mini model across all tasks. In the table below, the "withheld task" refers to the task that does not have step-skipping data in the training phase. The "All" setting includes only full-step answers across all three tasks without any step-skipping data.
To clarify the settings:
* All setting = task1-full + task2-full + task3-full
* Withheld setting = task1-full + *task1-skip* + task2-full + *task2-skip* + task3-full
*Table 1: Evaluation task - Analog of Algebra*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|---------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 51.3 | 2.65 | 44.5 | 5.58 | 1.9 | 10.68 |
| Analog of Algebra | 53.9 | 2.71 | 56.9 | 5.74 | 7.1 | 10.97 |
*Table 2: Evaluation task - Multi-digit Addition*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|---------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 100.0 | 2.86 | 22.4 | 4.71 | 4.2 | 5.39 |
| Multi-digit Addition| 95.7 | 2.59 | 34.3 | 4.75 | 2.4 | 5.35 |
*Table 3: Evaluation task - Directional Reasoning*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|------------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 100.0 | 7.01 | 96.0 | 15.46 | 75.8 | 25.03 |
| Directional Reasoning | 97.8 | 6.98 | 96.2 | 15.42 | 80.0 | 24.92 |
The results show that the presence of step-skipping data in the training of other tasks positively impacts the performance of the withheld task. Compared to the "All" setting, models trained with step-skipping data in other tasks demonstrate improved step-skipping performance across all datasets with a comparable number of steps. Since the model is explicitly instructed to solve problems with fewer steps, the average steps in the withheld task remain similar to the "All" setting. However, the observed boost in accuracy suggests that the model benefits from the generalized step-skipping ability acquired from other tasks. This confirms the generalizability of the step-skipping ability across different domains.
We appreciate the reviewers' insightful questions and believe that these findings will further strengthen our work. We will incorporate these findings into our revision.
### **[Q] How to ensure the quality of the auto-generated dataset?**
Thank you for your additional question!
For the Analog of Algebra task, we ensure the quality of the auto-generated dataset by creating full-step reasoning data using standard algebraic rules applied to operators. To further verify the validity and consistency of the intermediate steps, we utilize the SymPy library. Specifically, we perform SymPy simplification for each intermediate step and ensure that the resulting equation remains algebraically equivalent to the final simplified answer.
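As a minimal sketch of this equivalence check (our illustration for a one-variable linear equation, not the authors' actual verification code), each intermediate equation can be compared with the final simplified answer by checking that their SymPy solution sets coincide:

```python
import sympy as sp

x = sp.symbols('x')

def same_solution(eq, ref, var=x):
    """Two equations are treated as algebraically equivalent
    if they have identical solution sets for the variable."""
    return sp.solveset(eq, var) == sp.solveset(ref, var)

# A toy full-step trace for "2x + 1 = 7" and its final simplified answer.
final_answer = sp.Eq(x, 3)
intermediate_steps = [sp.Eq(2 * x + 1, 7), sp.Eq(2 * x, 6), sp.Eq(x, 3)]

all_steps_valid = all(same_solution(s, final_answer) for s in intermediate_steps)
```

A step that changes the solution set, such as `Eq(2*x, 7)`, would fail the check and flag an invalid intermediate equation.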
For the Multi-digit Addition task, the internal results are generated using Python's built-in calculation modules, ensuring accurate computations.
For the Directional Reasoning task, the clarity of the question formulation guarantees that all intermediate steps are 100% correct. Each step is derived through rule-based decomposition, ensuring the correctness of the intermediate steps.
All the data is guaranteed to be correct due to the heuristic nature of the creation process. We will make sure that the data creation and quality control processes are further clarified in the revision. | Summary: This paper proposes an iterative training method that helps sequence models learn to skip steps. The method starts from a training set with full-length solutions or mixed with some skipped-length solutions. At each stage a model learns these solutions with the instruction “Solve it in n steps” and is prompted to generate shorter answers. Correct shorter answers are added to the training set. The effect of this approach is tested using LlaMa-7B on three tasks, including algebraic evaluations, multi-digit addition, and a direction inference task.
Strengths: The proposed skip reasoning pipeline is interesting and was evaluated against a diverse set of tasks with different levels of OOD generalization tests.
The authors conducted detailed analyses to understand the effect of the training pipeline, e.g., Figure 5 with the multi-digit addition is very informative.
The overall presentation is very clear and easy to follow.
Weaknesses: My main concern is the generalizability of this method. As shown in the paper, the model largely benefited from the warm start setup that includes some skipped problems in the first training set, and has trouble generalizing to problems requiring more steps. One interesting generalization test would be to train on a mixture of all three tasks, but withhold adding skipped step instances for one task, and see if the model can generalize skipping steps on the withheld task.
The shorter answers generated during the iterative process also don't seem quite “generated by the model itself”, as filtering out correct answers would require oracle knowledge. Is it assumed that correct answers are any exact subset of the full-length solution?
Technical Quality: 3
Clarity: 3
Questions for Authors: The accuracy metric measures final answer accuracy, are intermediate steps correct?
I'm unsure if it makes sense to read too much into the average step metric when accuracy is low.
In figure 4, what does the accuracy look like for only problems where the model skipped steps?
What do you think makes multi-digit addition and directional inference more difficult than the algebra task? Especially given that the accuracy and average steps for OOD problems are still pretty bad for multi-digit addition, even with the warm start and a few iterations in.
What is the change in the ratio between D_init and D_skipped over the iteration process?
What’s the range of i (in n-i) of the added skip-step instances in D_skipped under warm start?
Line 221 has an incorrect figure reference.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Noted in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable and constructive feedback. We provide specific responses and clarifications as follows.
**[W1]**
Thank you for your valuable suggestion! We acknowledge the importance of evaluating the generalizability of our method. We are actively working on additional experiments and will provide an update on the results once the experiments are completed.
**[W2]**
In the iteration phase, we query the model to generate shorter answers on the training set and filter based on the correctness of the final answers. The data used are the labels required for training any model, so our filtering process does not rely on any additional information beyond the training data.
When generating answers with fewer steps, the model determines which steps to skip or omit. Thus, the skipping behaviors are entirely generated by the model itself, and the filtering serves as guidance on how to strengthen such behaviors.
Correct answers can indeed be subsets of the full-length solution. The models may actively omit certain steps from the full-length answer to generate the shorter output.
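The filtering described above can be sketched as a single refinement iteration; every name here (`refine`, `toy_model`, the answer format) is our own stand-in for illustration, not the paper's code:

```python
def count_steps(answer):
    return len(answer["steps"])

def final_result(answer):
    return answer["result"]

def refine(generate_shorter, dataset):
    """One iteration: query a shorter answer for each training question and
    keep it only if its final result matches the training label."""
    accepted = []
    for question, full in dataset:
        short = generate_shorter(question, count_steps(full) - 1)
        if short is not None and final_result(short) == final_result(full):
            accepted.append((question, short))
    return dataset + accepted

# Toy stand-in for the fine-tuned model: it can shorten the first
# question correctly but fails to answer the second one.
def toy_model(question, n_steps):
    if question == "1+2+3":
        return {"steps": ["1+2+3 = 6"], "result": 6}
    return None

data = [
    ("1+2+3", {"steps": ["1+2 = 3", "3+3 = 6"], "result": 6}),
    ("2*3+4", {"steps": ["2*3 = 6", "6+4 = 10"], "result": 10}),
]
expanded = refine(toy_model, data)  # one skipped-step answer is added
```

The key point is that the filter only consults the final-answer labels already present in the training set, so no oracle knowledge beyond the training data is needed.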
**[Q1]**
Thank you for this insightful question. Our accuracy metric does not measure the correctness of intermediate steps. Automatically evaluating the correctness of the internal reasoning process is a common issue and can be challenging [1]. Previous work [2] has shown that training with incorrect reasoning traces can still lead to improvements. From our manual analysis of Analog of Algebra on the in-domain test set, we examined 50 skipped answers and found that 96% of them have correct reasoning processes. We will provide this analysis across all tasks in the revision.
[1]Solving math word problems with process- and outcome-based feedback. \
[2]MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models.
**[Q2]**
The primary motivation of this metric is to measure whether the model has learned the ability to skip steps. Even when the accuracy is not high, the model still uses fewer steps overall while maintaining its overall accuracy. This consistent performance, both when accuracy is low and high, indicates that the model is developing this step-skipping ability and demonstrating it on both in-domain and out-of-domain data.
**[Q3]**
Thank you for your insightful question. We will include the analysis across all tasks in the revision. Here we present the results for the Analog of Algebra task. The Skipping Ratio indicates the proportion of answers where steps were skipped in the entire test set. The Acc measures the accuracy of these skipping answers. The results show that as iterations progress, the model tends to skip steps in more problems, and the accuracy increases accordingly. In the OOD-hard setting, due to the lengthy and difficult nature of the problems, the model initially tends to use shorter answers with lower accuracy. However, the overall accuracy increases with each iteration.
| | In-domain | | OOD-easy | | OOD-hard | |
|---------|--------------------|---------|--------------------|---------|--------------------|---------|
| | Skipping Ratio (%) | Acc (%) | Skipping Ratio (%) | Acc (%) | Skipping Ratio (%) | Acc (%) |
| Iter 1 | 6.0 | 100 | 15.4 | 63.3 | 69.8 | 4.1 |
| Iter 2 | 16.8 | 99.4 | 21.5 | 73.7 | 34.8 | 5.5 |
| Iter 3 | 32.7 | 100 | 29.9 | 79.1 | 52.4 | 6.4 |
| Iter 4 | 69.3 | 99.7 | 58.9 | 88.4 | 63.6 | 15.0 |
| Iter 5 | 68.8 | 99.7 | 63.2 | 90.0 | 61.9 | 7.7 |
*Table: Skipping ratio across the iterations and the accuracy of problems with skipping-step answers.*
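For concreteness, the two metrics in the table above can be computed as in the following illustrative sketch (field names such as `pred_steps`, `ref_steps`, and `correct` are our own, not from the paper):

```python
def skipping_metrics(results):
    """results: list of dicts with 'pred_steps', 'ref_steps', 'correct' keys.

    Skipping Ratio: fraction of answers that use fewer steps than the
    full-length reference. Acc: accuracy restricted to those answers."""
    skipped = [r for r in results if r["pred_steps"] < r["ref_steps"]]
    ratio = len(skipped) / len(results)
    acc = sum(r["correct"] for r in skipped) / len(skipped) if skipped else 0.0
    return ratio, acc

demo = [
    {"pred_steps": 2, "ref_steps": 4, "correct": True},   # skipped, right
    {"pred_steps": 3, "ref_steps": 4, "correct": False},  # skipped, wrong
    {"pred_steps": 4, "ref_steps": 4, "correct": True},   # full-length
    {"pred_steps": 4, "ref_steps": 4, "correct": False},  # full-length
]
print(skipping_metrics(demo))  # (0.5, 0.5)
```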
**[Q4]**
These two tasks involve more steps and actions in the answers. From our manual observation, we noticed that the model tends to overfit on the multi-digit addition task. As the training data only contains numbers with up to three digits, the model overfits to three-digit additions. Here is an incorrect answer from the model's prediction:
Q: 4423 + 684\
Prediction:
1. Add the 10^0's place digits: 2 + 4 (plus carry 0) = 6 (write down 6, no carry over).
2. Add the 10^1's place digits: 4 + 8 (plus carry 1) = 12 (write down 2, carry over 1).
3. Add the 10^2's place digits: 4 + 6 (plus carry 1) = 11 (write down 1, carry over 1).
4. Carry the final carry over 1 to the next place. \
Thus, the answer is 11266.
In this example, the model actually performs the addition for 442 + 684, which indicates a biased overfitting from the initialization phase. During the iteration phase, the skipping data helps to mitigate this overfitting, thereby improving accuracy.
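For reference, the place-value procedure the task expects can be sketched in a few lines (our illustration, not the paper's code); running it on the truncated operand 442 reproduces the sum the overfitted model effectively computed:

```python
def add_by_place(a, b):
    """Grade-school addition: add digits place by place, tracking carries."""
    result, carry, place = 0, 0, 0
    while a or b or carry:
        s = a % 10 + b % 10 + carry
        result += (s % 10) * 10 ** place
        carry, a, b, place = s // 10, a // 10, b // 10, place + 1
    return result

print(add_by_place(4423, 684))  # 5107, the correct answer
print(add_by_place(442, 684))   # 1126, the sum the model computed after dropping a digit
```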
**[Q5]**
On the task of Analog of Algebra, the original training set contains 5770 examples. As the iteration proceeds, the number of mixed data points (Total) increases significantly, nearly tripling by the fifth iteration. Here, the column $i$ gives the number of valid answers that skip $i$ steps in the reasoning process.
| | Total | i=1 | i=2 |
|---------|-------|--------|--------|
|Initialization | 5770 | 0 | 0 |
|Iter 1 | 5945 | 153 | 22 |
|Iter 2 | 7078 | 1050 | 258 |
|Iter 3 | 10010 | 2895 | 1345 |
|Iter 4 | 13744 | 4753 | 3221 |
|Iter 5 | 15048 | 5255 | 4023 |
*Table: The change in data volume across iterations.*
**[Q6]**
In the warm start phase, we only use $i=1$ (answers that skip exactly one step) in the manual skipping data.
**[Q7]**
Thank you for pointing it out. We will revise the figure reference accordingly.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough responses, they all make sense, and I appreciate the additional concrete evidence. I think adding/adjusting the metrics in your response to Q1 and Q3 on all tasks could really enhance the paper, and have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We are grateful for your recognition of our work! We here address the remaining concern as follows.
**[W1] Generalization across different domains**
We appreciate the insightful question regarding the generalization of step-skipping ability across different tasks. To address this concern, we conduct a controlled experiment. Specifically, we sample 2000 training examples from each dataset, along with 1600 step-skipping answers where exactly one step is successfully skipped, drawn from these 2000 questions in Iteration 5. This approach ensures that all three tasks have an equal amount of full-step and step-skipping data.
We utilize the Phi-3-mini model across all tasks. In the table below, the "withheld task" refers to the task that does not have step-skipping data in the training phase. The "All" setting includes only full-step answers across all three tasks without any step-skipping data.
To clarify the settings:
* All setting = task1-full + task2-full + task3-full
* Withheld setting = task1-full + *task1-skip* + task2-full + *task2-skip* + task3-full
*Table 1: Evaluation task - Analog of Algebra*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|---------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 51.3 | 2.65 | 44.5 | 5.58 | 1.9 | 10.68 |
| Analog of Algebra | 53.9 | 2.71 | 56.9 | 5.74 | 7.1 | 10.97 |
*Table 2: Evaluation task - Multi-digit Addition*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|---------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 100.0 | 2.86 | 22.4 | 4.71 | 4.2 | 5.39 |
| Multi-digit Addition| 95.7 | 2.59 | 34.3 | 4.75 | 2.4 | 5.35 |
*Table 3: Evaluation task - Directional Reasoning*
| Withheld task | In-domain | Avg steps | OOD-easy | Avg steps | OOD-hard | Avg steps |
|------------------------|-----------|-----------|----------|-----------|----------|-----------|
| All | 100.0 | 7.01 | 96.0 | 15.46 | 75.8 | 25.03 |
| Directional Reasoning | 97.8 | 6.98 | 96.2 | 15.42 | 80.0 | 24.92 |
The results show that the presence of step-skipping data in the training of other tasks positively impacts the performance of the withheld task. Compared to the "All" setting, models trained with step-skipping data in other tasks demonstrate improved step-skipping performance across all datasets with a comparable number of steps. Since the model is explicitly instructed to solve problems with fewer steps, the average steps in the withheld task remain similar to the "All" setting. However, the observed boost in accuracy suggests that the model benefits from the generalized step-skipping ability acquired from other tasks. This confirms the generalizability of the step-skipping ability across different domains.
We appreciate your insightful questions and believe that these findings will further strengthen our work. We will make sure to incorporate these valuable suggestions into our revision. Thank you again for taking the time to review our work thoroughly. | Summary: This paper proposes to teach LLMs to deliberately skip steps when doing complex tasks involving multi-step reasoning. The authors use self-generated inference paths with fewer steps to fine-tune the models, which is similar to self-distillation. The authors conduct experiments on a few controlled tasks, showing that the proposed approach can effectively reduce the reasoning steps while maintaining performance.
Strengths: 1. The idea of teaching LLMs to skip steps following the human reasoning process is intuitive and makes sense.
2. The proposed method is overall technically sound and well described.
3. The paper is in general well-written and easy to follow.
4. Experimental results confirm the effectiveness of the proposed approach, at least on these "artificial" tasks.
Weaknesses: 1. The experiments are not solid because the tasks considered in the experiments are very artificial and not representative of real-world reasoning tasks. The paper could be made much stronger by conducting experiments on tasks/datasets such as GSM8K/MATH or coding tasks, instead of simple reasoning tasks. Without an empirical study on realistic tasks, it is hard to confirm the contribution and usefulness of the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback and valuable suggestion! We acknowledge the importance of using established practical benchmarks. We are currently conducting additional experiments with these datasets and will provide an update on the results once the experiments are completed. | Summary: The paper proposes a method for training an LLM to solve reasoning problems using fewer verbalized reasoning steps than it is naturally encouraged to by a fixed training dataset. The resulting model is shown to maintain or improve performance on in-distribution data and OOD data testing extrapolation w.r.t. length or compositionality, while using fewer reasoning steps at inference time. Analysis shows that performance gains are concentrated around problems requiring an intermediate number of reasoning steps, rather than very few reasoning steps. Experiments are conducted with Llama-2-7b on three synthetic datasets. The method itself works by using warm-start data with mixed length reasoning demonstrations, followed by bootstrapped training data created by controlling model generations with control codes (instructions) combined with filtering model generations for correctness to create new gold data.
Strengths: - Very important: The idea of shortening reasoning steps, particularly to mimic human reasoning that is variable in its verbalized length, is a very interesting and practical direction.
- Very important: Results are positive and promising for model generalization at an increased level of efficiency. Particularly interesting are results suggesting that model performance can increase on difficult OOD data by virtue of skipping some reasoning steps. Initially, CoT was found to improve OOD generalization, but it seems that this iteration on CoT could improve OOD performance even more in some situations.
- Important: The paper is overall straightforward to read and understand, with only a few exceptions.
- Of some importance: The connection to easy-to-hard generalization was interesting to me. That this method could improve OOD performance, specifically length/compositional generalization, was very interesting.
Weaknesses: - Important: What are the instructions at inference time? Do you require a ground-truth number of reasoning steps to run the model at inference time? If so, this important detail is missing from the paper and could make the method difficult to use in practice if it is not known in advance how difficult problems are. Would the method be robust to misspecified instructions at inference time?
- Important: I find it a little confusing to reconcile the results of Sec. 5.1 with Sec. 5.2. Sec. 5.1 makes it look like using fewer steps greatly hurts model performance, while Sec. 5.2 makes it seem like using fewer steps does not hurt performance (specifically the Warm start rows, relative to Cold start baselines).
- Of some importance: The data is a little artificial. There are existing reasoning and compositional reasoning benchmarks that could be appropriate for this work (though they could require stronger models), including SCAN (https://arxiv.org/abs/1711.00350), GSM8k, StrategyQA, and MATH datasets. However, this is not a major weakness as using clean, controlled datasets is advantageous for studying these kinds of phenomena and they enable automatic construction of warm start data.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why keep the cold start data in the training data if the bootstrapped data is good or better? Do you have ablations that suggest what mixture of the data is best?
- Suggested experiment: if you could have two models that are similar except for one being better at long-context reasoning, it would be interesting to see how your method affects each model. The reason for this is that compressing reasoning length could be beneficial by virtue of reducing the context length, rather than some other inherent benefit like allowing the model to spend more computation on harder steps. Such an experiment would help disambiguate if the improvement comes from shortening context length or from using fewer steps.
- Note L.68-69 is heavily disputed by follow work on ToM, e.g. https://arxiv.org/pdf/2310.19619
- Just so you’re aware, some highly related work has appeared contemporaneously: (1) https://arxiv.org/pdf/2405.14838, (2) https://arxiv.org/pdf/2407.06023
- L.34: use an em-dash rather than single dash here
- L.221: Fig7(a) should read Fig4(a)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Discussion was adequate, but it could be worth mentioning that experiments were conducted with only one model and three synthetic datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and encouraging feedback. We are pleased that you found our approach and results promising and will continue to refine and expand upon these ideas in future revisions.
**[W1]**
We only require the step number as input when generating the skipped data for the training set during the iteration phase. At inference time, we train an additional standard model (described as Eq. 2), which only takes the question as input, without the need for a number of reasoning steps. The instruction used is:
"Transform the expression to isolate ❤ on the left side of the ↔. \
Question: [[QUESTION]] \
Answer:"
This approach ensures the model is robust and practical for real-world applications, as it does not rely on the ground-truth number of reasoning steps during inference.
**[W2]**
We apologize for the confusion regarding the results in Sections 5.1 and 5.2. We will provide more detailed captions to clarify this. In Section 5.1, we analyze the initialized model $M_0$, which is trained with full step data only and requires the step number as input. When asked to generate answers in fewer steps, the model struggles to maintain accuracy because it has not been trained on the skipping-step data. The primary purpose of $M_0$ is to generate skipped step data during the iteration phase, and it is not expected to achieve high accuracy initially—only sufficient accuracy to generate usable data.
In Section 5.2, we evaluate the resulting standard model $M^{standard}$, which is trained on both the full step and the generated skipped data, and does not require the step number as input. This training method allows $M^{standard}$ to maintain accuracy while using fewer steps, demonstrating the effectiveness of our approach.
**[W3]**
Thank you for your valuable suggestion! We acknowledge the importance of using established benchmarks. We are actively working on additional experiments using these datasets and will provide an update on the results once the experiments are completed.
**[Q1]**
Mixing the data allows us to progress gradually through the iterations. At the beginning of the iterations, the valid skipping data may not be sufficient to help the model fully understand the task. For example, on Analog of Algebra, Iter 1 has fewer than 150 valid skipping examples to use. We are also working on additional ablation analysis to provide empirical support following your advice.
**[Q2]**
Thank you for this insightful suggestion! It would indeed be valuable to investigate where the improvements come from. We will consider incorporating this experiment in future revisions to better understand the underlying reasons.
**[Q3]**
Thank you for pointing this out. We will include this in the related work.
**[Q4]**
Thank you for providing the references. We are excited to see related work in the community that shares similar ideas and interests! We will incorporate these references in the revised version.
**[Q5, Q6]**
Thank you for pointing out these typos. We will revise them accordingly.
**[Limitation]**
We additionally conduct experiments with Phi-3-mini on the Analog of Algebra task. The results confirm that our method is generalizable to other model series. We will include the results across all tasks in the revision.
| | In-domain | | OOD-easy | | OOD-hard | |
|----------|--------------|------------|------------|------------|------------|------------|
| | Acc | Avg steps | Acc | Avg steps | Acc | Avg steps |
| Cold Start | 99.5 | 3.18 | 96.86 | 6.16 | 1.67 | 9.83 |
| Iter 1 | 99.7 | 3.18 | 99.22 | 6.13 | 0.95 | 10.61 |
| Iter 2 | 99.9 | 3.11 | 100.0 | 6.06 | 5.48 | 9.06 |
| Iter 3 | 99.9 | 2.90 | 99.6 | 5.79 | 6.67 | 7.74 |
| Iter 4 | 99.9 | 2.64 | 99.41 | 5.36 | 9.05 | 7.46 |
| Iter 5 | 99.7 | 2.43 | 98.82 | 5.23 | 10.24 | 7.77 |
Table: Phi-3-mini on Analog of Algebra.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: >We only require the step number as input when generating the skipped data for the training set during the iteration phase. At inference time…
Thanks for the clarification! This highlights to me that there may not be strong control over how much step skipping the model does. Rather, the model learns to skip some portion of the time based on the training data. I think this is fine. I think the strength of this paper is interesting results around step skipping and model generalization, and not a new method for controlling how many steps a model takes when solving a problem.
>We apologize for the confusion regarding the results in Sections 5.1 and 5.2…
Thanks this makes sense!
>We are actively working on additional experiments using these datasets
Great, please do upload these if they finish in time.
---
Based on the above discussion, I plan on keeping my score at 7. If the results were especially interesting on additional, harder datasets, I could increase it, but I think the core contribution of the paper is already solid based on the three datasets used.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for your recognition and support of our work!
We have conducted additional experiments to evaluate generalization across different domains. Our findings indicate that step-skipping data from other tasks positively impacts the step-skipping performance on the withheld task. We kindly invite you to review the detailed results in our response to Reviewer DBKK and Reviewer E6Qj.
We will make sure to incorporate these valuable suggestions into our revision. Once again, we sincerely appreciate the time and effort you’ve taken to thoroughly review our work. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimization Can Learn Johnson Lindenstrauss Embeddings | Accept (poster) | Summary: This work shows that a deterministic optimization procedure can find a matrix $A$ that satisfies the Johnson Lindenstrauss guarantee. That is, a matrix $A$ maps a set of $n$ vectors to a lower dimensional space while preserving all pairwise distances up to some chosen multiplicative distortion. Typically, $A$ is constructed by sampling it from a random matrix distribution with i.i.d. entries. The authors prove that attempting to directly optimize the entries of $A$ through an optimization procedure by minimizing the maximum distortion is prone to being stuck at local minima. However, the authors show that by optimizing the mean of each entry and the entry-wise variance of the distribution $A$ is sampled from, one can maintain a fixed probability of $A$ being a JL-embedding while at the same time guaranteeing that the entry-wise variance $\sigma$ goes to zero. They then show that, when $\sigma$ is sufficiently small, one may use the optimized expectation of $A$ as the embedding matrix while only slightly increasing the maximum distortion, thereby deterministically finding the desired JL embedding matrix $A$. They show that $\rho$-SOSPs (second order stationary points) have sufficiently low variance when $\rho$ is small, and finally show that a method for finding $\rho$-SOSPs suffices to solve the designed optimization problem.
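For readers who want a concrete handle on the guarantee being discussed, the JL property can be checked directly: $A$ is an $\epsilon$-embedding for a point set if every pairwise squared distance is preserved up to a $(1 \pm \epsilon)$ factor. The sketch below (ours, with arbitrary sizes) measures the worst-case distortion of the classical i.i.d. Gaussian construction:

```python
import numpy as np

def max_pairwise_distortion(A, X):
    """Largest relative distortion of pairwise squared distances under x -> A @ x."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d2 = np.sum((X[i] - X[j]) ** 2)
            e2 = np.sum((A @ (X[i] - X[j])) ** 2)
            worst = max(worst, abs(e2 - d2) / d2)
    return worst

rng = np.random.default_rng(0)
d, k, n = 256, 64, 20
X = rng.standard_normal((n, d))
A = rng.standard_normal((k, d)) / np.sqrt(k)  # classical random JL construction
print(max_pairwise_distortion(A, X))          # typically well below 1 at these sizes
```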
Strengths: Overall, the paper is clearly written and well-motivated. The intuition of the approach and analysis is easy to follow.
The key idea of optimizing the parameters of a random matrix distribution to preserve the JL-embedding property while reducing the entry-wise variance seems like an innovative approach. The authors point out the original space of matrices is contained in this larger probabilistic space, since a deterministic matrix $A$ is equivalent to having mean $A$ and zero variance. Hence, this can be seen as a probabilistic relaxation of the original matrix optimization problem. I have not seen this type of relaxation used in the field of matrix sketching or more generally randomized numerical linear algebra before, and I believe it may be useful for other problems in the area. I am not very familiar with diffusion models, so I cannot speak on the novelty of the approach regarding that area.
The empirical results are also strong in the sense that they show this procedure for constructing a JL embedding tends to achieve a much lower distortion factor than randomized constructions for a fixed dimension.
Weaknesses: The iterative method to find the matrix $A$ takes $\operatorname{poly}(n, k, d)$ steps, i.e., the complexity is proven to be polynomial but not explicitly determined. Since the paper is primarily theoretical with only limited experiments, it is unclear how efficient this method is in practice.
While the results seem very interesting theoretically, the paper could be strengthened by pointing out some practical applications where this improved deterministic JL embedding would be useful. In the applications I am familiar with, oblivious JL embeddings are needed due to the large number of points in the high-dimensional space (e.g., preserving k-means loss). The authors point to embeddings in deep learning as motivation. It is unclear to me as to how the authors expect progress in understanding deterministic JL embeddings to relate to these embeddings in deep learning. Additional clarification of this point would be helpful.
Technical Quality: 4
Clarity: 4
Questions for Authors: In the conclusion, you mention the potential for this approach in applications beyond the Johnson Lindenstrauss setting. In your approach for the JL setting, you upper bound the failure probability of the distortion guarantee via the union bound in eqn. (3). This formulation of the objective function seems difficult to translate to other sketching guarantees on $A$ (e.g., projection cost preservation, L2-subspace embedding, affine embedding). Is there any intuitive reason why it may be possible to formulate a relaxed differentiable objective function when the embedding guarantee must hold over an uncountable number of points?
How does learning a JL embedding relate to learning embeddings for application areas discussed in the introduction? In particular, how do you see the results of this paper affecting that line of work? As mentioned above, I think it would be helpful to expand on the link between your result and the motivation of deep learning embeddings given in the intro.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed and thoughtful review of our paper. We appreciate your recognition of the strengths and innovation of our work. We understand your concerns regarding the practical efficiency of our method and the explicit determination of its complexity. While our paper primarily focuses on the theoretical foundations, we acknowledge the importance of practical applicability. This is why we included an experiment to showcase the superior performance of our method, even with access to only first-order information, demonstrating its computational efficiency.
We understand your questions but first, we want to clarify what we mean by "our approach". Our approach includes lifting the domain to the distributional space, then optimizing the distributions’ parameters and finally obtaining a deterministic solution using variance reduction. We view this as the main contribution of this paper as we provide a different optimization paradigm and show that optimizing indirectly using our framework can be provably better than direct optimization. We expand on this further below.
**Q: [...] Is there any intuitive reason why it may be possible to formulate a relaxed differentiable objective function when the embedding guarantee must hold over an uncountable number of points?**
A: There are many applications of our approach in theoretical results. Oftentimes, specialized randomized constructions are used for several tasks (distance preserving in $L_2$ or other norms, $L_0$ sampling, linear-sketching or other tasks as you mentioned). Our hope is to be able to recover these results directly via optimization methods and potentially derandomize them via variance reduction. While in the case of JL the initial distribution (Gaussian) was very simple, in other applications much more clever constructions are required. Can optimization recover these embeddings in a principled way? While the specific formulation may vary, the underlying principle of optimizing distribution parameters to achieve low variance and deterministic solutions could be adapted to other contexts.
**Q: How does learning a JL embedding relate to learning embeddings for application areas discussed in the introduction? In particular, how do you see the results of this paper affecting that line of work? [...].**
A: This is an interesting question. As a first point, we are optimistic that since our contributions include effectively, a new perspective and analysis, it opens the door to studying embeddings of points that have some specific structure. Secondly, in broader terms, deep learning is a general framework applicable to many domains. Frequently in deep learning, we utilize encoder architectures to generate embeddings from data, for which we want certain properties to hold. In this sense, learning a JL embedding can be viewed as a special case of an encoder, where our focus is on preserving the $L_2$ distance. Our work contributes to the deep learning field by informing the design of algorithms of this general principle. Specifically, we show that optimizing directly can lead to “bad” solutions and to overcome this one can choose to optimize in the “richer” space of distributions.
In addition, we believe our results provide a crucial link between deep learning practitioners and theorists who offer theoretical guarantees through classical/randomized algorithms. Embeddings are central to the success of neural networks, driving much of their empirical achievements. However, the lack of provable guarantees for these embeddings remains a significant challenge. By introducing provable guarantees for embeddings via optimization, alongside an algorithmic approach that is still essentially as tractable as a direct optimization approach, our work bridges these two perspectives. It allows us to harness the power of optimization while ensuring that the embeddings maintain desired properties.
This combination not only enhances the reliability of embeddings used in deep learning but also provides insights into the underlying dynamics contributing to the empirical success of deep learning models. This approach can potentially illuminate why certain embeddings work well in practice and how they can be systematically improved with theoretical backing.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their reply. I will maintain my score.
> Is there any intuitive reason why it may be possible to formulate a relaxed differentiable objective function when the embedding guarantee must hold over an uncountable number of points?
I agree that this paper is valuable for raising the possibility of the general approach you've described being used to achieve deterministic optimization based algorithms for these other sketching guarantees. However, doing so indeed seems quite challenging at first glance.
> How does learning a JL embedding relate to learning embeddings for application areas discussed in the introduction? In particular, how do you see the results of this paper affecting that line of work?
This is a very interesting interpretation of the results in relation to deep learning. Thanks for clarifying. | Summary: The paper proposes to calculate the embedding matrices used in the statement of the Johnson-Lindenstrauss lemma using optimization instead of randomization. The proposed algorithm is a Hessian descent. Authors prove that the algorithm finds the matrix of minimum distortion. Numerical results display the findings
Strengths: JL is a well celebrated result used for proving existence of optimal (low distortion) embeddings. It is stated in the formal result of JL that such embeddings can be found in polynomial time. But we often rely on randomization to exhibit them. It is useful to have an algorithm to calculate the embeddings. The paper tackles a well motivated problem and their presentation is clear and clean.
Weaknesses: The paper lacks complexity analysis of the algorithm. The algorithm proposed requires a full eigenvalue decomposition at every step. It is prohibitive to use this method in any practical scenario. A discussion on the complexity and how to scale the method up (using randomized methods??) would be nice.
Technical Quality: 2
Clarity: 4
Questions for Authors: The paper's main claim is that mean distortion embeddings are computationally well studied: spectral (SVD / eigenvalue) methods calculate those. Authors claim that the min (instead of mean) distortion method is what they want to find. Can authors explain why the relaxation of f* to f using the probability bound in Eq (2) and invoking the union bound to obtain (3) does not reduce the max to a sum. The entire promise of the method is to work with the max directly to minimize distortion. It appears that relaxation of max to a sum using the union bound drops the nice property which was the primary motivation of the work. Can you explain what I am missing or misinterpreting?
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: scaling the method up and a clear argument why the relaxation in Eq (3) does not make the problem trivial would improve the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and valuable comments. We appreciate your recognition of the strengths of our work and have addressed your concerns below.
You raise a valid point about the computational expense of second-order methods. We should clarify here that our primary result is that second-order stationary points of the objective function in Equation 4 satisfy the Johnson Lindenstrauss (JL) guarantee. We opted to use a deterministic second-order method in order to achieve an end-to-end derandomized construction of a JL matrix.
Having established these theoretical guarantees, in our empirical evaluations, we do not insist on deterministic computation as our focus is now different, namely to demonstrate that for practical instances that have non-worst-case structure our method significantly outperforms the randomized JL construction. Instead, for simplicity, we use Monte Carlo sampling and randomized first-order methods which are known to converge to second-order stationary points with high probability [Gao et al. 2017]. In practice, second-order methods can be prohibitively expensive but are not necessary for convergence to second-order stationary points if randomness is allowed.
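To illustrate the point about randomized first-order methods (a generic toy sketch, not the paper's algorithm): adding a small random perturbation whenever the gradient is tiny lets plain gradient descent escape the strict saddle of $f(w) = \tfrac{1}{2}(w_1^2 - w_2^2) + \tfrac{1}{4}\lVert w \rVert^4$ at the origin, whereas unperturbed gradient descent started there would never move:

```python
import numpy as np

def f(w):
    # Saddle at the origin; minima at (0, +1) and (0, -1) with value -0.25.
    return 0.5 * (w[0] ** 2 - w[1] ** 2) + 0.25 * np.dot(w, w) ** 2

def grad(w):
    return np.array([w[0], -w[1]]) + np.dot(w, w) * w

rng = np.random.default_rng(0)
w = np.zeros(2)            # start exactly at the saddle, where the gradient is 0
eta, tol = 0.1, 1e-3
for _ in range(2000):
    g = grad(w)
    if np.linalg.norm(g) < tol:
        w = w + 1e-2 * rng.standard_normal(2)  # random kick near stationarity
    else:
        w = w - eta * g

print(w, f(w))  # lands near a minimum (0, +/-1), with f(w) close to -0.25
```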
**Q: [...] Can authors explain why the relaxation of f\* to f using the probability bound in Eq (2) and invoking union bound to obtain (3) does not reduce the max to a sum.**
A: The primary objective of our paper is to use optimization-based methods to find matrices that satisfy the JL guarantee, ensuring that the maximum distortion over all points does not exceed a specified threshold $\epsilon$. As we mention, this is a different (and more challenging) objective than what spectral methods like PCA control. Our goal is to show that this can be accomplished deterministically, directly using optimization. We show, however, that if one tries to do this naively, e.g., by directly working in the space of projection matrices, then significant challenges arise (to put it simply, such an approach will not work) because the landscape of the optimization in this space is non-convex, with bad local minima.
Therefore, another approach is needed. We do not change the objective -- as you correctly point out, working with the maximum is important in getting the guarantees we wish to give. Instead, we change the optimization approach, and the **space** in which we optimize. To do this, we work in the space of **solution samplers** rather than in the space of projection matrices. So at any step in the optimization, the current "solution" of the optimizer is not a deterministic projection matrix, but rather a distribution from which one samples such a matrix. For our framework, therefore, we define a new objective $f^*$ (Equation 2) which represents the **probability** of generating a matrix $A$ with maximum distortion exceeding $\epsilon$. This is the probability that a given solution sampler generates a projection matrix that does not satisfy the JL guarantee. We clarify that *we do not use the union bound to reduce the maximum distortion to the sum of all distortions*. Instead, we reduce $f^*$ to $f$ (Equation 3), i.e., to the sum of probabilities of each point having distortion larger than $\epsilon$ under a randomly generated matrix $A$. To make this clearer, note that when no randomness is involved, i.e., $\sigma^2=0$, $f^*(M,0) = 0$ means that no projected data point has distortion larger than $\epsilon$, while $f(M,0)$ counts how many projected data points have distortion larger than $\epsilon$. Additionally, it is important to see that $f^*(M,0) = 0$ if and only if $f(M,0) = 0$.
Ultimately, we show that our process converges to a distribution with no variance, hence, a specific projection matrix that satisfies the JL guarantee.
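For intuition, the relation between $f^*$ and $f$ at $\sigma^2 = 0$ can be checked directly for a fixed matrix. The following is a toy sketch of our own (arbitrary dimensions, not the paper's code), treating $f^*(M,0)$ as the 0/1 indicator that *some* point violates the distortion threshold and $f(M,0)$ as the *count* of violating points:

```python
import math
import random

def distortions(A, points, eps):
    """Per-point flags: does the projection x -> Ax distort ||x||^2 by more than eps?"""
    flags = []
    for x in points:
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        ratio = sum(v * v for v in Ax) / sum(v * v for v in x)
        flags.append(abs(ratio - 1.0) > eps)
    return flags

random.seed(0)
d, k, eps = 30, 15, 0.6
points = [[random.gauss(0, 1) for _ in range(d)] for _ in range(5)]
# A fixed projection matrix (one Gaussian sample with entries N(0, 1/k)).
A = [[random.gauss(0, 1 / math.sqrt(k)) for _ in range(d)] for _ in range(k)]

flags = distortions(A, points, eps)
f_star = float(any(flags))  # sigma^2 = 0: is there *any* violating point? (0 or 1)
f = float(sum(flags))       # sigma^2 = 0: *how many* points violate eps?
assert (f_star == 0.0) == (f == 0.0)  # the equivalence used in the reduction
print(f_star, f)
```

The final assertion is exactly the property stated above: at zero variance, the two objectives vanish together.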
---
Rebuttal Comment 1.1:
Comment: thanks for the rebuttal. Follow-up questions:
* is there a reason why a first-order optimizer would not work and we actually need a second-order (more expensive) optimizer? First order optimizers are not necessarily stochastic / randomized.
* If we remove the $\sigma^2$ term from Equation (4), what estimator do we get? is it not SVD?
---
Rebuttal 2:
Comment: Thanks for the response. We answer your follow-up questions below.
**Q: is there a reason why a first-order optimizer would not work and we actually need a second-order (more expensive) optimizer? First order optimizers are not necessarily stochastic / randomized.**
A simple answer to this is that second-order optimization is necessary because, without it, using first-order optimization **without any randomization** results in the optimization getting stuck at the initial point. Generally, to avoid getting stuck throughout the optimization process, you either need second-order information or first-order information combined with randomization. Stochasticity ensures that progress can always be made.
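To illustrate the general phenomenon on a toy example (ours, not the paper's objective): deterministic gradient descent initialized at a strict saddle point of $f(x, y) = x^2 - y^2$ has exactly zero gradient and never moves, while the same first-order method with a little injected noise escapes along the $-y^2$ direction:

```python
import random

def grad(x, y):
    # f(x, y) = x**2 - y**2 has a strict saddle point at the origin.
    return 2 * x, -2 * y

def descend(x, y, lr=0.1, steps=200, noise=0.0, rng=None):
    for _ in range(steps):
        gx, gy = grad(x, y)
        nx = rng.gauss(0, noise) if rng else 0.0
        ny = rng.gauss(0, noise) if rng else 0.0
        x, y = x - lr * (gx + nx), y - lr * (gy + ny)
    return x, y

# Deterministic first-order descent started at the saddle: zero gradient, never moves.
print(descend(0.0, 0.0))  # stays at (0.0, 0.0)
# The same method with small injected noise drifts off the saddle.
x, y = descend(0.0, 0.0, noise=0.01, rng=random.Random(0))
print(abs(y) > 1.0)
```

This is why either curvature (second-order) information or randomness is needed to certify escape from such stationary points.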
**Q: If we remove the $\sigma^2$ term from Equation (4), what estimator do we get? is it not SVD?**
If we remove the $\sigma^2$ term from Equation (4), we would not obtain SVD. Equation (4) provides a bound on the probability of generating a matrix that achieves **worst-case distortion** larger than a specified threshold $\epsilon$. In contrast, PCA, which relies on SVD, finds an embedding that minimizes **average distortion** of distances between the original and the embedded vectors.
By fixing (or as you mentioned, removing) $\sigma^2$, we would essentially be looking to optimize the mean matrix $M$ for that specific $\sigma^2$. However, completely removing $\sigma^2$ from the optimization process would create an undesirable situation. While we might obtain a “better” mean matrix $M$, we would lose the intended derandomization at the end of the optimization, effectively resulting in $\sigma^2 = 0$.
---
Rebuttal Comment 2.1:
Comment: can you explain this?
> second-order optimization is necessary because, without it, using first-order optimization without any randomization results in the optimization getting stuck at the initial point.
is this an empirical finding on the numerical experiments you ran? did you try a larger learning rate?
or is it theoretically proven and related to the objective function that we are optimizing?
> Generally, to avoid getting stuck throughout the optimization process, you either need second-order information or first-order information combined with randomization. Stochasticity ensures that progress can always be made.
Can you share a reference or a more formal statement on this?
---
Rebuttal 3:
Comment: **Q: [...] or is it theoretically proven and related to the objective function that we are optimizing?**
A: It is related to the objective function and theoretically proven, so changing the learning rate won’t resolve it. More generally, to avoid getting stuck at any point during the optimization process, we require either a second-order method or a first-order method with some **randomness**. We expand further on this in the next question.
**Q: Can you share a reference or a more formal statement on this?**
A: Our main result is that the second-order stationary points of Equation (4) satisfy the JL guarantee (Theorem 2). To achieve this, we employ a deterministic second-order method, as detailed in our paper, and thus we achieve a **truly derandomized construction** of a JL matrix.
Additionally, it has been shown that first-order methods with randomness also converge to second-order stationary points with high probability, see for example the work of Jin et al. (2017) on saddle points. Thus, whether using a deterministic second-order method or a first-order method with some degree of randomness, we can reach a second-order stationary point, which is precisely what is required to ensure the JL guarantee.
We hope this clarifies your questions. | Summary: The paper considers using optimization to "learn" the Johnson Lindenstrauss transform. The paper first shows that the naive objective may not be good enough -- there are stationary points that are sub-optimal. Instead, the authors optimize over the space of random Gaussian samplers rather than over a concrete matrix. They then give an optimization method, show that in this space every (second-order) stationary point is a good point, and claim that this yields a deterministic way to learn the JL matrix. Finally, the paper gives experiments that show the advantages of the proposed method.
Strengths: The theoretical analysis of this paper is very interesting. To my knowledge, there are very few results about analyzing the landscape of the learned sketching matrix and this paper gives a strong analysis. The experiments also show the advantages of the proposed method. The presentation of the paper is also good.
Weaknesses: I am still confused about some parts of the paper. I can raise the score if the authors address this (see the questions below).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. I am still a little confused about the main conclusion of this paper. That is -- does it give a deterministic construction for the JL lemma, or does it give a better optimization method that works well empirically? (As the authors mentioned, the bound of the JL lemma cannot be improved.)
2. Equation (4) is about a probability, and B.1 says the authors use Monte Carlo sampling; does this mean that the proposed method still contains some randomness?
3. In the experiment section, the paper compares the proposed method with the JL lemma. It would make this stronger if a comparison with Equation (1) were also given.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See above questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and for appreciating our analysis. We agree with you that there are not many results analyzing the landscape of the learned sketching matrix. We address your questions below.
**Q: [...] does it give a deterministic construction for the JL lemma, or a better optimization method that works well empirically? [...]**
A: The answer is both. Our main contribution is that one can use direct optimization to efficiently find "good" embeddings. We propose a novel optimization objective and analyze its performance from two distinct angles:
- First, we take a worst-case perspective where we theoretically show that optimization can obtain matrices that are always at least as good as the guarantees of the Johnson Lindenstrauss (JL) lemma. This establishes that our method is worst-case optimal, given that the JL lemma guarantees cannot be improved.
- Secondly, we demonstrate that for practical instances with non-worst-case structure, our method significantly outperforms the randomized JL construction. This is intuitive, as the randomized construction is data-agnostic.
Along the way, we establish that naive methods do not have these guarantees as the space of projection matrices is non-convex and has bad local minima. Instead, we need to work in a slightly larger space of samplers, and there, our optimization method will find a deterministic solution.
In summary, our theoretical guarantees state that we can achieve the JL guarantee in the worst case, but we showcase that in practice we can go beyond that, if the data allows it.
**Q: Equation (4) is about a probability, and B.1 says the authors use Monte Carlo sampling; does this mean that the proposed method still contains some randomness?**
A: As mentioned above our main contribution is the analysis of this novel objective. While this objective (Equation 4) involves probabilities, they have a closed form expression and can be calculated exactly without any sampling or randomness. Our main result is that second-order stationary points of this objective function satisfy the JL guarantee. Using a deterministic second-order method, we achieve a **truly derandomized construction** of a JL matrix.
Having established these theoretical guarantees, in our empirical evaluations, we do not insist on deterministic computation as our focus is now different (as explained in the previous question). Instead, for simplicity, we use Monte Carlo sampling and randomized first-order methods which are known to converge to second-order stationary points with high probability [Gao et al. 2017]. In practice, second-order methods can be prohibitively expensive but are not necessary for convergence to second-order stationary points if randomness is allowed.
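As an illustration of this Monte Carlo approach, the following toy sketch of our own (arbitrary dimensions and threshold, not the paper's code) estimates the failure probability of the plain Gaussian sampler, i.e., the chance that a freshly sampled matrix distorts *some* data point's squared norm by more than $\epsilon$:

```python
import math
import random

def some_point_violates(A, points, eps):
    """True if projecting by A distorts the squared norm of *some* point by more than eps."""
    for x in points:
        Ax = [sum(a * xj for a, xj in zip(row, x)) for row in A]
        ratio = sum(v * v for v in Ax) / sum(v * v for v in x)
        if abs(ratio - 1.0) > eps:
            return True
    return False

random.seed(1)
d, k, eps = 40, 20, 0.5
points = [[random.gauss(0, 1) for _ in range(d)] for _ in range(8)]

# Monte Carlo estimate of the sampler's failure probability (evaluated here at the
# initial, unoptimized Gaussian sampler, purely for illustration).
trials = 500
bad = sum(
    some_point_violates(
        [[random.gauss(0, 1 / math.sqrt(k)) for _ in range(d)] for _ in range(k)],
        points, eps,
    )
    for _ in range(trials)
)
print(bad / trials)
```

The optimization described in the paper would then drive this failure probability down while shrinking the sampler's variance.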
**Q: In the experiment section, the paper compares the proposed method with the JL lemma. It would make this stronger if a comparison with Equation (1) were also given.**
A: Thank you for the suggestion. However, as we show in Theorem 1, optimizing Equation 1 directly in the matrix space can lead to bad solutions with high distortion, whereas our method is guaranteed to succeed. Since we showed that taking the approach of Equation 1 can (provably) fail, we decided that it would not provide a fair comparison and would distract from the comparison with the randomized JL construction.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
- Derandomization: Thanks for pointing this out. However, the calculation of the closed form of expression (4) seems to be non-trivial and some discussion about the computation should be included to support the claim (e.g., the time complexity). This is the main reason I keep my score.
- Experiment: I agree that "optimizing Equation 1 directly in the matrix space can lead to bad solutions with high distortion." However, that was shown on a specific hard instance, and a comparison on real-world datasets would still be interesting.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response.
Regarding the first point, it's important to note that the calculations of the gradient involve **only** cumulative distribution functions of known distributions, i.e. the chi-squared distribution, which simplifies the computations. We acknowledge that this can be explained more clearly and we will incorporate a discussion on the computational aspects in our work.
We appreciate all your valuable feedback and will incorporate it to strengthen our paper. Thank you for your insightful comments and suggestions. | Summary: This paper investigates the problem of using optimization-based approaches to learn Johnson Lindenstrauss(JL) embedding. The authors proposed a new framework to achieve the JL guarantee via optimization, instead of the traditional randomized methods. Similar with diffusion models, the authors proposed a novel method that uses optimization in an extended space of random Gaussian solution samplers, which circumvents direct optimization in non-convex landscape. Their approach uses second-order descent, gradually reduces the variance without increasing the expected distortion of the sampler, then can identify a specific projection matrix with the Gaussian distribution. Overall, theoretical guarantees and empirical results demonstrate that this method efficiently achieves a deterministic solution that satisfies the JL guarantee.
Strengths: The paper is well-written. The state of the art is well discussed by an extensive literature review. The proposed method combining optimized-based approaches and Johnson Lindenstrauss embeddings is an innovative contribution to the field.
The paper is technically sound, provides rigorous theoretical analysis and proofs.
Weaknesses: It would be helpful to understand the main results if section 4 could be more organized, such as using subsections.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Could you explain the statement in lines 215-216? Are the values of 1/(3n) and 1/3 derived based on the chosen value of $\epsilon$?
2. Line 336, there is a typo “appplicability”.
3. Regarding the notation $\rho$-second-order stationary points($\rho$-SOSP), the paper uses $\rho$-second-order stationary points in some sections and uses $\rho$-SOSP in others.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback and for recognizing the innovation in our work.
**Q: Could you explain the statement in lines 215-216? Are the values of 1/(3n) and 1/3 derived based on the chosen value of** $\epsilon$**?**
A: Yes, that's exactly correct: you can choose $\epsilon$ appropriately to get a $1/(3n)$ probability for any data point to violate the threshold constraint. Thus, if you sum these $n$ probabilities you get $1/3$.
Thank you for your keen observations on the paper's typo and notation inconsistencies; we have fixed them and made the notation consistent throughout.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I will maintain my score. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful and constructive feedback on our submission. We appreciate the time and effort they have dedicated to reviewing our work. We are encouraged by their positive reception, noting that they found our contribution innovative (pTrj, WAK9), our analysis strong (y6Yv), and the problem well-motivated (XscG).
Furthermore, reviewer WAK9 noted that our probabilistic relaxation can be useful for other problems in matrix sketching and randomized numerical linear algebra. Reviewers y6Yv and WAK9 also found our experiments valuable, highlighting that they effectively demonstrate the advantages of our method.
We address the reviewers' comments below and will incorporate all feedback. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Adversarially Robust Multi-task Representation Learning | Accept (poster) | Summary: In this study, the authors explore adversarial multi-task representation learning, where a predictor and feature extractor are adversarially trained on multiple source tasks, and then another predictor on top of the feature extractor is adversarially trained on a target task. They provide bounds on the excess risk under mild assumptions, showing that these bounds decrease with larger training sample sizes for individual source tasks $n$, the total sample size of all source tasks $nt$, and the sample size of the target task $m$. These results suggest that large sample sizes and diverse source tasks contribute to robust learning in adversarial transfer learning. Additionally, the input and feature dimensions increase these bounds. The excess risk decreases more rapidly when using smooth and non-negative losses compared to Lipschitz losses from a sample size perspective. Furthermore, based on the multi-task results, the authors consider excess risk in a single-task setting.
The authors first derive Theorem 1 for Lipschitz loss and Theorem 4 for non-negative losses based on [38]. Rather than directly addressing the adversarial loss class, they consider the inflation of the sample space $S$ by an adversarial attack $A$, examining its coverage by balls and standard volume arguments. These results bound the Rademacher complexities of function classes in source and target tasks with adversary (Theorems 2 and 5).
Additionally, a new reduction method from a multi-task setting to a single-task setting (Theorem 3) may aid future work in both adversarial and non-adversarial settings.
Strengths: The problem settings and assumptions regarding data distribution (Lines 140--145), function properties (Assumptions 1--4), and the fat-shattering dimension and size of the inflated dataset (Theorems 2 and 5) are mild. The derived results, such as the order in terms of sample size and input or representation dimensions, seem appropriate. The bounds are interpretable and offer important insights for adversarial transfer learning: diverse source tasks and sample sizes facilitate robust transfer learning.
Many prior studies emphasize the importance of sample complexity in adversarial training. However, obtaining sufficient training samples for a target task is not always feasible. This study theoretically provides valuable guidance for such situations from the perspective of transfer learning.
Moreover, the derived upper bounds have the same order (growth rate concerning dimensions and sample sizes) as prior work on the non-adversarial setting [38]. This indicates that even in adversarial training, it is sufficient to prepare training samples similarly to standard training, ignoring constant and logarithmic terms, which is a positive outcome for the community.
Weaknesses: One might (easily) predict this result from [38]. Under Assumption 4, the sample complexity of the perturbed dataset can be regarded as the finitely scaled sample complexity of the original dataset (as the authors exploited this concept). From the perspective of covering number and Dudley's integral, this leads only to logarithmic differences in orders. It might not be very difficult to conclude that the same order controls the bounds of the excess risk even in adversarial transfer learning as in standard transfer learning. Nonetheless, I acknowledge the authors' effort in providing a formal proof, even if the results are predictable.
The looseness of the bound is also a weakness, though it is a natural property of Rademacher complexity-based bounds. For example, the bound in Theorem 1 includes two worst-case Rademacher complexities $\hat{R}(\ldots, n)$ and $\hat{R}(\ldots, m)$, and $\sup_h R(\ldots)$ (the worst-case in terms of the hypothesis class of representation). This looseness may be due to the mild assumptions. Tighter bounds for more restrictive cases might enhance the interpretability of the derived bounds.
Technical Quality: 3
Clarity: 2
Questions for Authors: The authors assume each source task has a common sample size $n$. If each source task has a different sample size, which affects the first term of the bounds: the maximum or the average sample size?
Minor comments:
- In Lines 46 and 47, there is unnecessary space.
- In the equation under Line 47, $\nu$ and $\epsilon$ are still not defined.
- Eq. (2) (and (6) in the Appendix) misses $(x_1), \ldots, (x_t)$. Additionally, $g \in G$ should be $q \in Q$.
- Eq. (3) and Line 323 might not need $\sup$.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors address the limitations of their assumptions in Appendix C.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are thankful for your thoughtful comments on our work.
> One might (easily) predict this result from [38]. Under Assumption 4, the sample complexity of the perturbed dataset can be regarded as the finitely scaled sample complexity of the original dataset (as the authors exploited this concept). From the perspective of covering number and Dudley's integral, this leads only to logarithmic differences in orders. It might not be very difficult to conclude that the same order controls the bounds of the excess risk even in adversarial transfer learning as in standard transfer learning. Nonetheless, I acknowledge the authors' effort in providing a formal proof, even if the results are predictable.
We initially had the same thought as this is the reasoning that is often the case in the non-adversarial setting. However, in our setting we found that these logarithmic dependencies are of supreme importance. Recall that the data is inflated to a sample size exponential in the dimension. Therefore, the order of the logarithmic dependence is of great importance.
Indeed, if one naively uses the standard argument, e.g. in [35], the order of the logarithm is too large and therefore you get at least linear dependence in dimension once all factors are accounted for. We actually spent several weeks at first working on that idea. This is the very reason we leverage the deep theory provided by Rudelson and Vershynin, as it provides minimal logarithmic dependence, and perform the careful analysis in the proof of Lemma 7.
> The authors assume each source task has a common sample size $n$. If each source task has a different sample size, which affects the first term of the bounds: the maximum or the average sample size?
The analysis can be readily extended to this setting with the final bound featuring the minimum of the sample sizes.
> * In Lines 46 and 47, there is unnecessary space.
> * In the equation under Line 47, $\nu$ and $\epsilon$ are still not defined.
> * Eq. (2) (and (6) in the Appendix) misses $(x_1), \ldots, (x_t)$. Additionally, $g \in G$ should be $q \in Q$.
> * Eq. (3) and Line 323 might not need $\sup$.
Thank you for your careful reading of the text. We will fix those typos and the missing definition.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their detailed response. I appreciate their efforts in achieving these results. I will maintain my current evaluation. | Summary: This paper conducts theoretical studies on adversarially robust transfer learning, which is to learn a model with small robust error on a downstream (target) task from a model pretrained on multiple other (source) tasks. Considering the specific multi-task representation learning (MTRL) setting, this paper provides rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth non-negative losses, showing that a representation derived from adversarial pretraining can assist in defending against adversaries on downstream tasks.
Strengths: 1. This paper theoretically shows bounds on the excess transfer risk for the adversarial loss class for both Lipschitz losses and smooth nonnegative losses, demonstrating the benefits of adversarial pretraining on source tasks for downstream tasks in transfer learning.
Weaknesses: 1. The proposed theoretical results are interesting, but empirical experiments are missing to support the presented theories, such as the benefits of adversarial pretraining for downstream tasks and the claim that fewer samples suffice to learn a good predictor for downstream (target) tasks with adversarially robust representations learned from related source tasks.
2. As the paper introduces some additional empirical assumptions, such as assumption 4 which requires adversarial attack functions to be bounded within the known input domain, some practical examples or empirical experiments will be helpful to justify it.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What attacks are applicable to this work? $\|\cdot\|_2$ attack, $\|\cdot\|_1$ attack, or $\|\cdot\|_\infty$ attack?
1. What is $g, \mathcal G$ in equation 2? (line 173)
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the paper. No potential negative societal impact is found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our work.
> The proposed theoretical results are interesting, but empirical experiments are missing to support the presented theories, such as the benefits of adversarial pretraining for downstream tasks and the claim that fewer samples suffice to learn a good predictor for downstream (target) tasks with adversarially robust representations learned from related source tasks.
We agree the paper would benefit from experiments, as many theory papers would. Although we do not have experimental results, we emphasize that our results are complete and stand on their own as a contribution to our understanding of adversarial robustness. We see our work as a theoretical first step in this direction and we foresee empirical work as future work.
> As the paper introduces some additional empirical assumptions, such as assumption 4 which requires adversarial attack functions to be bounded within the known input domain, some practical examples or empirical experiments will be helpful to justify it.
Indeed, we agree this would aid the reader and we will add such concrete practical examples. Recall we do have one example on line 150, $\mathcal{A} = \{ x \mapsto x + \delta \mid \|\delta\|_\infty \leq 0.01,\ x + \delta \in \mathcal{X} \}$ for additive $\|\cdot\|_\infty$ attacks. Also, please review Section B for additional commentary on this assumption.
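For concreteness, a toy membership check for such an additive $\ell_\infty$ attack set (our own illustrative sketch, assuming an input domain of $[0,1]$ per coordinate):

```python
def in_attack_set(x, x_adv, budget=0.01, domain=(0.0, 1.0)):
    """Is x_adv of the form x + delta with ||delta||_inf <= budget, staying in the domain?"""
    lo, hi = domain
    return all(abs(a - b) <= budget and lo <= a <= hi for a, b in zip(x_adv, x))

x = [0.5, 0.0, 1.0]
print(in_attack_set(x, [0.505, 0.01, 1.0]))  # True: within the budget and the domain
print(in_attack_set(x, [0.52, 0.0, 1.0]))    # False: first coordinate exceeds the budget
print(in_attack_set(x, [0.5, -0.005, 1.0]))  # False: leaves the input domain [0, 1]
```

The second condition (`lo <= a <= hi`) is what Assumption 4 encodes: attacked inputs remain inside the known input domain $\mathcal{X}$.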
> What attacks are applicable to this work? $\|\cdot\|_2$ attack, $\|\cdot\|_1$ attack, or $\|\cdot\|_\infty$ attack?
Yes, our bound works for any finite additive $p$-norm perturbation ($p \geq 1$) attack. In addition, our approach allows for attacks beyond the above, as we can extend to patch attacks or spatial attacks (e.g., image rotations). This generality we believe is a strength of our analysis.
> What is in $g, \mathcal{G}$ equation 2? (line 173)
This should be $q, \mathcal{Q}$. Thank you for catching that typo.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their responses, I don't have other questions. | Summary: The paper studies adversarially robust transfer learning, wherein, given labeled data on multiple (source) tasks, the goal is to train a model with small robust error on a previously unseen (target) task. The paper considers a multi-task representation learning (MTRL) setting, i.e., assuming that the source and target tasks admit a simple (linear) predictor on top of a shared representation (e.g., the final hidden layer of a deep neural network). The paper provides rates on the excess adversarial (transfer) risk for Lipschitz losses and smooth nonnegative losses. These rates show that learning a representation using adversarial training on diverse tasks helps protect against inference-time attacks in data-scarce environments.
Strengths: The paper has good originality, quality, clarity, and of important significance.
Weaknesses: No experiments are provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the experimental results for the proposed theory?
2. In line 155, regarding the proposed two-stage adversarial MTRL, I wonder whether two-stage optimization is better than one-stage optimization?
3. Are Lipschitz losses and smooth nonnegative losses necessary for adversarial transfer?
4. Do different datasets affect the results of adversarial transfer?
5. Does the claim that representations derived from adversarial training assist in defending against adversaries on downstream tasks hold under different adversarial attacks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It's better to verify the effectiveness of the proposed approach on realistic datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your insights and recognition of our work.
> 1. What are the experimental results for the proposed theory?
We agree the paper would benefit from experiments, as many theory papers would. Although we do not have experimental results, we emphasize that our results are complete and stand on their own as a contribution to our understanding of adversarial robustness. We see our work as a theoretical first step in this direction and we foresee empirical work as future work.
> 2. In line 155, regarding the proposed two-stage adversarial MTRL, I wonder whether two-stage optimization is better than one-stage optimization?
That is an interesting question and a comparison between these two settings would be valuable. Your suggestion naturally allows the representation to be trained having seen the data, in theory providing a benefit, so we consider this a promising direction of future work. While the single-stage approach underscores the value of learning the representation, the two-stage approach would be akin to fine-tuning.
> 3.Are Lipschitz losses and smooth nonnegative losses necessary for adversarial transfer?
This is an interesting question. We study Lipschitz losses and smooth nonnegative losses, e.g., hinge and squared loss, because many of the standard losses in the literature fall under these assumptions. It would be interesting to see if these conditions are also necessary.
> 4. Do different datasets affect the results of adversarial transfer?
If we fix a target dataset, then different source datasets affect the task diversity assumption parameters $\nu, \varepsilon$ (assuming task diversity is satisfied).
> 5. Does the claim that representations derived from adversarial training assist in defending against adversaries on downstream tasks hold under different adversarial attacks?
Our theory applies to a wide variety of attack models. But, the threat model is known to the learner at the time of training. Otherwise, it is hard to say anything. Possibly, one can prove a no-free-lunch theorem to formalize it.
> It's better to verify the effectiveness of the proposed approach on realistic datasets.
We agree and that may follow in future. But, currently it is a theoretical paper and complete on its own as are many related theory papers. There is hardly any space to discuss any empirical results in a meaningful way without seriously undermining the writing of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response, I have no more question. | Summary: This work studies the adversarially robust multi-task representation learning. They introduce the definition of robust $(\nu, \epsilon, \mathcal{A})$-task diversity and the algorithm of two-stage adversarial MTRL. Using these, they show novel results on excess transfer risk for adversarial loss under mild conditions. The authors then present the proof sketches and compare the results with previous work in detail.
Strengths: 1. It is a valuable work to study adversarially robust multi-task representation learning. The notations, definitions, and assumptions are all clearly written, which makes it easy to understand.
2. Algorithm 1 seems reasonable in practice to me. The authors discuss the novel assumption to show that it is also reasonable. Most of the assumptions in this paper seem to be mild.
3. The authors carefully discuss the differences of results shown in this work and related works. They also compare the techniques used in this work and previous works. It is clear to understand the contribution of this work.
Weaknesses: 1. The proofs shown in section F.1 are not clear. The authors do not show the formal proofs of these theoretical results.
2. The authors introduce the definition of the vector-valued fat-shattering dimension, which is a generalization of the fat-shattering dimension, while it does not seem to appear in the theoretical results, which makes it confusing.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Although the authors include a section to discuss the difference in results and techniques used in this work and prior works. It is still not clear whether there is a major difference between the **proof techniques** of your work and that of [26] since the results shown in these two works are similar. If so, what are the main differences and difficulties?
2. In the definition of $\mathcal{A}$, it looks like any function $A \in \mathcal{A}$ maps all inputs $x$ to $x + \delta$ with the same $\delta$. Is that correct? If so, it is a weaker version of the regular adversarial attack.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and appreciating our work.
> The proofs shown in section F.1 are not clear. The authors do not show the formal proofs of these theoretical results.
We will revisit this section to improve its clarity and rigor. For Theorems 1 and 4, we were careful to identify exactly which modifications to the standard arguments in [38] and [40] are required to complete the proofs. We will add these algebraic details to make the document more cohesive and self-contained, so the reader does not need to reference [38] and [40]. Additionally, we will revisit the proofs of Theorems 2 and 3, which are entirely our own work.
> The authors introduce the definition of the vector-valued fat-shattering dimension, which is a generalization of the fat-shattering dimension, while it does not seem to appear in the theoretical results, which makes it confusing.
Yes, you are correct. The definition of the vector-valued fat-shattering dimension in the appendix section ``Vector-valued fat-shattering dimension digression'' is not used. However, while not essential for a complete document, we believe this digression is useful for understanding the utility and weaknesses of the lifting argument we use. In particular, such a definition seems necessary for proving data-dependent (i.e., not worst-case) bounds (see the commentary after Proof Sketch 2). Indeed, we could have explained this better, and per this discussion we will add additional signposting to guide the reader.
> Although the authors include a section discussing the differences in results and techniques between this work and prior works, it is still not clear whether there is a major difference between the proof techniques of this work and those of [26], since the results shown in the two works are similar. If so, what are the main differences and difficulties?
While we were heavily inspired by [26], we were interested in whether similar results hold in general. By studying their proof, we noticed that the main roadblock towards our goal was that, after applying Dudley's integral, the sample complexity within the covering number was itself a function of the variable of integration. Standard arguments do not account for this complication, and it is unclear how to proceed from there while retaining generality without incurring at least a linear dependence on the dimension.
So while both works start with volumetric arguments and then use Dudley's integral, at this point we diverge substantially. On one hand, [26] instantiates the various function classes and attacks and proceeds to their final bound by leveraging prior covering number arguments for the classes they instantiated. On the other hand, to remain function-class and attack-class agnostic (under our assumptions), we utilized several celebrated general comparison inequalities that are not featured in [26], primarily Lemma A.3 in [35] and the celebrated result in [32]. In addition, a simple application of these inequalities does not give the final result, as the various quantities must be treated carefully, which we believe is reflected in the relative complexity of the proof of Lemma 7. For additional commentary, please see Section C.2, Comparison to [26].
> In the definition of $\mathcal{A}$, it looks like any function $A \in \mathcal{A}$ maps all inputs $x$ to $x + \delta$ with the same $\delta$. Is this correct? If so, it is a weaker version of the regular adversarial attack.
Yes, your first sentence is correct, yet we emphasize that the attacker can pick a different $A$ for each data point, which provides the generality to instantiate the regular adversarial attacks. In fact, our adversarial attack formalization allows for significantly more general attacks, which we believe adds value to our analysis. Besides regular additive attacks, one can also instantiate spatial attacks (e.g., image rotations) or patch attacks.
High Rank Path Development: an approach to learning the filtration of stochastic processes | Accept (poster) | Summary: The paper addresses the issue of weak convergence of stochastic processes, whereby evolving information is generally unaccounted for. This can lead to discontinuities when applying these processes to multi-period decision-making problems. Prior work has proposed the concept of extended weak convergence, as introduced by Aldous (1981), but practical numerical implementations have been challenging. To address this, the authors introduce a novel metric called High Rank PCF Distance (HRPCFD) which is shown to overcome computational issues encountered in previous attempts. The paper then demonstrates the utility of HRPCFD via experiments on hypothesis testing and generative modelling of time series data.
Strengths: Unfortunately this paper lies well outside my area of expertise and I am unable to review it effectively. The mathematical framework around extended weak convergence is not an area I’m familiar with, and I consequently found it challenging to grasp the nuances of the problem statement, the significance of the proposed HRPCFD metric, and the potential implications for applications in finance and economics.
So as not to negatively impact the paper’s chances of acceptance, I have defaulted to a mid-range score in my review, which reflects my assessment that the paper could nevertheless still be made more accessible for readers who are less familiar with the domain.
Weaknesses: See above.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address the reviewer's comments and questions in detail as below.
---
**Comments:** *Unfortunately this paper lies well outside my area of expertise and I am unable to review it effectively. The mathematical framework around extended weak convergence is not an area I’m familiar with, and I consequently found it challenging to grasp the nuances of the problem statement, the significance of the proposed HRPCFD metric, and the potential implications for applications in finance and economics.*
**Answer:**
We appreciate the reviewer's kindness and positive score for our work. In the following, we would like to provide further explanation for our work, which hopefully would facilitate the reviewer's understanding.
1. *Importance and scope of our work*. Time series data, such as stock price processes, are ubiquitous and often exhibit randomness in finance and economics. Accurately modelling the law of random time series (stochastic processes) plays an important role in decision making. For example, training high-fidelity generative models to simulate synthetic time series that mimic future stock price movements can aid in developing more effective trading strategies and provide better estimators for assessing portfolio risk, with significant implications in finance and economics.
To tackle the challenge of time series modelling, a key question lies in defining a metric that quantifies the difference between two stochastic processes. We will explain the importance of the extended weak topology (EWT) in defining such a metric in the next bullet point. In this paper, we propose the HRPCFD metric to capture the EWT and design a computationally efficient algorithm that enables its direct application to enhance the discriminator of GANs, yielding high-quality generative models for synthetic time series data.
2. *Why is EWT important?* We understand the difficulty for readers of grasping the meaning of EWT at first glance. To facilitate understanding, we would like to use the toy example (Example A.1 in the paper) to illustrate the key concept of EWT and its usefulness. In the example, while the unconditional law of the processes $\mathbb{X}^n$ converges to that of $\mathbb{X}$, weak convergence fails to capture a key difference between the financial models $\mathbb{X}^n$ and $\mathbb{X}$. Specifically, if an agent believes the market dynamics are as in $\mathbb{X}^n$, he/she always knows the outcome of the last day in advance, granting a predictive advantage, whereas in the "fair" market $\mathbb{X}$, the agent lacks this foresight. This crucial difference in the observed information flow, "$\text{No knowledge} \Rightarrow \text{Full knowledge} \Rightarrow \text{Full knowledge}$" for $\mathbb{X}^n$ versus "$\text{No knowledge} \Rightarrow \text{No knowledge} \Rightarrow \text{Full knowledge}$" for $\mathbb{X}$, is not reflected in weak convergence alone.
EWT is vital because it captures this difference through the conditional distributions. For markets $\mathbb{X}^n$, where the agent has full information on day 1, the conditional distribution becomes a single Dirac measure, annihilating randomness. In contrast, $\mathbb{X}$ retains genuine randomness at day 1, as reflected by a linear combination of Dirac measures. Since EWT is based on conditional distributions, it effectively measures differences in information evolution styles, ensuring continuity in multi-period decision-making as agents update their actions based on continually evolving information.
3. *Significance of the HRPCFD metric.* In machine learning, many popular metrics on stochastic processes (e.g., the Wasserstein distance, the PCFD metric, the signature MMD) are based on the weak topology and fail to ensure the closeness of the conditional laws (filtrations) of two processes. To this end, we propose the novel HRPCFD metric to tackle this issue and prove that it characterises the EWT. Therefore, by using the *HRPCFD metric* as the discriminator, we are able to detect differences between the filtrations of stochastic processes that are hard to measure with classical tools based on weak convergence; this observation is verified by our hypothesis testing in Section 5.1.
Besides, our proposed HRPCFD is much more computationally efficient than the existing HR-SigMMD, which can also theoretically characterise the EWT. This makes it feasible in practice to use the HRPCFD as the discriminator of GANs for synthetic time series.
The resulting *HRPCF-GAN* consistently outperforms strong and popular GAN baselines for time series generation, such as TimeGAN and RCGAN (see Section 5.2). The comprehensive evaluation of model performance using 8 different metrics indicates that the synthetic time series generated by our proposed HRPCF-GAN shares not only a similar law but also similar filtration information with the real time series. These promising results highlight the potential of HRPCF-GAN for synthetic time series generation and its applications across various domains, including finance and economics.
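To make the information-flow contrast of point 2 concrete for readers less familiar with the topic, here is a minimal simulation sketch; the specific processes $\mathbb{X}^n = (0, Z/n, Z)$ and $\mathbb{X} = (0, 0, Z)$ with a fair coin $Z \in \{-1, +1\}$ are a hypothetical instantiation in the spirit of Example A.1, not necessarily the example itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # large n: X^n is weakly close to X
Z = rng.choice([-1.0, 1.0], size=100_000)  # fair coin: the final-day outcome

# X^n = (0, Z/n, Z): the tiny day-1 tilt Z/n already reveals Z.
# X   = (0, 0,   Z): day 1 carries no information about Z.
Xn_day1 = Z / n

# Weak convergence: the day-1 marginal of X^n collapses to 0 as n grows...
assert np.abs(Xn_day1).max() <= 1.0 / n

# ...but the conditional laws at day 1 differ drastically: under X^n the sign
# of the day-1 value determines Z exactly (a Dirac measure, zero conditional
# variance), while under X the agent still faces a fair coin (variance ~1).
cond_var_Xn = np.var(Z[Xn_day1 > 0])   # full knowledge at day 1
cond_var_X = np.var(Z)                 # no knowledge at day 1
print(cond_var_Xn, cond_var_X)
```

The two conditional variances (0 versus approximately 1) are exactly the kind of difference that weak convergence misses and the extended weak topology captures.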
---
**Comments:** *So as not to negatively impact the paper’s chances of acceptance, I have defaulted to a mid-range score in my review, which reflects my assessment that the paper could nevertheless still be made more accessible for readers who are less familiar with the domain.*
**Answer:** We will try our best to explain the essential ideas behind these abstract terms and simplify much of the presentation in the revised version, to make sure that the mathematical content is accessible to readers without prior specialised knowledge of this topic. For instance, we will add a very detailed explanation after Example A.1 (as above) to show why the EWT is important in multi-period optimisation problems.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for the additional context and explanation of the paper's contributions.
---
Reply to Comment 1.1.1:
Comment: You're welcome. If you have any further questions, please feel free to let us know. We are happy to address them as soon as possible. Many thanks. | Summary: Time series are ubiquitous in machine learning. They are modeled as stochastic processes, and therefore notions of distance between stochastic processes, and more generally convergence of stochastic processes, are fundamental ideas. Weak convergence of probability measures occupies a central position in this area, but for many settings, like optimal stopping or the one studied in this paper, it is not sufficient. Extended weak convergence, defined via weak convergence of prediction processes, is the right notion. This paper introduces a metric on the space of filtered processes that metrizes the topology of extended weak convergence, proposes statistical procedures to compute it, and then tests these ideas on GANs for time series.
Strengths: This is a paper on a very nice topic and I learnt a lot while reading it. Some of the ideas are very abstract, and the paper does a good job of organizing the topics and defining everything precisely so that a meticulous reader is not left confused. It is welcome to see a rigorous paper at ML conferences. The ideas introduced in the paper are novel and the proofs of all the claims are carefully done, although I can't claim to have read every section of the appendix in detail.
Weaknesses: Although as mentioned above the paper defines everything clearly, the exposition on PCF and HRPCF could be improved. It took me quite some time after re-reading the paper multiple times and some other referenced paper to develop an intuition for these concepts, even though I have some background on probabilistic notions like weak convergence and extended weak convergence. I understand this is difficult to do well in a conference paper with page limits, but I think having a more detailed appendix on PCF and HRPCF would help.
I should mention here that I didn't find the experiments super convincing, but I am viewing this paper as a theoretical contribution, and thus any experiments it has as an added bonus and not a weakness.
Technical Quality: 4
Clarity: 4
Questions for Authors: None
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are pleased to know that the reviewer enjoyed reading our paper. We address the reviewer's comments as follows.
**Comments:** *Although as mentioned above the paper defines everything clearly, the exposition on PCF and HRPCF could be improved.* *It took me quite some time after re-reading the paper multiple times and some other referenced paper to develop an intuition for these concepts*, *even though I have some background on probabilistic notions like weak convergence and extended weak convergence.* *I understand this is difficult to do well in a conference paper with page limits, but I think having a more detailed appendix on PCF and HRPCF would help.*
**Answer:** We will add two additional sections in the appendix to provide a self-contained survey on the essential properties of extended weak convergence and PCF, and explain the construction of the HRPCF in a more transparent way.
**Comments:** *I should mention here that I didn't find the experiments super convincing, but I am viewing this paper as a theoretical contribution, and thus any experiments it has as an added bonus and not a weakness.*
**Answer:** We acknowledge that there is further room for improvement in the numerical experiments. However, we would like to highlight that the proposed HRPCFD is significantly more computationally efficient than its existing high-rank signature MMD counterpart. This is empirically illustrated in Table 4 of Appendix C.1, which compares the inference times for the hypothesis testing example. Moreover, the proposed HRPCF-GAN overcomes the computational bottleneck of HR-SigMMD, enabling its application in generative modelling. We benchmark the proposed HRPCF-GAN against strong GAN baselines specifically designed for time series generation, such as TimeGAN and RCGAN, and show consistent performance improvement on 8 different test metrics, providing a comprehensive profile of the quality of synthetic time series generation. The promising numerical results indicate the potential applications of HRPCF-GAN to more challenging, complex empirical datasets, which merits future research.
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. I have re-read the paper and the author response, and have increased my score and confidence to reflect my positive evaluation of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! | Summary: The paper constructs a computationally-implementable metric which metrizes an "extended" weak convergence for stochastic processes, which more plausibly accounts for the convergence of the process with respect to their filtrations.
The result can apparently more effectively account for similarities between controlled processes, at least in the class of linearly-interpolated stochastic paths considered here.
Strengths: The method seems to construct high-rank analogues of classic tools, such as the characteristic function, such that the prediction processes arising from projection onto the filtration at each moment can be quantified and divergence between them metrized, without density evaluations, using empirical measures.
The classical SDE reasoning seems sound. I confess that I am less familiar with the rough path theory component, but there are no obvious red flags in the material.
Weaknesses: The paper is long.
The results seem to be an improvement both theoretically and empirically over the main antecedents
* [18] Hang Lou, Siran Li, and Hao Ni. PCF-GAN: generating sequential data via the characteristic function of measures on the path space. Advances in Neural Information Processing Systems, 36, 2023.
* [19] Cristopher Salvi, Thomas Cass, James Foster, Terry Lyons, and Weixin Yang. The signature kernel is the solution of a Goursat PDE. SIAM Journal on Mathematics of Data Science, 3(3):873–899, January 2021.
* [20] Cristopher Salvi, Maud Lemercier, Chong Liu, Blanka Horvath, Theodoros Damoulas, and Terry Lyons. Higher order kernel mean embeddings to capture filtrations of stochastic processes. Advances in Neural Information Processing Systems, 34:16635–16647, 2021.
But it is not clear whether the increment is "important" in practice; Is the increased performance "worth" the implementation effort and/or computational cost? The answer is probably problem-dependent.
Technical Quality: 3
Clarity: 3
Questions for Authors: l101: The prediction process seems to be introduced on a fixed finite set of times — $I = \{0, \dots, T\}$ and $X = (X_t)_{t\in I}$ — and yet we are concerned with continuously-indexed processes, so I would more naturally assume $I$ is the interval $I=[0,T]$, and in fact we discuss linear interpolation in l109. Is this a notational confusion? What is the index $t$ of the filtration $\mathcal{F}_t$?
Title: I'm not sure about the grammar of the paper's title. "High Rank Path Development: an approach of learning the filtration of stochastic processes" -> "High Rank Path Development: an approach _to_ learning the filtration of stochastic processes"?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No particular issues noted.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We address the reviewer's comments and questions in detail as below.
---
**Comments:** *The results seem to be an improvement both theoretically and empirically over the main antecedents*
*[18] Hang Lou, Siran Li, and Hao Ni. PCF-GAN: generating sequential data via the characteristic function of measures on the path space. Advances in Neural Information Processing Systems, 36, 2023.*
*[19] Cristopher Salvi, Thomas Cass, James Foster, Terry Lyons, and Weixin Yang. The signature kernel is the solution of a Goursat PDE. SIAM Journal on Mathematics of Data Science, 3(3):873–899, January 2021.*
*[20] Cristopher Salvi, Maud Lemercier, Chong Liu, Blanka Hovarth, Theodoros Damoulas, and Terry Lyons. Higher order kernel mean embeddings to capture filtrations of stochastic processes. Advances in Neural Information Processing Systems, 34:16635–16647, 2021.*
*But it is not clear whether the increment is "important" in practice; Is the increased performance "worth" the implementation effort and/or computational cost? The answer is probably problem-dependent.*
**Answer:**
We agree with the referee that the answer is problem-dependent. We expect that our method would demonstrate more significant strength in solving multi-period optimization problems, such as optimal stopping and utility maximization, as well as in conditional time series tasks where the extended weak topology is crucial.
However, the empirical performance is hard to estimate prior to experiments due to various factors regarding the dataset, model and training procedure (e.g., data size, model hyper-parameters, optimizer). Therefore, it is very challenging to carry out such a cost-effectiveness analysis before conducting the experiments. In practice, given a computational budget, one would typically try a suite of machine learning methods and choose the best among them. Given the moderate additional computational cost and the robustness of our proposed HRPCF metric, we believe that the HRPCF metric and the corresponding GAN models are well-suited for practical applications.
1. *Consistent performance improvement \& moderate computational cost*. Our numerical experiments verified that our approach performs significantly better than several state-of-the-art methods (based on weak convergence) in hypothesis testing and generative modelling. The computational complexity of HRPCFD is significantly reduced compared to existing high-rank methodologies such as HR-SigMMD, as illustrated in Table 4 of the appendix. In comparison with popular GAN models (e.g., PCFGAN, TimeGAN and RCGAN), our HRPCF-GAN takes longer to train, but of the same order of magnitude. For example, in our numerical examples, the training time of HRPCF-GAN is kept at a manageable level across different datasets, ranging from 30 minutes to 4 hours.
2. *Open source codes*. Releasing open-source code for our proposed method will significantly reduce the implementation effort for its empirical applications. We will make the code repository publicly available upon publication, so that it is readily available for reuse and adaptation by the research community. In particular, our HRPCF-GAN code can be flexibly applied to general time series data without the need for re-implementation from scratch.
3. *Robustness of HRPCFD metrics*. We would like to emphasize that our approach based on extended weak convergence is **robust in any case**, as our Example A.1 shows: it can happen that the weak topology provides a completely wrong criterion, whilst extended weak convergence always works correctly, as proved rigorously in the literature (see, e.g., [1]).
On a related note, it is interesting to investigate sufficient conditions on stochastic processes under which the induced extended weak topology is strictly stronger than the weak topology. To the best of our knowledge, so far there is no easy criterion for judging in which circumstances weak convergence coincides with extended weak convergence (and even when they are equivalent, it is very hard to obtain a quantitative comparison). Such a study would be instrumental for the theoretical underpinnings of the performance gain of the EWT-based distance and would provide guidance for empirical applications.
---
**Question:** *The prediction process seems to be introduced on a fixed finite set of times--$I = \{0,\ldots,T\}$ and $X = (X_t)_{t \in I}$--and* *yet we are concerned with continuously-indexed processes, so I would more naturally assume $I$ is the interval $I = [0,T]$*, *and in fact we discuss linear interpolation in l109. Is this a notational confusion? What is the index $t$ of the filtration $\mathcal{F}_t$ ?*
**Answer:** In the present work, we only consider discrete-time processes defined on $I = \{0, \ldots, T\}$, and therefore the time index $t$ appearing in the filtration $\mathcal{F}_t$ only takes values in $\{0,\ldots,T\}$. It is a convention in the rough path community to view a discrete-time path defined on $I = \{0, \ldots, T\}$ as a piecewise linear path defined on the continuous time interval $[0,T]$ via routine linear interpolation, because this identification can make some formulations easier (e.g., the unitary feature of a path can then be formulated as the solution of an ODE on $[0,T]$). We will leave a remark in the revised version explaining these notations explicitly to avoid confusion.
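As a small illustration of this convention (the function and the example path below are our own sketch, not from the paper), a discrete-time path on $I = \{0, \ldots, T\}$ extends to $[0, T]$ by evaluating the linear interpolant between neighbouring grid points:

```python
import numpy as np

def interpolate(X, T, t):
    """Piecewise-linear extension of a discrete path X = (X_0, ..., X_T),
    evaluated at a continuous time t in [0, T]."""
    k = min(int(np.floor(t)), T - 1)           # left endpoint of the segment
    return X[k] + (t - k) * (X[k + 1] - X[k])  # linear map on [k, k+1]

X = np.array([0.0, 2.0, 1.0])          # a discrete path on I = {0, 1, 2}
assert interpolate(X, 2, 1.0) == 2.0   # agrees with the path at grid points
assert interpolate(X, 2, 0.5) == 1.0   # midpoint of the first segment
```

The extended path agrees with the discrete one at the grid points, so no information is added or lost by the identification.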
---
**Question:** *I'm not sure about the grammar of the paper's title. "High Rank Path Development: an approach of learning the filtration* *of stochastic processes" -> "High Rank Path Development: an approach to learning the filtration of stochastic processes"?*
**Answer:** Thanks for this suggestion and we will change the title accordingly.
---
[1] Julio Backhoff-Veraguas, Daniel Bartl, Mathias Beiglboeck, and Manu Eder. All adapted topologies are equal. Probability Theory and Related Fields. 178(3), 2020.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my review. I don't think there is any need to shift my recommended score; I remain positively disposed towards a good paper, and I thank the authors for sharing it with us.
---
Reply to Comment 1.1.1:
Comment: You are welcome. Thank you very much for your positive feedback on our paper! | Summary: This paper proposes High Rank Path method, motivated by the extended convergence notion and the rough path theory, to generate (conditioned) time-series data. A new metric HRPCFD is introduced, and experiments are conducted for Brownian motion, GANs with applications in finance.
Strengths: The paper is rigorously written, which introduces a new metric HRPCFD on the path-valued processes based on various ideas from probability theory -- extended convergence, rough path, signature... I checked most proofs, and they are correct.
Weaknesses: Weakness and comments:
(1) The paper may be too heavy for the Neurips audience (though I enjoyed reading it). It seems to be more suitable for a rigorous mathematical or statistical journal (e.g., Annals of Statistics).
(2) Many proofs of the results (e.g., Thm 3.3) are purely measure-theoretical, and I think the authors may shrink some proofs to keep the idea concise.
(3) The authors may want to explain why the proposed HRPCFD outperforms others (e.g., signature...) Is there any possible theoretical guarantee?
(4) The authors may have a discussion on the computational effort of the proposed method (e.g., computational complexity and running time). Path-space optimization (or signature-type) methods often suffer from computational inefficiency.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the helpful comments. We address each question in detail as follows.
---
**Answer to Q(1) / Why this work fits NeurIPS**
Our paper addresses the crucial problem of defining a computationally feasible metric on laws of stochastic processes that captures the extended weak topology, which is essential for measuring the sensitivity of decision-making problems with respect to the underlying model. This has significant implications in machine learning, statistics and probability, particularly in hypothesis testing and generative models for time series data. As such, our work aligns well with the scope of NeurIPS.
Besides, we would like to highlight the substantial numerical contributions of our paper, which might be overlooked. In this paper, we design an efficient algorithm for training the HRPCFD from data and construct the HRPCF-GAN by using the HRPCFD as the discriminator for conditional time series generation. Numerical results show the promise of our proposed HRPCF-GAN on empirical time series data. We will make the code repository publicly available upon publication, providing a useful tool for generative modelling of synthetic time series. The concrete algorithms and the description of the corresponding methodology, along with the open-source code repository, should be of independent interest to the machine learning community.
Our work balances the theoretical underpinnings of the HRPCFD metric with the practical implementation of the HRPCF-GAN algorithm for synthetic time series generation. Given the diversity and interdisciplinary nature of the NeurIPS audience, we believe that the NeurIPS conference is a suitable avenue for this work.
---
**Answer to Q(2):** We adopt the reviewer's suggestion and will add an ``informal proof'' of this theorem, emphasising the core idea and the main steps of the proof in an intuitive way so that readers can easily grasp the whole picture of the proof.
---
**Answer to Q(3):**
By the definition of extended weak convergence (EWC), we can see that it induces a genuinely stronger topology than weak convergence (i.e., **EWC always implies weak convergence**, while the converse fails in general; see Example A.1). Moreover, the EWC induces the coarsest topology for which the value function of any decision-making problem is continuous. Since the **HRPCFD metrises the EWC**, it is natural to see HRPCFD outperform other metrics based on weak convergence, e.g., the PCFD and the signature MMD, in hypothesis testing and generative modelling.
---
**Answer to Q(4):**
We summarize the computational complexity and running time of our proposed HRPCF method as follows. We will add Table 1 and a discussion of the computational complexity in the revised version.
1. *Inference time*. As we use the same generator architecture across different GAN models for a fair comparison, the inference time of the HRPCF-GAN is the same as that of the baseline models.
2. *Computational complexity \& training time*. Let $d$ and $T$ denote the feature dimension and time dimension of the target time series data. The training algorithm of HRPCF-GAN can be divided into three parts: 1) vanilla PCF-GAN training, 2) regression training, and 3) HRPCF-GAN training. For each part, the training time complexity per epoch is linear in both $d$ and $T$ when keeping the hyper-parameters the same. However, evaluating the EPCFD/EHRPCFD might be costly when using a very large matrix order $l$ of the Lie algebra. This can be alleviated by employing the scaling-and-squaring method for efficient computation of the matrix exponential; the detailed complexity of this operation is discussed in [1].
In all the numerical experiments of GAN training, we used a moderate matrix order ($l \leq 30$) to achieve satisfactory results. Specifically, our experiments were conducted on a single GPU, with the training time for HRPCF-GAN ranging from 30 minutes to 4 hours. Although HRPCF-GAN takes longer to train compared to other baselines, the total training time is kept at a manageable level, while the HRPCF-GAN consistently delivers better performance. We summarize the computation time of each of the models over 100 training iterations in Table 1.
| Training Time (s) | TimeGAN | RCGAN | PCF-GAN | HRPCF-GAN |
|-------------------|------------------|------------------|------------------|------------------|
| fBM | 11.21 ± 0.28 | 5.98 ± 0.35 | 15.63 ± 1.31 | 31.96 ± 2.92 |
| Stock | 12.48 ± 0.31 | 7.39 ± 0.65 | 17.33 ± 1.36 | 34.52 ± 2.72 |
| RV | 12.75 ± 0.41 | 6.37 ± 0.59 | 12.94 ± 1.21 | 32.77 ± 2.05 |
*Table 1: Time measurement over 100 training iterations. The experiments are done using a single Quadro RTX 8000 GPU; each experiment is repeated 5 times, with the mean and standard deviation recorded.*
3. *Significant computational reduction of HRPCFD over HR-SigMMD*. The dimension of the PCF is independent of $d$, and the same applies to the HRPCF. In contrast, the dimension of the signature grows geometrically with $d$, leading to dimensionality issues for SigMMD. The kernel trick can alleviate this curse of dimensionality for signature-based methods. However, it is worth noting that the training time of SigMMD computed via the signature kernel is quadratic in both the sample size $n$ and the time dimension $T$. In comparison, the computational complexity of HRPCFD is linear in $n$ and $T$.
The significant computational advantage of PCF-based methods over signature-based methods is demonstrated in Table 4 of the appendix.
[1] A. H. Al-Mohy and N. J. Higham. A new scaling and squaring algorithm for the matrix exponential. SIAM Journal on Matrix Analysis and Applications, 31(3):970–989, 2009.
---
Rebuttal Comment 1.1:
Comment: I would thank the authors for the response, and will raise the score to 4.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for raising your score! We noticed that the updated score is still negative. Please let us know your concerns and questions, and we will address them as soon as possible. Thanks in advance for your time.
---
Rebuttal Comment 1.2:
Title: What is the purpose of NeurIPS?
Comment: I possibly share reviewer zBhD's concerns about the scope of NeurIPS. Is this paper within scope for the conference? My feeling is that the conference has increasingly broad and increasingly unclear scope, and we may wish to prevent NeurIPS "eating" all the other conferences and journals. On the other hand, I am not sure that policing this boundary is the responsibility of the authors of any given paper. To my mind this paper "does enough work" to make some abstract but useful mathematical quantity more computationally available, so it is probably within the scope of the conference as it stands. Does the reviewer feel similarly? Or do they see the question of the _heaviness_ of this paper differently?
---
Reply to Comment 1.2.1:
Comment: Thank you very much for your thoughtful comments. You raised a valid point about the scope of NeurIPS, which is an important open question. The interpretation of its scope is subjective, and we respect the different views that individuals may hold.
We appreciate your positive feedback regarding our paper being within the conference's scope. In our opinion, channeling mathematical insights into innovative ML algorithms and investigating their theoretical underpinnings is crucial for advancing machine learning (ML) research. We have made our best efforts to ensure the rigor of our paper by introducing key mathematical tools while still making it accessible to a general ML audience in terms of its computational and application aspects.
We welcome and are open to any suggestions for further improving our presentation of the paper from reviewers. We firmly believe that our paper would be a valuable contribution to the NeurIPS community. | Rebuttal 1:
Rebuttal: We deeply appreciate all the reviewers for their helpful comments and constructive suggestions. We are pleased that all the reviewers find our work sound and well-presented. In the following, we provide detailed responses to the questions raised by each reviewer individually. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand | Accept (poster) | Summary: The paper leverages state-of-the-art conditional generative models and algorithms from causal do calculus to perform "approximately correct" high-dimensional interventional sampling. Their contribution is ID-GEN, a recursive algorithm that uses diffusion models (among other generative models) to sample from any identifiable causal effect estimand in high-dimensional settings. The efficacy of the method is demonstrated on three diverse datasets: Colored MNIST, CelebA, and MIMIC-CXR.
Strengths: - The paper introduces ID-GEN, a novel algorithm that integrates diffusion models with causal inference for high-dimensional interventional sampling.
- ID-GEN creatively exploits the causal structure via the known recursive algorithm for sampling in complex causal scenarios, particularly with latent confounders.
- The algorithm is applied in three applications across diverse datasets (Colored MNIST, CelebA, MIMIC-CXR).
Weaknesses: - The paper is objectively hard to read. Many important graphical elements that should be paired with the text in the main paper are delayed to supplements. This is notably problematic for Example 4.1, where one would expect the full example to be self-contained in the main text.
- The contributions are stated in the introduction, but it still seems hard to understand whether the proposed method is "just" an implementation of the ID algorithm, replacing probability tables by samples from diffusion models. I appreciate that this is hard already, but it has much lower novelty than proposing a new recursive algorithm. This should be well explained in the manuscript.
- The paper does not discuss the implications of the proposed algorithm. Is there any way to extend this to symbolic calculus, or to probabilistic programming? What are the obstacles for moving towards automatic causal inference with images (which would be a super exciting prospect).
Technical Quality: 3
Clarity: 1
Questions for Authors: - Can you clarify whether the proposed ID-GEN algorithm is primarily an implementation of the existing ID algorithm with diffusion models substituting for probability tables, or does it introduce fundamentally new recursive methodologies? How does this distinction affect the perceived novelty of your contribution?
- What are the potential extensions of your algorithm to areas like symbolic calculus or probabilistic programming?
- Are there any specific obstacles that need to be overcome to advance towards automatic causal inference using high-dimensional data such as images? How feasible is this prospect in the near future?
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable effort. We are happy that they acknowledged our algorithm as novel and sound. Below we address their concerns.
> ..., Obstacles for moving towards automatic causal inference with images?
> How feasible is this prospect in the near future?
We interpret the reviewer's mention of "automatic causal inference" as the process of automating every step of causal inference, where the inputs are raw data and assumptions/constraints and the outputs are high-dimensional interventional samples/images. Below we share the steps and the challenges of each step.
Step 1, causal variables: We detect the causal variables in the raw data (challenging in high dimensions).
Step 2, causal graph: We discover the causal relationships among the causal variables obtained from Step 1. It is hard to obtain fully specified causal graphs, especially when high-dimensional variables are involved. Generally, we obtain partial graphs with uncertain edges and/or uncertain directions.
Step 3, causal inference: We learn the underlying causal mechanisms that generate each variable from its causal-graph parents obtained from Step 2. Finally, we generate interventional samples using the learned mechanisms and report the samples as output. Even though some promising results have been shown for fully specified graphs, high-dimensional causal inference is still under-explored in the presence of unobserved shared parents (confounders). Our proposed ID-GEN offers a solution to this problem.
We hope our contribution becomes a useful piece in this challenging process.
> ... extend this to symbolic calculus
In Pearl's causality, we found do-calculus [1] closest to symbolic calculus [2]. In [2], Pearl mentioned that "Polynomial tests are now available for deciding when $P(x_i|do(x_j))$ is identifiable,..., in terms of observed quantities. These tests and derivations are based on a **symbolic calculus**,..., in which interventions, side by side with observations, are given explicit notation, and are permitted to transform probability expressions."
The 3 do-calculus rules express an interventional distribution in terms of the observational distribution. The ID algorithm (Shpitser et al. [45]) we considered actually applies these 3 rules systematically along its recursive steps. Since we follow ID's recursive steps, we are implicitly following the do-calculus rules.
[1] Pearl, J. 2000. Causality. Cambridge University Press, 2nd edition\
[2] Pearl, Judea. "A causal calculus for statistical research." AISTAT (1996): 23-33
> or to probabilistic programming?
We found a relevant work [3] that implements Bayesian causal inference based on probabilistic programming using the MiniStan and Gen programming languages. According to [3], one can use these programs to estimate the parameters of a model from data that represent the conditional distributions of variables. As ID-GEN is model agnostic, one could explore the connection between ID-GEN and the probabilistic programming approach.
[3] Witty, Sam, et al. "Bayesian causal inference via probabilistic program synthesis." arXiv (2019).
> ..., for Example 4.1, ... ,expect the full example to be self-contained in the main text.
We will utilize the additional space of the camera-ready version to accommodate Example 4.1 if our paper gets accepted.
> ...if the proposed method is "just" an implementation of the ID algorithm, replacing probability tables by
> samples from diffusion models.
We apologize for such an impression. Our algorithm definitely offers novel methodologies compared to the mentioned implementation. Let us imagine an algorithm, AlgX, that follows ID step-by-step and trains a diffusion model for every conditional probability table it encounters. Below we discuss the challenges it faces.
$X \rightarrow W_1 $\
$\updownarrow~$ $\swarrow$ $~~\updownarrow$\
$W_2 \rightarrow Y$
Given the query, $P_{x}(y)$, ID factorizes as:
$$P_{x}(y) = \sum_{w_1,w_2} P_{x,w_1}(w_2) \cdot P_{x,w_2}(w_1,y)$$
where each factor of this product is an interventional distribution.
i) We have only observational samples as training data. If we train the diffusion models on our dataset, they will learn conditional distributions. However, if we want to sample from the above factors, we need interventional samples for model training. AlgX doesn't know how to obtain such interventional samples. \
-We recursively solve each factor while generating interventional training data during the top-down recursion (**Alg 1: step 4 + Alg 1: step 7 + Alg 4: Update**).
ii) Suppose AlgX somehow can sample from each factor. In which order should it sample? If it samples $\\{W_2\\} \sim P_{x,w_1}(w_2)$, it needs $W_1$ as input, which has to be sampled from $P_{x,w_2}(w_1,y)$. But to sample $\\{W_1, Y\\} \sim P_{x,w_2}(w_1,y)$, it needs $W_2$ as input, which has to be sampled from $P_{x,w_1}(w_2)$. Therefore, it will reach a deadlock and fail to sample all of $W_1, W_2, Y$ consistently.\
-We disintegrate the factors after the recursion ends into single-variable models and build a sampling network connecting them (**Alg 3: MergeNetwork**).
iii) It is unclear how AlgX will mimic the product and the marginalization of the factors for the generated high-dimensional samples (e.g., image generation).\
-We perform ancestral sampling on our built sampling network and drop the marginalized variables at the end.
iv) ID goes to step 7 for specific queries such as $P_{w_1, w_2, x}(y)$. It iterates over all values of $X, W_2$ and performs $do(X, W_2)$ to update its probability distribution parameter $P(V)$ as
$$P'(V)= P(W_1|X)P(Y|W_1, x, w_2)$$
for the next recursive calls. AlgX equipped with diffusion models does not give any hint about how to mimic this step for high-dimensional sampling.\
-We generate $do(X, W_2)$ interventional training data with **Alg 4: Update**.
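To make points ii)–iii) concrete, the deadlock-free sampling can be illustrated with a toy sketch (the function and variable names here are hypothetical stand-ins, not our released code): each single-variable model is queried in topological order with intervened variables fixed up front, and marginalized variables are dropped at the end.

```python
def ancestral_sample(samplers, topo_order, interventions, keep):
    """Sample each node from its conditional model given its already-sampled
    parents, visiting nodes in topological order so no deadlock can occur."""
    values = dict(interventions)          # e.g. do(X=x) fixes X up front
    for node in topo_order:
        if node not in values:
            values[node] = samplers[node](values)
    # drop the marginalized variables, keeping only the query variables
    return {v: values[v] for v in keep}

# Deterministic stand-ins for trained conditional models, for illustration:
samplers = {
    "W1": lambda v: v["X"] + 1,
    "W2": lambda v: v["W1"] * 2,
    "Y":  lambda v: v["W2"] - v["X"],
}
```

Because every model only reads values that were produced earlier in the topological order, the circular dependency that traps AlgX never arises.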
We are more than willing to have further discussion. We would highly appreciate it if the reviewer would consider a stronger score for our paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response. I will consider changing my score while discussing with other reviewers.
---
Reply to Comment 1.1.1:
Comment: We cordially thank you for reading our responses. We are also happy to know that you would consider changing your score after the discussion with other reviewers. | Summary: This paper proposes an algorithm for sampling from intervention distributions under known causal models with hidden confounders, using conditional generative models.
This allows nonparametric distributional estimation of high-dimensional outcomes such as X-ray image generation, unlike existing methods.
The proposed method combines the existing ID algorithm with a generative model and inherits the ID algorithm's theoretical guarantees. That is, non-identifiable quantities are indicated to be non-identifiable, while all identifiable quantities can be estimated.
Strengths: * They shed light on the new problem setting of causal simulation for outcomes with high-dimensional and multimodal distributions, such as image generation. This could open up new applications if well justified. Such a problem setting requires a very different approach to point estimation of expectations for low-dimensional, unimodal distributions.
* The theoretical background is solid and well-explained. The method is proved to be able to estimate all identifiable quantities and otherwise outputs "unidentifiable."
Weaknesses: [W1] The motivation for high-dimensional distribution estimation is weak. For example, it does not seem very meaningful to me to generate synthetic X-ray images.
[W2] In particular, is it important in cases where there are bidirectional edges due to the presence of hidden confounding factors, but where the causal orientations are all identified among variables? A clear comparison with similar methods would be beneficial for readers, e.g., comparison in assumptions and targets (e.g., parametric/nonparametric, latent confounder, distributional estimation, etc.).
[W3] The base procedure seems to come from existing methods, such as the ID algorithm, and they just combine it with a generative model.
Technical Quality: 4
Clarity: 3
Questions for Authors: [Q1] Related to W2, is the proposed method a non-trivial novelty in situations where there are no unobserved confounding variables, i.e. no bidirectional edges?
[Q2] Related to W3, are there any non-trivial points in theoretical guarantees when the generative model is combined with the ID algorithm?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The method is limited to the cases where the causal direction is known.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their efforts and useful feedback. We are happy that they found our work important and our theoretical background solid. Below we address their concerns.
>Is it important in cases where there are bidirectional edges ..., but the causal orientations are all
> identified ...?
Our considered setup is well known as the semi-Markovian causal model in the literature. However, all causal orientations cannot be identified in general, and we obtain Markov equivalent graphs. There exist some recent works [1] that can estimate specific causal effects in such scenarios. Thus, one could draw connections between [1] and ID-GEN to perform high-dimensional causal inference in the presence of uncertain causal orientations.
[1] Jaber et al. "Identification of conditional causal effects under Markov equivalence." NeurIPS 2019.
> Is the proposed method a non-trivial novelty in situations where there are no ... bidirectional edges?
Even though ID-GEN will work for graphs with no bidirectional edges, this is a much easier problem and can be solved by existing works such as Kocaoglu et al. [26]. The problem becomes non-trivial when bi-directed edges are considered.
>The motivation for high-dimensional distribution estimation is weak. ... not seem very meaningful to generate
> synthetic X-ray images.
We would like to humbly point out that X-ray image generation is not the main goal of our algorithm but part of answering a causal question. We borrowed this application from Chambon et al. [10], who developed the generative imaging models to deal with the lack of high-quality, annotated medical imaging datasets. Our main purpose is to illustrate the type of high-dimensional causal query we can answer.
> The base procedure seems to come from existing methods, such as the ID algorithm, and they just combine it with a
> generative model.
We respectfully disagree with the reviewer that we just combine the ID algorithm with generative models. To explain this point, let us imagine an algorithm, AlgX, that follows ID step-by-step and trains a generative model for every conditional probability table it encounters. Below we discuss the challenges it faces.
$X \rightarrow W_1 $\
$\updownarrow~$ $\swarrow$ $~~\updownarrow$\
$W_2 \rightarrow Y$
Given the query, $P_{x}(y)$, ID factorizes as:
$$P_{x}(y) = \sum_{w_1,w_2} P_{x,w_1}(w_2) \cdot P_{x,w_2}(w_1,y)$$
where each factor of this product is an interventional distribution.
i) We have only observational samples as training data. If we train the generative models on our dataset, they will learn conditional distributions. However, if we want to sample from the above factors, we need interventional samples for model training. AlgX doesn't know how to obtain such interventional samples. \
-We recursively solve each factor while generating interventional training data during the top-down recursion (**Alg 1: step 4 + Alg 1: step 7 + Alg 4: Update**).
ii) Suppose AlgX somehow can sample from each factor. In which order should it sample? If it samples $\\{W_2\\} \sim P_{x,w_1}(w_2)$, it needs $W_1$ as input, which has to be sampled from $P_{x,w_2}(w_1,y)$. But to sample $\\{W_1, Y\\} \sim P_{x,w_2}(w_1,y)$, it needs $W_2$ as input, which has to be sampled from $P_{x,w_1}(w_2)$. Therefore, it will reach a deadlock and fail to sample all of $W_1, W_2, Y$ consistently.\
-We disintegrate the factors after the recursion ends into single-variable models and build a sampling network connecting them (**Alg 3: MergeNetwork**).
iii) It is unclear how AlgX will mimic the product and the marginalization of the factors for the generated high-dimensional samples (e.g., image generation).\
-We perform ancestral sampling on our built sampling network and drop the marginalized variables at the end.
iv) ID goes to step 7 for specific queries such as $P_{w_1, w_2, x}(y)$. It iterates over all values of $X, W_2$ and performs $do(X, W_2)$ to update its probability distribution parameter $P(V)$ as
$$P'(V)= P(W_1|X)P(Y|W_1, x, w_2)$$
for the next recursive calls. AlgX equipped with generative models does not give any hint about how to mimic this step for high-dimensional sampling.\
-We generate $do(X, W_2)$ interventional training data with **Alg 4: Update**.
> ...Any non-trivial points in theoretical guarantees when the generative model is combined with the ID
> algorithm?
We believe that the following theoretical contributions are nontrivial, and allow us to establish our novelty.
**Lemma D.11**: ID carries a probability table as a recursion parameter, whereas we carry an interventional dataset. Even though we follow ID's recursive trace, to deal with the interventional dataset we have two extra parameters: the intervened variables at the step-7 calls, $\hat{X}$, and the causal graph $\hat{G}$ containing $\hat{X}$. This lemma ensures that our interaction with these extra parameters will not make us deviate from the trace of ID.
**Lemma D.14**: We operate on our training dataset and update it to an interventional dataset at the step-7 calls, whereas ID manipulates its probability table from an observational distribution to an interventional distribution. Thus, we guarantee that our training dataset represents the corresponding probability distribution in the ID algorithm at each recursion level.
**Lemma D.20**: Shows that every edge in the sampling graph adheres to the topological ordering $\pi$ of the original graph. This ensures that our proposed concept of a sampling network containing the trained models is valid and samples consistently **in an acyclic manner**.
**Lemma D.21**: After training the conditional models from our recursive steps, we connect them to build a sampling network before performing the sampling. Here, we prove that the sampling network consisting of the trained models can sample from the expected joint distribution.
We are more than willing to have further discussion. We would highly appreciate it if the reviewer would consider a stronger score for our paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fUEc,
We thank you for the valuable time you invested in providing constructive feedback on our paper.
We were wondering if we have addressed all your concerns including the theoretical and technical novelties of our algorithm.
If you have any additional questions in your mind, please let us know. We would be more than happy to answer them.
Also, we have used some latex notations in our responses. Please refresh the page if they do not render properly.
Thank you.
---
Rebuttal 2:
Comment: Thank you for your response.
For [W1], the lack of motivating examples has not been resolved.
For [W2], I understand the problem of unidentified causal direction would be another direction, and the proposed method can be extended.
For [W3], I understand that there would be some issues in a straightforward extension of the ID algorithm to generative models.
Based on the resolution of these concerns, the score is slightly increased.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer fUEc,
We cordially thank you for your response and highly appreciate that you raised your score.
>For [W1], the lack of motivating examples has not been resolved.
We apologize that we misunderstood your concern about the motivational example. To address this concern, we will illustrate the scope of high-dimensional causal inference in another real-world scenario involving brain MRI [1] in our introduction section. Here the goal would be to generate samples from the interventional distribution involving MRI images with attributes such as age, sex, brain volume and ventricle volume [2].
Our algorithm, ID-GEN, would play an important role in estimating causal effects when we consider the presence of confounders among variables, as existing works such as [2] do not consider confounders in the system. We will explain this motivational example in detail.
[1] Sudlow, Cathie, et al. "UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age." PLoS medicine 12.3 (2015): e1001779.\
[2] Ribeiro, Fabio De Sousa, et al. "High fidelity image counterfactuals with probabilistic causal models." arXiv preprint arXiv:2306.15764 (2023). | Summary: This paper studies the problem of sampling from an interventional distribution of high-dimensional data, where the causal relationships are described by a acyclic directed mixed graph (ADMG). Motivated by the ID algorithm that provides a recursive way of identifying any causal effect from a conditional probability table, the authors propose ID-GEN that follows the recursive steps of ID but instead trains deep generative models to fit the conditional distributions. The final sampling model is then obtained by connecting all the trained networks together in some suitable way. The authors prove that ID-GEN is sound and complete, and run extensive experiments on both synthetic and real dataset to demonstrate the effectiveness of their approach.
Strengths: 1. This paper studies the problem of sampling from an interventional distribution, an arguably important problem with broad applications. Current approaches, as the authors point out, are either restricted to simple causal graphs or face computational challenges.
2. Most parts of the paper are clearly written, and sufficient explanations are provided for the key steps in the ID-GEN algorithm. Also, simple examples are provided that help with the understanding of the paper. The paper is also well-organized and the authors put most complicated details into the Appendix.
3. The authors conduct extensive experiments to demonstrate the superior performance of their model, by comparing with other sampling models proposed by previous works.
Weaknesses: I don't think this paper has obvious weaknesses. One thing that the authors may wish to improve is that the notations are a little bit complex; it would be better to remind readers of their meanings more often.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In some identification formulas, e.g. Eq.(1) in the paper, the probability in the denominator might be small. To what extent would this affect the stability of the proposed algorithm?
2. Can your algorithm be straightforwardly adapted to the case of imperfect interventions i.e. some conditional probabilities in the structural causal model are modified, but no causal edges are removed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their effort on our paper. We are really happy that they found our work broadly applicable, our paper well-written, and our experiments extensive. Below, we address their concerns.
>In some identification formulas e.g. Eq.(1) in the paper, the probability on the denominator might be small. To what
> extent would this affect the stability of the proposed algorithm.
A fraction in ID-GEN is generally created when we traverse the recursion and reach a point where we have to estimate a conditional distribution $P(Y|Z)$. This $P(Y|Z)$ can be written as $\frac{P(Y,Z)}{P(Z)}$ and estimated from the joint distribution. Thus, in such an expression, if the denominator $P(Z=z)\sim 0$, it implies that we have a small number of samples for $Z=z$. This will result in high variance in the estimation of $P(Y|Z=z)$. ID-GEN trains generative models to learn and sample from $P(Y|Z)$. Thus, for $Z=z$, the trained models might have low prediction accuracy or low image-generation quality when sampled from the final interventional distribution. However, for other values $Z=z'$ where $P(z')$ is not close to zero, the trained models in ID-GEN show good empirical performance (for example, Figure 2), and this does not affect the overall stability of the algorithm.
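The instability we describe can be seen in a tiny counting sketch (purely illustrative; the helper name is our own): estimating $P(Y{=}1\mid Z{=}z)$ as $P(Y,Z)/P(Z)$ by counting becomes unreliable when few samples satisfy $Z=z$.

```python
def estimate_conditional(samples, z):
    """Empirical P(Y=1 | Z=z) from (y, z) pairs, i.e. P(Y,Z)/P(Z) by counting.
    Returns None when no sample has Z=z (the denominator is zero)."""
    matched = [y for (y, zz) in samples if zz == z]
    if not matched:
        return None
    return sum(matched) / len(matched)
```

With many samples at $Z=z$ the estimate concentrates; with one or two samples it is a high-variance guess, which is exactly the regime where a conditional generative model trained on the same data would also struggle.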
> Can your algorithm be straightforwardly adapted to the case of imperfect interventions i.e. some conditional
> probabilities in the structural causal model are modified, but no causal edges are removed?
We thank the reviewer for such an interesting question. For causal effect estimation in high dimensions, we follow the ID algorithm [1] and consider hard interventions $do(X=x)$, i.e., fixing the intervened variable to a specific value, and assume that the rest of the conditional distributions stay unchanged. However, there are recent algorithms such as [2] that deal with **imperfect/soft interventions**, where the underlying mechanism of the intervened variable $X$ changes instead of being fixed to a specific value. Note that these algorithms deal with low-dimensional variables. Since [2] also utilizes the identification algorithm, in principle our algorithm could be adapted to such cases. However, it might require more careful handling, and we will address it in future work.
[1] Shpitser et al. Complete identification methods for the causal hierarchy. Journal of Machine Learning Research, 9:1941–1979, 2008.\
[2] Correa et al "A calculus for stochastic interventions: Causal effect identification and surrogate experiments." AAAI, 2020. | Summary: This paper provides an algorithm for sampling from a causal interventional distribution using conditional generative models, building on Shpitser and Pearl's ID algorithm. They discuss how their algorithm, ID-GEN, can sample from any identifiable distribution given a causal graph, and handles the presence of unobserved confounders (when identifiable). Empirically, they demonstrate their method can work for measurement, evaluation and interpretability purposes in the challenging setting where both the target and treatment are high dimensional e.g. images.
Strengths: - interesting work, the case of high-dimensional variables in a causal graph is a super important and under-discussed one
- thorough theoretical treatment of the extension of the ID algorithm
- experiments show a nice range of usages of the suggested ID-GEN approach
Weaknesses: - I frankly got a bit lost in a few key parts of Section 3. I got the main ideas (I think) but missed a bunch of nuance. Some spots were: Example 4.1 (I don’t understand why ID fails in this case but ID-GEN succeeds - specifically the importance of merging is a bit lost on me), Step 4 of ID-GEN (again, merging), and Step 7 of ID-GEN (I think the logic around how training data is sampled, used and modified wrt the graph needs to be explained more clearly)
- Step 1 of ID-GEN confuses me - I don’t see why we can’t just learn a model of P(y) directly in this case? Also the 2nd equality in 203 doesn’t make sense to me - how is the sum over values of v equal to P(y)?
- in each experimental section I find I have at least a medium-sized point of confusion around the setup or evaluation - more care should be taken to explain empirical setup + results overall
- In 5.1, the authors state that U_color makes W1 and Y correlated by color - however, X contains color information and is a direct ancestor of Y, so this unobserved confounding seems trivial
- in 5.1, it seems like a better metric than a classifier (which may be unreliable and as you note isn’t useful for all possible images) would be something based specifically on the RGB values of the pixels themselves
- in 5.2, I don’t quite see why Young & Male have unobserved confounding - are they not fully determined by the shared + observed parent I_1?
- in 5.3, I don’t understand why the report is a causal ancestor in the graph - isn’t it generated upon viewing the X-ray?
- in 5.3, I think the setup with the labeller can be made clearer - how good is this labeller? How is it structured? Additionally, is the bottom row intended to be a success or failure examples? (label says it should be right lung base but all inferences name the left lung)
Smaller points:
- L115: are unobserved confounders only allowed to affect 2 variables in this framework? Is that more limiting than general SCMs?
Technical Quality: 3
Clarity: 3
Questions for Authors: - would be great to see clarification throughout Section 3, particularly in the highlighted areas and around merging and training data sampling
- experimental setups all need clarifications
- generally assuming I understood what's happening better I'd be happy to increase my score
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable efforts on our paper. We are happy to receive their appreciation for our theoretical contribution and experimental setup. Below we address their concerns.
> Step 1 of ID-GEN ... why we can't just learn a model of P(y) directly? ..., how is the sum over values of v equal to
> P(y)?
The reviewer is correct that if the intervention set is empty, we can learn a single model $M$ that can sample the variable set $\mathbf{Y}$ from $P(\mathbf{Y})$, especially if all variables in $\mathbf{Y}$ are low-dimensional. However, we consider that $\mathbf{Y}$ is allowed to contain both low- and high-dimensional variables. Rahman and Kocaoglu [40] show that it is non-trivial to achieve convergence if we directly attempt to match such joint distributions $P(\mathbf{Y})$.
In line 203, we stated that $P(y)= \sum_{\mathbf{v} \setminus y} P(\mathbf{v})$. Here, $\mathbf{V}$ is the set of all variables and $Y$ is the target variable. Please note that we do not sum over $\mathbf{V}$ but rather over $\mathbf{V} \setminus Y$. Intuitively, we first consider all variables and then drop (sum out) the unnecessary ones.
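The "consider all variables, then drop the unnecessary ones" step can be sketched for a small discrete joint table (a toy illustration of marginalization, not our high-dimensional sampling procedure):

```python
def marginalize(joint, keep):
    """Compute P(keep) = sum over V \\ keep of P(v) for a discrete joint table.
    `joint` maps frozensets of (variable, value) pairs to probabilities."""
    out = {}
    for assignment, p in joint.items():
        key = frozenset((var, val) for var, val in assignment if var in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# Toy joint over X and Y; marginalizing out X leaves P(Y).
joint = {
    frozenset({("X", 0), ("Y", 0)}): 0.1,
    frozenset({("X", 0), ("Y", 1)}): 0.2,
    frozenset({("X", 1), ("Y", 0)}): 0.3,
    frozenset({("X", 1), ("Y", 1)}): 0.4,
}
```

In the high-dimensional setting the table cannot be enumerated, which is why ID-GEN instead samples all variables ancestrally and simply discards the marginalized ones.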
> more care should be taken to explain empirical setup + results overall
We apologize for the confusion and will bring the Napkin-MNIST, CelebA, and Xray generation empirical details from the
appendix to the main paper.
> In 5.1, the authors state that U\_makes W1 and Y correlated by color - however, X contains color information and
> is a direct ancestor of Y, so this unobserved confounding seems trivial.
We humbly point out that even though image X (which takes colors R and G) is a direct ancestor of image Y (which takes all 6 colors),
$Y$ **only inherits the digit property** from $X$ but correlates the color property with image $W_1$. This color
correlation between $W_1$ and $Y$ is created by U\_color.
> It seems like a better metric than a classifier, ..., would be something based specifically on the RGB values of the
> pixels themselves
As we are not doing counterfactual generation, we should not match against any specific image.
Our goal is to generate images that are correct at the population level.
For example, in Fig 2 (napkin-MNIST), if we generate 1000 images from the interventional distribution,
there should be around 167 images for each of the 6 colors (Pr=0.17). Since the output is not deterministic, we cannot
evaluate them with pixel-wise RGB values.
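As a toy illustration of this population-level check (with hypothetical classifier outputs in place of our actual pipeline), one can count the predicted colors over generated samples and compare the counts against the expected uniform frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
n_colors, n_samples = 6, 1000
# Hypothetical color predictions of a classifier on 1000 generated images.
pred_colors = rng.integers(0, n_colors, size=n_samples)

counts = np.bincount(pred_colors, minlength=n_colors)
expected = n_samples / n_colors   # ~167 per color when Pr = 1/6
print(counts, expected)
```

A population-level metric compares `counts` to `expected`; a pixel-wise metric would instead require a ground-truth image per sample, which does not exist here.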
> in 5.2, ..., why Young & Male have unobserved confounding - are they not fully determined by the shared + observed
> parent I_1?
In the CelebA dataset, men are more likely to be old (correlation coeff=0.4, (Shen et al [44])). As a result, any
classifier trained on CelebA will not have 100% accuracy at the population level and might have some bias toward
predicting young-male images as [Old, Male].
> in 5.3, ..., why is the report a causal ancestor in the graph - isn't it generated upon viewing the X-ray?
We followed the setup discussed by Chambon et al [10] where they generate synthetic X-ray images from prescription
reports
with a vision-language foundation model. We aim to make the model's predictions interpretable and invariant to domain
shifts.
Thus, we follow the causal mechanisms of the environment the model will be deployed in to build the causal graph.
Therefore, we consider the same initial input and final output as the model, having the report variable as an ancestor
of the X-ray image.
> ..., the setup with the labeller can be made clearer - how good is this labeller?
We skipped details on the X-ray report labeler as it is taken from Gu et al [16] and not our contribution.
However, we will add more details. In short, it is a BERT architecture trained on pseudo-labels created by GPT-4
to classify findings in CXR reports. It achieves the best or the second-best F1 score compared to its baselines.
> The label says it should be the right lung base but all inferences name the left lung.
We apologize for the mistake and have fixed it. The label should have said, "Atelectasis at the **left** lung base".
Both the top and bottom rows are simulations of how ID-GEN is more useful than the direct application of the
baseline LLM.
> L115: are unobserved confounders only allowed to affect 2 variables in this framework? Is that more limiting than
> general SCMs?
We consider the commonly used semi-Markovian SCM where unobserved confounders only affect two variables. A more relaxed
assumption is when hidden confounders
can affect any number of variables, known as the non-Markovian SCM.
> Step 4 of ID-GEN (again, merging), and Step 7 of ID-GEN,...,needs to be explained more clearly
We apologize that some parts of our paper were hard to understand. We will elaborate the definitions in Section 3 and add
more discussion on the merging in Step 4 and the parameter
updates in Step 7.
> Example 4.1 ... I don’t understand why ID fails ... but ID-GEN succeeds. ...
ID can't deal with high-dimensional variables. However, if we imagine an algorithm that follows ID step-by-step while
training a generative model for every factor it encounters,
that approach might fail at step 4. Step 4 factorizes the query into multiple factors and this algorithm would not know
which factor to learn and sample first with the generative models.
In some cases, there exists no such sampling order of the factors at all (cyclic issue).
ID-GEN avoids sampling factor by factor; it first trains the required models for each factor, considering all possible
input values.
After the recursion ends, it disintegrates the variables in the factors and treats their models as individual nodes.
A sampling network is then built connecting these model nodes according to their inputs and outputs. This network
enables consistent interventional sampling.
We will be happy to provide further clarification for any remaining concerns of the reviewer. We would highly appreciate it if the reviewer would consider a stronger score for our paper.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: Thanks for the rebuttal - I appreciate the corrections on some of the points. I think that given that my main issues were around general methodological confusion I probably will lean against increasing my score, but happy to consider this given discussion with other reviewers.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 6o9W,
We thank you for your reply. We agree that some steps of our algorithm might seem a little hard to understand. We aim to add more details on those steps according to your feedback. If you prefer any further clarification, please let us know and we would be happy to elaborate.
Thank you. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Revive Re-weighting in Imbalanced Learning by Density Ratio Estimation | Accept (poster) | Summary: This paper presents a dynamic re-weighting method for imbalanced learning. The author defines the ratio of the balanced data set distribution to the training set distribution, and tries to estimate it with an iterative update method. The effectiveness of this method is proved by experiments.
Strengths: 1. This paper points out a problem with distribution differences, which leads to the potential missing feature patterns in general re-weighting methods.
2. This paper proposes a new method, which approximates the ratio of the balanced data set distribution to the training set distribution using methods of density ratio estimation. As far as I know, a dynamic re-weighting strategy is novel in this field.
3. The experimental introduction of this paper is clear, and extensive experiments have been carried out, which validates the effectiveness of the proposed method.
Weaknesses: 1. The formula derivation in Sec. 3.3 can be more detailed. It is suggested to explain how formula (7) is obtained in the appendix.
2. The introduction may have overlooked some key articles. For example, the article mentions Wang et al. 's article at the end of Sec.3.3, but does not discuss this paper in the introduction section.
3. Does the new method enjoy the same theoretical boundaries as the general reweighting method? It is recommended to provide more analysis.
4. Besides, there are some typos in the details:
- In the experimental section, 'class[390,385] 'may be a typo.
- In table 3, the interpretation of Tr_{Few} is supposed to be there.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to Weaknesses.
Besides, I am also curious about some experimental details. Do the authors use other techniques such as RandAug or mixup?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No. Although the authors say they clarify the limitations in Sec.3.4, I find they mainly highlight the efficiency of the proposed method. More discussion about limitations and potential negative societal impact is recommended.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The formula derivation in Sec. 3.3 can be more detailed. It is suggested to explain how formula (7) is obtained in the appendix.
Thank you for the suggestion. We will include a detailed derivation of Eq. (7) in the appendix of the revision; for clarity, we summarize the deduction as follows:
Going back to Eq. (5), we replace the empirical expectations over $P$ and $P _ {bal}$ by $\boldsymbol{\Phi} _ {P}$ and $F _ {P _ {bal}}$, respectively. For each class $i$, we can obtain
$\widehat{r _ {i}}=\arg\min _ {r _ i}\widehat{\mathrm{MM}(r)}$, where $\widehat{\mathrm{MM}(r)}=
\frac{1}{n _ {i}^{2}}{r _ i}^\top{\boldsymbol{\Phi} _ {P}^{i}}^{\top} {\boldsymbol{\Phi} _ {P}^{i}}{r _ i}-\frac{2}{n _ {i}}r _ {i}^\top{\boldsymbol{\Phi} _ {P}^{i}}^\top F _ {i}$
Then, taking the derivative of $\widehat{\mathrm{MM}(r)}$ with respect to $r$ and setting it to zero, we can obtain the estimation of density ratio in imbalanced learning as follows
$\frac{2}{n _ {i}^{2}} {\boldsymbol{\Phi} _ {P}^{i}}^{\top} {\boldsymbol{\Phi} _ {P}^{i}} r _ {i}-\frac{2}{n _ {i}}{\boldsymbol{\Phi} _ {P}^{i}}^{\top}F _ {i}=0$
Solving equation above with respect to $r _ {i}$, we can obtain the solution as
$\widehat{r _ {i}}=n _ {i}\left({\boldsymbol{\Phi} _ {P}^{i}}^{\top} \boldsymbol{\Phi} _ {P}^{i}\right)^{-1} {\boldsymbol{\Phi} _ {P}^{i}}^{\top} F _ {i}$
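For intuition, this closed-form solution can be checked numerically with a small NumPy sketch (toy random features; the shapes and names here are our own assumptions, not from the paper): the solution should make the derivative of the moment-matching objective vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_i = 20, 8                       # toy feature dimension and class-i sample count
Phi = rng.standard_normal((d, n_i))  # columns: feature maps of the n_i samples
F = rng.standard_normal(d)           # stand-in for the mean-embedding term F_i

# Closed form from setting the derivative of MM(r) to zero:
# r_hat = n_i * (Phi^T Phi)^{-1} Phi^T F
r_hat = n_i * np.linalg.solve(Phi.T @ Phi, Phi.T @ F)

# Verify the stationarity condition (2/n_i^2) Phi^T Phi r - (2/n_i) Phi^T F = 0.
grad = (2 / n_i**2) * Phi.T @ Phi @ r_hat - (2 / n_i) * Phi.T @ F
assert np.allclose(grad, 0.0)
```

Using `np.linalg.solve` rather than an explicit inverse is the standard numerically stable way to evaluate $(\boldsymbol{\Phi}^\top \boldsymbol{\Phi})^{-1}\boldsymbol{\Phi}^\top F$.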
> The introduction may have overlooked some key articles. For example, the article mentions Wang et al. 's article at the end of Sec.3.3, but does not discuss this paper in the introduction section.
We will take the reviewer's advice to carefully revise the second paragraph of the Introduction (lines 32-34), especially including a discussion of the work by Wang et al. and other related studies. We present the corresponding revision as follows for your reference.
However, such subsequent improvements can alleviate but still cannot effectively push that forward. *Wang et al. [2023] obtains a fine-grained generalization bound for re-weighting in imbalanced learning through the data-dependent contraction technique.* Limited research has focused on the intrinsic limitations...
> Does the new method enjoy the same theoretical boundaries as the general reweighting method? It is recommended to provide more analysis.
Thank you for your suggestion. To address the reviewer's concern, we present a generalization bound sketch for re-weighting-based methods and provide possible insights with empirical verification in the following. Note that we will add its formal version, with the complete hypothesis and the deduction procedure, to the manuscript in the final version.
Given the model $m\in M$ and a reweighting loss function $L_{RW}$, for any $\delta \in(0,1)$, with probability at least $1-\delta$ over the training set $\mathcal{S}$, the following generalization bound holds for the risk on the balanced distribution
$\mathcal{R} _ {\text {bal}}^{L}(m) \precsim \Phi\left(L _ {RW}, \delta\right)+\frac{\mathfrak{C} _ {\mathcal{S}}(M)}{d \pi _ {1}} \sum _ {y=1}^{d} w _ y \sqrt{\pi _ {y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$
where $\Phi\left(L _ {RW}, \delta\right)$ is positively correlated with the empirical reweighting risk on the training set, $\mathfrak{C} _ {\mathcal{S}}(M)$ denotes the empirical complexity of the function set $M$, $B _ {y}(m)$ denotes the minimal prediction on the ground-truth class $y$ in the training set, and $w _ y$ refers to the weight of class $y$ in the reweighting loss $L _ {RW}$. The formal theorem and the proof will be presented in the revision.
We can get some insights from the above generalization bound:
1. **Why reweighting is necessary**: $w _ y$ helps rebalance the imbalanced term $\sqrt{\pi_{y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$, yielding a sharper bound.
2. **Why dynamic reweighting is necessary**: The term $B _ {y}(m)$ changes dynamically with model training. Therefore, we need a $w_y$ that can adapt dynamically to the changes of $B _ {y}(m)$.
3. **Why RDR works**: From Figure 1 in **the attached PDF**, we observe that the dynamic trend of the RDR weight aligns well with $\sqrt{\pi _ {y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$, denoted as $B _ y^{\prime}$. This demonstrates that our RDR can adapt to the dynamic changes in $B _ y^{\prime}$, maintaining a sharp bound during dynamic training.
> Besides, there are some typos in the details:
> In the experimental section, 'class[390,385] 'may be a typo.
> In table 3, the interpretation of Tr_{Few} is supposed to be there.
> Do the authors use other techniques such as RandAug or mixup?
Thank you for pointing these out. We have carefully proofread the whole manuscript, corrected the typos, and added a table with necessary explanations of notations and terms (please refer to the notation table in the **attached PDF file**). $\overline{\textit{Tr}}_{Few}$ denotes the average trace of the Hessian matrix over the Few classes. These changes will be included in the revision. We used RandAug in our experiments but did not include mixup, for fair comparison.
> More discussion about limitations and potential negative societal impact is recommended.
Thanks for your suggestion. We will enrich the discussion of limitations and societal impact in the revision. Specifically, we will highlight the potential computational complexity issue when scaling up to a very large number of classes (e.g., in face recognition, retail product recommendation, and landmark detection), which calls for additional attention to corresponding lightweight techniques. Regarding the potential negative societal impact, we should clearly recognize that overly rebalancing toward minority groups during training may have a destructive effect on the learning of majority groups, which is also not the desirable goal of imbalanced learning. All rebalancing techniques should be built on a proper range for fairness, and improper abuse by malicious parties should be avoided.
---
Rebuttal 2:
Comment: Dear Reviewer 24eD,
We genuinely appreciate your detailed feedback and the insightful comments you've provided on our manuscript.
We have provided more clarifications and explanations as suggested. Additionally, we have discussed limitations and potential negative societal impact.
Please let us know if anything is unclear. We truly appreciate this opportunity to improve our work and shall be grateful for any feedback you could give to us.
Best Regards,
The authors of Submission 4464
---
Rebuttal 3:
Comment: Thanks for your rebuttal! My concerns have been clarified. Hence, I will increase my rating accordingly.
---
Rebuttal Comment 3.1:
Comment: Dear Reviewer 24eD,
We greatly appreciate your time and effort in reviewing our responses and contributing to the enhancement of this paper. We will carefully follow your suggestions to incorporate all the points of our rebuttal in the revised version.
Best,
The authors of Submission 4464 | Summary: The paper introduces a novel approach called Re-weighting with Density Ratio (RDR) to address the challenges posed by imbalanced data distributions in machine learning. The RDR approach aims to mitigate overfitting on majority classes and enhance adaptability across diverse datasets by continuously updating the weights in response to observed shifts in class density. Extensive experiments on various large-scale, long-tailed datasets demonstrate that the RDR method significantly improves the model's generalization capabilities, particularly under severely imbalanced conditions. The analysis of the weight changes during training reveals that the method increasingly focuses on minority classes as training progresses, initially learning common features across all categories and then targeting learning towards minority samples to enhance generalizability. The paper also provides an ablation study to further validate the effectiveness of the proposed approach.
Strengths: 1. The paper introduces a novel approach called Re-weighting with Density Ratio (RDR) to address the challenges posed by imbalanced data distributions in machine learning.
2. Extensive experiments on various large-scale, long-tailed datasets demonstrate that the RDR method significantly improves the model's generalization capabilities, particularly under severely imbalanced conditions.
3. The analysis of the weight changes during training reveals that the method increasingly focuses on minority classes as training progresses, initially learning common features across all categories and then targeting learning towards minority samples to enhance generalizability.
4. The paper provides an ablation study to further validate the effectiveness of the proposed approach.
5. The results show that RDR generally outperforms other methods, including Inverse Frequency (1/n) and SAM variants, in both the Many and Few classes, indicating that RDR can efficiently address the overfitting issues for Few classes.
Weaknesses: - The paper does not provide a detailed theoretical analysis or justification for the proposed Re-weighting with Density Ratio (RDR) method, beyond the intuition that it can mitigate overfitting on majority classes and enhance adaptability across diverse datasets.
- I am interested in how RDR might perform in the presence of extreme imbalance, noisy data, or other challenging scenarios. The current experiment is well-established but dataset itself is relatively simple.
- The paper discusses reweighting/non-reweighting for classification problems. I suggest the authors also briefly discuss reweighting methods in imbalanced regression problems, e.g., VIR [1] for reweighting problems and ConR [2] for non-reweighting problems.
[1] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing, NeurIPS 2023
[2] ConR: Contrastive Regularizer for Deep Imbalanced Regression, ICLR 2024
**Summary** I think the theoretical analysis or at least insights is needed for acceptance, so my suggest score is 5, as the experiment part is excellent in this paper.
Technical Quality: 4
Clarity: 2
Questions for Authors: see above
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 2
Limitations: authors discussed in sec 3.4
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper does not provide a detailed theoretical analysis or justification for the proposed Re-weighting with Density Ratio (RDR) method, beyond the intuition that it can mitigate overfitting on majority classes and enhance adaptability across diverse datasets.
Thank you for your suggestion. To address the reviewer's concern, we present a generalization bound sketch for the re-weighting-based methods and provide the possible insights with the empirical verifiction in the following. Note that, about its formal version with the complete hypothesis and the deduction procedure, we will update to the manuscript in the final.
Given the model $m\in M$ and a reweighting loss function $L _ {RW}$, for any $\delta \in(0,1)$, with probability at least $1-\delta$ over the training set $\mathcal{S}$, the following generalization bound holds for the risk on the balanced distribution
$\mathcal{R} _ {\text {bal}}^{L}(m) \precsim \Phi\left(L _ {RW}, \delta\right)+\frac{\mathfrak{C} _ {\mathcal{S}}(M)}{d \pi_{1}} \sum _ {y=1}^{d} w _ y \sqrt{\pi _ {y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$
where $\Phi\left(L _ {RW}, \delta\right)$ is positively correlated with the empirical reweighting risk on the training set, $\mathfrak{C} _ {\mathcal{S}}(M)$ denotes the empirical complexity of the function set $M$, $B _ {y}(m)$ denotes the minimal prediction on the ground-truth class $y$ in the training set, and $w_y$ refers to the weight of class $y$ in the reweighting loss $L _ {RW}$. The formal theorem and the proof will be presented in the revision.
We can get some insights from the above generalization bound:
1. **Why reweighting is necessary**: $w _ y$ helps rebalance the imbalanced term $\sqrt{\pi _ {y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$, yielding a sharper bound.
2. **Why dynamic reweighting is necessary**: The term $B _ {y}(m)$ changes dynamically with model training. Therefore, we need a $w _ y$ that can adapt dynamically to the changes of $B _ {y}(m)$.
3. **Why RDR works**: From Figure 1 in **the attached PDF**, we observe that the dynamic trend of the RDR weight aligns well with $\sqrt{\pi _ {y}}\left[1-\operatorname{softmax}\left(B _ {y}(m)\right)\right]$, denoted as $B _ y^{\prime}$. This demonstrates that our RDR can adapt to the dynamic changes in $B _ y^{\prime}$, maintaining a sharp bound during dynamic training.
> I am interested in how RDR might perform in the presence of extreme imbalance, noisy data, or other challenging scenarios. The current experiment is well-established but dataset itself is relatively simple.
Thank you for the suggestion. In the following, we conduct more experiments. The results are shown in Table 2 and Table 3 in the **attached PDF**.
- We include the results of two new datasets: CIFAR-10-LT-NL and CIFAR-100-LT-NL, with both class imbalance and label noise. We can see that on more complex datasets, our method achieves consistent and significant improvements.
- Additionally, note that the imbalance factors constructed on ImageNet-LT and Places-LT in the submission are 256 and 996, respectively. These datasets are indeed extremely imbalanced. We will explicitly indicate the imbalance factors of these two datasets in the experiment tables in the revision. Furthermore, we conduct experiments on the more imbalanced CIFAR-10-LT and CIFAR-100-LT datasets, specifically with imbalance factors of 200 and 500. Our method consistently achieves significant improvements.
> The paper discusses reweighting/non-reweighting for classification problems. I suggest the authors also briefly discuss reweighting methods in imbalanced regression problems, e.g., VIR [1] for reweighting problems and ConR [2] for non-reweighting problems.
[1] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing, NeurIPS 2023
[2] ConR: Contrastive Regularizer for Deep Imbalanced Regression, ICLR 2024
Thank you for the valuable suggestion and the recommendation. We will include a brief discussion on imbalanced regression problems in the revision as follows:
We shall note that the primary focus of this study is imbalanced classification, which has a discretized label space. Beyond that, a noteworthy area, imbalanced regression, which has a continuous label space, is also very common in real applications [a,b]. In this direction, the empirical label distribution often does not accurately reflect the true label density in regression tasks, which limits the effectiveness of traditional reweighting techniques [a,c]. Label Distribution Smoothing (LDS) [a] and Variational Imbalanced Regression (VIR) [c] propose using kernel smoothing and other techniques to estimate an accurate label density distribution. Ranking Similarity (RankSim) [b] leverages local and global dependencies by encouraging correspondence between the similarity order of labels and features. Balanced Mean Squared Error (Balanced MSE) [d] extends the concept of Balanced Softmax [e] to regression tasks to achieve a balanced predictive distribution. Contrastive Regularizer (ConR) [f] improves contrastive learning techniques to translate label similarities into the feature space. Considering the different rebalancing paradigms compared with imbalanced classification, and the limited space, we leave the potential extension of our RDR to this area to future exploration.
[a] Delving into Deep Imbalanced Regression. ICML 2021.
[b] RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression. ICML 2022.
[c] Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing. NeurIPS 2023.
[d] Balanced MSE for Imbalanced Visual Regression. CVPR 2022.
[e] Balanced Meta-Softmax for Long-Tailed Visual Recognition. NeurIPS 2020.
[f] ConR: Contrastive Regularizer for Deep Imbalanced Regression. ICLR 2024.
---
Rebuttal Comment 1.1:
Title: reviewer update
Comment: I would like to thank the authors for their detailed response. I will take all the reviews and responses into consideration.
---
Rebuttal Comment 1.2:
Title: score update
Comment: It seems that the other reviewers have not responded yet. After reviewing the authors' responses to the other reviewers, I have decided to raise my score. However, I **cannot** promise that the authors' responses address the concerns of the other reviewers, so the authors **should not use my improved score as a reference or evidence**.
In summary, I appreciate the authors' responses to all of us. Specifically, I feel that the authors have addressed my questions, so I have decided to increase my score.
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer MG5X,
We sincerely appreciate you taking the time to review our responses and for contributing to improving this paper. We will carefully follow the reviewer's advice to incorporate all the addressed points, with additional experiments, in the updated version.
We promise not to use your improved score as evidence to persuade other reviewers, but focus on truly addressing their concerns.
Thank you once again for your dedicated and valuable contribution in reviewing our paper!
Best,
The authors of Submission 4464 | Summary: The paper presents a weighting strategy in order to handle class imbalance. Contrary to existing method, they propose to adapt the weight throughout the training procedure.
Their method estimates the discrepancy between the sample distribution and the balanced sample distribution for the parameterization w and updates the estimate throughout training.
The authors use two resnet architectures to evaluate their contribution on multiple datasets. They also compare to other baselines and show significant gain.
Strengths: * The paper develops a novel approach for handling class imbalance.
* The methodology is derived theoretically from the problem formulation
* The authors propose an analysis of the complexity of the method and empirically evaluate the training time.
* The methodology is evaluated on multiple datasets and compared to multiple baselines.
Weaknesses: * The paper is sometimes difficult to read:
* Row 125, the authors refer to the distribution of the training set, which gets parameterized by w. Thus, my understanding is that the authors refer to the distribution of the training set "captured by the model".
* row 134, P_bal = pi P.. P_bal is the distribution of y in the balanced case ? But should therefore be 1/number classes... and P, should just be the class proportion and we should have P = pi P_bal ?
* LDAM and LA terms are not defined at first. First definition of LA is at row 208
* row 212 "trategies" => strategies
Technical Quality: 3
Clarity: 1
Questions for Authors: * Could you clarify the term P, etc.? You often refer to it as the "real world data distribution", but the distribution of x does not depend on any other classes (imbalanced or not)?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The paper would benefit from a clarification in notation (what is the true data distribution, what is the feature distribution, what is an estimate of what quantity, etc.). I believe the contribution is novel and worthwhile.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Row 125, the authors refer to the distribution of training set, which get parameterized by w. Thus, my understanding is that the authors refer to the distribution of the training set "captured by the model".
Yes, you are right. We use the distribution parameterized by $w$ to represent "the distribution captured by the model". We will enrich its description for clarity in the revision.
> row 134, P_bal = pi P.. P_bal is the distribution of y in the balanced case ? But should therefore be 1/number classes... and P, should just be the class proportion and we should have P = pi P_bal ?
Thank you for pointing out this oversight: the term $\pi_i$ in the equation should indeed be $\frac{1}{\pi_i}$. We would like to clarify here that this mistake is limited to this equation and does not affect the correctness of our subsequent derivations. We greatly appreciate the reviewer’s careful review and will make the correction.
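As a one-line numeric sanity check of the corrected relation (with toy class priors of our own choosing, not values from the paper): reweighting the class proportions by $\frac{1}{\pi_i}$, up to normalization, indeed recovers the uniform balanced distribution.

```python
import numpy as np

pi = np.array([0.7, 0.2, 0.1])   # toy imbalanced class priors P(y)
weighted = (1.0 / pi) * pi       # P_bal proportional to (1/pi) * P
p_bal = weighted / weighted.sum()
assert np.allclose(p_bal, 1 / len(pi))   # uniform: each class gets 1/3
```

The check makes it clear why the factor must be $\frac{1}{\pi_i}$ rather than $\pi_i$: multiplying by $\pi_i$ would make the imbalance quadratically worse instead of removing it.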
> LDAM and LA terms are not defined at first. First definition of LA is at row 208
> row 212 "trategies" => strategies
LDAM and LA respectively refer to Label-Distribution-Aware margin loss and Logit Adjustment, two prevalent long-tailed learning methods. We will include their full names at the first mention. For the typo, we have carefully proofread the whole manuscript again and corrected all possible mistakes. These changes will be included in the revision.
> Could you clarify the term, P, etc. You often refer to it as "real world data distribution", but the distribution of x does not depends on any other classes (imbalanced or not) ?
Sorry for the possible confusion. Here is a clarification of $P$:
$P$ represents the real-world data distribution, or more directly, the training set distribution. Specifically, $P(x,y)$ is the joint distribution of the training data $(x, y)$, and the training set $D$ is obtained by IID sampling from $P$. $P(x)$ and $P(y)$ represent the marginal distributions of the training data $x$ and $y$, respectively. $P(x|y)$ and $P(y|x)$ represent the corresponding conditional distributions. The other distribution notations parameterized by $w$ denote the corresponding distributions captured by the model, as per the reviewer's understanding.
We also place a notation table for clarity in the **attached PDF file**, which will be placed into the manuscript if possible (or in the appendix if not). If you have any further questions or need any clarifications, please let us know. We will carefully improve the manuscript.
---
Rebuttal 2:
Comment: Dear Reviewer xrua,
We sincerely appreciate your constructive feedback and positive evaluation of our submission.
We have provided more clarifications and explanations, made necessary corrections, and improved definitions as suggested.
Please let us know if anything is unclear. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give to us.
Best Regards,
The authors of Submission 4464
---
Rebuttal Comment 2.1:
Title: Answer
Comment: Dear authors,
I thank you for your answers. After considering your propositions and, mostly, the enhanced readability (e.g., the symbol table), I have decided to raise my score.
Regards
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer xrua,
We sincerely appreciate you taking the time to review our responses and for contributing to improving this paper. We will carefully follow your advice to incorporate all the points of our rebuttal in the updated version.
Best,
The authors of Submission 4464 | null | null | Rebuttal 1:
Rebuttal: We thank reviewers for your valuable feedback, and appreciate the great efforts made by all reviewers, ACs, SACs and PCs.
Please refer to our detailed responses to each reviewer, where we addressed each question and concern point by point. In the **attached PDF**, we have included a notation summary table for improved readability and clarification, additional tables for experimental results, along with the empirical validation of the theoretical insights presented in the rebuttal.
We appreciate all reviewers’ time again. We are looking forward to your reply!
Pdf: /pdf/fd049f6923267f191d534af72f72d078c595b746.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective | Accept (poster) | Summary: This paper investigates the two different misalignment issues between CLIP and downstream tasks, i.e., task misalignment and data misalignment. The author designed several experiments that demonstrated that over-fitting occurs when tuning with the learnable prompt. They propose the Causality-Guided Semantic Decoupling and Classification(CDC) method to mitigate the impact of task-irrelevant generative factors on downstream tasks. The extended experiments demonstrate that the proposed CDC method is effective.
Strengths: This paper investigates the difficulty of adapting CLIP to downstream tasks via two-level misalignment, which is a bright idea that helps the community understand the working mechanism. The authors then provide comprehensive experiments that reveal how overfitting occurs and impacts the recognition of new classes. The authors use the perspective of causal inference to alleviate data misalignment and propose CDC with front-door adjustment for implementation, which predicts with explicit evidence.
Weaknesses: 1. There is no concrete evidence showing in which cases CDC improves predictions, i.e., which categories were previously misclassified and are corrected by CDC. It would be better to compare several failure cases to demonstrate at which level CDC can solve the problem and which cases still need more advanced methods.
2. It is interesting how the misalignment between CLIP and downstream tasks changes as the model's capability increases, such as with ViG-Large, and whether a more powerful model can solve the misalignment problem. Hence, it would be better if some comparisons across models could be added.
3. As we all know, MLLMs have already swept through the multimodal community; it would be better to expand the discussion of misalignment in this paradigm and whether the current method easily generalizes to it.
4. How many parameters are tuned during downstream adaptation? Compared to the pre-trained model, what is the ratio of tuned parameters?
Technical Quality: 3
Clarity: 2
Questions for Authors: see weaknesses
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer 7jjJ for the valuable feedback and constructive suggestions. The mentioned issues are addressed as follows:
***
**W1**: There is no concrete evidence showing in which cases CDC improves predictions, i.e., which categories were previously misclassified and are corrected by CDC. It would be better to compare several failure cases to demonstrate at which level CDC can solve the problem and which cases still need more advanced methods.
**A1**: Thank you for your suggestions. We provide several examples in the **attached PDF of the global rebuttal** to compare the prediction results of CDC with those of the baseline method MaPLe. We will include these analyses in the appendix of the final version of our paper to provide an intuitive understanding of our proposed CDC.
In the success cases in Figure 1, we observe that:
(1) Based on different prompt templates, the model captures different semantic information of the samples and generates diverse prediction results. This validates the effectiveness of our semantic decoupling strategy.
(2) By fusing the prediction results based on different templates (i.e., different semantic sets), CDC can obtain correct classification results. As shown in Figure 1 (e), the fused result is not a simple average of all predictions and may differ from all of them. This indicates that the fusion process comprehensively considers the credibility and consistency of each prediction result, enabling CDC to correctly classify samples that MaPLe fails to recognize.
For the failure cases in Figure 2, we discover that when the predictions based on each semantic set consistently lean towards the wrong category, CDC also struggles to correct the samples misclassified by MaPLe. Taking Figure 2 (a) as an example, intuitively, from different semantics such as the character's posture and the appearance of the yo-yo, the image is easily misidentified as playing the flute, making it difficult to classify. In the future, richer training data may enhance the model's ability to distinguish different objects and scenes, further improving CDC's performance.
***
**W2**: It is interesting how the misalignment between CLIP and downstream tasks changes as the model's capability increases, such as with ViG-Large, and whether a more powerful model can solve the misalignment problem. Hence, it would be better if some comparisons across models could be added.
**A2**: Refer to the global rebuttal for the details.
***
**W3**: As we all know, MLLMs have already swept through the multimodal community; it would be better to expand the discussion of misalignment in this paradigm and whether the current method easily generalizes to it.
**A3**: Thank you for your suggestions. The misalignment in MLLMs is consistent with that presented in our paper. Theoretically, our approach can be applied to MLLMs.
MLLMs have demonstrated remarkable potential across a wide range of tasks. After aligning features from different modalities through pre-training, MLLMs often employ techniques such as instruction tuning to encourage the model to complete downstream tasks effectively.
Take Instruct-BLIP as an example. During the instruction tuning phase, Instruct-BLIP learns from specific datasets for certain tasks to build an instruction-aware Q-Former, thus assisting the model in extracting informative features tailored to the given instruction. The learned Q-Former is expected to generalize to unseen datasets and unseen tasks. However, for different tasks, the informative features for similar instructions can be diverse. Therefore, the learned Q-Former risks overfitting the training data. This is similar to prompt tuning analyzed in our paper, where prompts overfit base classes. Therefore, we argue that the discrepancies between training and testing in Instruct-BLIP introduce "data misalignment" in a general sense.
Our proposed CDC has the potential to address the data misalignment issue in Instruct-BLIP. To mitigate the interference of visual features that overfit training tasks, we can decouple the obtained features and estimate the importance of the semantics in the features. By performing a weighted fusion, we can assist the test tasks in extracting more task-relevant visual features, thereby improving overall task performance.
In conclusion, when aiming to enhance the zero-shot generalization performance of a model, the misalignment between training and testing data is a crucial issue that must be carefully considered. Our proposed SCM comprehensively models the misalignment problem in training and testing. CDC has the potential to be transferred to scenarios where misalignment exists, providing a valuable framework for addressing this common challenge in machine learning. In future work, we will investigate the specific implementation of CDC in MLLM to solve the data misalignment problem in it.
***
**W4**: How many parameters are tuned during downstream adaptation? Compared to the pre-trained model, what is the ratio of tuned parameters?
**A4**: Thank you for your valuable comments. In Table 5, we provide the number of learnable parameters of our method. Compared to MaPLe, our proposed CDC increases the number of learnable parameters from 3.55M to 14.20M, which brings an average performance improvement of 1.70% in the Base-to-New setting.
In the implementation, we can reduce the model overhead by having all templates share the V-L functions, i.e., CDC* in Table 5. CDC* adds 0.027M parameters compared to MaPLe, bringing an average performance improvement of 1.14%. The analysis of the parameters highlights the efficiency of CDC.
**Table 5**. Comparison of prompting complexity among different methods.
| Method | Params | Params % CLIP | HM |
| --- | --- | --- | --- |
| CoOp | 2048 | 0.002 | 71.66 |
| CoCoOp | 35360 | 0.03 | 75.83 |
| Independent V-L | 31488 | 0.02 | 78.55 |
| MaPLe | 3.55 M | 2.85 | 78.55 |
| CDC(Ours) | 14.20 M | 11.40 | 80.25 |
| CDC*(Ours) | 3.58 M | 2.87 | 79.69 |
---
Rebuttal Comment 1.1:
Comment: Thank you for your efforts and response. You've addressed my main concern, and I believe this paper will make a valuable contribution to the NeurIPS community. I will maintain my current rating. | Summary: This paper addresses the two-level misalignment (task and data) issue in adapting CLIP to specific tasks. The authors develop a structural causal model to analyze CLIP's pre-training and adaptation processes, revealing how task-irrelevant knowledge interferes with predictions. To mitigate this, they propose Causality-Guided Semantic Decoupling and Classification (CDC), which implements front-door adjustment. CDC includes Visual-Language Dual Semantic Decoupling (VSD) to represent different semantics through multiple prompt templates, and Decoupled Semantic Trusted Classification (DSTC) to perform classification based on each decoupled semantic while estimating uncertainties. Experiments demonstrate CDC's effectiveness in enhancing CLIP's performance across various settings and tasks, addressing the challenge of data misalignment in vision-language model adaptation.
Strengths: * CDC is well motivated from a causal perspective and has significant technical novelty.
* Clear writing and well organized.
* Experiment results show the effectiveness of CDC.
Weaknesses: * Figure 1(a) appears to illustrate task misalignment. Consider enhancing the caption of Figure 1 with more detailed explanations to clarify this concept.
* Regarding data misalignment, it would be beneficial to provide a more precise definition. Does it specifically refer to discrepancies in classes between training and testing processes? It's important to clarify that data misalignment encompasses both label misalignment and distribution misalignment. A brief explanation of each type would improve understanding.
* In Figure 3, the term "fuse" is used. It would be helpful to clarify the meaning and context of this term within the figure.
* What is the accuracy if we directly use zero-shot testing for CDC?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer JJqY for the valuable comments and constructive suggestions. The mentioned issues are addressed as follows:
***
**W1**: Figure 1(a) appears to illustrate task misalignment. Consider enhancing the caption of Figure 1 with more detailed explanations to clarify this concept.
**A1**: Thanks for your valuable suggestions. The example we present in Figure 1(a) is indeed intended to illustrate the task misalignment problem.
In the final version, we will modify the caption of Figure 1 to: "The motivating examples of the two-level misalignment. (a) Task Misalignment. Determined by the contrastive learning mechanism of CLIP, a given image is more similar to the entire textual description in the embedding space than to a single semantic element, which is inconsistent with the demands of the classification task. (b) Data Misalignment. The learned model tends to overfit the base classes. On the DTD dataset, as the number of training epochs increases, the accuracy on base classes rises, while the accuracy on new classes first rises and then drops."
* * *
**W2**: Regarding data misalignment, it would be beneficial to provide a more precise definition. Does it specifically refer to discrepancies in classes between training and testing processes? It's important to clarify that data misalignment encompasses both label misalignment and distribution misalignment. A brief explanation of each type would improve understanding.
**A2**: Thank you for your suggestion. In our paper, data misalignment encompasses label inconsistency and distribution inconsistency. In the final version, we will further clarify the concept of data misalignment in the appendix to avoid confusion.
Data misalignment refers to the inconsistency between the distribution of training data and testing data. This inconsistency can arise due to two main reasons:
(1) **Label Inconsistency**: The training and testing classes do not completely overlap. For instance, some classes present in the training data might not appear in the testing data and vice versa. We refer to the classes that appear in training as base classes and the classes that appear in testing as new classes.
(2) **Distribution Inconsistency**: Even if they share the same class names, the distributions of the classes in the training and testing data may differ, resulting in distribution inconsistency. In such cases, the testing classes are essentially also new classes.
Label inconsistency and distribution inconsistency together constitute the problem of data misalignment.
Furthermore, we have considered the effectiveness of CDC under both types of inconsistencies. The base-to-novel experimental setup mainly involves label inconsistency, while the cross-dataset and cross-domain setups address both label inconsistency and distribution inconsistency. Our proposed CDC has demonstrated effectiveness in all three experimental setups, highlighting its capability to address both types of inconsistencies.
* * *
**W3**: In Figure 3, the term "fuse" is used. It would be helpful to clarify the meaning and context of this term within the figure.
**A3**: Thank you for your constructive suggestions. In Figure 3, "fuse" refers to iteratively combining the evidence from all templates according to Equation (7) to obtain the final classification results. We will add an explanation of the "fuse" operation to the caption of Figure 3: "fuse" represents the process of iteratively combining all evidence according to Equation (7) to obtain the final results.
* * *
**W4**: What is the accuracy if we directly use zero-shot testing for CDC?
**A4**: Thanks for your valuable comment. The CDC method we proposed preserves the zero-shot ability of CLIP. In the cross-dataset transfer experiments, we use exactly zero-shot testing for the target datasets. To further verify the performance of our method under the zero-shot setting, in addition to the experiments in Table 2 of our manuscript, we have employed another architecture, i.e., ViT-L/14, to evaluate the zero-shot performance of CDC. The results of the additional experiments are presented in Table 4.
As shown in the table, compared to the baseline method MaPLe, our CDC method achieves an average performance gain of 0.43% under the zero-shot setting when using the ViT-B/16 structure, and an average zero-shot performance improvement of 1.05% when using the ViT-L/14 structure. These results further demonstrate the effectiveness of the CDC method in preserving the model's zero-shot ability.
**Table 4**. Comparison of CDC on the zero-shot classification to evaluate the out-of-distribution generalization of CDC in the cross-dataset setting.
| Method |Architecture| Caltech | Pets | Cars | Flowers | Food | Aircraft | SUN | DTD |SAT |UCF |Avg|
| --- | --- |--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|CLIP| ViT-B/16| 93.30 | 89.10 | 65.6 | 70.7 | 85.90 | 24.70 | 62.60 | 44.00 | 48.40 | 67.70 | 65.20 |
|CLIP+MaPLe| ViT-B/16| 93.53 | 90.49 | 65.57 |72.23 |86.20 |24.74 |67.01 |46.49 |48.06 |68.69 |66.30 |
|CLIP+CDC| ViT-B/16| 94.47 |90.77 |66.27 |72.67 |86.27 |24.50 |68.07 |46.60 |49.13 |68.60 |66.73 |
|CLIP|ViT-L/14|95.20|93.50|76.80|79.50|90.90|32.70|67.70|53.00|60.30|75.10|72.47|
| CLIP+MaPLe| ViT-L/14 | 96.23 |92.57 |77.60 |76.90 |91.40 |30.07 |71.63 |54.23 |53.83 |75.90 |72.04 |
|CLIP+CDC| ViT-L/14|96.70 |93.10 |77.33 |75.87 |91.53 |31.67 |72.87 |57.63 |58.00 |76.20|73.09|
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have decided to keep my score. | Summary: This paper investigates the task and data misalignment issues in pre-trained vision-language models such as CLIP. It finds that task-irrelevant information significantly affects the predictions of CLIP and that soft prompt tuning cannot mitigate the data misalignment issue. The authors propose a novel Causality-Guided Semantic Decoupling and Classification method to mitigate the interference of task-irrelevant information. The experimental results show that the proposed method effectively mitigates the data misalignment and improves the generalization of CLIP.
Strengths: 1. The paper is well-organized. The introduction of the method and the figures are clear and easy to understand. The description of the experiment setting is detailed, which makes the paper reproducible.
2. The proposed methods to mitigate the task and data misalignment of CLIP are highly-motivated and intuitive.
3. The authors design and conduct exhaustive experiments to demonstrate the effectiveness of the proposed method. The proposed method provides significant improvements in the generalization of CLIP.
Weaknesses: 1. In the experiments section, the method is currently adapted solely to the CLIP model. This limitation may not fully demonstrate the model's universality. The authors can adapt the method to various vision-language models with different architectures to showcase broader applicability.
2. The experiments are exclusively conducted on image classification tasks. The authors can explore adapting vision-language models (VLMs) to a wider range of tasks, such as object detection, image captioning, or visual question answering, to further validate the model's versatility and performance across diverse applications.
Technical Quality: 4
Clarity: 4
Questions for Authors: Is it feasible to adapt the proposed method to various tasks such as object detection, image captioning, or visual question answering?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Ptf4 for the valuable suggestions. The mentioned issues are addressed as follows:
***
**W1**: In the experiments section, the method is currently adapted solely to the CLIP model. This limitation may not fully demonstrate the model's universality. The authors can adapt the method to various vision-language models with different architectures to showcase broader applicability.
**A1**: Thank you for your valuable suggestions. We believe that most current VLMs suffer from the misalignment problems we analyzed in our paper. The SCM we developed effectively models the overfitting problems that VLMs could encounter when adapting to downstream tasks. The proposed CDC, guided by the SCM, can effectively alleviate the interference of task-irrelevant information in VLMs on downstream tasks, thereby enhancing performance.
To validate the universality of our proposed method, we conduct further experiments based on two additional models:
(1) **CLIP based on ViT-L/14**. ViT-B/16 is the most common setting in prompt tuning and is also employed in our paper. Compared with CLIP based on ViT-B/16, CLIP based on ViT-L/14 is a more powerful model, and experiments on ViT-L/14 can verify the effectiveness of CDC across VLMs of different capacities;
(2) **ALIP based on ViT-B/32**. ALIP introduces a bi-path model that integrates raw text supervision and synthetic caption supervision. We conduct experiments on ALIP to demonstrate the generalization of our CDC to VLMs with different pre-training objectives.
Refer to the **global rebuttal** for the experimental results and more analysis. The experimental results demonstrate that our proposed CDC can effectively improve the performance of various VLMs on downstream classification tasks.
***
**W2**: The experiments are exclusively conducted on image classification tasks. The authors can explore adapting vision-language models (VLMs) to a wider range of tasks, such as object detection, image captioning, or visual question answering, to further validate the model's versatility and performance across diverse applications.
**A2**: We appreciate your valuable suggestions. To validate whether our proposed method is beneficial for applying VLMs to a wider range of tasks, we explore the application of our method in one-shot object detection. As shown in Table 3, our approach achieves state-of-the-art performance. The experimental results confirm the effectiveness of our method.
In one-shot object detection, each foreground object category has only one labeled instance available for training. Due to the limited amount of training data, it is often inconsistent with the true data distribution of the dataset. Therefore, there exists a serious misalignment of the training data with the testing data, which is in line with our definition of data misalignment. Our proposed SCM framework is highly effective in addressing this challenge, which is achieved by transferring our SCM (Figure 2 in the original manuscript) to the scenarios of one-shot object detection. Specifically, $D$ in our SCM can be interpreted as the knowledge contained within the base model of one-shot object detection, which encompasses both knowledge related to one-shot object detection (denoted as $G_r$) and knowledge unrelated (denoted as $G_i$). The process of fine-tuning the base model on a single instance can be seen as an attempt to extract $G_r$ while eliminating $G_i$. However, due to the scarcity of samples required for fine-tuning, $G_i$ is often not accurately identified, thus interfering with the modeling of the true causal relationships between the testing instances and their corresponding categories. $\hat{G_i}$ becomes a confounder.
Our proposed CDC effectively mitigates such confounders. Concretely, we generate feature vectors for the foreground object categories using the text encoder of CLIP and align the visual features from the pre-trained base model with the corresponding text features in CLIP through a projector. Additionally, we enhance the feature alignment via prompt tuning and CDC within the CLIP text encoder. As shown in Table 3, our proposed CDC achieves state-of-the-art performance among recent works. Compared to the baseline method DeFRCN [R2], our approach yields an average performance improvement of 1.61% across three different data splits, demonstrating its effectiveness in extending VLMs to other tasks.
In future work, we will continue to explore the potential of CDC for enhancing the application of VLMs, e.g., CLIP, across a wider range of tasks.
**Table 3** One-shot experimental results of AP50 on the VOC dataset [R3] (%).
| Methods | Novel Set 1 | Novel Set 2 | Novel Set 3 |
| --- | --- | --- | --- |
|Pseudo-Labelling [R4]|54.50|32.80|48.40|
| DeFRCN [R2] | 57.03|35.82 | 52.49 |
| ICPE [R5] | 54.30 | 33.50 | 50.90 |
| DiGeo [R6] | 37.90 | 26.60 | 30.40 |
| CDC (Ours)| **59.62** | **37.67** | **52.89** |
[R2] Qiao L, Zhao Y, Li Z, et al. Defrcn: Decoupled faster r-cnn for few-shot object detection[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 8681-8690.
[R3] M. Everingham, L. V. Gool, C. K. I. Williams, J. M. Winn, A. Zisserman, The pascal visual object classes (VOC) challenge, Int. J. Comput. Vis. 88 (2) (2010) 303–338.
[R4] P. Kaul, W. Xie, A. Zisserman, Label, verify, correct: A simple few shot object detection method, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 14237–14247.
[R5] X. Lu, W. Diao, Y. Mao, J. Li, P. Wang, X. Sun, K. Fu, Breaking immutable: Information-coupled prototype elaboration for few-shot object detection, in: B. Williams, Y. Chen, J. Neville (Eds.), AAAI 2023, AAAI Press, 2023, pp. 1844–1852.
[R6] J. Ma, Y. Niu, J. Xu, S. Huang, G. Han, S. Chang, Digeo: Discriminative geometry-aware learning for generalized few-shot object detection, in: CVPR 2023, IEEE, 2023, pp. 3208–3218. | null | null | Rebuttal 1:
Rebuttal: Response to **Weakness 1** of **Reviewer** **Ptf4** and **Weakness 2** of **Reviewer** **7jjJ**.
***
Thank you for your suggestions on exploring the impact of the misalignment issues we proposed across different models. We believe that most current VLMs can suffer from the misalignment problem when adapting to downstream tasks, regardless of the model's capacity and architecture.
Firstly, our proposed data misalignment arises from the distribution differences between training data and testing data during the adaptation process of VLMs, which is an inherent attribute of the task. During the process of tuning prompts based on training data, regardless of the model's architecture and pre-training method, the distribution of testing data remains unknown, causing the learned prompts to overfit the training data. Therefore, the data misalignment problem will persist.
Furthermore, to validate the efficiency of our proposed CDC in addressing the data misalignment problem regarding different VLMs and different model architectures, we conduct experiments based on two experimental settings:
(1) **Base-to-New Generalization**. As shown in Table 1, we adopt an additional more powerful architecture, i.e., ViT-L/14, and another VLM, i.e., ALIP [R1], to validate the effectiveness of our CDC. Table 1 reports the harmonic mean (HM) of accuracy on base and new classes in the base-to-new experimental setup, based on CLIP and ALIP. As shown in the table, with ViT-L/14, CDC achieves a 5.78% average performance improvement over CLIP and a 2.66% improvement over MaPLe. With ViT-B/32, CDC achieves a 13.07% improvement over ALIP, and a 1.37% improvement over MaPLe.
(2) **Cross-Dataset Out-of-Distribution Generalization**. We also validate the efficiency of CDC in the cross-dataset experimental setting. Table 2 reports the zero-shot test results of MaPLe and CDC on 10 target datasets. As shown in the table, CDC achieves a 1.05% average performance improvement over MaPLe.
The experimental results indicate that even with the use of a more powerful architecture and another VLM, CDC can still improve model performance by mitigating data misalignment issues during the prompt tuning process.
**Table 1**. The comparison with baseline methods on base-to-new generalization setting based on different VLMs and different architectures.
| Method |Architecture | Avg | ImageNet | Caltech | Pets | Cars | Flowers | Food | Aircraft | SUN | DTD |SAT |UCF |
| --- |--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |--- |
|CLIP |ViT-L/14 |78.75|76.51|96.69|96.63|79.23|82.75|94.35|40.62|75.54|65.84|76.70|80.30 |
|CLIP+MaPLe | ViT-L/14 |81.86|79.76|97.22|97.55|83.09|87.70|94.65|44.81|82.52|72.03|72.98|84.75 |
|CLIP+CDC | ViT-L/14 | 84.52|80.61|97.14|97.76|83.42|88.90|94.91|46.10|84.31|78.34|90.78|86.15|
|ALIP |ViT-B/32 |41.10 | 38.50 | 80.05 |41.76 |4.89 |60.36 |50.91 |4.05 |53.87 |28.60 |43.60 |43.94 |
|ALIP+MaPLe | ViT-B/32 |52.80 |45.84 |88.44 |60.64 |12.29 |74.70 |61.14 |7.10 |67.30 |36.27 |63.42 |56.43 |
|ALIP+CDC | ViT-B/32 | 54.17 |45.67 |90.06 |62.69 |12.44 |77.97 |62.31 |7.20 |67.91 |41.32 |65.99 |57.96 |
**Table 2**. Comparison of CDC on cross-dataset evaluation based on ViT-L/14.
| Method |Architecture| Caltech | Pets | Cars | Flowers | Food | Aircraft | SUN | DTD |SAT |UCF |Avg|
| --- | --- |--- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|CLIP|ViT-L/14|95.20|93.50|76.80|79.50|90.90|32.70|67.70|53.00|60.30|75.10|72.47|
| CLIP+MaPLe| ViT-L/14 | 96.23 |92.57 |77.60 |76.90 |91.40 |30.07 |71.63 |54.23 |53.83 |75.90 |72.04 |
|CLIP+CDC| ViT-L/14|96.70 |93.10 |77.33 |75.87 |91.53 |31.67 |72.87 |57.63 |58.00 |76.20|73.09|
[R1] Yang K, Deng J, An X, et al. Alip: Adaptive language-image pre-training with synthetic caption[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 2922-2931.
Pdf: /pdf/6b0f791eafa6609150166b21c9a487468a5b277d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Low Precision Local Training is Enough for Federated Learning | Accept (poster) | Summary: This paper proposes an efficient federated learning (FL) paradigm, where the local models on the clients are trained with low-precision operations and communicated to the server in a low-precision format, while only the model aggregation on the server is performed with high-precision computation. The performance is comparable to full-precision training, and sometimes even better, since the over-fitting issue in local training is alleviated.
Strengths: S1. The idea of applying SWALP's low-precision training within each cycle for local training in FL is meaningful and effective.
S2. There are theoretical analysis on convergence.
S3. Experiments are comprehensive and demonstrate the effectiveness of the proposed method.
Weaknesses: W1. The biggest concern is regarding the novelty w.r.t. SWALP [40]. The entire Sec 3.2 and Sec 4.1 are almost the same as in [40]. The only difference seems to be Sec 4.2, but it is still very similar to the idea of SWA; only the setting of aggregation changes from by cycles to by clients, and a moving average of parameters is used.
W2. Writing needs improvement. For example, there is a typo "sever" in Line 108.
Eq (5) is different from its counterpart in [40], where the power was F-1 but is now W-F-1. Please explain the reason for the difference.
Line 135, missing a space before "i.e."
Eq(7) uses E which is not clear until continuing reading to Line 161 and Algorithm 2 Lines 3-4.
Algorithm 2 Line 11, t' is only briefly mentioned in Line 161 without even referring to the used lines.
W3. It would be good to estimate the time reduction with professional hardware (real acceleration).
Technical Quality: 3
Clarity: 3
Questions for Authors: Justify method novelty against [40] (W1) and answer related questions in W2.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The biggest concern is regarding the novelty w.r.t. SWALP [40].**
**A1:** We summarize the differences between our method and SWALP as follows:
1) SWALP is designed for standard sgd in centralized training, our approach focuses on FL.
2) We show both empirically and theoretically the effectiveness of our approach on heterogeneous datasets.
3) We show that besides FedAvg, our method can be integrated with various FL algorithms.
**Q2: Typos, e.g., "sever" in Line 108 and missing a space before "i.e.".**
**A2:** Thanks. We will correct the typos and improve the writing accordingly.
**Q3: Eq (5) is different from its counterpart in [40] where the power was F-1 but now W-F-1. Please explain the reason of difference.**
**A3:** The reason for this issue is that in Section 3.1 of SWALP, the definitions of W and F are different between the "Fixed Point Quantization" section and the "Block Floating Point (BFP) Quantization" section. Our definition of W and F is the same as in the "Fixed Point Quantization" section. In our paper and "Fixed Point Quantization" section, F represents the number of bits occupied by the fractional part, while in the "Block Floating Point (BFP) Quantization" section, F stands for the number of bits occupied by the shared exponent.
**Q4: Eq(7) uses E which is not clear until continuing reading to Line 161 and Algorithm 2 Lines 3-4.**
**A4:** We provided the definition of E in line 115 of the paper.
**Q5: Algorithm 2 Line 11, t' is only briefly mentioned in Line 161 without even referring to the used lines.**
**A5:** Thank you for your suggestion, we will add further description after t'.
**Q6: It would be good to estimate the time reduction with the professional hardware (real acceleration).**
**A6:** It is difficult to conduct experiments on professional hardware in a university setting, as it is expensive and involves hardware programming. Fortunately, we note that our simulation is standard and accepted by the community. The reasons are:
1) Such low precision training methods can be implemented on the machine learning accelerators efficiently to achieve real saving in computational and communication cost;
2) The results would be always consistent with those obtained from simulation.
For details, please refer to [r9] and [r10].
[r9] Kuzmin A, et al. Fp8 Quantization: The Power of the Exponent. NeurIPS 2022.
[r10] Nagel M, et al. Overcoming Oscillations in Quantization-Aware Training. ICML 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses. I still feel the method is a bit similar to SWALP, and the writing needs some work to be clear.
I will keep my rating since it was very positive already.
---
Rebuttal 2:
Comment: Dear Reviewer 1d4C,
We sincerely appreciate your support and valuable comments on improving our paper. We believe the primary similarity between SWALP and our method is their high-level inspiration from Kolmogorov’s law, which asserts that the sample average can almost surely converge to the expected value despite the presence of noise. We also find it highly appropriate to develop efficient low-precision local training methods for federated learning, given that the clients in many federated learning applications are resource-constrained. Our proposed method is both simple and effective, presenting a new framework/paradigm for accelerating federated learning. We hope it will inspire researchers to create more efficient federated learning algorithms based on low-precision local training in the future. We believe this contribution holds comparable significance to our proposed method itself.
As the deadline for the author-reviewer discussion phase approaches, could you please let us know if any further discussion or clarification is needed? Thank you again for your valuable time; your support is greatly appreciated.
Sincerely,
Authors of Submission 11555 | Summary: The paper proposes a federated learning approach that performs local training on low precision through quantization combined with a high-precision averaging and a moving average at the server. The paper guarantees convergence and empirically compares several levels of low-precision local training to full-precision training on 4 baseline FL methods. It remains unclear, though, what the contribution of the method is: the main focus seems to be on performance in terms of test accuracy, but the experiments do not show a significant improvement over existing methods. The method supposedly improves communication and computation efficiency but is not empirically compared to state-of-the-art methods, such as [1,2,3].
In their rebuttal, the authors provided novel results that address my concerns about missing baselines. While I remain concerned about the limited novelty and the presentation, I believe that the authors will be able to address these issues to some extent in the next version of the manuscript. Therefore, I have decided to increase my score.
\
[1] Liu, Shiyu, et al. "Quantized SGD in Federated Learning: Communication, Optimization and Generalization." International Conference on Neural Information Processing. Singapore: Springer Nature Singapore, 2023.
[2] Kamp, Michael, et al. "Efficient decentralized deep learning by dynamic model averaging." Machine Learning and Knowledge Discovery in Databases: ECML PKDD, 2018.
[3] Reisizadeh, Amirhossein, et al. "Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization." International conference on artificial intelligence and statistics. PMLR, 2020.
Strengths: - convergence guarantees
- clear explanation of the method
Weaknesses: - the presentation should be improved (e.g., several typos, such as "accept" instead of "aspect"; the term "a avg" for "with moving average" is unintuitive)
- the ablation study does not clearly show that low precision local training improves performance, since it is combined with a moving average that has a strong positive impact on performance.
- lack of comparison to baselines
- unclear use-case for the method
- the paper does not discuss existing federated learning with quantization literature in sufficient detail.
- for the non-iid experiments it would be interesting how quantization interacts with local BN layers in FedBN [4]
\
[4] Li, Xiaoxiao, et al. "FedBN: Federated Learning on Non-IID Features via Local Batch Normalization." International Conference on Learning Representations, 2021.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is unclear from the results how much of the benefit stems from quantization and how much from the moving average. Is it correct that the moving average has a very large positive effect regardless of quantization?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Section F in the appendix addresses the limitation of simulating quantization. It does not address issues like computation and communication complexity, the applicability of the approach, or limitations in the empirical evaluation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Contribution in terms of test accuracy.**
**A1:** This is a misunderstanding. We do not aim to improve test accuracy, but rather to demonstrate that low-precision local training is sufficient for FL and can be used to reduce training and communication cost. Our method, which performs low-precision local training, achieves accuracy comparable to (or even higher than) full-precision methods. The improvements in training efficiency and communication cost are guaranteed since our low-precision operators are standard.
**Q2: Compare with FedPAQ, Quantized-SGD and dynamic-model-averaging.**
**A2:** Thanks. In our paper, we gave results with the strong baselines HeteroFL [9] and SplitMix [14]. Below are the results of FedPAQ on CIFAR10 ($\alpha=0.01$); "()" denotes the fraction of the model on the clients. Following [r6], we use the number of weights, activations, and gradients in local training to approximate the training cost. The table shows that we can reduce communication and computational cost while maintaining high accuracy.
| Method | Acc | Communication cost (MB / round) | Training cost (MB / client) |
| :- | :-: | :-: |:-: |
| HeteroFL (1) |41.1±2.4 | 39.1 | 205.4 |
| HeteroFL (1/2) |36.4±0.5 | 19.6 | 83.7 |
| HeteroFL (1/4,1/8) |29.2±2.1 | 7.3 | 31.1 |
| SplitMix (1) |41.0±2.3 | 39.1 | 178.4 |
| SplitMix (1/2) |40.4±3.6 | 19.6 | 89.2 |
| SplitMix (1/4,1/8) |38.9±3.5 | 7.3 | 33.4 |
| FedPAQ 8 bit |40.7±2.1 | 9.8 | 205.4 |
| FedPAQ 6 bit |38.1±0.7 | 7.3 | 205.4 |
| FedPAQ 5 bit |35.0±1.3 | 6.1 | 205.4 |
| FedAVG+Ours(8 bit) |53.3±2.8 | 9.8 | 51.3 |
| FedAVG+Ours(6 bit) |49.4±1.0 | 7.3 | 38.5 |
| FedAVG+Ours(5 bit) |44.7±1.0 | 6.1 | 32.1 |
We gave the results of another SOTA method, CoCoFL, in A2 of Reviewer 5cLQ. We have not yet found the code of Quantized-SGD and dynamic-model-averaging; due to the time limitation, we postpone their comparison to the revision.
**Q3: Typos.**
**A3:** Thanks. We will improve the writing accordingly.
**Q4: This result does not clearly show low precision local training improves performance, since it is combined with moving average.**
**A4:** Low-precision training typically tends to decrease, rather than improve, the accuracy. We introduce it to reduce communication and computational costs, while the moving average is adopted to compensate for the above accuracy degradation. This point is confirmed in Table 1 and Figure 2: with the moving average, the accuracy of the low-precision model is comparable to that of the full-precision model, and sometimes even better due to its effect of reducing the over-fitting risk.
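As an illustration of this division of labor, the following sketch shows one assumed form of the server-side step: high-precision averaging of the low-precision client models plus a moving average to damp quantization noise. This is an illustrative assumption, not the paper's exact Algorithm 2, and the decay rate `beta` is a hypothetical hyperparameter.

```python
# Illustrative sketch (assumed form, not the submitted code): the server
# averages the low-precision client models in high precision and keeps an
# exponential moving average of the aggregate to damp quantization noise.
def server_aggregate(client_weights, ma_weights, beta=0.9):
    """client_weights: list of per-client weight vectors (lists of floats).
    ma_weights: current server-side moving average. Returns (avg, ma)."""
    n = len(client_weights)
    # high-precision FedAVG step over the low-precision client weights
    avg = [sum(ws) / n for ws in zip(*client_weights)]
    # server-side moving average of the aggregated model
    ma = [beta * m + (1 - beta) * a for m, a in zip(ma_weights, avg)]
    return avg, ma
```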
**Q5: Unclear use-case.**
**A5:** Reducing training and communication cost is an important topic in FL, since the clients in FL are usually resource-constrained edge devices on low-bandwidth communication networks. We address this challenge by using low-precision local training, which can be implemented efficiently on machine learning accelerators. We note that using accelerators with low-precision operators is a new paradigm for accelerating AI models. For details, please refer to the surveys [r7] and [r8].
**Q6: Discuss FL with quantization literature.**
**A6:** Thanks. We gave comparison results with the SOTA methods CoCoFL and FedPAQ with quantization in the table in A2 of Reviewer vykk and the table in A2 of Reviewer 5cLQ. We will include them and discuss the quantization literature in detail in the revision if accepted.
**Q7: How quantization interacts with FedBN.**
**A7:** Thanks. The results ($\alpha = 0.01$ ) given below show that our method works well with FedBN. More results will be included in the revision.
| Method | CIFAR10 | FMNIST |
|:- |:- |:- |
| FedBN | 48.7±1.9 | 79.3±0.6 |
| FedBN+Ours (16 bit) | 49.3±0.4 | 80.2±0.2 |
| FedBN+Ours (8 bit) | 48.5±0.3 | 78.6±0.5 |
**Q8: Benefits stem from quantization and moving average.**
**A8:** We would like to clarify two things:
1) Quantization is used to reduce the computational and communication cost; typically it tends to have a negative rather than positive impact on accuracy, and we adopt the moving average to alleviate this impact.
2) In some tasks, quantization can reduce the risk of over-fitting leading to higher test accuracy. It is verified by Table 1 and Fig. 2 in our paper.
**Q9: Does moving average have a large positive effect regardless of quantization?**
**A9:** The moving average is used to reduce the error introduced by quantization. When full-precision local training is adopted, its effect is relatively limited compared with the low-precision case; see Table 1 in our paper.
**Q10: More limitation discussion.**
**A10:** Thanks. For computation and communication, we follow the standard settings in FL. We gave a series of additional experimental results in this rebuttal and we will give more discussion in the revision.
[r6] Efficient personalized federated learning via sparse model-adaptation. ICML 2023.
[r7] AI and ML Accelerator Survey and Trends. IEEE HPEC, 2022.
[r8] Advances and Open Problems in Federated Learning. Foundations and Trends in Machine Learning, 2021.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: Dear authors,
Thank you for your detailed reply and the additional experiments. Please include those points and results in the manuscript. I am leaning towards improving my score after the discussion period with my fellow reviewers.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer vykk,
We sincerely appreciate your response and final support! Since the deadline for the author-reviewer discussion phase is approaching, would you mind kindly letting us know if any further discussion or clarification is needed? Thank you again for your valuable time, your support is significant to us.
Sincerely,
Authors of Submission 11555 | Summary: The paper studies an FL system with data heterogeneity, a topic that has been extensively studied in the past few years. The idea is to perform local training at lower precision by applying block floating-point quantization. The idea per se is not new, but proving that convergence can be achieved using low-precision local training is an interesting contribution.
Strengths: The paper is well written and is easy to follow. The idea is also interesting but it is not necessarily new. The most important contribution is the theoretical proof for convergence. The evaluation results in Section 6 validate the theoretical results.
Weaknesses: The paper mainly focuses on data heterogeneity in FL systems. What about resource heterogeneity? It would be important to have a discussion (or better some experimental results) on how the proposed solution performs in such a setting. Should we use the same quantization level for all clients, or can we adjust the precision according to the resource availability? Also, a discussion of the current state of the art of quantization in conjunction with Federated Learning is missing, e.g.:
- FedQNN[1] uses quantized training in FL.
- CoCoFL[2] uses a combination of quantization and freezing for heterogeneous resources in FL.
How do these SOTA techniques perform compared with the proposed solution?
[1] Y. Ji and L. Chen, "FedQNN: A Computation–Communication-Efficient Federated Learning Framework for IoT With Low-Bitwidth Neural Network Quantization," in IEEE Internet of Things Journal.
[2] Kilian Pfeiffer, et al. "CoCoFL: Communication-and Computation-Aware Federated Learning via Partial NN Freezing and Quantization." Transactions on Machine Learning Research., 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: Please also check my questions in the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Appendix F clarifies that the fake quantization is applied in the experiments. I really appreciate learning this information, as one of my question was how the quantization is implemented in PyTorch (as PyTorch only supports int8 inference out of the box).
Regarding the broader impact, the discussion is about FL in general. The question is whether performing lower-precision training at clients improves or worsens these privacy and security aspects (e.g., whether data poisoning can be done more easily, as the local updates are low precision).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: What about resource heterogeneity? It would be important to have a discussion (or better some experimental results) on how the proposed solution performs in such a setting. Should we use the same quantization level for all clients, or can we adjust the precision according to the resource availability?**
**A1:** Thanks for your suggestion. We give the results in the setting of HeteroFL [9]. That is, we set three types of resource devices: 1, 1/2, and 1/4, which respectively represent devices that can train the entire network, 1/2 of the network, and 1/4 of the network. We use (x, x, x) to represent the corresponding bit widths for the three types of devices in training, and let $\alpha=0.16$. The experimental results are as follows:
| Precision | Acc |
|:---------- |:---- |
| (32,32,32) | 55.6 |
| (8,8,8) | 55.4 |
| (8,8,6) | 54.7 |
| (8,6,8) | 52.6 |
| (6,8,8) | 50.4 |
| (8,6,4) | 52.8 |
| (4,6,8) | 43.7 |
The results above show that our low-precision training mechanism works well in resource heterogeneity applications. They also indicate that when the sum of bit widths used across the three sets of devices is fixed, devices with poorer computational resources should be allocated a smaller number of bit widths.
**Q2: How does these SOTA techniques FedQNN and CoCoFL perform compared with the proposed approach?**
**A2:** Thanks for your suggestion. Since the code of FedQNN has not been released, we give the comparison results with CoCoFL. We follow the setting of CoCoFL and conduct experiments on Shakespeare and CIFAR10. Following [r3], we calculate the communication cost by counting the size of the model transmitted from the client to the server in each round, and the training cost by counting the weights, activations, and gradients of local training.
| Method |Shakespeare Acc | Communication cost (MB / round) | Training cost (MB / client) |
|:---------- |:---------: | :-----: | :-----: |
| FedAVG | 49.1 | 75.5 | 2927.9 |
| CoCoFL | 49.3 | 50.3 | 1951.9 |
| FedAVG+Ours(8 bit)| 49.4 | 18.9 | 731.9 |
| Method |CIFAR10 Acc | Communication cost (MB / round) | Training cost (MB / client) |
|:---------- |:---------: | :-----: | :-----: |
| FedAVG | 84.3 | 21.5 | 167.7 |
| CoCoFL | 82.0 | 14.3 | 111.8 |
| FedAVG+Ours(8 bit)| 83.1 | 5.4 | 41.9 |
The results above show that our method has a significant reduction in overhead, because our local training is completely low-precision, and the model transmitted to the server is also low-precision. CoCoFL still trains the unfrozen parameters with full precision, and the transmission is also full-precision, so it will have a relatively larger overhead.
**Q3: Appendix F clarifies that the fake quantization is applied in the experiments. I really appreciate learning this information, as one of my question was how the quantization is implemented in PyTorch.**
**A3:** We adopt the simulation method widely used in studies [40, r4, r5] of low-precision training, which is accepted by the community since its results are consistent with those on real ML accelerators. We have submitted our code in the supplementary materials, which includes the specific implementation of fake quantization. Our approach quantizes the weights, activation values, and gradients during training with three quantization operations: the first quantizes the activations after every model module, the second quantizes the gradients of the parameters after the loss is computed and the gradients are backpropagated, and the third quantizes the weights after the optimizer update. The specific implementation can be found in our code.
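All three quantization operations rely on a rounding primitive. A minimal sketch of stochastic rounding, a common choice in fake-quantization simulations (an illustrative assumption, not the submitted code), is:

```python
import math
import random

# Minimal sketch (assumed, not the submitted code): stochastic rounding
# onto a grid with step 2**-F. A value is rounded up or down to the
# nearest grid point with probability proportional to its distance from
# each, so the quantization is unbiased in expectation.
def stochastic_quantize(x, F=6):
    step = 2.0 ** -F
    scaled = x / step
    lo = math.floor(scaled)
    prob_up = scaled - lo              # chance of rounding up
    rounded = lo + (1 if random.random() < prob_up else 0)
    return rounded * step
```

Because the rounding is unbiased, repeated applications average out to the original value, which is consistent with the moving-average recovery argument made elsewhere in the rebuttal.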
**Q4: If performing lower precision training at clients improves or worsen these privacy and security aspects.**
**A4:** We tested different levels of label flip attack on FedAVG and ABAVG [38] by using MNIST, and the results are as follows. ABAVG maintains a validation dataset on the server side and adjusts the weight of the client aggregation according to the client's accuracy, so it can be used for adversarial attacks. We let $\alpha=0.1$. Our experimental results show that low precision training can slightly improve the performance. The reason could be that low precision training prevents the client from over-fitting the wrong labeled samples, and thus reduces the impacts of the poisonous data. More results will be included in the revision.
| Method | Label Flip attack rate |Acc |
|:- |:-: |:- |
| FedAVG | 0.4 | 60.4 |
| FedAVG+Ours(8 bit)| 0.4 | 62.6 |
| FedAVG | 0.7 | 22.8 |
| FedAVG+Ours(8 bit)| 0.7 | 27.5 |
| ABAVG | 0.4 | 63.1 |
| ABAVG+Ours(8 bit)| 0.4 | 64.0 |
| ABAVG | 0.7 | 37.9 |
| ABAVG+Ours(8 bit)| 0.7 | 40.6 |
[r3] Chen D, et al. Efficient personalized federated learning via sparse model-adaptation. ICML 2023.
[r4] Kuzmin, et al. The Power of the Exponent. NeurIPS 2022.
[r5] Nagel, et al. Overcoming Oscillations in Quantization-Aware Training. ICML 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed reply. I suggest to integrate the results for Q2 - Q4 in the final version. I have no follow up to these questions. For Q1, I do not see why we need to adapt HeteroFL and adjust the network size at different devices. This could cause some fairness issues, especially when the data is non-i.i.d, as shown in the CoCoFL paper. What if you train the whole model at all devices, but adjust the precision level to the availability of resources at devices?
---
Reply to Comment 1.1.1:
Title: Rebuttal by Authors (2)
Comment: We would like to sincerely thank the reviewer for the thorough and prompt feedback. We appreciate the opportunity to address and clarify the points raised.
**Q5: What if you train the whole model at all devices, but adjust the precision level to the availability of resources at devices?**
**A5:** Thanks for your constructive suggestion. Actually, the setting of our previous experiments in the rebuttal was unfair to us rather than to the other baselines, as our method was subject to a much stricter, and also unnecessary, resource constraint.
According to your advice, below we present the experimental results of training the full model on all devices while adjusting the precision level to the availability of resources at each device. We now impose three $\textbf{memory}$ resource constraints (1, 1/2, and 1/4) on the clients. We would like to clarify that in our previous experiments in the rebuttal, the three types of resource devices (1, 1/2, and 1/4) respectively represent devices that can train the entire network, 1/2 of the channels of the network, and 1/4 of the channels of the network, which is equivalent to networks with full, 1/4, and 1/16 $\textbf{memory}$ cost. Here we use the memory constraint and set it to 1, 1/2, and 1/4 just to make the corresponding precision levels the widely used ones, i.e., 32, 16, and 8 bits. We list four experimental methods:
1) FedAVG with full resources, where all the devices have the full resources and be able to train the full network.
2) HeteroFL adjusts the number of parameters of each client to meet the resource constraints for training.
3) Our method sets the precision levels of the devices under the three constraints to be 32, 16, and 8 respectively to approximately meet the resource constraints and train the full model.
4) FedAVG refers to the setting in which the part of the network that cannot be trained (due to resource constraints) is directly dropped. We tested the results on CIFAR10, setting $\alpha=0.04$ and $0.16$. The detailed results are given in the table below, which verifies the effectiveness of our method.
|Method |Acc($\alpha=0.04$)|Acc($\alpha=0.16$)|
|:-|:-:|:-:|
|FedAVG(full resource)|54.4±2.0 |71.9±1.5|
|HeteroFL |46.3±1.7 |63.4±1.8|
|Ours |58.1±0.5 |72.7±0.5|
|FedAVG |32.4±2.3 |52.8±2.1| | Summary: The paper proposes an efficient Federated Learning (FL) paradigm where local models are trained using low-precision operations and communicated to the central server in low-precision format. The aggregation on the server, however, is performed with high-precision computation to ensure accuracy. The authors demonstrate that high-precision models can be recovered from low-precision local models with proper aggregation on the server side. This approach significantly reduces the computational load on client devices and the communication cost. The method is theoretically proven to converge to an optimal solution, even with non-IID data distributions, and extensive experiments show that models trained with low precision (as low as 8 bits) are comparable in performance to those trained with full precision.
Strengths: 1. The proposed method reduces the computational and communication overhead for client devices, which is crucial for resource-constrained environments for large models. The paper also provides theoretical guarantees for convergence to the optimal solution, even with non-IID data distributions.
2. The method is effective on the datasets and models in the experiments where low precision training has little to no impact on utility.
Weaknesses: 1. The integration of low-precision training and high-precision aggregation may add complexity to the implementation. The performance improvements are partly dependent on hardware capabilities, such as the availability of processors supporting low-precision operations.
2. The experiments are limited. Only image datasets are considered. Evaluation on other types of data, like text or tabular, can strengthen the results.
3. No integration with differential privacy or other privacy protection mechanisms. Federated learning itself is not private and it would be interesting to see what privacy mechanisms are suitable for low precision model updates.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can we determine the optimal precision level for local training without extensive hyperparameter tuning which is expensive in FL?
2. Have you explored mixed precision strategy? E.g. using different quantization schema for gradients, activations etc.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The integration of low precision training and high precision aggregation may add complexity to the implementation.**
**A1:** Actually, in our method, the transformation between low- and high-precision parameters is performed on the server, and the computation on the clients is standard for clients supporting low-precision operators. As we know, in FL the server typically has rich computation and memory resources, and the main challenge in improving training efficiency comes from the resource-constrained clients and the communication cost. Therefore, the increased implementation complexity above does not prevent us from improving the overall training efficiency.
**Q2: The performance improvements are partly dependent on the hardware capabilities, such as the availability of processors supporting low precision operations.**
**A2:** Yes. But we would like to point out some promising progress in this area. In recent years, deep learning applications urge the development of machine learning accelerators. Lots of accelerators supporting low precision operations have been announced, which make it easy to implement the low precision training/inference algorithms including our approach efficiently in practice. For details, please refer to the survey [r1].
**Q3: Only image datasets are considered. Evaluation on other types of data, like text or tabular, can strengthen the results.**
**A3:** Thanks for your suggestion. Actually, we have conducted experiments on most of the datasets used in existing studies. Below, we give the results on the text datasets IMDB and Shakespeare. The results confirm, on text data as well, our conclusion that low-precision local training is sufficient for Federated Learning. Following [r2], we approximate the communication cost by counting the size of the model transmitted from the client to the server in each round, and the training cost by counting the weights, activations, and gradients of local training.
| Method |IMDB Acc | Communication cost (MB / round) | Training cost (MB / client) |
|:---------- |:---------: | :-----: | :-----: |
| FedAVG | 75.6 | 323.0 | 3700.8 |
| FedAVG+Ours(8 bit)| 74.8 | 80.8 | 925.2 |
| Method |Shakespeare Acc | Communication cost (MB / round) | Training cost (MB / client) |
|:---------- |:---------: | :-----: | :-----: |
| FedAVG | 49.1 | 75.5 | 2927.9 |
| FedAVG+Ours(8 bit)| 49.4 | 18.9 | 731.9 |
**Q4: No integration with differential privacy or other privacy protection mechanisms. Federated Learning itself is not private and it would be interesting to see what privacy mechanisms are suitable for low precision model updates.**
**A4:** We think that theoretically proving which privacy mechanisms are suitable for low-precision model updates is too complicated to be done within the author response period, given the variety of privacy mechanisms. Below, we give the results of FL with differential privacy and low-precision local updates. They show that our method works well with differential privacy. The reason could be that, since our quantization process is stochastic, it can be approximately viewed as a differential privacy protection mechanism, and therefore the two are compatible with each other.
|Method | Gaussian Epsilon | Acc |
|:- |:-: |:- |
| FedAVG | 30 | 45.5 |
| FedAVG+Ours(8 bit)| 30 | 53.9 |
| FedAVG | 10 | 43.6 |
| FedAVG+Ours(8 bit)| 10 | 53.4 |
| FedAVG | 5 | 40.8 |
| FedAVG+Ours(8 bit)| 5 | 53.0 |
| FedAVG | 1 | 38.4 |
| FedAVG+Ours(8 bit)| 1 | 45.8 |
Moreover, we agree with you that, theoretically, FL is not private. We give the results with standard FL in the manuscript because FL is still one of the main machine learning paradigms for addressing the challenges of preserving data privacy. The reason could be that, as indicated in Figure 4 of [29], recovering detailed data from the received local models is highly nontrivial.
**Q5: How can we determine the optimal precision level for local training without extensive hyperparameter tuning, which is expensive in FL?**
**A5:** Our empirical results show that the 8-bit precision level works well on all the tasks. In practice, one can tune the precision level during training, e.g., choose 2 sets of clients with different precision levels in some rounds and select the proper precision level based on the performance of the two aggregated models.
**Q6: Have you explored mixed precision strategy? E.g. using different quantization schema for gradients, activations etc.**
**A6:** Thanks for your suggestion. We performed local mixed precision training on CIFAR10 with $\alpha=0.01$. We use the triple (x, x, x) to represent the number of bits are used in quantizing weights, activations, and gradients, respectively. The results are as follows:
| Mixed precision| Acc |
| :--------------|:---------|
| (32,32,32) | 52.7±0.6 |
| (8,8,8) | 53.3±0.5 |
| (6,8,8) | 53.1±0.5 |
| (8,6,8) | 50.9±1.0 |
| (8,8,6) | 53.2±0.5 |
| (8,6,4) | 50.4±1.0 |
| (6,6,4) | 50.1±0.7 |
The experimental results indicate that our method also supports mixed-precision. Results with more mixed precision training methods will be included in the revision if accepted.
[r1] Reuther, et al. AI and ML Accelerator Survey and Trends. IEEE HPEC 2022.
[r2] Chen D, et al. Efficient personalized federated learning via sparse model-adaptation. ICML 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Zy16,
We sincerely appreciate your support and have carefully and thoroughly responded to your constructive comments.
Since the deadline for the author-reviewer discussion phase is approaching, would you mind kindly letting us know if any further discussion or clarification is needed? Thank you again for your valuable time.
Sincerely,
Authors of Submission 11555 | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models | Accept (poster) | Summary: This paper presents an innovative adversarial robustness measure, which leverages a generative model to produce data samples, record the marginal confidence score as a local statistic and average them over the data distribution. The proposed measure is designed to be efficient, scalable, and potentially applicable to unknown data distributions. Empirical validation is conducted using local models and commercial inference APIs, demonstrating the utility of the robustness evaluation.
The concept introduced in this study is commendable for its originality, and the metric indeed offers valuable insights into model robustness. Nonetheless, it requires substantial revisions to enhance its clarity, presentation, and justification of claims before it can be accepted.
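Read literally, the local statistic described in this summary could be sketched as follows. This is a hypothetical reading based only on the summary above, not the paper's actual definition or code: for each sample produced by the generative model, take the margin between the confidence score of its label and the best competing class, clip at zero, and average over samples.

```python
# Hypothetical sketch (inferred from the review summary, not the paper's
# code): average the clipped label-vs-runner-up confidence margin over
# generated samples as a global robustness statistic.
def great_score(samples):
    """samples: iterable of (scores, label) pairs, where scores is a
    list of per-class confidence scores and label is the class index."""
    margins = []
    for scores, label in samples:
        best_other = max(s for j, s in enumerate(scores) if j != label)
        margins.append(max(0.0, scores[label] - best_other))
    return sum(margins) / len(margins)
```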
Strengths: 1. The metric introduced is a pioneering approach for assessing model robustness, characterized by its attack-independence, scalability, and potential applicability to unknown data distributions.
2. A theoretical analysis is provided, establishing that the metric serves as a lower bound for the probabilistic minimum adversarial perturbation.
3. The practicality of the proposed measure is supported by experimental validation on commercial black-box APIs.
4. There is a demonstrated strong correlation between the proposed metric and robust accuracy, suggesting the metric's effectiveness.
Weaknesses: 1. Presentation issues that may lead to confusion include:
(1) The second paragraph of the introduction lacks a precise definition of adversarial robustness evaluation, which could be problematic for less experienced readers.
(2) Putting the testing algorithm in the appendix hurts the coherence of the paper. It would be better to include it in the main text.
(3) Figure 2 requires additional clarification to elucidate how robust accuracy (RA) and the proposed metric are integrated into the same plot. The current discussion is insufficient.
2. The generative model's training requires at least partial knowledge of the data distribution. So the claim that the proposed metric can scale to unknown data distributions needs justification.
3. The metric's performance is contingent on the generative model's capacity to produce benign samples, yet no guarantee of the generative model's ability to do so is provided.
4. The cost of training a generative model to produce benign samples should be considered when assessing whether the metric is a cost-effective and scalable solution. Perhaps the metric could use online learning to update the generative model; I hope a discussion of this can be provided.
5. The claim regarding the limitation to white-box settings (Page 2, Line 56) is inaccurate, as adversarial accuracy can also be assessed in black-box scenarios, evidenced by the effectiveness of the Square Attack method.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The paper omits discussion of several significant works that evaluate model robustness from different perspectives. The authors should consider addressing the following studies in the related works section:
[1] "How many perturbations break this model? evaluating robustness beyond adversarial accuracy." Olivier, Raphael, and Bhiksha Raj. International Conference on Machine Learning. PMLR, 2023.
[2] "Probabilistically robust learning: Balancing average and worst-case performance." Robey, Alexander, et al. International Conference on Machine Learning. PMLR, 2022.
[3] "Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume." Guo, Ping, et al. arXiv preprint arXiv:2403.05100 (2024).
2. While the paper presents test results on the original test samples on CIFAR-10 in Table 9, it does not provide results for the ImageNet dataset. The authors should explain this choice and consider including ImageNet results for a more comprehensive comparison.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: See weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express sincere gratitude for your constructive and detailed comments.
## Weakness 1: Definition of adversarial robustness; Algorithm in appendix reduces coherence; Clarification on Figure 2.
(1) We apologize for the missing definition. To clarify, adversarial robustness evaluation refers to the process of assessing a model's resilience against adversarial attacks, which are inputs intentionally designed to deceive the model. We will include this definition in the next version.
(2) Due to space limitations, we placed the algorithm in the appendix. However, we understand your concern about coherence and will move it to the main text in the next version.
(3) To clarify how RA and the GREAT Score are integrated into Figure 2, we provide the following additional details:
1. Auto-Attack RA: RA at each perturbation level is the fraction of correctly classified samples under perturbation. GREAT Score RA: Cumulative RA at each perturbation level is the fraction of samples with GREAT Scores above that level.
2. Blue curve: RA from empirical Auto-Attack. Orange curve: RA from GREAT Score, providing a certified robustness guarantee.
3. Trend similarity suggests GREAT Score reflects empirical robustness. The gap indicates the difference between certified and empirical robustness, not necessarily GREAT Score's inferiority, but possible undetected adversarial examples at higher perturbations.
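The two curves described above can be sketched as follows. This is a minimal illustration with hypothetical inputs (per-sample GREAT Scores and per-level attack outcomes); the exact computation in the paper may differ in details:

```python
import numpy as np

def great_score_ra_curve(scores, levels):
    """Certified RA curve: for each perturbation level eps, the cumulative RA
    is the fraction of samples whose GREAT Score exceeds eps."""
    scores = np.asarray(scores, dtype=float)
    return np.array([(scores > eps).mean() for eps in levels])

def autoattack_ra_curve(correct_under_attack):
    """Empirical RA curve: row i holds booleans marking which samples remain
    correctly classified under the attack at perturbation level i."""
    return np.asarray(correct_under_attack, dtype=float).mean(axis=1)
```

The gap between the two curves at a given level is then the difference between the empirical (upper-bound) and certified (lower-bound) robust accuracy.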
## Weakness 2: Unknown data distribution.
Thank you for your insightful comment. By "unknown" data distribution, we mean the true data distribution is unknown (e.g., the distribution of all natural images). In practice, one trains a generative model to learn this unknown distribution; the generative model's role is to serve as a proxy for the true data distribution. In Appendix A.2, we explain how the generative model can effectively match the true data distribution. This justification supports our claim that the proposed metric can scale to unknown data distributions, which is also an aim of current state-of-the-art image generative models.
## Weakness 3: No guarantee on producing benign samples.
We acknowledge that the performance of our metric depends on the generative model's ability to produce benign samples. Our approach assumes the use of high-quality GANs.
- **Ablation Study**: As demonstrated in our ablation study, using GANs with better generative quality improves the ranking coefficient, indicating stronger performance of the robustness metric.
- **Theoretical Support**: In Appendix A.2, we discuss how GANs can provably match the data distribution, providing a theoretical foundation for their use in our approach.
## Weakness 4: Training and online learning for Generative Models.
Thank you for your valuable suggestion. Our ablation study indicates that improving the quality of the generative model significantly enhances the ranking correlation. Therefore, we definitely need better generative models. Currently, we are using off-the-shelf generative models, but we will consider incorporating online learning techniques to further improve their performance.
## Weakness 5: Limitation to white-box settings.
Thank you for highlighting this important distinction. However, we would like to clarify that our discussion on limitations pertains specifically to the evaluation of attack-independent robustness, which we stated is limited to white-box settings. We acknowledge that adversarial accuracy can indeed be assessed in black-box scenarios, as demonstrated by methods like Square Attack.
## Question 1: Omission of works on robustness evaluation.
Thank you for recommending these related works; we will add them in the next version. Below is the added content.
In addition to discussed works, several studies evaluate model robustness differently. [Olivier 2023] introduce adversarial sparsity, quantifying the difficulty of finding perturbations, providing insights beyond adversarial accuracy. [Robey et al. 2022] propose probabilistic robustness, balancing average and worst-case performance by enforcing robustness to most perturbations, better addressing trade-offs. [Guo et al. 2024] introduce the adversarial hypervolume metric, a comprehensive measure of robustness across varying perturbation intensities.
[1] "How many perturbations break this model? evaluating robustness beyond adversarial accuracy." Olivier, Raphael, and Bhiksha Raj. International Conference on Machine Learning. PMLR, 2023.
[2] "Probabilistically robust learning: Balancing average and worst-case performance." Robey, Alexander, et al. International Conference on Machine Learning. PMLR, 2022.
[3] "Exploring the Adversarial Frontier: Quantifying Robustness via Adversarial Hypervolume." Guo, Ping, et al. arXiv preprint arXiv:2403.05100 (2024).
## Question 2: Results on the original test samples on ImageNet.
Following the reviewer's suggestion, we conducted an additional experiment using 500 test samples from the original ImageNet dataset, with image dimensions of 224x224 pixels, and reported their GREAT Scores in the table below. However, it is important to note that unless these test samples are generated by a generative model, they may not fully represent the data distribution captured by such a model. The correlation between the GREAT Score ranking and the RobustBench ranking remains consistent with the results observed on the generated samples.
| Model | RobustBench Accuracy (%) | AutoAttack Accuracy (%) | GREAT Score |
|------------|---------------------------|-------------------------|-------------|
| Trans1 | 38.14 | 56.0 | 0.58 |
| Trans2 | 34.96 | 50.2 | 0.48 |
| LIBRARY | 29.22 | 50.6 | 0.49 |
| Fast | 26.24 | 39.4 | 0.41 |
| Trans3 | 25.32 | 39.8 | 0.32 |
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for submitting your rebuttal in response to the concerns I previously raised. I appreciate the effort you have put into addressing these issues. Please find below my comments following your rebuttal:
## Overview
The manuscript presents an intriguing concept in the evaluation of adversarial robustness, which offers a novel perspective that diverges from the traditional attack-based methods. This is a commendable approach that contributes to the field.
Nevertheless, I observe that the proposed inference-based method bears resemblance to existing work in robustness evaluation, such as Adversarial Sparsity. While the incorporation of generative models mitigates some of the limitations inherent in prior methods, it also introduces new challenges associated with the characteristics of GANs. These issues are likely to be of interest to the readers.
Overall, these limitations do not hurt the significance of your contribution. Furthermore, your response has addressed most of my concerns, so I decide to raise my score. However, I have summarized your response and identified several additional points that may warrant further consideration.
- **Presentation:** I appreciate your efforts in improving the readability; no further questions.
- **Unknown Data Distribution:** The discussion surrounding the concept of a 'true unknown data distribution' is somewhat controversial. The use of training data for the model under evaluation in the training of generative models raises questions about the definition of unknown data distribution. This issue is not present in studies focusing solely on generative models.
- What would be the implications if the training data were not available for GAN training?
- How would the use of only test data affect the training, and could this potentially lead to the collapse of GANs due to a shift in distribution?
- **GAN Producing Benign Examples:** This may be alleviated by considering the convergence properties of GANs. However, same issue of data dependence persists.
- **Online Learning:** The topic of online learning is a potential avenue for future research by the authors and shares the aforementioned limitation regarding unknown data distribution. This is particularly relevant given the requirement for substantial data volumes.
- **Minor Issues:**
- **Black-box Setting:** The extension to a black-box-based accuracy appears to be a minor one, yet it should be acknowledged in the manuscript.
- **Related Works:** I note that the authors intend to include additional related works in the revised manuscript. I believe that a more in-depth discussion of these works could provide valuable insights for future research in this area.
- **Additional Results:** I am pleased with the results provided and have no further concerns in this regard.
I look forward to seeing the final version of the manuscript and wish to see your continued research in this important area.
Best regards,
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your insightful and constructive feedback on our manuscript. We are pleased to learn that the reviewer has raised the score. We have carefully considered your comments and have made the following revisions to address your concerns:
- Presentation and Related Works: Thank you for your suggestion, especially for pointing out the three important related papers; we will update this in the final version.
- Unknown Data Distribution:
On the review comment "What would be the implications if the training data were not available for GAN training?": we are unsure what the reviewer meant; could the reviewer elaborate on this point? We note that our framework uses an off-the-shelf generative model for robustness evaluation and does not involve training a generative model.
On the review comment "How would the use of only test data affect the training, and could this potentially lead to the collapse of GANs due to a shift in distribution?": we are also unsure about this question; test data does not affect the training in any way.
- GAN producing benign examples: We agree that producing benign examples relies heavily on the generative quality in approximating the true distribution, as justified by our results in Figure 3. We also want to emphasize that our framework is not limited to GANs; we include diffusion models in our results as well.
- Online Learning: We agree with the reviewer that this is a good future direction to explore.
- Minor issues:
Black box setting: We thank the reviewer for pointing this out, we will acknowledge it in the manuscript.
Related Works: We will ensure we include sufficient discussion of related works.
Additional Results: We thank you for suggesting that we also add the ImageNet experiment; we will include it in the appendix of the final version.
Thanks again for your efforts.
Sincerely, | Summary: The authors propose the GREAT score that uses conditional generative models to mimic the data generating distribution. Thereafter, the classification margin on a set of generated images can be used to obtain a global robustness score. For this, the authors make the connection between local and global robustness explicit and show that the classification margin yields a lower bound on the robustness w.r.t. L2 perturbations (convolution with Gaussian). The authors empirically validate their GREAT score on various benchmarks and highlight the strong correlation to other common robustness evaluations while reducing the computational cost for the robustness evaluation significantly.
Strengths: 1. The authors propose an efficient procedure to rank the robustness of models using generative models
1. GREAT can be used with "off-the-shelf" generative models and does not require specialized training etc.
1. GREAT does not require access to the gradient
1. The paper is well-written and easy to follow
Weaknesses: 1. There are several assumptions on the generative model that are not sufficiently/prominently enough covered. The assumptions are: (a) the model generates an instance actually belonging to the conditioned class, (b) the true class is unambiguous (e.g., conversely, there might be cases where the "Bayes optimal" model cannot decide between two or more classes). (c) the generative model is a good approximation of the true data-generating distribution. The authors should highlight such limitations more and their implications for the guarantees/method.
1. Since the authors emphasize the guarantee on the average robustness of a model, the authors could elaborate more on the practical importance of such a guarantee
1. The derived Lipschitz constant might be a loose estimate since the Lipschitz constant also includes the generative model and not only the neural network. This is not accurately reflected in, e.g., Eq 10. Here it seems the model was convolved with the Gaussian ($g' * N(0,1))$), but it should actually be $((g' \circ G) * N(0,1))$.
Minor:
- The font size in figures and tables is very small
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Figure 2 shows a large gap between empirical attacks (upper bound) and the GREAT score (lower bound). To what extent do the authors expect this gap to be due to the looseness of the respective upper and lower bounds?
1. How is the class label (input of conditional generative model) distributed in the experiments for calculating the GREAT score?
1. Would it also be possible to derive guarantees for a subset of the data distribution's support? For example, obtaining class-specific average robustness guarantees?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors sufficiently addressed the limitations (except for the weaknesses stated above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and constructive comments, and we are encouraged that you find our work “well written, easy to follow”.
## Weakness 1: Generative model limitations: (a) valid class instance, (b) unambiguous class, (c) data approximation.
Thank you for your feedback regarding the assumptions on the generative model. We would like to address each point:
(a) We acknowledge that the performance of our metric relies on the generative model's capability to produce samples belonging to the conditioned class. Recent studies ([Nicolas, 2021] and [Tengyuan Liang, 2021]) have shown GANs' convergence to true data distributions under specific conditions. Moreover, Figure 3 shows high-quality instances produced by the generative models, as evidenced by the Inception Score and the Spearman's rank correlation between the GREAT Score and RobustBench.
(b) We recognize that there may be instances where the true class is ambiguous, potentially impacting the performance. Given that our focus is on evaluating the robustness of classifiers, it is important to note that the labels we use are typically well-defined and distinctive. When two classes exhibit ambiguity, determining their robustness is a separate issue that needs to be addressed before applying our method. Consequently, we consider the issue of label ambiguity to be outside the scope of our method.
(c) We understand that the assumption of the generative model being a good approximation of the true data-generating distribution is crucial. Recent work has demonstrated the convergence rate of approaching the true data distribution for a family of GANs under certain conditions. Please refer to Appendix A.2 for more details.
References:
Nicolas Schreuder, Victor-Emmanuel Brunel, and Arnak Dalalyan. Statistical guarantees for generative models without domination. In Algorithmic Learning Theory, pages 1051–1071. PMLR, 2021.
Tengyuan Liang. How well generative adversarial networks learn distributions. The Journal of Machine Learning Research, 22(1):10366–10406, 2021.
## Weakness 2: Importance of robustness guarantee.
We appreciate your feedback regarding the importance of our score. Our method is a form of certified robustness, a concept well-utilized by [Tramer, F. 2020] and [Pintor, M. 2022] and recognized as a secure way of validating defenses. Many empirical defenses were soon broken by advanced attacks because they were not certified to be robust. Additionally, our score can be used for robustness ranking across different models, as discussed in our paper. This ranking allows comparison of models' resilience against adversarial attacks.
1. Tramer, F., Carlini, N., Brendel, W., & Madry, A. (2020). On adaptive attacks to adversarial example defenses. Advances in neural information processing systems, 33, 1633-1645.
2. Pintor, M., Demetrio, L., Sotgiu, A., Demontis, A., Carlini, N., Biggio, B., & Roli, F. (2022). Indicators of attack failure: Debugging and improving optimization of adversarial examples. Advances in Neural Information Processing Systems, 35, 23063-23076.
## Weakness 3: Equation Clarification.
Thank you for your insightful comment. We would like to clarify that, as defined in Equation 9, $g'$ is inherently linked with the Gaussian distribution through the generative model $G(\cdot)$. Therefore, the formulation $(g' \circ G) * N(0, I)$ is indeed consistent with our definitions. The inclusion of $g'$ with the Gaussian reflects the model's behavior in the context of our robustness analysis. We believe this captures the relationship and does not undermine the integrity of our Lipschitz constant estimation.
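Making the dependency on the generator explicit, our reading of the composition discussed above can be sketched in LaTeX as follows (with $G$ the generator and $g'$ the margin function; this is a hedged reconstruction from the surrounding definitions, not a verbatim reproduction of Eq. 10):

```latex
% The Gaussian smoothing acts through the generator G: g' is evaluated on
% generated samples, so the convolution is with the composed map g' \circ G.
\big( (g' \circ G) * \mathcal{N}(0, I) \big)(z_0)
  \;=\; \mathbb{E}_{\delta \sim \mathcal{N}(0, I)}
        \big[\, (g' \circ G)(z_0 - \delta) \,\big]
  \;=\; \mathbb{E}_{z \sim \mathcal{N}(z_0, I)}
        \big[\, g'\big(G(z)\big) \,\big]
```

The second equality uses the symmetry of the Gaussian ($\delta$ and $-\delta$ are equally likely), so convolving with $\mathcal{N}(0, I)$ is the same as averaging $g' \circ G$ over latent inputs drawn around $z_0$.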
## Weakness 4: Small fonts.
We appreciate your feedback regarding the small font size. We will increase the font size in the next version to enhance readability.
## Question 1: Gap between empirical attacks and GREAT score?
We thank the reviewer for bringing up this discussion. We believe your concern can be addressed by comparing empirical versus certified robustness. AutoAttack does not guarantee the absence of undiscovered adversarial examples when it fails, a common limitation of empirical robustness evaluation. In contrast, our score provides a certified robustness guarantee, ensuring no adversarial examples exist within the certified perturbation range. Therefore, the gap between our certified curve and AutoAttack's empirical curve does not imply our method's ineffectiveness; it could indicate undiscovered adversarial examples at higher perturbation radii. Without sound and complete attacks that confirm no adversarial examples exist if they fail, we cannot rule out the possibility of hidden adversarial examples in these high-radii regimes, as highlighted by certified robustness analysis.
## Question 2: Label distribution?
We appreciate the reviewer's insightful idea. We utilize a conditional generative model to generate samples based on class labels. The class labels are uniformly distributed.
## Question 3: Class-specific robustness?
Thank you for the insightful question. We acknowledge that deriving guarantees for subsets of the data distribution is important to explore.
Our current work focuses on evaluating model robustness over the entire data distribution. Given our capability to perform class-specific generation, we can indeed measure class-specific robustness guarantees: by generating samples specific to each class, we can evaluate how robust the model is against adversarial attacks targeting particular classes.
Actually, we have performed a similar analysis to evaluate group-level robustness for facial recognition. This method can be extended to derive class-specific robustness guarantees. Table 4 in our paper displays the group-level GREAT Score results. Our evaluation reveals interesting observations: Most APIs exhibit a large discrepancy in scores between "Old" vs. "Young" groups.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank for the detailed explanations. I will raise the score to 6 if the authors will include a detailed discussion concerning Weakness 1 in the main part of the revised paper and "correct" the RHS of Eq 10.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback. We are delighted to learn that the reviewer is inclined to raise the score. While we can't edit the submission at this point, we will include a detailed discussion of Weakness 1 in the main part of the revised paper, specifically in the experimental section, with a dedicated discussion on GANs. Additionally, we will explicitly add the dependency of the generator $G$ in the right-hand side of Equation 10 as suggested. | Summary: The paper introduces a novel framework called GREAT Score (Global Robustness Evaluation of Adversarial Perturbation using Generative Models), aimed at evaluating the global robustness of machine learning models against adversarial perturbations. Unlike traditional methods that aggregate local robustness results from a finite set of data samples, GREAT Score leverages generative models to approximate the underlying data distribution, providing a more comprehensive global robustness assessment.
Strengths: - The GREAT Score framework introduces a novel method for global robustness evaluation using generative models, which is a fresh and innovative approach in the field.
- The paper provides solid theoretical foundations with formal definitions, probabilistic guarantees, and detailed proofs, enhancing the credibility of the proposed method.
- The GREAT Score framework offers significant computational savings over traditional methods and can audit privacy-sensitive black-box models without accessing real data, highlighting its practical importance and broad applicability.
Weaknesses: 1. The GREAT Score in the paper primarily focuses on adversarial perturbations under the L2 norm. While this is a common setting in adversarial attack research, it lacks ablation studies for other norms, such as the L∞ norm.
2. The GREAT Score framework relies on generative models (such as GANs or diffusion models) to approximate the true data distribution. If the quality of the generative model is not high, the generated samples may not accurately represent the true data distribution, thus affecting the accuracy of robustness evaluation. Besides, the evaluation results of the GREAT Score also depend on the generated sample set. If the sample set is biased or fails to comprehensively cover the diversity of the data distribution, the evaluation results may be inaccurate or unrepresentative.
3. The evaluation of online facial recognition APIs using GREAT Score is innovative, but the paper could provide more detailed analysis and discussion on the specific challenges and insights derived from this application. For instance, exploring the variability in robustness scores among different groups (e.g., age, eyeglasses) in greater depth and providing potential reasons for these variations would add depth to the analysis.
4. The calibration process described in Section 3.5 appears somewhat ad-hoc, relying on grid search for optimizing temperature parameters. This could be perceived as lacking robustness and generalizability. A more systematic approach to calibration, possibly incorporating advanced optimization techniques or sensitivity analysis, would strengthen the framework. Discussing the stability and consistency of the calibration process across different models and datasets would also be beneficial.
5. Despite claiming computational efficiency, the paper does not provide a detailed analysis of the scalability of the GREAT Score framework, especially in the context of extremely large datasets and models. A thorough examination of how the computation time scales with increasing data size and model complexity would add significant value. This could include empirical results demonstrating the method's performance on larger datasets or theoretical analysis of its computational complexity.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Have you considered extending the GREAT Score framework to other norm-based perturbations like L1 or L∞ norms? If so, what are the potential challenges or theoretical adjustments needed?
2. How does the quality of the generative model affect the GREAT Score evaluation? Have you conducted any experiments using generative models of varying quality to analyze this impact?
3. Can you provide more detailed information on the scalability of the GREAT Score framework with respect to extremely large datasets and models? How does the computation time scale with increasing data size and model complexity?
4. How stable and consistent is the calibration process described in Section 3.5 across different models and datasets? Have you explored any advanced optimization techniques for calibration?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and constructive comments.
## Weakness 1 & Question 1: Ablation studies for other norms (e.g., L∞)?
We thank the reviewer for raising the limitations of our theorem. As stated in Section 5, our framework currently focuses on the $\mathcal{L}_2$ norm due to limitations in extending Stein's Lemma to other $\mathcal{L}_p$ norms (see Lemma 4 in Appendix A.3). We agree that generalizing beyond $\mathcal{L}_2$ robustness would be beneficial, but it is challenging without a generalized Stein's Lemma for other $\mathcal{L}_p$ norms. As of now, we are not aware of any results that support such an extension.
## Weakness 2 : Generative model quality and sample set bias.
The reviewer's intuition is correct. In our ablation study of generative models (GMs), we find that better GMs yield higher ranking correlation. In Figure 3, we show that increasing the Inception Score of GMs significantly increases the Spearman's rank correlation. Intuitively, a higher Inception Score means the GM has better learned the underlying data distribution, resulting in improved ranking consistency in our case. Recent works such as [Nicolas, 2021] and [Tengyuan Liang, 2021] have proved theoretically that GANs can learn the true generating distribution under certain conditions. In summary, the effectiveness of GREAT Score is positively correlated with the generation capability of the GM in use. Please refer to Appendix A.2 for details.
References:
Nicolas Schreuder, Victor-Emmanuel Brunel, and Arnak Dalalyan. Statistical guarantees for generative models without domination. In Algorithmic Learning Theory, pages 1051–1071. PMLR, 2021.
Tengyuan Liang. How well generative adversarial networks learn distributions. The Journal of Machine Learning Research, 22(1):10366–10406, 2021.
## Weakness 3: Detailed analysis on group variability.
We appreciate your suggestion to provide a more detailed analysis among different demographic groups to add depth to our analysis.
Exploring robustness variability among demographic groups is crucial for mitigating biases in facial recognition systems. Studies like [Klare, 2012] and [Deb, 2020] have shown that factors such as age and race can affect recognition accuracy.
We have found evidence that facial recognition systems are trained on data containing fewer elderly faces. [Meade, R. 2021] found that the majority of models performing gender classification are trained on the most popular actors and actresses sourced from Wikipedia and IMDb. As these are celebrity datasets, most celebrities appear much younger than a non-celebrity of the same age, creating a bias when classifying older people.
We will add this discussion in the revised version.
References:
[1] Klare, Brendan F., et al. "Face recognition performance: Role of demographic information." *IEEE Transactions on Information Forensics and Security*. 2012.
[2] Deb, Debayan, et al. "Face Recognition Performance: Role of Demographic Information on Consumer- to Organizational-Level Applications." *IEEE Transactions on Information Forensics and Security*. 2020.
[3] Meade, R., Camilleri, A., Geoghegan, R., Osorio, S., & Zou, Q. (2021). Bias in machine learning: how facial recognition models show signs of racism, sexism and ageism.
## Weakness 4 & Question 4: Stability of grid search in calibration?
Thank you for your feedback on the calibration process. While grid search for optimizing temperature parameters might seem ad-hoc, it is widely used and accepted in state-of-the-art research.
Several recent studies have used grid search for calibration due to its simplicity and effectiveness. [Guo et al., 2017] optimize temperature scaling for neural network outputs, forming a foundational reference for modern calibration techniques. Similarly, [Kumar et al., 2019] use grid search to optimize calibration parameters, demonstrating its utility for reliable uncertainty estimates.
Grid search is simple and effective under time constraints. Advanced techniques require significant resources and may not yield much better results. A systematic study with these techniques could strengthen our framework for future work.
References:
[1] Guo, Chuan, et al. "On calibration of modern neural networks." *International Conference on Machine Learning*. 2017.
[2] Kumar, Ananya, et al. "Verified uncertainty calibration." *Advances in Neural Information Processing Systems*. 2019.
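The grid-search calibration described above can be sketched as follows. This is a minimal illustration assuming per-model scores are obtained by temperature-scaling per-sample margins through a sigmoid and ranked against a reference such as RobustBench; the exact scaling used in the paper may differ:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman's rank correlation (no tie handling; sufficient for a sketch)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra ** 2).sum() * (rb ** 2).sum()))

def calibrate_temperature(margins_per_model, reference_ranking, grid):
    """Pick the temperature T whose scaled scores best match the reference
    ranking. `margins_per_model` holds hypothetical per-sample logit margins,
    one array per model; score(T) = mean of sigmoid(margin / T)."""
    def score(margins, T):
        return float(np.mean(1.0 / (1.0 + np.exp(-np.asarray(margins) / T))))
    best_T, best_rho = None, -2.0
    for T in grid:
        scores = [score(m, T) for m in margins_per_model]
        rho = spearman_rho(scores, reference_ranking)
        if rho > best_rho:
            best_T, best_rho = T, rho
    return best_T, best_rho
```

Binary search, simulated annealing, or a genetic algorithm would replace only the loop over `grid`; the objective (rank correlation with the reference) stays the same.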
## Weakness 5 & Question 3: Scalability and computational efficiency on large datasets/models?
To address the question on the scalability of the GREAT Score framework, we conducted experiments using three ResNets, varying dataset sizes from 500 to 2000 images. We recorded the computation times in ms without employing any attack mechanisms.
| Dataset Size (images) | ResNet50 (ms) | ResNet101 (ms) | ResNet152 (ms) |
|------------------------|----------------|-----------------|-----------------|
| 500 | 3274 | 6251 | 9149 |
| 1000 | 6529 | 12528 | 18339 |
| 1500 | 9785 | 18838 | 27481 |
| 2000 | 12960 | 24917 | 36588 |
The results show a linear increase in computation time with increasing dataset size and model complexity. More complex models like ResNet152 took proportionally more time than simpler ones like ResNet50.
This linear scalability demonstrates that the GREAT Score efficiently handles larger datasets and more complex models, making it suitable for large-scale applications.
## Question 2: Impact of generative model quality?
Yes, we thank the reviewer for raising this question. We have in fact conducted an ablation study on GANs and diffusion models. Evaluating on CIFAR-10, Figure 3 compares the Inception Score (IS) and the Spearman's rank correlation coefficient between GREAT Score and RobustBench on five GANs and DDPM. One can observe that models with higher IS attain better ranking consistency.
---
Rebuttal Comment 1.1:
Comment: Thanks for your clarification. If more search methods can be used as an ablation study, I will raise my score from 5 to 6.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your insightful feedback and the suggestion for this valuable ablation study. We agree that demonstrating our framework's reliability across different search methods is crucial. We are also very pleased to learn that you are inclined to raise the score based on these additions.
In response to your recommendation, we have conducted a comprehensive ablation study comparing various search methods for calibration on the temperature, including grid search, binary search, simulated annealing, and genetic algorithm. This study was performed under the same experimental settings as the calibration process described in our paper. We will incorporate these findings into Section 4.4 of the final version, alongside the ablation study of generative models.
The results of our experiment are summarized in the table below:
| Method | GREAT Score vs. RobustBench Correlation | GREAT Score vs. AutoAttack Correlation | Best Temperature |
| ------ | -------------------------------------- | -------------------------------------- | ---------------- |
| Grid Search | 0.8971 | 0.6941 |0.00742 |
| Binary Search | 0.8971 | 0.6941 | 0.00781 |
| Simulated Annealing | 0.8971 | 0.6941 | 0.00748 |
| Genetic Algorithm | 0.8971 | 0.6941 | 0.00879 |
For the implementation of simulated annealing and genetic algorithm, we ensured fair comparison by maintaining consistent search space ([0, 2]), precision (minimum step size of 0.00001), and computational resources across all algorithms.
Simulated Annealing:
We implemented a standard simulated annealing algorithm with an exponential cooling schedule. The algorithm starts with an initial temperature of 1 (this temperature is not the same as the temperature in calibration) and employs an exponential cooling schedule with a decay rate of 0.95, continuing until the temperature reaches a minimum of 0.001 or other termination criteria are met. To accommodate time constraints, we reduced the number of iterations in the inner loop to 10, while maintaining the overall algorithm structure. The initial temperature was set high to allow for broad exploration, gradually decreasing to enable fine-tuning.
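The search described above can be sketched as follows (an illustrative, self-contained toy: the objective is a stand-in for the correlation score, and the proposal step size and seed are assumptions not stated in the rebuttal):

```python
import math
import random

def simulated_annealing(objective, lo=0.0, hi=2.0, t0=1.0, decay=0.95,
                        t_min=0.001, inner_iters=10, step=0.1, seed=0):
    """Maximize `objective` over [lo, hi] with an exponential cooling schedule."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best_x, best_val = x, objective(x)
    t = t0  # annealing temperature (distinct from the calibration temperature)
    while t > t_min:
        for _ in range(inner_iters):
            # propose a nearby candidate, clipped to the search space
            cand = min(hi, max(lo, x + rng.gauss(0.0, step)))
            delta = objective(cand) - objective(x)
            # always accept improvements; accept worse moves with prob exp(delta/t)
            if delta > 0 or rng.random() < math.exp(delta / t):
                x = cand
                if objective(x) > best_val:
                    best_x, best_val = x, objective(x)
        t *= decay  # exponential cooling
    return best_x, best_val
```

The high initial annealing temperature allows broad exploration early on, while the decaying temperature makes the search increasingly greedy near the end.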
Genetic Algorithm:
Our genetic algorithm implementation used a population of 30 candidate solutions over 50 generations. We employed tournament selection, single-point crossover, and a mutation rate of 0.3. The population size and number of generations were adjusted to align with the computational time of the simulated annealing algorithm.
Both algorithms were adapted to optimize the temperature parameter in our GREAT Score calibration process, with the objective of maximizing the correlation scores. While the reduced iterations might affect the algorithms' ability to find the global optimum, this trade-off reflects real-world constraints and allows for a fair comparison of their performance under limited computational resources.
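A minimal sketch of the genetic-algorithm variant under the stated settings (population 30, 50 generations, tournament selection, mutation rate 0.3); the mutation scale and the scalar "crossover" are illustrative assumptions, since single-point crossover degenerates for a one-dimensional gene:

```python
import random

def genetic_search(objective, lo=0.0, hi=2.0, pop_size=30, generations=50,
                   mutation_rate=0.3, tournament_k=3, seed=0):
    """Maximize `objective` over [lo, hi] for a single scalar parameter."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]

    def tournament():
        # fittest of k randomly chosen individuals
        return max(rng.sample(pop, tournament_k), key=objective)

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            a, b = tournament(), tournament()
            child = rng.choice([a, b])  # scalar stand-in for single-point crossover
            if rng.random() < mutation_rate:
                child = min(hi, max(lo, child + rng.gauss(0.0, 0.05)))
            children.append(child)
        pop = children
    return max(pop, key=objective)
```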
The ablation study demonstrates that while different search methods (grid search, binary search, simulated annealing, and genetic algorithm) yield slightly different optimal temperature values, they all converge to the same ranking results. This consistency across different search methods highlights the reliability and practical applicability of the GREAT Score calibration process. | Summary: The paper addresses the important and under-explored problem of "global robustness evaluation" for neural networks. It proposes GREAT Score, a novel framework for assessing global robustness using generative models (GMs). Besides, through Monte Carlo sampling from GMs and using Hoeffding's concentration bound, the algorithm can reach an epsilon probabilistic guarantee on the sample mean's closeness to the true mean. The paper then applies their proposed algorithm on various classifiers using GMs to measure global robustness scores.
Strengths: 1) The paper attempts to tackle a significant gap in global robustness assessment, offering a reasonable and innovative contribution to the field.
2) The paper is well-organized, clearly written, and easy to follow.
3) The experimental results show high consistency between GREAT Score and attack-based model rankings on RobustBench, demonstrating its potential as an efficient alternative to existing robustness benchmarks.
Weaknesses: 1) The reliance on GANs as a proxy for the true data distribution raises concerns about the method's accuracy. To the best of my knowledge, current GANs do not provide better coverage than the test set. GANs are a poor estimate of the underlying data distribution, with known issues such as bias and model collapse. Considering model collapse, the fixed test set is likely to have even better distribution coverage than samples generated from a GAN. It would be much more reliable and convincing to include recent classes of generative models.
2) I also encourage the authors to include experiments with other local robustness estimators, further strengthening the submission.
How does the choice of generative model and local robustness estimator affect the reliability of the global measure computed by the paper?
3) Theoretically, while the authors provide a probabilistic guarantee on the closeness of the estimates obtained from GMs to the true estimate, there is no theoretical bound on the gap between the true estimate and the model's global robustness arising from the distance between the generative distribution and the underlying data distribution. Without it, the significance and utility of the GREAT Score is unclear to me, and this omission makes it unclear how the accuracy of the empirical distribution affects the overall error, beyond just sample complexity.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please respond to my questions in the weaknesses part.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express sincere gratitude for your valuable feedback and constructive comments.
## Question 1: GAN reliability and distribution coverage?
Thank you for your feedback on the reliance on GANs as a proxy for the true data distribution. We acknowledge the concerns about the method's accuracy, particularly the issues of bias and model collapse inherent in GANs. We would like to address these concerns and explain our approach in detail.
Firstly, we would like to clarify that we utilized conditional generative models in our framework. Conditional generative models, such as Conditional GANs (cGANs), allow for more controlled generation of samples by conditioning on specific labels or attributes. This helps in generating more representative and varied samples, which can mitigate some of the biases and issues associated with traditional GANs.
Recent works such as [Nicolas, 2021] and [Tengyuan Liang, 2021] have proved theoretically that GANs can learn the true generating distribution under certain conditions. Besides GANs, we have diffusion models in our analysis, and as the reviewer expected, it gives a better robustness analysis, as shown in Fig. 3. Please also see Appendix A.2 for details.
References:
Nicolas Schreuder, Victor-Emmanuel Brunel, and Arnak Dalalyan. Statistical guarantees for generative models without domination. In Algorithmic Learning Theory, pages 1051–1071. PMLR, 2021.
Tengyuan Liang. How well generative adversarial networks learn distributions. The Journal of Machine Learning Research, 22(1):10366–10406, 2021.
## Question 2: Experiments with other estimators?
Thank you for the insightful question. Following the reviewer's suggestion, in our experiments below, we included two types of local robustness estimators to evaluate model robustness more comprehensively: **Input Gradient Norm (as a proxy of sensitivity)** and **Entropy of the classifier output (as a proxy of confidence)**. We note that unlike our proposed GREAT Score, these estimators do not provide any certified robustness guarantee. Here's why we chose these estimators and how they affect our global measure:
1. **Input Gradient Norm**:
- **Rationale**: This estimator measures the sensitivity of the model to small perturbations in the input by calculating the norm of the gradient of the loss with respect to the input. A higher gradient norm indicates greater sensitivity to input changes, suggesting lower robustness.
- **Reference**: The use of the input gradient norm for robustness evaluation is well-documented. [Finlay and Oberman, 2021] proposed a method using input gradient regularization to improve adversarial robustness.
- **Implementation**: We computed the gradient norm for input samples and took the average to represent the model's sensitivity.
2. **Entropy**:
- **Rationale**: Entropy of the classifier's output probabilities reflects the uncertainty of the model's predictions. Higher entropy indicates that the model is less confident in its predictions, which can be a sign of lower robustness.
- **Reference**: The relevance of entropy as a measure of model uncertainty and robustness is supported by [Smith and Gal, 2018], who explored various uncertainty measures, including entropy, for detecting adversarial examples.
- **Implementation**: We calculated the entropy of the output probabilities for input samples and took the average to represent the model's uncertainty.
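As a concrete illustration of the entropy estimator (a generic sketch, not the authors' exact code; the sample logits are fabricated for the example):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_prediction_entropy(logits):
    """Average Shannon entropy (in nats) of the classifier's output distribution.
    Higher values = less confident predictions (a non-certified robustness proxy)."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# a perfectly uncertain classifier vs. a highly confident one (10 classes)
flat = np.zeros((16, 10))          # uniform outputs -> entropy = ln(10)
confident = np.eye(10) * 20.0      # one dominant logit per row -> entropy near 0
```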
To validate the reliability of our global measure, we computed the Spearman correlation coefficient between our averaged estimation scores and the rankings from RobustBench. Besides the local robustness estimator, the experiment setup is the same as Table 1.
#### Experimental Results
| | Input Gradient Norm | Entropy | GREAT Score Calibrated |
|-----------------------------------|----------------------------|----------------|------------------------|
| Correlation with AutoAttack | -0.0956 | 0.4500 | 0.6941 |
| Correlation with RobustBench | 0.0343 | 0.4951 | 0.8971 |
This table summarizes the correlation results for the local robustness estimators and the GREAT Score with respect to AutoAttack and RobustBench. These results demonstrate the effectiveness of the GREAT Score in evaluating model robustness and its alignment with established benchmarks.
### References
Finlay, C., & Oberman, A. M. (2021). Scaleable input gradient regularization for adversarial robustness. Machine Learning with Applications, 3, 100017.
Smith, L., & Gal, Y. (2018). Understanding measures of uncertainty for adversarial example detection. arXiv preprint arXiv:1803.08533.
## Question 3: Theoretical bound on distribution gap and its effect on utility?
We separated out the convergence of our estimate to the true estimate versus the convergence of the learned data distribution to the true data distribution, because the latter has been proved in the literature. Recent works such as [Nicolas, 2021] and [Tengyuan Liang, 2021] have proved theoretically that GANs can learn the true generating distribution under certain conditions. Besides GANs, we include diffusion models in our analysis, and as the reviewer expected, they give a better robustness analysis, as shown in Fig. 3.
The reviewer's intuition is correct. In our ablation study of generative models (GMs), we do find that better GMs give higher ranking. In Figure 3, we show that increasing the Inception Score of GMs can significantly increase the Spearman's rank correlation. Intuitively, a higher inception score means better learning of GMs to the underlying data distribution, resulting in improved ranking efficiency in our case. We also had a discussion on the approximation guarantee of some GMs to the true data distribution in Sec. 6.2. In summary, the effectiveness of GREAT Score is positively correlated with the generation capabilities of the GM in use. | Rebuttal 1:
Rebuttal: We appreciate the valuable feedback from the reviewers. Below is a high-level summary of our rebuttal, addressing the major concerns raised:
### Performance and reliability of the generative model
* **Concern:** The reviewers were concerned about the dependency of our metric on the generative model's ability to produce samples that truly belong to the conditioned class, and about whether generative models can approximate the true data distribution. The reviewers also asked about a theoretical bound on the distribution gap and its effect on utility.
* **Response:** We have provided related literature proving the convergence rate of GANs in approximating true data distributions. We also noted that more details can be found in Appendix A.2. We have reiterated the positive correlation between generative model quality and GREAT Score effectiveness, supported by our ablation study shown in Figure 3.
### Experimental Validation
* **Concern:** The reviewers requested additional evidence to support the effectiveness of our approach on Imagenet, scalability to large datasets/models, and comparison with other estimators.
* **Response:** We have provided further experimental results and comparisons with local estimators to demonstrate the robustness and reliability of our method. We also conducted an additional experiment on 500 test samples from the original ImageNet dataset and reported their GREAT scores, confirming the consistency of the correlation between the GREAT score ranking and the RobustBench ranking. We also perform experiments on different ResNets and dataset sizes to demonstrate the time efficiency of our method.
### Practical Applications of Our Score
* **Concern:** Reviewers asked about practical applications of our method and requested a comparison between our method and AutoAttack.
* **Response:** We discussed the practical significance of our score as a lower-bound guarantee of robustness to attacks, useful for robustness ranking and informed model selection.
### Methodological Improvements and Future Directions
* **Concern:** Reviewers suggested exploring other norms and discussed the scalability of our approach.
* **Response:** We acknowledged the limitation of our current framework to the L2 norm and highlighted this as a potential future research direction.
### Demographic and Class-Specific Analyses
* **Concern:** The reviewers emphasized the importance of demographic group analysis and class-specific robustness.
* **Response:** We clarified the underlying reason for the differing robustness among demographic groups in gender classification. We mentioned that our score can be extended to derive class-specific robustness guarantees. We also specified the uniform distribution of class labels used in our experiments to ensure fairness and reliability.
### Omitted Works
* **Concern:** Reviewers recommended additional relevant works.
* **Response:** We thanked the reviewer for recommending relevant works and committed to including them in the revised manuscript.
### Clarity and Presentation
* **Concern:** The clarity and presentation of certain sections were highlighted as areas for improvement.
* **Response:** We have revised the manuscript to enhance clarity and readability. Specific sections have been rewritten for better comprehension, and additional explanations have been included where necessary. We also clarified the definition and inclusion of the Lipschitz constant in our analysis, ensuring consistency with our framework. Feedback on font size has been acknowledged, and we committed to increasing it in the revised manuscript.
We believe that these changes have significantly strengthened our manuscript and addressed the reviewers' concerns. We hope that the paper is now suitable for publication. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Average gradient outer product as a mechanism for deep neural collapse | Accept (poster) | Summary: Given the complexity of the process of neural network training, any understanding of robust phenomena that can be identified in the training process has potential value that can guide the design of models and algorithms. Neural Collapse (and its deep counterpart) is one such phenomenon that has been identified and reproduced across multiple model classes and datasets. This work shows that Neural Collapse also occurs for a recursive kernel-based model known as Deep RMF, when trained using an algorithm that is based on projection onto a matrix constructed from an outer products of gradients computed locally at each layer.
Additionally, the authors present experimental results that document neural collapse in these models when trained on standard datasets. They also show that in standard neural networks, the projection of features onto the gradient outer product leads to neural collapse, rather than the effect of the nonlinearity.
Strengths: The paper is clearly written, and presents both theoretical results and some empirical results that complement them, since they apply to datasets that violate the assumptions under which the results hold. They prove that deep Neural Collapse can indeed occur in models beyond standard neural networks trained with gradient descent.
The experimental results (specifically in Appendix D) demonstrate that the projection onto the gradient outer product matrix (AGOP) leads to neural collapse in standard models, motivating the further study of this object.
Weaknesses: Given that the main results apply both to a non-standard kernel method and a non-standard training algorithm, it is unclear what the implications of the results are for more well-known models and algorithms. If the authors believe that these results have implications of this form, they should be presented more clearly. Algorithms that are not based on backpropagation are interesting both as possible means of explaining learning in biological systems where backpropagation is unrealistic, and in distributed settings where backpropagation may incur a prohibitive communication overhead. However, the motivation of the algorithm used appears to be that it is a simple model that demonstrates certain phenomena that arise in the training of deep networks.
The authors assume that the gram matrix of the data is full-rank. This requires assuming that the number of datapoints is smaller than or equal to the input dimension (which subsumes assumption 4.1). Standard datasets violate this assumption.
Technical Quality: 3
Clarity: 3
Questions for Authors: Are the models in appendix D trained with SGD? If so, they indicate the importance of the AGOP projection in causing neural collapse in standard models and I believe this result should be highlighted. That being said, this result may be of independent interest regardless of the analysis of Deep RMF.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations and societal impacts have been addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address all your concerns below:
**Given that the main results apply both to a non-standard kernel method and a non-standard training algorithm [...]**
Our first motivation for studying DNC with Deep RFM is that, unlike the standard analytical approaches for neural collapse, the Deep RFM procedure is highly dependent on both the input samples and their labels. Specifically, the collapse process of Deep RFM depends on mapping with the AGOP of a predictor trained to fit these input/label pairs. Moreover, as we demonstrate in Theorem 4.3, the rate of this collapse throughout the layers is highly dependent on the specific structure of the data through its relationship to the parameters $\lambda_\Phi$ and $\lambda_{\mathrm{lin}}$. We also provide significant evidence that this AGOP mechanism is present in deep neural networks in Section 5 of our original manuscript, indicating that our results apply to more practical architectures.
Secondly, the ability of Deep RFM to recover interesting behaviors of deep learning has broad implications for our understanding of neural networks. In fact, the same Deep RFM model we study has recovered a number of other interesting deep learning phenomena including, e.g., grokking modular arithmetic [1], convolutional feature learning [2], and even low-rank matrix completion [3]. As you mention, Deep RFM is a simpler model than neural networks, in the sense that we understand its individual components: (1) a kernel regression step, (2) mapping with the AGOP, and (3) random feature projection. Therefore, our work additionally justifies Deep RFM as a simplified proxy to understand neural networks.
[1] Mallinar, Beaglehole, Zhu, Radhakrishnan, Pandit, Belkin, “Emergence in non-neural models: grokking modular arithmetic via average gradient outer product.” arXiv pre-print, 2024.
[2] Radhakrishnan, Beaglehole, Pandit, Belkin, "Mechanism for feature learning in neural networks and backpropagation-free machine learning models." Science, 2024.
[3] Radhakrishnan, Belkin, Drusvyatskiy, "Linear Recursive Feature Machines provably recover low-rank matrices." arXiv pre-print, 2024.
**The authors assume that the gram matrix of the data is full-rank. This requires assuming that the number of datapoints is smaller than or equal to the input dimension (which subsumes assumption 4.1). Standard datasets violate this assumption.**
The strong assumption that the gram matrix of the data is full-rank is needed only if one requires neural collapse in every layer of the network, starting from the very first one. In contrast, if we consider collapse starting at any given later layer of Deep RFM, then we only need that the smallest eigenvalue of the corresponding feature map is bounded away from 0. This in turn requires that the number of features at that layer is greater than the number of data points, a condition which is routinely satisfied by the overparameterized neural networks used in practice.
**Are the models trained with SGD?**
The neural network models are trained with standard SGD. We agree that this result is of independent interest, and we will emphasize in the main section that our training procedure is standard. Let us also clarify that Deep RFM is not trained by any gradient-based method, and is instead optimized through an iterated three-stage process: (1) solve kernel ridgeless regression, (2) compute the AGOP of the predictor from step 1, and (3) apply a high-dimensional feature map.
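A minimal sketch of one such iteration, using a linear kernel for step (1) and an identity feature map for step (3) purely for illustration (the actual Deep RFM uses nonlinear kernels and high-dimensional random features; all dimensions and data below are synthetic assumptions):

```python
import numpy as np

def deep_rfm_layer(X, Y, phi, reg=1e-6):
    """One Deep RFM iteration on data X (d x n) with labels Y (C x n):
    (1) ridge regression, (2) AGOP of the predictor, (3) feature map `phi`."""
    d, _ = X.shape
    # (1) fit a linear predictor f(x) = B^T x (ridgeless as reg -> 0)
    B = np.linalg.solve(X @ X.T + reg * np.eye(d), X @ Y.T)
    # (2) AGOP: grad f(x) = B for all x, so the AGOP is B B^T
    M = B @ B.T
    # (3) transform the data by M^{1/2}, then apply the feature map
    w, V = np.linalg.eigh(M)
    M_half = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return phi(M_half @ X)

# overparameterized synthetic data: d > n, so an interpolating solution exists
rng = np.random.default_rng(0)
d, n, C = 50, 20, 2
X = rng.standard_normal((d, n))
labels = np.arange(n) % C
Y = np.eye(C)[labels].T
X_next = deep_rfm_layer(X, Y, phi=lambda Z: Z)  # identity map for illustration
```

In this linear toy setting a single iteration already collapses each class to (essentially) a single point, matching the one-step collapse discussed in the rebuttal to the first reviewer.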
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed response. After consideration of their comments and those of the other reviewers I have decided to increase my score. | Summary: This paper studies deep neural collapse (DNC) in deep neural networks (DNN) through the prism of the neural feature ansatz (NFA) and deep recursive feature machines (RFM). It is comprised of several results:
- empirical evidence that DNC occurs in deep RFMs,
- a theoretical analysis of DNC in a high-dimensional RFM setting,
- a theoretical analysis of DNC in a kernel learning setting,
- empirical evidence that the mechanisms which lead to DNC in RFMs and traditional DNNs are the same.
Strengths: This paper shows that deep neural collapse occurs in a similar way in deep networks and deep recursive feature machines. It thus provides a simplified setting in which to investigate deep neural collapse, which is an important research direction to further our understanding of deep learning. Specifically, it shows that neural collapse can be obtained just by iterating linear regression problems, without backpropagating through a deep network.
Weaknesses: My main issue with the paper is its writing, which makes it quite difficult to read.
- The notations could be improved in several places throughout the paper (see minor points below).
- I could not follow most of section 4.2, despite being rather familiar with kernel methods and their behavior in high dimensions.
On a high level, I don't understand how a linear kernel could be the best setting for neural collapse. The text contradicts itself, as it simultaneously states that "if [$\lambda_k = 0$] [...], collapse will occur in just one layer", but also that "this theory offers an explanation for why non-linear activation is needed". A linear layer can collapse within-class variability but also typically collapses class means together, and thus cannot induce neural collapse (see paragraph below).
On a technical level, $k_\Phi$ and $\lambda_\Phi$ are referred to before being defined, and I do not understand the roles played by $k$/$\lambda_k$ vs $k_\Phi$/$\lambda_\Phi$. Assumption 4.2 is also referred to before being stated.
- Section 4.3 is also slightly difficult to read.
It took me several tries to guess that $k_M(x,x') = \tilde k_M(x,x')\mathrm{Id}$, which should appear in the paper. The terms "input-level" and "output-dimension level" kernels should be introduced for non-specialists in multi-task kernel learning.
I also do not understand the point of introducing $M$ if it is dropped afterwards. Theorem 4.4 could simply be stated as "the optimal feature map for ridge regression is the one which already predicts the label: $\Phi(x) = y$". This result is not very surprising, and is not very integrated in the paper. I suppose that it is some kernel-level equivalent of the unstructured feature model, and suggests that weight decay might be instrumental in bringing about neural collapse? The normalization of $k$ should be restated in the definition of Problem 3 (otherwise the optimal loss is obtained when $k \to 0$).
- The message of section 5 could be presented more clearly. What I understood was that it argues that RFMs and DNNs achieve neural collapse through the same means. I suggest making this point before introducing RFMs (in particular, stating the NFA correlations). I also did not understand why this mechanism is referred to as "denoising".
My second issue is that I was not convinced by the claim that it is the right singular vectors and singular values which lead to neural collapse. By the same logic as lines 309-315, the right singular vectors do not change the DNC1 metric (with a "full" SVD where $U$ and $V$ are invertible). Similarly, if I were to divide operations in the network as $V^T\sigma$ and $US$ as opposed to $\sigma U$ and $SV^T$, I should see that it is now $US$ which is responsible for neural collapse (again with a full SVD). This conclusion also depends on the chosen metric for evaluating collapse. Why do the authors consider the ratios of traces of between- and within-class covariances, rather than the trace of their ratio (the Fisher linear discriminant)? It seems that it would reverse one of the conclusions of the analysis, since the trace of the Fisher discriminant ratio $\mathrm{tr}(\Sigma_W^{-1} \Sigma_B)$ is invariant to invertible linear transformations, and decreases under non-invertible linear transformations, so can only be improved through the non-linearity. If the conclusion of which part of the network is responsible for DNC depends strongly on the chosen metric, can we really ascribe meaning to the question? It seems to me that it is really the sequence of weights and non-linearity which _together_ induce DNC, and trying to separate their effects is not really possible.
Finally, Proposition A.1 was first published by Cho and Saul in _Kernel Methods for Deep Learning_, NIPS 2009. Besides, the expression of the kernel in eq. (5) can be simplified with algebra and trigonometry (compare with their eq. (6)).
Minor notes and suggestions:
- I suggest using a so-called "diverging" colormap (such as "bwr") in Figure 1 to clearly separate positive from negative correlations, and use the same range for both datasets.
- I suggest replacing "Gram matrix" with "(uncentered) covariance" to refer to $W^TW$, as weight matrices $W$ are generally decomposed in rows which correspond to individual neurons.
- The notation $||X||$ to refer to the vector in $\mathbb R^N$ of column norms of a matrix $X \in \mathbb R^{d\times N}$ is never introduced (and clashes with the usual convention that this is a matrix norm).
- Why is the last layer denoted $W_{L+1}$ instead of $m_{L+1}$?
- The choice of layer-indexing is confusing and seems inconsistent throughout the paper. Contrarily to what is stated in section 3.1, isn't $X_l$ the features after $l-1$ network layers? I suggest to denote the input as $X_0$ instead of $X_1$ to simplify the notations. Also, it seems that $M_l^{1/2} X_l$ should be referred to as $\tilde X_{l+1}$ rather than $\tilde X_l$ given the chosen conventions.
- Typo: missing a norm in the definition of $\bar H_l$ line 128.
- In section 4.2, I suggest defining activations before the kernels, e.g., $\tilde X_{l+1} = \kappa^{-1/2} M_l^{1/2} X_l$ and $X_{l+1} = \Phi_{\rm lin}(\tilde X_{l+1})$. I also suggest choosing a different notation for $k_{\rm lin}$ and $\Phi_{\rm lin}$, which are confusing as they imply linearity, and to avoid the awkward "non-linear feature map $\Phi_{\rm lin}$".
- Typo line 260: the output space of $k_M$ should be $\mathbb{R}^{C\times C}$.
- I suppose that $\lambda = \mu$ in section 4.3.
- In the caption of Figure 2, I suppose that "fully-connected" should be removed in the case of ResNet.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Are the feature maps $\Phi_l$ and kernels $k_l$ unrestricted in Algorithm 1, or do they have to match in the sense that $k_l(x,x') = \langle \Phi_l(x), \Phi_l(x')\rangle$?
- What is the motivation behind considering _normalized_ features _before_ the non-linearity in section 4.1? Could the authors clarify the role of these non-standard conventions?
- Why are the setting different between sections 4.1 and 4.2? (relative to normalizations). Is neural collapse still empirically observed with the choices of section 4.2? It raises the suspicion that it could not the case, in which case Theorem 4.3 would not really explain the results of section 4.1 (e.g., because the asymptotic regime is not reached in practice).
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: See weaknesses.
In its current state, I think that the paper is slightly below the acceptance bar and would require minor, if not major, changes before it can be fully appreciated by the NeurIPS community. I would be happy to raise my score if the authors address the points raised above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback. We note that we will significantly improve the presentation of our paper, and specifically Sections 4.2 and 4.3. Please see the global response for a summary of our changes. We now proceed to address individual comments.
**I could not follow most of section 4.2 [...]**
We will clarify this. The two kernels $k_{\mathrm{lin}}$ and $k_{\Phi}$ have opposite effects. In particular, the closer $k_{\mathrm{lin}}$ is to a linear kernel, the more accelerated collapse will be for Deep RFM. In fact, a linear kernel induces NC in just one AGOP application. To see this, note that ridgeless regression with a linear kernel is exactly least-squares regression on one-hot encoded labels. In the high-dimensional setting we consider, we find a linear solution $f(x) = \beta^T x$ that interpolates the labels, and the AGOP of $f$ is $\beta \beta^T$. Since we interpolate the data, $\beta^T x = y$ for all $(x,y)$ input/label pairs and, hence, the data collapses to $\beta\beta^T x = \beta y$.
In contrast, having $k_{\Phi}$ far from a linear kernel accelerates collapse, and the non-linear activation function is needed so that $k_{\Phi}$ is significantly non-linear. In fact, if $k_{\Phi}$ were exactly linear and the original data were not linearly separable, then collapse cannot occur at any depth of Deep RFM. In that case, the predictor would need to contain a non-linear component in order to fit the labels exactly, deviating from the ideal case described above.
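The one-step collapse in the linear-kernel case can be checked numerically on synthetic high-dimensional data (a toy sketch; the dimensions and labels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, C = 50, 20, 2              # d > n: an interpolating linear predictor exists
X = rng.standard_normal((d, n))
labels = np.arange(n) % C
Y = np.eye(C)[labels].T          # one-hot labels, shape C x n

# minimum-norm least-squares solution: beta^T x_i = y_i for every training point
beta, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)      # shape d x C
assert np.allclose(beta.T @ X, Y, atol=1e-8)          # exact interpolation

# the AGOP of the linear predictor is beta beta^T; applying it collapses classes:
# (beta beta^T) x_i = beta (beta^T x_i) = beta y_i depends only on the label
X_mapped = beta @ (beta.T @ X)
for c in range(C):
    cols = X_mapped[:, labels == c]
    assert np.allclose(cols, cols[:, :1], atol=1e-6)
```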
**Section 4.3 is also slightly difficult to read [...]**
Thank you for your suggestions. We address your points one-by-one:
- We introduce $M$ because it is the key part of the parametrized kernel ridge regression, which can be understood as the objective function of the RFM iteration. We emphasize that when the data (or the features) are full rank, dropping $M$ comes without loss of generality. Due to the character limit, we kindly refer you to our response to reviewer AtBe for details.
- We appreciate your interpretation of this model as an unconstrained kernel model. We agree with this interpretation and will refer to it in this way in the revision.
- Theorem 4.4 is well integrated in our paper as the kernel ridge regression loss can be understood as the quantity the RFM learning algorithm is minimizing. Thus, the result describes DNC as an implicit bias of the RFM iteration.
- For additional structural changes to Section 4.3, see our global response.
**The message of section 5 [...]**
“Linear denoising” highlights the nature of the mechanism: by applying only linear transformations, we decrease the within-class variability of the data and improve DNC1, discarding only class-irrelevant information (i.e., noise) via the non-trivial null-space of the weight matrix. See our global response for additional changes.
**Why splitting into $\sigma U$ and $SV^T$?**
In the compact SVD (which is an equivalent re-writing of the full SVD), it is the matrix $V^T$ that has a non-trivial null-space, while the matrix $U$ always has a trivial null-space. Note that the non-triviality of the null-space allows the DNC1 metric to decrease. Hence, this is the correct split of the layer.
More generally, the main message of this section is not which part of the SVD decreases the DNC1 metric, but rather that the linear transformation by $W$ has a stronger effect on NC1 than the non-linearity $\sigma$. In fact, the most natural grouping of the layer (into the non-linearity alone and the entire weight matrix alone) gives NC1 values equivalent to our grouping, as the two standard metrics for NC1, $\mathrm{tr}(\Sigma_W \Sigma_B^\dagger)$ and $\mathrm{tr}(\Sigma_W)/\mathrm{tr}(\Sigma_B)$, are invariant to the rotation by the left singular vectors of $W$.
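This invariance is easy to verify numerically. The sketch below (illustrative only) computes $\mathrm{tr}(\Sigma_W \Sigma_B^\dagger)$ for $WX$ and for $SV^TX$ and checks that the two values coincide: the rotation by the orthogonal matrix $U$ conjugates both covariances, leaving the trace unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_per = 6, 30
# Two classes with distinct means; columns of X are feature vectors
X = np.hstack([rng.standard_normal((d, n_per)) + 2.0,
               rng.standard_normal((d, n_per)) - 2.0])
labels = np.array([0] * n_per + [1] * n_per)

def nc1(Z, labels):
    """NC1 metric tr(Sigma_W Sigma_B^+) for features Z (columns are points)."""
    mu = Z.mean(axis=1, keepdims=True)
    Sw = np.zeros((Z.shape[0], Z.shape[0]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(labels):
        Zc = Z[:, labels == c]
        mc = Zc.mean(axis=1, keepdims=True)
        Sw += (Zc - mc) @ (Zc - mc).T / Z.shape[1]
        Sb += Zc.shape[1] / Z.shape[1] * (mc - mu) @ (mc - mu).T
    return np.trace(Sw @ np.linalg.pinv(Sb))

W = rng.standard_normal((d, d))
U, S, Vt = np.linalg.svd(W)

# Applying the full layer W or only S V^T gives the same NC1 value
assert np.isclose(nc1(W @ X, labels), nc1(np.diag(S) @ Vt @ X, labels))
```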
**This conclusion also depends on the chosen metric...**
This is an interesting consideration; however, we kindly disagree that the Fisher linear discriminant cannot decrease due to a linear mapping. Consider the example in which the feature vectors of $X$ are not collapsed, but the differences between points of the same class lie in the null-space of $W$. Then, $WX$ is perfectly collapsed and any NC1 metric would be identically zero.
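A minimal numerical version of this counterexample (all numbers are for illustration only):

```python
import numpy as np

# Columns are feature vectors; within-class differences lie only in the last coordinate
X = np.array([[1., 1., 1., 0., 0.],
              [0., 0., 0., 1., 1.],
              [0., 1., 2., 3., 5.]])
labels = np.array([0, 0, 0, 1, 1])

# Before the map, the classes are not collapsed
assert not np.allclose(X[:, labels == 0], X[:, labels == 0][:, :1])

# W kills the last coordinate, i.e. the within-class variation lies in its null-space
W = np.array([[1., 0., 0.],
              [0., 1., 0.]])

Z = W @ X
# After the linear map, each class is perfectly collapsed onto its mean
for c in (0, 1):
    Zc = Z[:, labels == c]
    assert np.allclose(Zc, Zc[:, :1])   # all columns of the class are identical
```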
While the two metrics are similar, we choose our metric because it directly quantifies how close the feature vectors are to their class means, a common and intuitive interpretation of NC1. Our metric is also common in the literature; see, e.g., Rangamani et al., 2023 and Tirer et al., 2023.
**Are the feature maps $\Phi_l$ and kernels $k_l$ unrestricted?**
The feature maps and kernels do not have to match. We will clarify this when we introduce the two objects.
**What about the normalized features in 4.1?**
We normalize the features because neither the AGOP projection nor the random feature map has a native ability to rescale the data. Therefore, while we still see collapse in terms of the angles between data points, the dynamics of the data norms are more difficult to control in this setting.
**Why are the settings different between sections 4.1 and 4.2? [...]**
Neural collapse would still be observed without normalizing the data, in the sense that all points of the same class will be parallel and all points of different classes will be perpendicular (or at cosine $-\frac{1}{K-1}$). However, we are not guaranteed convergence of the data Gram matrix to $yy^T$ in general without the re-normalization, as the diagonal entries of $X^T X$ will not be identically $1$. In our analysis, it is sufficient to scale the Gram matrix at each layer with $\kappa^{-1}$. The main advantage of this scaling is that it is analytically much simpler than explicitly projecting the data to have norm $1$.
**Minor notes.**
We will address all these points in the revision. As an example, we will rename $k_{\mathrm{lin}}$ to $\widehat{k}$, indicating the predictor kernel, and $\Phi_{\mathrm{lin}}$ to $\Phi_{\mathrm{map}}$, indicating the feature map applied to the data.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. They have addressed my main concern regarding the writing of the paper. As a result, I have increased my score.
If I understand correctly, in this paper neural collapse is only considered on training data? That is, generalization is not required and a perfectly overfitting network would achieve neural collapse? I think this should be explicitly pointed out in the text. Indeed, it makes it very easy to achieve neural collapse, with a number of random features that is larger than the number of training samples, followed by a linear layer. Doesn't the statement "deep RFMs can achieve neural collapse" then reduce to "deep RFMs can overfit their training set"? A mystery of neural collapse that is not studied here is that the within-class variability remains negligible _on the test set_.
Thank you for pointing out that the Fisher linear discriminant can decrease under noninvertible linear transformation; I was mistaken. However, I maintain that stating something like "the linear layers are mostly responsible for neural collapse" is misleading, as removing all non-linearities from the network would prevent neural collapse from arising since the data is not linearly separable. So the non-linearities must have a role, as they "enable" the collapse of within-class variability by the next linear layer. Although this is not reflected in the metrics evaluated at intermediate points, non-linearities thus play an important role. To iterate on the question of the metric, if I replace NC1 with "the value of NC1 after applying a linear operator that minimizes NC1" (i.e. after a linear "probe" classifier), then only non-linearities can decrease this new metric.
---
Rebuttal 2:
Comment: We thank the reviewer for their reply.
Yes, we only study neural collapse on the training set. Note that there are mixed empirical results about whether or not neural collapse transfers to the test set [1, 2, 3]. Moreover, to the best of our knowledge no work has studied the emergence of neural collapse at test time theoretically. Perfect collapse at test time would mean perfect generalization, therefore only weaker forms of test-collapse are available [3].
We emphasize that neural collapse is a particularly structured form of interpolation that is not implied by low train loss, or overfitting, alone. In fact, there are many ways in which neural networks or Deep RFM can interpolate the training data. Therefore, it is remarkable that the particular interpolating solutions found by these models exhibit DNC. Our paper, like the other theoretical papers on collapse, studies why, among all interpolations for a given model class and dataset, the training procedure selects a solution exhibiting DNC.
You are correct: the non-linearity is essential to achieve collapse for linearly non-separable data, and the ReLU plays an important role, as your proposed metric shows. We will make it clear in our revision that by saying the linear layers are responsible for reducing within-class variability, we do not mean that they can achieve collapse on their own, but that in the solved model, it is the linear layers that directly decrease the NC1 metric, while the ReLU critically enables collapse, especially when the data is not linearly separable.
[1] Xu and Liu. “Quantifying the variability collapse of neural networks.” International Conference on Machine Learning, 2023.
[2] Kothapalli. “Neural collapse: A review on modelling principles and generalization.” arXiv pre-print, 2022.
[3] Hui, Belkin, Nakkiran. “Limitations of neural collapse for understanding generalization in deep learning.” arXiv pre-print, 2022.
---
Rebuttal Comment 2.1:
Comment: Thank you very much for these precisions and the references. I had wrongly assumed that neural collapse was known to occur on the test set, but I see the picture is much more nuanced. | Summary: The submission introduces a mechanism for Deep Neural Collapse (DNC) using the average gradient outer product (AGOP). The authors also propose the Deep Recursive Feature Machines (Deep RFM) model, which employs AGOP in its architecture to empirically and theoretically demonstrate DNC. The main contribution is that AGOP-based explanation is a data-based approach while prior work focused on data-agnostic explanations.
Strengths: * Using a data-based approach based on AGOP to explain DNC is novel to the best of my knowledge
* The paper offers both theoretical analysis and empirical evidence supporting the role of AGOP in inducing DNC
* The experiments are performed on different architectures and datasets
Weaknesses: * I found the paper challenging to read
* I am unsure about the practical implications of this work
Technical Quality: 2
Clarity: 2
Questions for Authors: * Can the authors clarify if other metrics, such as the Neural Tangent Kernel (not the limit), would effectively predict this behavior? Or AGOP is unique in this aspect?
* What are the practical implications of this work?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your concerns below.
**I found the paper challenging to read.**
We will make a number of changes to our presentation. Please see our global response to all reviewers for a summarized description of these changes.
**Can the authors clarify if other metrics, such as the Neural Tangent Kernel (not the limit), would effectively predict this behavior? Or AGOP is unique in this aspect?**
Prior work has shown that neural collapse can occur if the empirical Neural Tangent Kernel has a particular block structure dependent on the inputs and labels [1]. In Deep RFM, we consider a different setting, where we fix a kernel function to be independent of the data, i.e. the NTK in the infinite-width limit, and then understand how a linear transformation prior to the kernel evaluation can induce DNC.
[1] Seleznova, Weitzner, Giryes, Kutyniok, Chou. “Neural (Tangent Kernel) Collapse.” NeurIPS, 2023.
**What are the practical implications of this work?**
While the scope of this work is theoretical, there are a number of works demonstrating the practical applications of (deep) neural collapse and Recursive Feature Machines. More specifically, neural collapse has been used as a tool in several important contexts, including generalization bounds, transfer learning, OOD detection, network compression and robustness; see lines 85-90 and the references therein. Furthermore, RFM itself has been shown to be a practical tool for tabular data (Radhakrishnan et al., Science 2024) and scientific applications [1,2].
Our work also has broad implications for our understanding of deep learning. In particular, this work is part of a large body of research demonstrating that Deep RFM is a useful proxy to explain interesting phenomena in deep networks. In addition to neural collapse considered in this work, RFM has been shown to exhibit grokking in modular arithmetic [3], recover convolutional edge detection, and learn the same features as fully-connected networks on vision tasks (Radhakrishnan et al., Science 2024). These findings point toward Deep RFM and AGOP as powerful tools to improve our understanding of neural networks, which would likely lead to significant practical implications.
[1] Aristoff, Johnson, Simpson, Webber. “The fast committor machine: Interpretable prediction with kernels.” arXiv pre-print, 2024
[2] Cai, Radhakrishnan, Uhler, “Synthetic Lethality Screening with Recursive Feature Machines.” arXiv pre-print, 2023.
[3] Mallinar, Beaglehole, Zhu, Radhakrishnan, Pandit, Belkin, “Emergence in non-neural models: grokking modular arithmetic via average gradient outer product.” arXiv pre-print, 2024.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. They have addressed most of my concerns. Therefore, I have increased my score. I look forward to the updated version of the manuscript. | Summary: The authors study two effects associated with neural collapse: the within class variability going to zero and the orthogonality/tight-frame of the class means. They study the deep recursive feature machine model, and show that neural collapse forms in that setting as well, due to the projection of the data onto the average-gradient outer product (AGOP). They show both empirical and theoretical results on this phenomenon, leveraging high-dimensional gaussian equivalence of nonlinear random feature models. Further, they show that the right singular vectors of the weight matrices are responsible for most the within-class variability collapse, projecting onto the subspace spanned by the gradients.
Strengths: The writing of the paper is for the most part quite readable. The literature review is thorough and puts the results of this paper in a good context. The empirics are extensive and compelling. Moreover, the theoretical ideas leveraging the equivalence of nonlinear random features to a linear kernel with additional identity term make for a compelling argument about the mechanism for neural collapse in RFMs. Given the good mix of theory and experiment, I recommend this paper for acceptance.
Weaknesses: Section 4 is doing many things at the same time. It may be better to split it into an empirical evidence section, and then do a section on the theoretical results. In particular, it would be good to give an idea of where the theoretical ideas are going at the start of 4.2 before setting out to prove the deep neural collapse results. This would substantially improve the readability of this section.
This goes double for section 4.3. The opening paragraph of that section is unreadable:
*"Next, we show that the formation of the neural collapse is not only implicitly given by the specific optimization procedure of the Deep RFM, but is also implicitly regularized for in the parametrized kernel ridge regression, a model class that includes RFM (i.e., a single layer of Deep RFM)"*
I don't really understand what this is saying, or even what you're trying to accomplish in the entire subsection. I tried many times to read it. The whole subsection should be rewritten. There are many sentences there that make no sense to me. Here is another one:
*"Since we do not explicitly regularize M, we will drop the dependence on it, treat k as a free optimization variable and compute the optimal value of the following relaxed problem:"*
This is certainly not something you can do generally. For example, if I had a matrix parameterized as $A = M_1 M_2$ and optimized just the $M_i$ with no explicit regularization, there are many cases where this isn't the same as optimizing $A$. Maybe you mean to say something else but once again I can't understand what you're trying to say. The prior subsection was compelling enough that I am discounting this rather poor form of writing. Please rewrite this section.
More generally, there are many sentences throughout the paper that are not well-worded and seem to run on. Improving the writing would benefit the quality, precision, and reach of this otherwise strong paper. If in your rebuttal you can provide examples of improved presentation, I may raise my score higher.
Technical Quality: 4
Clarity: 3
Questions for Authors: One can show that for an $\ell_2$ regularized deep network the weights pick up rank one spikes proportional to $W^{\ell}_{ij} \propto \frac{\partial f}{\partial x^\ell_i} \phi(x^{\ell}_j)$ where $\phi$ is a nonlinearity and $x^\ell$ is the preactivation. This usually means that the *left* singular vectors of the weight matrices should pick up terms aligned with $\nabla f$. See for example the update equation for $W$ in 2.2.4 of LeCun:
http://yann.lecun.com/exdb/publis/pdf/lecun-88.pdf
Is there any easy way to square this with the results on RFMs, that the *right* singular vectors align with $\nabla f$ terms?
It would be good to be explicit and put a subscript below the $\nabla$s on the $\nabla f_\ell(x^\ell_{c i})$ in algorithm 1 to be clear what you're differentiating with respect to.
I don't understand why 4.3 is called non-asymptotic analysis. I don't think you're proving non-asymptotic bounds compared to 4.2. If anything 4.3 seems completely unrelated. Can you please give it a title where someone can understand what you're trying to do? Once again the rest of the paper is quite readable but this subsection is a mess.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: This work elucidates an important phenomenon in deep learning theory. Developing a principled understanding of feature learning is likely to have implications for the interpretability and reliability of AI systems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review. In our response here, we will pay significant attention to improving the writing and organization of our work, especially Section 4. Please also read our global response where we discuss the changes in presentation in detail to all reviewers. We proceed by addressing each weakness and question individually.
**Section 4 is doing [...]**
Thank you for this suggestion, please see our global response for the changes we will make in our revision.
**The opening paragraph of Section 4.3 is unreadable.**
We will present our result differently. As an example of our improved presentation, we would rewrite the opening paragraph of this section as follows:
“We have shown so far that mapping with the AGOP induces DNC in Deep RFM empirically and theoretically in an asymptotic setting. In this section, we demonstrate that DNC emerges in parameterized kernel ridge regression: in fact, DNC arises as a natural consequence of minimizing the norm of the predictor jointly over the choice of kernel function and the regression coefficients. This result proves that DNC is an implicit bias of kernel learning. We connect this result to Deep RFM by providing intuition that the kernel learned through RFM effectively minimizes the parametrized kernel ridge regression objective and, as a consequence, RFM learns a kernel matrix that is biased towards the collapsed gram matrix $yy^T$.”
**Dropping the dependence on $M$ is not always equivalent to the original problem.**
You are right that optimizing over all $k$ is not always equivalent to optimizing over all $M$. Thus, in general, this provides only a relaxation. However, if the gram matrix of the data is invertible (this is a realistic assumption if we consider layers $\ell>1$ of Deep RFM, where we are allowed to pick the feature dimension to be large enough so that this is satisfied), then the two optimizations have the same solution (where the min is taken to be an infimum) and, thus, the relaxation is without loss of generality.
We first show that any $k$ realizable by our input-level kernel (under Euclidean distance on some dataset) can be constructed by applying the input-level kernel to our data $X$ under Mahalanobis distance with an appropriately chosen matrix $M$. Let $k$ be a realizable matrix, i.e., any desired positive semi-definite kernel matrix for which there exists a dataset $R'$ on which our input-level kernel satisfies $k = \phi(d(R', R'))$, where $d(\cdot, \cdot)$ denotes the matrix of Euclidean distances between all pairs of points in the first and second argument. Our construction first takes the entry-wise inverse $r=\phi^{-1}(k)$. Since $k$ is realizable, this $r$ must be a matrix of Euclidean distances between columns of a data matrix $R$ of the same dimensions as $X$. Assuming the gram matrix of $X$ is invertible, we simply solve the system $R=NX$ for a matrix $N$ and set $M=N^T N$, which yields $k=\phi(r)=\phi(d(R, R))=\phi(d_M(X, X))$, where $d_M(\cdot, \cdot)$ denotes the operation that produces a Mahalanobis distance matrix for the Mahalanobis matrix $M$.
We now give a construction demonstrating that the solution $k^*$ to problem $(2)$ is realizable up to arbitrarily small error using our input-level kernel on a dataset $R$ under Euclidean distance. We can realize the ideal kernel matrix that solves problem $(3)$, $yy^T$, up to arbitrarily small error by choosing $R$ according to a parameter $\epsilon > 0$, such that $\phi(d(R, R)) \rightarrow yy^T$ as $\epsilon \rightarrow 0$. In particular, for feature vectors $x_i,x_j$ of the same label in columns $i,j$ of $X$, we set the feature vectors $R_i, R_j$ for columns $i,j$ in $R$ to have $||R_i-R_j||=0$. Then, for $x_i,x_j$ in $X$ of different class, we set $R_i, R_j$ as columns of $R$ such that $||R_i-R_j|| > \epsilon^{-1}$. Then $k(R_i, R_j)$ is identically 1 for $R_i, R_j$ from the same class and converges to 0 for $R_i, R_j$ from different classes, as $\epsilon \rightarrow 0$, giving that $k = \phi(r) \rightarrow yy^T$.
For any choice of $\epsilon>0$, we can apply the procedure described two paragraphs above to construct $M$ that realizes the same $k^*$ using our input-level kernel applied to our dataset $X$ under the Mahalanobis distance. Therefore, the solution to problem $(2)$ can be constructed as the infimum over feasible solutions to problem $(3)$, completing the proof.
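The two constructions above can be illustrated together on a small toy instance (a NumPy sketch with a Laplace-type kernel $\phi(r)=e^{-r}$; all specific choices here are ours for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
labels = np.array([0, 0, 0, 1, 1])
Y = np.eye(2)[labels]
target = Y @ Y.T                           # the collapsed kernel matrix y y^T

X = rng.standard_normal((5, 5))            # columns are data points; invertible gram matrix
phi = lambda r: np.exp(-r)                 # Laplace-type input-level kernel

# Construction 1: choose R so that phi(d(R, R)) ~ y y^T, with same-class columns
# coinciding and different-class columns 1/eps apart
eps = 1e-3
R = np.array([[0., 0., 0., 1/eps, 1/eps]])

# Construction 2: solve R = N X and set M = N^T N; then Mahalanobis distances on X
# equal Euclidean distances on R, since d_M(x_i, x_j) = ||N x_i - N x_j||
N = R @ np.linalg.inv(X)
NX = N @ X                                 # equals R up to numerical error
d_M = np.abs(NX.T - NX)                    # pairwise distances (R is one-dimensional here)
assert np.allclose(phi(d_M), target, atol=1e-4)
```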
**Is there any easy way to square this with the results on RFMs, that the right singular vectors align with $\nabla f$ terms?**
This is a keen observation. You are correct that, although the NFA describes structure in $W^T W$, the correlation between NFM and AGOP is enabled by the alignment between the left singular vectors of $W$ and the gradients with respect to the pre-activations, as argued in [1]. This can be seen through decomposing the AGOP as $W^T K W$, where $K$ is the covariance of $\frac{df}{dx}$ and $x$ are the pre-activations. Then, the NFA holds provided that the right singular structure of $W$ is not perturbed through the inner multiplication by $K$, an alignment that naturally occurs through training by gradient descent. See [1] for a more detailed description of this phenomenon.
[1] Beaglehole, Mitliagkas, Agarwala. "Feature learning as alignment: a structural property of gradient descent in non-linear neural networks." arXiv pre-print, 2024.
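For concreteness, the decomposition $\mathrm{AGOP} = W^T K W$ can be checked numerically for a one-hidden-layer ReLU network $f(x) = a^T \sigma(Wx)$ (an illustrative sketch of the identity, not code from [1]):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, h = 200, 10, 32
X = rng.standard_normal((n, d))      # rows are inputs
W = rng.standard_normal((h, d))      # first-layer weights
a = rng.standard_normal(h)           # second-layer weights, f(x) = a^T relu(W x)

# Per-sample gradient of f wrt the pre-activations z = W x is a * relu'(z)
G = a * (X @ W.T > 0)                # (n, h)
grads = G @ W                        # per-sample input gradients: grad_x f = W^T (a * relu'(Wx))

agop = grads.T @ grads / n           # AGOP: average of grad_x f grad_x f^T
K = G.T @ G / n                      # covariance of the pre-activation gradients
assert np.allclose(agop, W.T @ K @ W)
```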
**It would be good to be explicit and put a subscript below the $\nabla$'s on the $\nabla f_\ell(x_{ci}^\ell)$**
Thank you for this comment. We will fix this in the revision.
**Why is Section 4.3 called “non-asymptotic analysis”?**
We will rename this subsection “Connection to parametrized kernel ridge regression”. | Rebuttal 1:
Rebuttal: We thank the reviewers for their thorough feedback on our manuscript. We will make a number of clarifying changes to the organization and presentation of our results. We list major changes here.
First, we will split Section 4 into two new sections. The first section will contain the empirical results in Subsection 4.1 together with the first row of Figure 3 in Appendix D of our original manuscript, which shows quantitative improvements in our DNC metrics for Deep RFM. We will then make Sections 4.2 and 4.3 into their own section, which will be renamed “Theoretical explanations of DNC in Deep RFM”.
Second, we will make the following improvements to the presentation of Section 4.2:
- We will first define $k_\Phi$ and $\lambda_\Phi$ at the end of the first paragraph.
- We will then add a paragraph following the first of Section 4.2 in our original manuscript to outline the argument and results of this subsection:
“We show that Deep RFM exhibits exponential convergence to DNC, and the convergence rate depends on the ratio $\lambda_{k} / \lambda_\Phi$. These two parameters modulate the extent of DNC with Deep RFM. Namely, as we will show, if $k_{\mathrm{lin}}$ is close to the linear kernel, i.e., if $\lambda_{k}$ is small, then the predictor in each layer will resemble interpolating linear regression, an ideal setting for collapse through the AGOP. On the other hand, if $k_{\Phi}$ is close to the linear kernel, i.e., if $\lambda_\Phi$ is small, then the data will not be easily linearly separable. In that case, the predictor will be significantly non-linear in order to interpolate the labels, deviating from the ideal setting. We proceed by explaining specifically where $k_\Phi$ and $k_{\mathrm{lin}}$ appear in Deep RFM and why linear interpolation induces collapse, and then follow with the statement of our main theorem."
Third, we will do the following improvements to the presentation of Section 4.3:
- We will rename the subsection “Connection to parametrized kernel ridge regression" to clarify its role.
- We will rewrite the introduction of the subsection to clearly state its message: DNC is a natural consequence of minimizing the norm of the predictor over the choices of both the kernel function and the kernel coefficients in parametrized kernel ridge regression. We also clearly state why this result is relevant in our context – the RFM learning algorithm can be understood as an optimization procedure for the parametrized kernel ridge regression objective.
- We will discuss matrix-valued kernels in more detail and write the explicit formula for the kernel we are describing, as the reviewer uXDR suggests.
- We will discuss in more detail both why it is important to introduce $M$ and why we are able to consider the relaxed optimization over kernel matrices $k$. In particular, introducing $M$ is necessary as that is the key added parameter in the parametrized kernel ridge regression and it corresponds to the AGOP matrix used in RFM. Dropping $M$ and optimizing over $k$ does not always lead to an equivalent optimization, as pointed out by reviewer AtBe; however, their solutions are equivalent in the setting of our asymptotic analysis, where the data gram matrix is invertible. We explain this point in detail in our response to reviewer AtBe. Note that the invertibility assumption is guaranteed to hold when we consider collapse beyond the first layer of Deep RFM, at which we have inflated the data dimension through a high-dimensional feature map $\Phi_{\mathrm{lin}}$. For more detail on this point, see our response to reviewer Ljpy.
- We will call the problem $(3)$ “unconstrained kernel model.”
Fourth, we will do the following changes to Section 5:
- We will rename the section “AGOP as an effect of linear denoising in neural networks”, and emphasize the role of AGOP as responsible for improvements in NC1 in neural networks. In particular, we will rewrite the first paragraph as follows:
"We provide evidence that the DNC mechanism of Deep RFM, i.e., the projection onto the AGOP, is responsible for DNC formation in typical neural networks, such as MLPs, VGG, and ResNet trained by SGD with small initialization. We do so by demonstrating that DNC occurs by this iterated linear mechanism through the right singular structure of the weights. As the NFM, which is determined by the right singular structure of $W$, is highly correlated with the AGOP, our results imply the AGOP is responsible for NC1 progression." | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Segmentation from Point Trajectories | Accept (spotlight) | Summary: The authors propose a loss function that seeks to group the trajectories into low-rank matrices where the motion of object points can be approximately explained as a linear combination of other point tracks. Experiments on the synthetic MOVi-F variant of the Kubric dataset and the real datasets DAVIS 2016, SegTrackv2 and FBMS show that the proposed method outperforms single-sequence methods, single-stage end-to-end methods and multi-stage methods.
Strengths: 1) The authors address key issues in the field and the contribution is original even if somewhat incremental.
2) The proposed method is detailed and reproducible.
3) Experiments are relatively well conducted on synthetic and real datasets showing the superiority of the proposed method.
Weaknesses: Regarding the presentation, please clearly state a name/acronym for the proposed method and replace "ours" with it in the comparison tables.
Technical Quality: 3
Clarity: 4
Questions for Authors: No questions
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy the reviewer has found our work to be original, detailed, reproducible, addressing key issues, and with well-conducted experiments. We also thank the reviewer for suggestions to improve the clarity of our work.
> About the presentation, please clearly state a name/acronym to the proposed method and replace "ours" by it in the comparison tables.
We will update the text and tables using the tag "LRTL (Ours)" to denote our *Low-Rank Trajectory Loss*. | Summary: This paper introduces a method for training a segmentation network using long-term point trajectories as a supervisory signal to enhance optical flow. It proposes a novel loss function aimed at grouping these trajectories into low-rank matrices, allowing the motion of object points to be approximately represented as a linear combination of other point tracks. The proposed approach surpasses previous methods in motion-based segmentation, demonstrating the value of long-term motion and the effectiveness of the new formulation.
Strengths: 1. The introduction describes the problem in detail.
2. The structure of the article is good.
3. The experimental results of the method proposed in this paper show a significant improvement.
Weaknesses: 1. The main contribution of this paper is the proposal of two losses, but it is unclear whether the losses would also be effective in other segmentation methods.
2. The contribution of the paper in Subspace Clustering is not described clearly.
3. The resolution of Fig 3 is relatively low.
4. There is a lack of comparison in terms of inference speed.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does the method proposed in this paper work on other segmentation networks as well, and would additional experiments on other segmentation networks help demonstrate the generality of the proposed loss function?
2. How is the training time and inference speed of the method proposed in the paper? It would be better to include some quantitative comparison experiments.
3. What is the specific contribution of this paper to Subspace Clustering?
4. The Flow Estimator and Point Tracker are both frozen during training in this work. Is it possible to also update them during training to leverage the information in the dataset?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have acknowledged some limitations of their work. I suggest the authors could further describe the limitations in the generalizability of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy the reviewer has found our work detailed, well-structured and showing significant improvement. We thank the reviewer for their thoughtful questions and suggestions. We reply to each comment below.
> The contribution of the paper in Subspace Clustering is not described clearly. [...] What is the specific contribution of this paper to Subspace Clustering?
We do not consider our loss function a contribution to the area of Subspace Clustering. The main question we explore is how to supervise segmentation networks with the motion information contained in point trajectories (L37).
The reason we discuss Subspace Clustering is that our loss is inspired by it, and Subspace Clustering has historically been applied to clustering motion trajectories. However, as we show in our experiments, general Subspace Clustering algorithms such as SSC or LRR might not be suited to training neural networks.
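As background intuition, the subspace structure of motion trajectories is a classic fact from affine structure-from-motion (the Tomasi-Kanade factorization, not specific to our loss): the stacked 2D trajectories of a rigidly moving object span a subspace of dimension at most four, so each track is approximately a linear combination of a few others. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pts, n_frames = 50, 12
P = rng.standard_normal((3, n_pts))   # rigid 3D points in object coordinates

rows = []
for _ in range(n_frames):
    A = rng.standard_normal((2, 3))   # per-frame affine camera
    t = rng.standard_normal((2, 1))   # per-frame 2D translation
    rows.append(A @ P + t)            # tracked 2D positions in this frame
W = np.vstack(rows)                   # (2F x n) trajectory matrix

# Each row of W is a linear combination of the 3 rows of P and the all-ones row,
# so the trajectory matrix has rank at most 4, far below min(2F, n) = 24
assert np.linalg.matrix_rank(W) <= 4
```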
> The resolution of Fig 3 is relatively low.
Thank you; we shall update the size of the picture for the final version.
> Does the method proposed in this paper work on other segmentation networks as well, and would additional experiments on other segmentation networks help demonstrate the generality of the proposed loss function?
Yes, the proposed loss function is network-architecture agnostic as it only requires mask prediction. Thus any network which predicts masks or has a mask-like representation could be used. In our experiments, we used (a) a scaled-down version of a UNet when considering single-sequence optimization (L270), and (b) the same network architecture as used in prior work (MaskFormer + DINO) for video segmentation (L273).
In the table below, we experiment with changing the segmenter architecture in the DAVIS benchmark. This shows that we can swap different network architectures with relative ease.
| Network | DAVIS |
| -------- | -------- |
| MaskFormer + DINO| 82.2 |
| MaskFormer + Swin-Tiny | 81.2 |
| UNet | 80.6 |
In these results, we did not further optimize batch size, learning rate and other parameters that typically depend on the type/architecture of neural network trained (except training UNet for 60k iterations rather than 5k as it is randomly initialized), so these numbers likely underestimate the actual performance that could be obtained with such networks after hyperparameter tuning.
> There is a lack of comparison in terms of inference speed. [...] What are the training time and inference speed of the method proposed in the paper? It would be better to include some quantitative comparison experiments.
In the table below, we provide the inference time comparison using different networks as average FPS during DAVIS evaluation. Note that since our contribution is a loss function, it is network architecture agnostic. Using it does not affect inference time; only the choice of network architecture does. We matched the architecture with prior work for the best comparisons. Below, we show inference speed for different network choices.
| Network | Inference Speed |
| -------- | -------- |
| MF + DINO| 3.3 FPS |
| UNet | 6.4 FPS |
As expected, with smaller networks, the inference time is reduced.
A good question is how much of a computational burden our trajectory loss imposes. We measure the time spent in the loss function during the forward pass as 0.3s, or about 14% of the total batch time, which is small.
As we reported in Appendix E.3, training a network for our experiments takes about 3 hours. With our proposed losses, one can train a chosen segmentation network quickly on the benchmark data.
> The Flow Estimator and Point Tracker are both frozen during training in this work. Is it possible to also update them during training to leverage the information in the dataset?
This is an interesting idea! As the task under consideration is completely unsupervised, we would not recommend continuing to train the flow and point-tracking networks while training the segmenter. That is because there is a trivial degenerate solution: if the flow and/or point tracker predicts no motion (e.g., by ignoring the input), then the segmentation network can predict an arbitrary segmentation with minimal loss, so the training would likely collapse.
However, one could alternate training segmenters and motion predictors on data where point annotations are available. In this work, we concentrate on establishing how trajectory data can be used as an "objectness" signal.
> I suggest the authors could further describe the limitations in the generalizability of their approach.
While we showed good results in quite general real-world videos, as mentioned in L324, extending our loss with the same effectiveness to multi-object segmentation in videos where objects undergo non-rigid articulated motion is not trivial. This is, ultimately, because the precise number of objects is unknown, so recovering the appropriate number of instances from masks that might have been separated into parts is not straightforward. This could be overcome by introducing an appearance-based loss across groups of predicted masks, effectively encouraging the network to predict $n$ objects, each of up to $k$ parts, by predicting $n \times k$ total components.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal; I will keep my score. | Summary: This paper proposes a novel loss function that allows training image object segmentation
models based on object motion in videos. Motivated by recent work on self-supervised
learning of segmentation using optical flow, the authors propose to use longer point
trajectories as an additional self-supervision signal. Related to subspace clustering, the
proposed loss function encourages the network to predict segments whose trajectories can be well
explained by a few basis trajectories. The predicted segments are merged into a binary
segmentation mask and evaluated on standard, real-world segmentation benchmarks.
Previous methods based only on optical flow are consistently outperformed, demonstrating
the effectiveness of the proposed method.
Strengths: - Unsupervised learning of segmentation is an important problem. Several recent methods
approached this task using optical flow as a self-supervision signal; extending this
line of research to trajectories is a well-motivated idea.
- The mathematical motivation of the loss is very well explained. Without having a deep
mathematical background, I could follow the derivation of the loss function without
issues.
- Standard benchmarks and modelling components are used for evaluation, which makes it
easy to compare the proposed method to previous approaches.
Weaknesses: 1. It is not described clearly enough what kind of segmentation task is targeted. From
the introduction and method section it seems to me that multi-object segmentation is
addressed; only at the very end of the method section is it mentioned that the
predicted segments are merged into a binary segmentation in some cases.
- To my understanding the task is multi-object segmentation for MOVi and binary
segmentation for all other datasets. This should be clearly stated in the
experiment section.
- It should be stated in the introduction and method section more clearly that the
main task is binary segmentation.
2. The proposed method is not compared to models that do not use optical flow for
self-supervision. It would be interesting to see how the proposed method compares to
other self-supervised segmentation approaches. For example
- CutLER ([Wang et al. 2023](https://arxiv.org/abs/2301.11320)) and VideoCutLER ([Wang et al. 2023](https://arxiv.org/abs/2308.14710))
- DINOSAUR ([Seitzer et al. 2023](https://www.amazon.science/publications/bridging-the-gap-to-real-world-object-centric-learning)) and VideoSAUR ([Zadaianchuk et al. 2023](https://proceedings.neurips.cc/paper_files/paper/2023/hash/c1fdec0d7ea1affa15bd09dd0fd3af05-Abstract-Conference.html))
The masks predicted by these models could be merged to obtain a binary segmentation
in the same way as for the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What do the predicted segments look like before merging? Visualization would help to
better understand the capabilities and limitations of the proposed method.
- The principle of common fate is not cited in the paper; a reference to the literature
on Gestalt psychology would be appropriate (e.g., Wertheimer 1912).
- How well does the proposed method perform on MOVi when estimating trajectories using
RAFT and CoTracker? This would allow for better judging how much the proposed method
could be improved in the future by using more accurate trajectory estimation methods.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors address limitations of their work in a dedicated paragraph. Their
discussion is brief but adequate in my view.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for thoughtful comments and suggestions and are happy that they recognized our work as well motivated, well explained, and easy to compare.
> To my understanding the task is multi-object segmentation for MOVi and binary segmentation for all other datasets. This should be clearly stated in the experiment section. [...] the main task is binary segmentation.
Correct! While this is already noted on L261, we will make this clear throughout in the final version.
The reason why several experiments focus on binary segmentation is to allow comparison to relevant prior art which used video benchmarks with binary masks, as these are far more common. The "over-segmentation and merging" approach is also common in the literature, so we also adopt it to be more comparable to prior works.
However, as shown in the MOVi experiment, our loss is indeed more general and not limited to binary segmentation.
> The proposed method is not compared to models that do not use optical flow for self-supervision. It would be interesting to see how the proposed method compares to other self-supervised segmentation approaches. For example
CutLER (Wang et al. 2023) and VideoCutLER (Wang et al. 2023)
DINOSAUR (Seitzer et al. 2023) and VideoSAUR (Zadaianchuk et al. 2023)
The masks predicted by these models could be merged to obtain a binary segmentation in the same way as for the proposed method.
In the table below, we provide a comparison of VideoCutLER and VideoSAUR on DAVIS using the same merging strategy for combining multiple predictions to a binary segmentation as in our method:
| Method | DAVIS |
| -------- | -------- |
| Ours | **82.2** |
| VideoCutLER | 67.2 |
| VideoSAUR | 17.5 |
Our method shows a significant advantage. We observe that VideoCutLER has trouble segmenting instances from crowds in the background. VideoSAUR produces imprecise object boundaries, which severely impacts performance when measured using the Jaccard score.
We also note that VideoCutLER and VideoSAUR are trained on much larger video datasets as they, ultimately, attempt to derive a learning signal from DINO features computed across adjacent frames. The goal of our paper is to study whether point trajectories can be used to train a segmentation network, which is a complementary rather than an alternative approach: these forms of supervision could be combined.
> What do the predicted segments look like before merging? Visualization would help to better understand the capabilities and limitations of the proposed method.
We include the visualization of segments before merging them in the rebuttal PDF, Figures R1 and R2. Mostly, different entities or parts of entities (legs, heads, other limbs) are segmented out. This suggests that the model learns that both individual instances and articulated parts of the bodies are likely to have coherent motion.
> The principle of common fate is not cited in the paper, a reference to the literature on Gestalt psychology would be appropriate (e.g., Wertheimer 1912).
Thank you; we will include suitable references.
> How well does the proposed method perform on MOVi when estimating trajectories using RAFT and CoTracker? This would allow for better judging how much the proposed method could be improved in the future by using more accurate trajectory estimation methods.
We repeat the experiments in Tab. 1 using CoTracker-estimated trajectories for our method and show the results as "Ours (CoTracker)" below. Note that our experiments on MOVi-F used only the trajectory loss (no optical flow) to match the same input modalities employed by other methods.
| Method | ARI | FG-ARI |
| -------- | -------- | -------- |
| K-Means | 15.26 | 42.53 |
| Ours (GT traj.) | 46.07 | 65.76 |
| Ours (CoTracker) | 38.20 | 57.85 |
The estimated (imperfect) tracks impact performance but it still exceeds that of simple K-Means and prior methods. This shows that improving the tracks can further improve results achieved by the loss.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed response. My main concerns were addressed well
by the additional comparisons and clarifications. With the proposed changes, I am happy
to recommend the paper for acceptance. | Summary: This paper proposes a model to process long-term motion and short-term motion simultaneously to achieve motion-based segmentation. Specifically, motivated by subspace clustering, this work proposes a loss function that enables training a neural network to learn motion grouping from both optical flows and point trajectories. It outperforms the previous method in the unsupervised video segmentation task. The qualitative comparison also shows obvious improvement, giving a clearer boundary.
Strengths: 1. The motivation and method explanation seems to be clear. The paper writing is easy to follow.
2. Using a simple sample to introduce the low-rank intuition is convincing and reasonable. Based on this core idea, other smoothing losses and regular loss from optical flow make learning more effective.
3. Experiments show the strength of the proposed strategy. A comprehensive ablation study has been performed to illustrate the impact of each factor.
Weaknesses: 1. As mentioned in the limitations, the paper's principle assumes that the object is rigid. However, the task that this paper works on does not only include rigid objects -- it is a general video segmentation task. It then seems that the low-rank theory cannot extend to a general setting. And why not consider local rigidity, like an ARAP loss (SpatialTracker)?
2. The paper does not give corner cases or failure cases, especially for non-rigid objects. I hope to see some corner cases like multiple objects that behave similarly in the short term but differently in the long term. This would better demonstrate the motivation of the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: Why does solely using the long-term loss get worse performance than solely using the optical flow loss (7 percent drop in Table 4)? Though the paper gives a short explanation that it is due to the sparse set of points and noise, long-term motion also has its advantages, e.g., it is more stable than short-term information.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: As mentioned in the weaknesses, the principle the paper proposes is reasonable, but it seems that it does not fully support the motivation and the ultimate goal of the task. More analysis and experiments are needed to show the right practice when applying the proposed method to real-world videos.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that the reviewer finds the motivation and method clear, easy to follow, convincing and reasonable, with strong results and comprehensive ablation.
We thank the reviewer for constructive comments and helpful suggestions.
> the paper's principle assumes that the object is rigid. However, the task that this paper works on does not only include rigid objects -- it is a general video segmentation task. It then seems that the low-rank theory cannot extend to a general setting.
We *do not* assume objects are rigid. As we state on L202, relaxing $r$ enables supporting non-rigid objects too.
> And why not consider local rigidity, like an ARAP loss (SpatialTracker)?
Using an ARAP loss would require additional depth input and training an additional encoder network to embed trajectories. In SpatialTracker, this is done using RGBD data for the trajectory prediction task.
Additionally, similar to other adjacency-matrix methods, the scaling of the pairwise trajectory loss is quadratic (L111): SpatialTracker uses 256 trajectories during training on 8×A100 GPUs, while we can train with 70k trajectories on a single GPU.
> The paper does not give corner cases or failure cases, especially for non-rigid objects. I hope to see some corner cases like multiple objects that behave similarly in the short term but differently in the long term. This would better demonstrate the motivation of the paper.
We included additional visualisations in the Rebuttal PDF showing predicted components as well as the motion of the frame.
Note that our trained network does not observe motion during inference. Because of this, it is robust to the lack of motion of all horses in the 2nd sequence (Fig. R1), to the similar flow patterns of the people in the 3rd sequence (Fig. R1), and even to the noisy optical flow in the 2nd sequence (Fig. R2). It has learned to separate instances or parts. We show predicted components to highlight that the network learns to predict instances and sometimes limbs as separate masks, as they might not move together.
We also include some failure cases in Fig. R2, such as missing very small objects and learning to identify and segment shadows as a component.
> Why does solely using the long-term loss get worse performance than solely using the optical flow loss (7 percent drop in Table 4)? Long-term motion also has its advantages, e.g., it is more stable than short-term information.
Optical flow carries less information than long-term tracking, but it has some advantages. First and foremost, it can be computed at full resolution (even CoTracker, which is efficient, cannot cope with more than 70k tracks, which are fewer than the number of image pixels). This is extremely beneficial when the task requires pixel-level accuracy, such as segmentation. Second, instantaneous motion is easier to predict and model than long-term motion. Hence, when optical flow is sufficient to segment the objects, it generally allows this to be done robustly and easily. Long-term tracking is thus complementary to optical flow.
> As mentioned in the weaknesses, the principle the paper proposes is reasonable, but it seems that it does not fully support the motivation and the ultimate goal of the task. More analysis and experiments are needed to show the right practice when applying the proposed method to real-world videos.
The main goal of our paper was to establish whether long-term trajectories are a useful source of "objectness" for training segmentation networks and to propose a principled way to accomplish this. We have established a positive answer, showing improvements over prior art in a series of benchmarks. Additionally, we analysed and showed why alternative implementations of this idea do not work as well. | Rebuttal 1:
Rebuttal: We thank the Reviewers for their thoughtful comments and suggestions. We are happy they found our presentation clear, well-flowing, well-motivated, convincing and reasonable, our results strong and our experiments comprehensive. We reply to each comment individually.
To aid replies, we also provide a Rebuttal PDF that contains two figures (R.1, R.2), which we reference. The figures show more examples, including failure cases, from FBMS sequences alongside motion and predicted components. We shall include these figures as well as additional results and discussion presented in the reply in the final paper.
Pdf: /pdf/24a3a1573c1e5ef40b14e303e014d8f582a70676.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper tackles video object segmentation by incorporating into the loss function not only instantaneous optical flow information but also long term pixel tracking information. Raft was used for optical flow and CoTracker was used for long term pixel tracking in the experiments. The experiments show a marginal improvement in performance when combining the two information sources in the loss function.
Strengths: The paper flows quite well, it addresses that video object segmentation is the problem space, the focus is on loss function, Figure 2, the layout appears clear as well. There are a handful of datasets and comparing methods used in the experiments.
Weaknesses: Table 2, where the experimental results are presented, lists a collection of methods categorized into different groupings. Perhaps these groupings and methods could be better discussed in the lit review; it appears that the categories in the lit review do not correlate nicely, and I do not know the difference between these methods unless I look at the references and read the papers myself.
The improvement is incremental. It is expected that there would be some improvement; however, in what cases do we actually get the improvement? A bit more depth in the analysis would make this a better paper.
I assume that the camera is static, correct? If not, perhaps making this clearer would help.
I have no idea how long the long-term point trajectories were; perhaps analyzing this would help. Also, depending on the trajectories, whether there were occlusions or other interesting factors that contribute to the loss function would be interesting to know.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I found the related works read like a laundry list. You divided the categories into unsupervised video object segmentation, motion segmentation, trajectory-based motion segmentation and subspace clustering. That is fine; however, your focus is only video object segmentation. Why is that, and how can you address the other problem areas?
2. I would imagine that having 3D scene flow, perhaps by combining monocular depth and optical flow, would give good results without long-term tracking?
3. why not incorporate appearance information as well?
4. Appearance information for segmentation in the examples would suffice, it would be interesting to focus on cases where appearance info is not sufficient for segmentation and we require motion information.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I am not sure that the examples actually illustrate that motion segmentation is necessary for these cases. I would focus on cases where appearance information is not enough.
Can this system deal with a moving camera or does the camera have to be static?
How well does the system work under occlusion?
Different motions of the object of interest will result in different performance; perhaps diving into this analysis would be informative.
Both sources of info, optical flow and long-term pixel tracking, are based on 2D info, the projection of 3D info. This has limitations. The paper should have explored different object movements. It does state that non-rigid objects are an issue when dealing with multiple objects; however, an in-depth exploration of a single non-rigid object would be informative.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for constructive comments and suggestions. We reply to concerns below.
> Table 2 where the experimental results are presented lists a collection of methods categorized into different groupings. Perhaps these groupings and methods could be better discussed in the lit review.
While our literature review is structured based on related sub-fields, the methods in Tab. 2 are arranged based on the approach to the task: using single-video optimization (L95), training a network in multiple stages (L89) or single-stage training. For clarity, we will summarise the groupings in the respective section.
> The improvement is incremental.
We disagree with this assertion. Firstly, we propose a principled loss to derive an objectness signal from a novel modality. Secondly, our proposal shows effective improvement across several benchmarks where improvements have been historically difficult. For example, after OCLR [66] first reached a Jaccard score of 80.9 on the DAVIS benchmark, follow-up works [54,37,59] could only approach or match it for several years, showing the difficulty of the task.
> I assume that the camera is static, correct? If not, perhaps making this clearer would help. How well does the system work under occlusion? Different motions of the object of interest will result in different performance; perhaps diving into this analysis would be informative.
The camera does not need to be static (and in almost all scenes it is not); in fact, our loss explicitly models a time-varying camera (see L185). Similarly, we make no assumptions about occlusions or object motion.
In general, the method is robust: the included result videos contain complex camera motion, "zooming" (e.g., dance videos), strong occlusions (e.g., bicycle or dog scenes), self-occlusions and non-rigid motion. Our method works well in quantitative studies and qualitative examples.
> I have no idea how long the long term point trajectories were, perhaps analyzing this would help.
The default trajectory length is 41 frames (L267). We studied the effect of trajectory length in Tab. 7 (Appendix). Longer tracks suffer from increasing tracking errors over longer periods. On the other hand, very short tracks might not be sufficiently informative. Our results in Table 7 support this.
> Also depending on the trajectories, were there occlusions or other interesting factors that contribute to the loss function would be interesting.
As can be seen in the example tracking visualization video from the supplement, occlusions (in blue) are very common. Point trackers estimate the position of tracked points even when they are occluded, and we make use of this prediction in our loss. We also experimented with different trackers (L566-568, Tab. 8). We found that TAPIR and BootsTAP, which are about as good as CoTracker for visible points, are less reliable for occluded points, which hinders performance.
> That is fine; however, your focus is only video object segmentation. Why is that, and how can you address the other problem areas?
The difference between video object segmentation and motion segmentation is that the latter only segments objects in the frames where they are moving, whereas the former segments objects that move at some point in the video. As we note in L80-82, the difference between these two tasks is relatively small, and in fact, often, the same benchmark data is used to assess both. In our case, the network is trained to segment the object given a single image as input at a time, and so it should be regarded as solving video object segmentation, as it cannot perceive motion directly. However, it is easy to reuse it for motion segmentation by checking whether the segmented region actually moves in a given frame, e.g., by measuring optical flow.
> I would imagine if we had 3d scene flow, by perhaps combining monocular depth and optical flow would result in good results without long term tracking?
This is an interesting idea, but we are not aware of works that have done so yet, at least for video object segmentation. One challenge is that it is difficult to obtain robust 3D predictions (see, e.g., the extensive literature on video depth estimation). Should this information be available, our formulation can be extended trivially to 3D trajectories simply by replacing 2D points with 3D ones. In principle, by avoiding camera projection, these 3D trajectories should be statistically "simpler", so a lower value of the parameter $r$, e.g., 3 or 4, should suffice.
> Appearance information for segmentation in the examples would suffice, it would be interesting to focus on cases where appearance info is not sufficient for segmentation and we require motion information.
Our method does not ignore appearance.
First, motion is used as a learning signal to train a segmentation network that takes as input an _image_, and is thus appearance-based. Second, our approach also uses appearance indirectly, as optical flow and point trajectories are predicted from RGB data.
In any case, the main goal of our paper is to show how point trajectories can be used to derive an "objectness" signal to train a segmentation network. We propose a new loss to do so. We note that our new loss could be combined with other losses that capture complementary sources of supervision, but doing so is orthogonal to our main goal.
For completeness, and as suggested by other reviewers, we do compare with SoTA video object segmentation methods that are appearance-based, i.e., VideoCutLER and VideoSAUR, and obtain better results than them. This also suggests that appearance-based approaches are not sufficient for this data yet (as the gap is considerable).
> Both sources of info, optical flow and long term pixel tracking info are based on 2D info, the projection of 3D info.
Perspective projection indeed complicates the analysis. As we state in L198, we increase $r$ to generalize to a wide range of scenarios in real-world data. | null | null | null | null | null | null
MoTE: Reconciling Generalization with Specialization for Visual-Language to Video Knowledge Transfer | Accept (poster) | Summary: The paper introduces a novel framework called MoTE. This framework addresses the trade-off between zero-shot generalization and close-set performance in video recognition tasks by tuning a mixture of temporal experts. The key contributions include:
- Introducing Weight Merging Regularization to balance generalization and specialization.
- Proposing temporal feature modulation to improve generalization during inference.
- Demonstrating state-of-the-art or competitive results on various video datasets such as Kinetics-400, Kinetics-600, UCF-101, and HMDB-51.
Strengths: - The introduction of Weight Merging Regularization and temporal feature modulation provides a novel approach to balancing generalization and specialization in video recognition.
- The experimental results are thorough, demonstrating the effectiveness of the proposed methods on multiple datasets.
Weaknesses: - The framework's text space is confined to video category names, which limits the richness of textual representations. Expanding the semantic space using large-scale generative models could enhance performance.
- The method currently explores limited forms of additional parameters. Extending the approach to other forms could improve generality and versatility.
- While results on certain benchmarks are promising, the model's performance on more diverse and challenging datasets needs further validation.
- The additional complexity from Weight Merging Regularization and other components can slightly increase training time, which may be a barrier for real-time applications.
- Extensive fine-tuning required for different tasks can be computationally expensive and time-consuming.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Can you provide more details on how expanding the text space with large-scale generative models might improve the model's performance?
- How does the performance of MoTE vary with different numbers of temporal layers and experts? Are there optimal configurations for specific tasks?
- What measures can be taken to reduce the computational overhead introduced by the additional components such as Weight Merging Regularization?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The following work is recommended for citation & discussion:
Oh, C., Lim, H., Kim, M., Han, D., Yun, S., Choo, J., Hauptmann, A., Cheng, Z.-Q., & Song, K. (2023). Towards calibrated robust fine-tuning of vision-language models. In NeurIPS 2023 Workshop on Distribution Shifts: New Frontiers with Foundation Models.
Tu, S., Dai, Q., Wu, Z., Cheng, Z.-Q., Hu, H., & Jiang, Y.-G. (2023). Implicit temporal modeling with learnable alignment for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 19936-19947).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below.
***
**1. Expanding the semantic space with large-scale generative models.**
Thanks for your constructive suggestion! Following your suggestion, we replace the category names with rephrased detailed descriptions, which are generated by a large generative language model GPT3.5 as used in FROSTER [1]. The generated descriptions expand the semantic space by providing more details about the action and the scene. For example, "*Ice climbing*" is rephrased to "*A person is seen climbing up a wall of ice using specialized equipment like crampons and ice axes*". The results are presented in the table below.
|Method|K400 (close-set)|UCF|K600|
|-|:-:|:-:|:-:|
|MoTE + category name|86.8|88.7|79.0|
|MoTE + rephrased description|86.8|89.2|80.2|
We can see that the enriched description leads to better generalization performance. We will actively explore this direction in future work.
[1] FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition. ICLR 2024.
***
**2. Extending the approach to other forms could improve generality and versatility.**
Thanks for your constructive suggestion! To further demonstrate the generality and versatility of MoTE, we apply our method to ST-Adapter [2]. For simplicity, we remove the depth-wise Conv3D operation in our adapter. In this case, our baseline is slightly lower than ST-Adapter, but this does not affect the validity of the comparison. The results are shown in the table below.
|Method|K400 (close-set)|UCF|K600|
|-|:-:|:-:|:-:|
|ST-Adapter|81.1|77.3|61.1|
|ST-Adapter+MoTE|81.2|79.4|66.7|
Our method delivers a notable generalization performance improvement while maintaining the close-set result, indicating the effectiveness of MoTE applied to alternative networks.
[2] ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning. NeurIPS 2022.
***
**3. The model's performance on more diverse and challenging datasets needs further validation.**
Thanks for your suggestion! We have evaluated our method on the Something-Something V2 (SSv2) dataset under the few-shot setting in Table 5, which requires intensive temporal understanding capability from the model. We found an inappropriate hyperparameter setting during fine-tuning and present the corrected results in the table below.
|Method|SSv2-2 shot|SSv2-4 shot|SSv2-8 shot|SSv2-16 shot|
|-|:-:|:-:|:-:|:-:|
|ViFi-CLIP|6.2|7.4|8.5|12.4|
|MAXI|7.1|8.4|9.3|12.2|
|MoTE|7.4|9.5|13.3|16.7|
We also evaluate our method on SSv2 under the zero-shot setting, as shown in the table below.
|Method|SSv2-zero shot|
|-|:-:|
|CLIP|2.7|
|ViFi-CLIP|4.5|
|MoTE|6.4|
In both settings, our method outperforms the previous works by a notable margin, demonstrating the effectiveness of our method in facing more challenging datasets.
***
**4. The additional complexity from Weight Merging Regularization and other components can slightly increase training time.**
Thanks for your thoughtful comment! We show the training time costs (Table 8 of the supplementary material) and the corresponding GFLOPs in the table below. GPU days are calculated by the number of GPUs multiplied by the training time in days.
|Method|GPU-days|GFLOPs|
|-|:-:|:-:|
|Baseline|4.35|649|
|+MoTE|4.35|+0.34|
|+$L_{WMR}$|4.50|+0.34|
|+$L_{MSE}$|4.51|-|
Applying our method on the baseline introduces slight computational and training time costs, and this cost can be effectively reduced when training with the Distributed Data Parallel framework of PyTorch, demonstrating the efficiency of our method.
***
**5. Extensive fine-tuning required for different tasks can be computationally expensive and time-consuming.**
Thanks for your comment! We would like to clarify that our method is evaluated on various datasets under zero-shot and close-set settings using **one unified model**, avoiding the need for multiple fine-tuning processes. Our method can also be applied to parameter-efficient training frameworks, such as adapters, which exhibit impressive training efficiency. For details, please refer to our response to the second question. We will actively explore this direction in future work.
***
**6. How does the performance of MoTE vary with different numbers of temporal layers and experts? Are there optimal configurations for specific tasks?**
Thanks for your comment! We have ablated the number of temporal layers and experts in **Figure 1 (b)** and **Table 2 (b)** in the main paper. Please refer to the figure and table in the main paper for the results.
According to the results, we observe that 4 temporal experts per layer are sufficient to learn diverse knowledge for both close-set and zero-shot tasks. More experts do not lead to further performance gains. As for the number of temporal layers, our method consistently outperforms the vanilla Transformer layer, and the 6-layer MoTE achieves the best trade-off between close-set and zero-shot performance. Therefore, we conclude that a 6-layer MoTE with 4 experts per layer is generally the optimal configuration across tasks.
***
**7. What measures can be taken to reduce the computational overhead introduced by the additional components?**
Thanks for your thoughtful question! Our work is built on the standard Transformer structure in the paper. Replacing it with a more efficient implementation (e.g., FlashAttention) can effectively reduce the computational overhead of the additional modules. Besides, applying our method to efficient structures (e.g., adapters) can also reduce the computational cost of the additional components.
***
**8. Relevant works**
Thanks for your suggestion! We find the recommended papers relevant to our work. We'll cite and discuss them in the revised manuscript.
***
We sincerely hope that this response will address the reviewers' concerns. We will supply the above responses in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Have a discussion
Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there is still unclear point for authors to clarify. | Summary: This paper addresses the issue of Video-Language Models (VLMs), such as CLIP, experiencing reduced generalization performance to unseen categories when learning domain-specific knowledge for video understanding tasks. The authors propose the MoTE framework, which introduces temporal experts and employs a Mixture of Experts (MoE) approach to effectively learn domain-specific knowledge for videos. Additionally, a soft stochastic routing policy is utilized to further enhance the learning efficiency of the experts. To guarantee the discrepancy in knowledge learned by different experts while maintaining a flat loss landscape, the paper incorporates weight merging regularization, which improves the generalization performance of the learned features. Moreover, the paper presents a temporal feature modulation method that leverages the semantic relevance confidence of proxy text features to modulate features.
Strengths: 1. The paper introduces the Mixture of Experts (MoE) approach in zero-shot video classification tasks based on Video-Language Models (VLMs). By utilizing weight merging regularization and other methods, the approach ensures effective learning of domain-specific knowledge in videos while maintaining strong model generalization.
2. The study effectively combines temporal modeling of visual content with the MoE approach. During downstream task adaptation, it leverages multi-perspective data bias learning to avoid overfitting, thus enhancing the learning effectiveness of domain-specific knowledge in videos.
3. The paper analyzes model generalization from the perspective of loss landscape flatness. By improving the flatness, weight merging regularization enhances the generalization performance of the learned features.
Weaknesses: 1. There is ambiguity in the use of certain symbols within the paper. For example, the symbol L is used to represent both the loss function of CLIP and the number of layers in the Transformer introduced in MoTE. This issue is particularly evident in Equations (4) and (7). The paper should consider adjusting the usage of these symbols to avoid confusion.
2. There seems to be a problem with the calculation in Equation (5). The notation "exp" typically denotes the exponential function base e, but this is not clearly explained. According to the equation, the probability of selecting an expert increases with i, which seems to contradict the intended randomness of stochastic routing. This requires clarification or correction.
3. In the Introduction and Section 3.4, the paper emphasizes the plug-and-play characteristic of the modulation module. However, the subsequent experiments only demonstrate the improvement in model performance without introducing additional training parameters (Play). They do not showcase the flexibility and usability of the module regardless of the upper model structure (Plug). Therefore, it would be beneficial to add experiments validating the plug-and-play effect or adjust the relevant descriptions in the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the design basis for the candidate set of the temperature hyperparameter in weight merging? The paper does not provide a reference for the design of this candidate set, nor does it further validate its superiority over continuous space selection schemes in the experimental analysis.
2. What is the connection between the modulation method proposed in Section 3.4 and the paper's overall motivation? The issue of constrained semantic space it addresses does not seem to be related to the MoE method or the maintenance of feature generalization.
3. What is the specific idea behind the trade-off metric mentioned in Section 4.3? Considering the balance between the two, the arithmetic mean does not seem to be a good metric. If a model achieves 100% ZS performance but 0% close-set performance, its trade-off metric result would be the same as if both values were 50%. How is this issue addressed?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper does not adequately explain the connection between data bias views and MoE in Section 3.2. For readability, there should be additional descriptions of the relationship between experts and data bias views, and of how the MoE approach leverages multiple data bias views to improve model performance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below.
***
**1. Ambiguity in the use of certain symbols.**
Thanks for your kind reminder, and sorry for the confusion. We double-checked the usage of all symbols and will correct the mistakes in the revised manuscript.
***
**2. Missing the notation of the operation exp(). The probability of selecting an expert increases with i, which seems to contradict the intended randomness of stochastic.**
Thanks for your comments! The randomness of expert selection can be described and realized under different probability distributions. For example, the vanilla stochastic routing algorithm follows the uniform probability distribution, while ours follows the multinomial probability distribution. Compared to the uniform distribution, the multinomial distribution is less random but more controllable (by assigning a different activation probability to each expert as in Equation (5)). This makes experts with a greater index more likely to be activated, so they receive a larger volume of data during training. As a result, each expert learns knowledge with a different degree of generalization and specialization.
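As a small illustration of this routing policy (a hedged sketch; the softmax-over-index form of the activation probabilities is our assumption about the shape of Equation (5), not the paper's exact formula):

```python
import math
import random
from collections import Counter

def routing_probs(num_experts, tau=1.0):
    # Softmax over expert indices: a larger index gets a higher
    # activation probability (hypothetical form of Equation (5)).
    logits = [i / tau for i in range(num_experts)]
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

def route(num_experts, tau=1.0):
    # Sample one expert per step from the multinomial distribution.
    probs = routing_probs(num_experts, tau)
    return random.choices(range(num_experts), weights=probs, k=1)[0]

# Unlike uniform routing, higher-index experts are activated more often
# and therefore see a larger share of the training data.
random.seed(0)
counts = Counter(route(4) for _ in range(10_000))
```

Under uniform routing every expert would receive roughly the same number of samples; under this multinomial policy the sample counts grow with the expert index, which is what lets different experts learn knowledge with different degrees of generalization and specialization.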
We will include the explanation of the notation exp() and the above discussion in the revised manuscript.
***
**3. It would be beneficial to add experiments validating the plug-and-play effect or adjust the relevant descriptions in the paper.**
Thanks for your thoughtful suggestion! We agree that the plug effect should be further validated. Actually, Temporal Feature Modulation can only be applied when the temporal module is separate from the CLIP encoder. Considering this constraint, we decided to adjust the relevant descriptions to make them more accurate and rigorous:
Original:
"We propose a plug-and-play module that measures the confidence of temporal features by means of CLIP’s text space."
Revised:
"We propose a test-time adaptation strategy for networks where the temporal module is separated from the CLIP encoder, to measure the confidence of temporal features by means of CLIP’s text space."
***
**4. What is the design basis for the candidate set of the temperature hyperparameter in weight merging? Missing comparison with the continuous space selection schemes.**
Thanks for your valuable question! The candidate set of the temperature hyperparameter, $\{\pm 2^n \cdot \beta\}_{n=0}^{4} \cup \{\infty\}$, is of our own design. We set the candidate temperatures to grow exponentially so that we can explore a larger range of the weight space (i.e., the coefficients calculated from Equation 7 with different temperatures vary more). $\{\infty\}$ is a special case where all experts are merged by uniform averaging according to Equation 7.
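For concreteness, a minimal sketch of the candidate set and of temperature-controlled merging coefficients; the softmax-over-scores form of Equation (7) is our assumption, and `scores` stands in for whatever per-expert quantity the equation actually uses:

```python
import math

def temperature_candidates(beta, n_max=4):
    # Discrete candidate set {±2^n · β}_{n=0}^{n_max} ∪ {∞}.
    cands = [sign * (2 ** n) * beta
             for n in range(n_max + 1) for sign in (1, -1)]
    cands.append(math.inf)
    return cands

def merge_coefficients(scores, tau):
    # Softmax with temperature tau over per-expert scores
    # (hypothetical form of Equation (7)). tau = ∞ degenerates to
    # uniform averaging of all experts.
    if math.isinf(tau):
        return [1.0 / len(scores)] * len(scores)
    exps = [math.exp(s / tau) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

cands = temperature_candidates(beta=1.0)  # 11 candidates in total
coeffs_avg = merge_coefficients([0.2, 0.5, 0.9, 0.4], math.inf)
```

Doubling the temperature each step spreads the candidate coefficients over several orders of magnitude, so a small discrete grid still covers a wide region of the weight space.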
The comparison with continuous space selection schemes is presented below. We implement two such schemes: (1) sampling from a continuous standard normal distribution (mean=0, variance=1), and (2) sampling from a continuous uniform distribution. Both continuous sampling schemes result in notable performance degradation.
|Type|K400 (close-set)|UCF|K600|
|-|:-:|:-:|:-:|
|Discrete set (default)|86.8|87.5|78.9|
|Continuous normal dist.|86.5|87.1|77.4|
|Continuous uniform dist.|86.0|85.9|77.9|
***
**5. The connection between the modulation method, the issue of constrained semantic space, and the paper's overall motivation.**
Thanks for your constructive suggestion! This part is correlated with the second objective of our paper: *How can generalization and specialization coexist in one unified model (line 45-46)?* The additional parameters can model the temporal information quite well when dealing with the fine-tuning categories. However, since **the semantic space is constrained** during fine-tuning, the additional parameters may not model the temporal information properly when facing unknown categories. This concern grows as the test categories are less semantically correlated with the fine-tuning categories. Thus, we propose the **modulation module** that measures the confidence of the temporal feature by the semantic association of proxy fine-tuning and text categories. This strategy allows us to **improve the model generalization while keeping its specialization performance**.
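To make the idea concrete, here is a heavily hedged sketch: confidence is taken as the maximum cosine similarity between the test category's text embedding and the fine-tuning categories' embeddings, and the temporal feature is blended in proportionally. Both choices are our illustrative assumptions, not the paper's exact formulation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def modulate(clip_feat, temporal_feat, test_text, finetune_texts):
    # Confidence = how close the test category's text embedding is
    # to any fine-tuning category (an assumed proxy, clamped to [0, 1]).
    conf = max(cosine(test_text, t) for t in finetune_texts)
    conf = max(0.0, min(1.0, conf))
    # Low confidence suppresses the temporal feature, falling back
    # toward the generalizable CLIP feature.
    return [c + conf * t for c, t in zip(clip_feat, temporal_feat)]
```

When the test category is semantically close to the fine-tuning categories, the temporal feature contributes fully; when it is unrelated, the prediction falls back to the raw CLIP feature, which is how specialization and generalization can coexist at test time.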
We will refine the above discussion and add it to the revised manuscript.
***
**6. Improper trade-off metric.**
Thanks for your constructive question! We agree that the reviewer's concern is thoughtful and reasonable. We change the trade-off metric to the harmonic mean of the zero-shot performance A and close-set performance B, calculated as $\frac{2AB}{A+B}$. This metric heavily penalizes imbalanced pairs (100% zero-shot with 0% close-set performance scores 0 rather than 50) and therefore provides a more accurate indication of overall performance.
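A minimal sketch of the revised metric:

```python
def harmonic_mean(a, b):
    # Harmonic mean 2AB/(A+B) of zero-shot and close-set accuracy.
    # Unlike the arithmetic mean, it collapses to 0 when either term is 0.
    if a + b == 0:
        return 0.0
    return 2 * a * b / (a + b)

# The arithmetic mean rates (100, 0) and (50, 50) identically at 50,
# whereas the harmonic mean yields 0.0 and 50.0 respectively.
```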
***
**7. The relationship between experts and data bias views, and how the MoE approach leverages multiple data bias views to improve model performance.**
Thanks for your suggestion, and sorry for the confusion. "Data bias views" refers to the diverse knowledge learned along different optimization trajectories. We set a distinct optimization trajectory for each expert and construct a more generalized model using the distinct knowledge learned across experts. We will add the above description to the revised manuscript.
***
We sincerely hope that this response will address the reviewers' concerns. We will supply the above responses in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Have a discussion
Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there is still unclear point for authors to clarify. | Summary: This paper introduces MoTE (Mixture-of-Temporal-Experts) to improve the generalization and specialization capabilities of visual-language models (VLMs) when adapting to video tasks. MoTE addresses two main questions: how to enhance the generalization of additional parameters during fine-tuning, and how to balance generalization and specialization in a unified model. The approach uses multiple feedforward network (FFN) experts in each Transformer layer to capture various data bias views, improving generalization. A routing algorithm based on multinomial distribution maximizes knowledge diversity among experts, while Weight Merging Regularization effectively combines generalized and specialized knowledge in the final model.
To further improve generalization at test time, MoTE incorporates a Temporal Feature Modulation module. Notably, the approach maintains the same computational cost and final structure as conventional methods. The paper contributes to the field by offering a new perspective on enhancing parameter generalization and balancing it with specialization in the context of adapting VLMs to video tasks. Extensive experiments demonstrate that MoTE achieves an optimal trade-off between zero-shot and close-set performance, with thorough ablation studies showing the scalability and effectiveness of the proposed method.
Strengths: - The manuscript is well-written and easy to follow.
- It is interesting to observe that the introduction of a mixture of experts can enhance the balance between acquiring generalizable knowledge and learning video-specific features. The motivation is intuitive, and the extensive experiments effectively validate the method’s efficacy.
- The design of weight merging regularization and temporal feature modulation harmonizes the pursuit of the two learning objectives. The temporal feature modulation is particularly noteworthy, as it takes into account the categorical relationships between the training and test sets to inform the integration of features.
Weaknesses: - The primary motivation for this study stems from two objectives: (1) mitigating the catastrophic forgetting that emerges with the integration of trainable parameters, and (2) striking a balance between generalizable knowledge and video-specific learning within one single model. However, these objectives bear considerable resemblance to the work presented in the paper FROSTER (ICLR 2024), which has not been discussed by the authors. While I acknowledge that the current paper and FROSTER employ distinct methodologies to address these issues, their close relevance necessitates a thorough discussion and a direct performance comparison.
- According to the description in the paper, the baseline model utilizes a clip encoder equipped with several temporal transformer layers. This leads me to question whether the model can be effectively integrated with alternative network architectures, such as adapter-based networks, X-CLIP, and ST-adapter, particularly given their noted efficiency in training.
- I would also request that the authors provide details regarding the additional computational and training time costs associated with implementing their method in conjunction with the baseline model.
- I believe it would be beneficial to delve deeper into the specific types of actions that each expert excels at recognizing. Providing a more detailed analysis in this area would enhance our comprehension of the distinct roles played by various experts, as well as the unique temporal knowledge they contribute in comparison to one another.
[1] FROSTER: Frozen CLIP Is A Strong Teacher for Open-Vocabulary Action Recognition. ICLR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have not sufficiently addressed the limitations of their methodology, as it has been applied exclusively to a specific type of adapted network without demonstrating broader applicability. It would be advantageous to see an exploration of the method’s versatility across different network architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below.
***
**1. Discussion on FROSTER.**
Thanks for your kind reminder! Our motivation does resemble FROSTER's in some way but differs in the following aspects.
*Motivation:*
* FROSTER aims to mitigate the catastrophic forgetting caused by **fine-tuning the model**, while our method focuses on eliminating the catastrophic forgetting caused by **the integration of the additional trainable parameters**. The two motivations stem from different observations and perspectives.
* FROSTER aims to strike a balance between generalized knowledge and video-specific learning **specifically for zero-shot video recognition**, while our work pursues this objective **in more general and various task settings** (close-set, zero-shot, and few-shot recognitions). Since the close-set setting requires much more video-specialized knowledge than the zero-shot setting, our objective is more challenging and practical.
*Contribution:*
* Our method and FROSTER address the objective with different methodologies. FROSTER focuses on knowledge distillation from the frozen CLIP, while our method concentrates on the model structure design and loss landscape regularization. We offer distinct contributions to the community.
We will add the above discussion and performance comparison in the revised manuscript.
***
**2. Whether the model can be effectively integrated with alternative network architectures, such as adapter-based networks?**
Thanks for your constructive suggestion! To further demonstrate the generality and scalability of MoTE, we apply our method to ST-Adapter [1]. For simplicity, we remove the depth-wise Conv3D operation in our adapter. In this case, our baseline is slightly lower than ST-Adapter, but this does not affect the validity of the comparison. The results are shown in the table below.
|Method|K400 (close-set)|UCF|K600|
|-|:-:|:-:|:-:|
|ST-Adapter|81.1|77.3|61.1|
|ST-Adapter+MoTE|81.2|79.4|66.7|
Our method delivers a notable generalization performance improvement while maintaining the close-set result, indicating the effectiveness of MoTE applied to alternative networks.
[1] ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning. NeurIPS 2022.
***
**3. Additional computational and training time costs associated with implementing the method in conjunction with the baseline model.**
Thanks for your comment! We show the training time costs in Table 8 of the supplementary material. Below, we include the table for your convenience and additionally add GFLOPs for reference. GPU days are calculated by the number of GPUs multiplied by the training time in days.
|Method|GPU-days|GFLOPs|
|-|:-:|:-:|
|Baseline|4.35|649|
|+MoTE|4.35|+0.34|
|+$L_{WMR}$|4.50|+0.34|
|+$L_{MSE}$|4.51|-|
As shown in the table, applying our method on the baseline introduces slight computational and training time costs, illustrating the efficiency of our method.
***
**4. More detailed analysis of the specific action types that each expert excels at recognizing.**
Thanks for your constructive suggestion! We perform a category-wise performance analysis of the merged experts and each individual expert. We visualize the Top-1 classification accuracy for video categories sampled from the UCF-101 dataset. The experiment is conducted on UCF-101 under the zero-shot setting. **The visualization is provided in the attached global response PDF.**
We find that different experts exhibit distinct performance for each video category. This is because diverse generalization knowledge is learned across experts, as we demonstrate in the paper. The merged experts benefit from the aggregation of knowledge across experts, especially for video categories where temporal information is particularly needed, for example, 'Body Weight Squats', 'Handstand Pushups', and 'Wall Pushups'. This suggests that each expert can learn different temporal patterns for the same category. For more intuitive visualizations, please refer to Figure 5 and Figure 6 in the supplementary material.
***
We sincerely hope that this response will address the reviewers' concerns. We will supply the above responses in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Have a discussion
Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there is still unclear point for authors to clarify.
---
Rebuttal 2:
Title: Response to rebuttal
Comment: Thank you to the authors for their comprehensive rebuttal, which has addressed many of my concerns.
However, I remain unconvinced by the explanation provided for the first point of contention.
>FROSTER aims to mitigate the catastrophic forgetting caused by **fine-tuning the model**, while our method focuses on eliminating the catastrophic forgetting caused by the integration of the **additional trainable parameters**. The two motivations stem from different observations and perspectives.
Nevertheless, upon examining the FROSTER paper, it becomes evident that the term **fine-tuning the model** encompasses approaches that incorporate **additional trainable parameters** within CLIP. Notably, in Figure 1 and Table 3 of the FROSTER paper, experiments with adapter-based methods such as AIM and ST-adapter are presented, which clearly involve the addition of extra parameters.
Given this context, the authors’ response does not sufficiently address the overlap between the two methodologies. I would appreciate further clarification on this matter.
Besides, I would like to know how the authors initialize the parameters of MOTE.
Thanks!
---
Rebuttal Comment 2.1:
Title: Further clarification on FROSTER and the initialization of MoTE
Comment: **1. Further clarification on FROSTER:**
Thanks for pointing this out. After carefully reading the FROSTER paper, we find that the catastrophic forgetting problem in FROSTER is caused by task-specific learning steering the learned features too far from the frozen CLIP. This divergence comes from optimizing parameters through gradient descent, including both CLIP and the additional trainable parameters. Differently, in our work, we believe that **the main cause of catastrophic forgetting is the overfitting of the additional parameters rather than the variations in CLIP parameters**. This is evidenced by our observation that the diminishment in the model's generalization is closely related to the scale of the additional parameters, regardless of whether the CLIP parameters are tuned. Therefore, our work focuses on how to improve the generalization **specifically of the additional parameters**, while FROSTER aims to enhance the generalization of the **overall feature** by ensuring the learned features do not diverge too far from the frozen CLIP. Our motivation indeed bears some resemblance to FROSTER's in addressing catastrophic forgetting, but differs in (1) the observations that lead to the motivation and (2) the perspective from which model generalization is increased.
From the technical perspective, our work and FROSTER provide distinct contributions to the community by proposing different methodologies. Note that FROSTER can also potentially improve the generalization of additional parameters through knowledge distillation, but our method presents a more explicit way to achieve this goal.
**2. The initialization of MoTE**
The parameters of projection matrices (nn.Linear) are initialized with values drawn from the normal distribution (mean=0, std=0.02). Each projection matrix has different initial values to ensure different optimization trajectories. All bias terms are initialized as zero.
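A dependency-free sketch of this initialization scheme (plain Python lists standing in for the `nn.Linear` weight tensors):

```python
import random

def init_expert(in_dim, out_dim, rng):
    # One expert's projection matrix: entries drawn from N(0, 0.02^2),
    # bias initialized to zero, as described above.
    weight = [[rng.gauss(0.0, 0.02) for _ in range(in_dim)]
              for _ in range(out_dim)]
    bias = [0.0] * out_dim
    return weight, bias

# Each expert draws from a different RNG state, so initial weights differ
# and the experts start on distinct optimization trajectories.
experts = [init_expert(4, 4, random.Random(seed)) for seed in range(3)]
```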
**3. Response to the performance comparison with FROSTER**
Thank you! We will include the discussion of FROSTER and experiment results in our revised manuscript. | Summary: To preserve the generalization ability of the model trained on general visual-language model (VLM) with task-specific data, while boost the performance on specific task, this paper propose a new framework and training strategy to learn a unified model with specific performance and generalization ability. Three techniques are introduced. Mixture temporal experts to avoid overfitting on the task-specific data. A weight merging regularization to enlarge the loss flat region such that optimization on generalization ability will not introduce perturbation that drops the close-set performance. A temporal feature modulation to reuse the feature of VLM model when the target category label is not fitted during task-specific finetuning. The proposed method is evaluated on four benchmark datasets. K400 for close-set finetuning and UCF-101, HMDB-51and K600 for zero-shot evaluation.
Strengths: 1. Training a model with both task-specific performance and zero-shot generalization ability is an interesting topic, and it is less explored in the community.
2. The proposed method achieves competitive performance compared with similar methods.
3. Balancing the zero-shot and task-specific abilities is always hard to handle. Considering the wide application of general VLMs, this method bears practical value in industry.
Weaknesses: 1. The experimental setting may hide the weakness of the proposed method. The method is only trained on K400 and its zero-shot ability is evaluated on UCF-101, HMDB-51, and K600. Considering K400 is already a large-scale dataset, MoTE may still perform well on UCF-101 and HMDB-51. Besides, K600 is an extension of K400, so they may have similar data distributions. It would be great to also fine-tune the model on a small-scale dataset and evaluate its generalization ability on a large-scale dataset, for example, train the model on UCF-101 and evaluate it on K400.
2. A simple solution to the zero-shot / task-specific balancing issue is to use a fine-tuned model such as Text4Vis for the specific task and to use its temporally mean-pooled clip feature when facing an out-of-distribution task. This baseline is missing in the comparison. If the performance of this baseline is acceptable, is it really necessary to train a unified model at such cost?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The second question in the weakness section
2. The VLM CLIP is actually trained on noisy data, and there are also VLMs trained with selected data to boost cross-modality alignment [1]. Therefore, the selection of K in line 189 may influence the final performance. Besides, for different text queries the influence of noisy data is different, and one fixed K may not be optimal. Is there any solution for this issue? Does the selection of K have a large influence on the performance?
[1] Bulat, Adrian, Yassine Ouali, and Georgios Tzimiropoulos. "FFF: Fixing Flawed Foundations in contrastive pre-training results in very strong Vision-Language models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation has been discussed in the suplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the valuable comments and the time you have dedicated to this paper. Please find the responses to your comments below.
***
**1. The experimental setting may hide the weakness of the proposed method. It would be great to also fine-tune the model on the small-scale UCF-101 and evaluate it on K400.**
Thanks for your thoughtful and reasonable suggestion! Our experimental setting follows previous works (methods listed in Table 3) in fine-tuning on large-scale K400 and then evaluating zero-shot performance on relatively small downstream datasets. Following your suggestion, we train the ViT-L/14 network on UCF-101 and evaluate it on K400; the results are shown in the table below.
|Method|UCF (close-set)|K400 (zero-shot)|
|-|:-:|:-:|
|Raw CLIP|80.9|59.2|
|Baseline|95.6|56.6|
|MoTE|96.1|66.3|
MoTE yields a 0.5 improvement over the Baseline on UCF-101, while its zero-shot performance on K400 significantly exceeds that of both Raw CLIP and the Baseline. This demonstrates the applicability of MoTE to small-scale datasets and its ability to learn generalized knowledge from limited data.
***
**2. Comparison with the baseline variant. Is it really necessary to train a unified model at such a high cost?**
We adopt the recommended baseline and present the results in the table below. Our method still shows notable advantages in zero-shot performance.
|Method|K400 (close-set)|UCF|HMDB|K600|
|-|:-:|:-:|:-:|:-:|
|Baseline Variant|86.7|87.1|58.2|77.4|
|MoTE|86.8|88.7|61.4|79.0|
We would like to emphasize that the most significant value of our work is its ability to simultaneously introduce new specialized knowledge while retaining the original generalized knowledge, rather than solely achieving the best performance on a particular dataset. Although the recommended baseline performs fairly on close-set and zero-shot tasks, it does not truly achieve our goal. This can be evidenced when facing a middle-ground task between close-set and zero-shot tasks, for example, the few-shot task, which requires both specialization and generalization capability to rapidly adapt using limited samples. We present the results of the few-shot task in the table below. Our method significantly outperforms the recommended baseline.
|Method|UCF-2 shot|UCF-4 shot|UCF-8 shot|UCF-16 shot|HMDB-2 shot|HMDB-4 shot|HMDB-8 shot|HMDB-16 shot|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Baseline (clip feature)|78.6|81.6|82.3|83.0|53.7|54.2|55.2|55.8|
|Baseline (temporal feature)|86.4|88.9|90.6|91.2|55.9|61.2|65.4|66.3|
|MoTE|88.8|91.0|92.3|93.6|61.3|63.9|67.2|68.2|
From the perspective of model structure, the recommended baseline can only be applied when the additional parameters are separated from the CLIP encoder. Our proposed method can be applied to various forms of additional parameters (e.g., adapters) that are integrated into the CLIP encoder, demonstrating its versatility. For the details of this part, please refer to our response to the second question of reviewer s5Zs. Moreover, in real-world applications, the data distribution shift is often hard to identify. The recommended baseline faces the choice of whether to use the temporal feature when facing a moderate or unknown distribution shift, which limits its practicality. This further demonstrates the necessity of training a unified model.
***
**3. For different text queries, the influence of noisy data is different, and one fixed K may not be optimal. Is there any solution to this issue? Does the selection of K have a large influence on the performance?**
Thanks for your interesting question! We agree with your opinion and have conducted an experiment. In this experiment, we employ the K-means algorithm to cluster the fine-tuning and test category text features and compute the semantic association by retrieving the most similar cluster-center feature. In this case, the retrieved cluster-center feature may represent different numbers of data points. We hope this strategy can better represent the semantic associations between fine-tuning and test categories. We also test the effect of the value of K. The results are shown in the table below (TFM indicates Temporal Feature Modulation).
|Method|TFM|UCF|K600|
|-|:-:|:-:|:-:|
|MoTE|-|87.5|78.9|
|MoTE (clustering)|√|88.8|78.9|
|MoTE (K=1)|√|88.9|78.9|
|MoTE (K=5)(default)|√|88.7|79.0|
|MoTE (K=10)|√|88.5|78.7|
We find that applying the clustering algorithm does not lead to performance improvement. The reason lies in the fact that using the semantic association to measure the confidence of the temporal feature is actually a rough estimation process. Since this process is inherently somewhat noisy, better-selected data may not largely affect it. This also explains why the selection of the K-value does not have a significant influence on the performance. However, we still observe performance degradation when K is too large, indicating that the noise introduced by the selection of K does have a negative influence on the performance.
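To make the retrieval step concrete, below is a minimal plain-Python sketch of the "retrieve the most similar cluster center" idea described above. The 2-D embeddings, helper names (`cosine`, `semantic_association`), and cluster centers are all toy illustrations, not the actual CLIP features or code:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_association(test_feat, centers):
    # Confidence = similarity to the most similar cluster center,
    # mirroring the "retrieve the closest center" strategy.
    return max(cosine(test_feat, c) for c in centers)

centers = [[1.0, 0.0], [0.0, 1.0]]   # toy cluster centers of fine-tuning categories
query = [0.9, 0.1]                   # toy test-category text feature
conf = semantic_association(query, centers)
```

In the actual pipeline, `centers` would come from K-means over the fine-tuning category text features, and `conf` would modulate the confidence of the temporal feature.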
***
We sincerely hope that this response will address the reviewers' concerns. We will supply the above responses in the revised manuscript.
---
Rebuttal Comment 1.1:
Title: Have a discussion
Comment: This paper receives mixed reviews. The authors have provided a detailed response. Please give your reply and check whether there is still unclear point for authors to clarify. | Rebuttal 1:
Rebuttal: We are grateful to all reviewers for their valuable and constructive comments. We have carefully considered the points raised by each reviewer and provided comprehensive responses to each question. Besides, we attach an additional PDF file containing a detailed analysis of the category-wise performance across experts.
Pdf: /pdf/db927dfb73824b46457c1266194e2d406966c1d7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
UltraPixel: Advancing Ultra High-Resolution Image Synthesis to New Peaks | Accept (poster) | Summary: The paper presents UltraPixel, an innovative architecture for ultra-high-resolution image generation that tackles semantic planning, detail synthesis, and high resource demands. UltraPixel uses cascade diffusion models to generate images at multiple resolutions within a single model, efficiently guiding high-resolution generation with lower-resolution semantics. It features implicit neural representations for continuous upsampling and scale-aware normalization layers. Moreover, it requires less than a 3% increase for high-resolution outputs, boosting efficiency.
Strengths: 1. The paper demonstrates impressive results, with generated high-resolution images exhibiting remarkable detail. The proposed method outperforms existing approaches in terms of speed and flexibility, supporting arbitrary resolution image generation with a single model. This represents a significant advancement in the field.
2. The authors present a clear and well-motivated approach. They provide compelling evidence (Figures 2 and 6) to support their argument that the absence of low-resolution (LR) guidance can lead to suboptimal generation results.
Weaknesses: 1. The manuscript's layout requires some refinement. For instance, Figure 4 extends beyond the page margins, and the text adjacent to Figure 9 appears overly condensed.
2. Given that this is a text-to-image generation work, the paper would benefit from a more comprehensive set of visual results, including additional comparisons with state-of-the-art methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The discussion in Section 4.3 regarding the timesteps of LR guidance extraction is intriguing. It would be valuable to see final generated images using different timesteps for guidance, rather than just attention visualizations.
2. The authors' use of LR guidance bears similarities to recent diffusion-based image super-resolution methods. A comparative discussion of these approaches could provide valuable context.
3. Given the method's design, it should theoretically support even higher resolutions (e.g., 5K, 6K). Have the authors explored this possibility?
4. The visual results demonstrating the limitations mentioned in the paper could be included in the supplementary materials to provide a more comprehensive understanding of the method's constraints.
5. Will the code and pre-trained models be made publicly available to facilitate reproducibility and further research in this area?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1`: **The manuscript's layout requires some refinement.**
Thank you for your constructive suggestion. We have carefully revised the format accordingly.
`Q2`: **Paper would benefit from a more comprehensive set of visual results, including additional comparisons with state-of-the-art methods.**
Thank you for the valuable advice. In the rebuttal PDF, we provide visual comparisons with diffusion-based SR methods, including SUPIR and StableSR in **Figure 1**, and leading T2I commercial products, DALLE 3 and MidJourney V6 in **Figure 4**. Our results are visually pleasing and richer in detail. We have included more comparisons in the revised supplementary materials.
`Q3`: **Final generated images using different timesteps for guidance**
Thank you for the constructive advice. We include visual results of the ablation study on LR guidance timesteps in **Figure 2** of the rebuttal PDF. The figure illustrates the LR generation process and HR generation processes with different guidances, considering three cases: **(1) ‘t=t^H’** where the timestep of LR guidance is synchronized with the timestep of HR generation, **(2) ‘t=0.5T’** means LR features at the middle timestep, and **(3) ‘t=0.05T’** near the end. The HR generation process establishes the overall semantic plan in the early stage. However, synchronized or middle timestep guidance is too noisy to provide sufficient direction at this stage. In contrast, the final step of LR guidance delivers clear semantic information. With semantics-rich guidance, the HR image generation can produce coherent structures and fine-grained details. We have included this visualization in the revised manuscript.
`Q4`: **Comparative discussion of diffusion-based image SR methods**
Thank you for the valuable suggestion. As a T2I task, our objective is to generate a high-quality, high-resolution image aligned with the text input, rather than merely enlarging the synthesized low-resolution (LR) image. Unlike SR methods that typically use LR images as input, we adopt multi-level intermediate model features as LR guidance for high-resolution generation. This approach allows for refinement when the LR guidance is unreliable.
We compare our method with state-of-the-art diffusion-based SR methods, namely **SUPIR** and **StableSR**. The visual results in **Figure 1** of the rebuttal PDF demonstrate that our method produces more reasonable structures and richer details. Notably, our method excels in refining images that contain artifacts in the LR version, enhancing visual quality, particularly in facial and hand details, as shown in the second example. Additionally, processing a 4096x4096 image takes **12 minutes** for SUPIR and **11 minutes** for StableSR on an A100 GPU, while our model requires only approximately **78 seconds**.
We also evaluate PickScore, FID, patch-based FID, Inception Score (IS), patch-based IS, and CLIP-score at a resolution of 4096x4096, as shown below, where our method achieves superior results. These findings collectively underscore the effectiveness of our approach. We have included these comparisons in the revised manuscript.
method | PickScore (vs StableSR) $\uparrow$ |PickScore (vs SUPIR)$\uparrow$| FID$\downarrow$| FID_p$\downarrow$| IS$\uparrow$| IS_p$\uparrow$| CLIP$\uparrow$| Latency(sec.)$\downarrow$|
|---|---|---|---|---|---|---|---|---|
| StableSR | 31.3% | - | 65.27| 48.18|27.55|9.25|32.49|728
| SUPIR | - | 34.7% |64.13|46.98|26.16|9.83|31.28|682
| Ours | **68.7%** | **65.3%**|**63.80**|**44.32**|**27.65**|**10.23**|**33.10**|**78**
`Q5`: **Support even high resolution image generation**
Yes, we have further trained a model that supports up to 6K generation. Some visual results at resolutions of **5760x3840** and **3072x6144** are provided in **Figure 8** of the rebuttal PDF. We have included more visual results in the revised manuscript and will make the model publicly available.
`Q6`: **The visual results demonstrating the limitations mentioned in the paper could be included in the supplementary materials to provide a more comprehensive understanding of the method's constraints**
Thank you for the constructive advice. We found that our model may occasionally fail in generating accurate structures in people's hands. Introducing more human hand data should help address this issue. We will include a more detailed analysis of both the strengths and weaknesses of our model in the revised paper.
`Q7`: **Will the code and pre-trained models be made publicly available to facilitate reproducibility and further research in this area**
Yes, all models and code will be released to promote the development of the community.
---
Rebuttal Comment 1.1:
Comment: The author's rebuttal addresses my concerns. The explanation using a figure is clear. After reading the rebuttal and other reviewers' comments, I think this work is worth being accepted for subsequent researchers to follow up on their studies. | Summary: This paper introduces UltraPixel, a method for generating high-quality ultra-high-resolution images. It utilizes the semantics-rich representations of lower-resolution images in a later denoising stage to guide the overall generation of highly detailed high-resolution images. The method incorporates implicit neural representations for continuous up-sampling and scale-aware normalization layers that are adaptable to various resolutions. The experimental results show that it has excellent ability in generating high-resolution images of different sizes.
Strengths: 1. The introduction of implicit neural representations for continuous up-sampling and scale-aware normalization layers adaptable to various resolutions is a creative solution that addresses a challenge in the scalability of image generation models.
2. The methodology is well-articulated, with a clear explanation of how the model manages to generate high-quality images while maintaining computational efficiency.
3. The ablation experiments are thoroughly conducted, systematically revealing the contribution of each component to the overall performance.
4. The paper proposes an innovative method for generating high-quality, ultra-high-resolution images efficiently, tackling a major challenge in image synthesis.
Weaknesses: 1. The explanation of the implicit neural representation (INR) requires further clarity regarding its ability to enable continuous upscaling. Moreover, an in-depth analysis and dedicated ablation study of the Scale-Aware Normalization (SAN) feature would provide insights into its role in resolution adaptability.
2. To underscore the advantages of the proposed framework, the experiments should be expanded to include comparative analyses with Latent Diffusion Model (LDM)-based and pixel-based image synthesis methods, showcasing the superior performance of the framework in high-resolution image generation tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why is the perturbed version $𝑧_{1}$ preferred over the $𝑧_{0}$ from the low-resolution synthesis for guidance purposes?
2. Regarding Figure 4, could you clarify why additional learnable tokens are integrated with the guidance tokens for the self-attention mechanism, instead of solely relying on the guidance tokens? What unique function do these learnable tokens serve?
3. Can you outline the computational steps involved in the implicit neural representation? Is there a need for manually specifying positions?
4. What justifies the forms of Equations (3) and (4), which amalgamate terms with distinct physical interpretations? Is there an underlying principle that supports their direct summation, as it seems to go against intuitive reasoning?
5. In the context of line 255, the use of 𝑡=0.5 and 𝑡=0.05 is ambiguous. Are these intended to denote specific sampling stages within the low-resolution synthesis—fixed and terminal steps, respectively? Consequently, is 𝑡=1 encompassed within the scenario where 𝑡=0.5?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The paper may not sufficiently address how well the model generalizes to datasets beyond the training distribution. It is crucial to understand if the model's performance degrades with different or less common image content.
2. There is a need for more rigorous testing of the model's robustness to various corruptions and perturbations that could be encountered in real-world applications.
Flag For Ethics Review: ['Ethics review needed: Safety and security', 'Ethics review needed: Discrimination, bias, and fairness']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1` **Clarification on INR and analysis of SAN feature**
Compared to discrete grid pixels, INR represents data as a neural function, mapping continuous coordinates to signals. Its representation capacity depends not on grid resolution but on the neural network's ability to capture underlying data structures, reducing redundancy and providing a compact yet powerful consistent representation. INR has proven useful in various 3D/2D tasks, such as NeRF[1] and LIIF[2]. To illustrate the effectiveness of INR , we provide visual examples in Fig 6 of the rebuttal PDF. With INR guidance, our model consistently generates high-quality images across different resolutions. In contrast, using simple bilinear upsampling followed by several conv. fails to provide clear guidance for higher resolution images like 4K, resulting in noisy artifacts. The quantitative comparisons of ‘BI+Conv’ and ‘INR’ in Table 3 of the main paper also support this observation.
Regarding SAN, we compute feature statistics for varying resolutions, with and without SAN, as shown in Fig. 5 of the rebuttal PDF. With SAN, different resolutions have similar feature distributions, while without SAN, mean values vary significantly. This shows SAN enables stable handling of different resolutions, resulting in better visuals.
[1] Nerf: Representing scenes as neural radiance fields for view synthesis. ECCV 2020
[2] Learning continuous image representation with local implicit image function. CVPR 2021
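As a toy illustration of the statistics-alignment effect of SAN described above (a minimal sketch assuming 1-D feature lists and a single per-resolution affine pair; not the actual model code):

```python
def normalize(feats, eps=1e-6):
    # Standardize a 1-D feature list to zero mean / unit variance.
    n = len(feats)
    mean = sum(feats) / n
    var = sum((f - mean) ** 2 for f in feats) / n
    return [(f - mean) / (var + eps) ** 0.5 for f in feats]

def scale_aware_norm(feats, gamma, beta):
    # Per-resolution affine parameters (gamma, beta) re-inject scale
    # information after standardization, so features from different
    # resolutions end up with comparable statistics.
    return [gamma * f + beta for f in normalize(feats)]

# Toy features from two "resolutions" with very different raw statistics:
low_res = [1.0, 2.0, 3.0]
high_res = [100.0, 200.0, 300.0]
a = scale_aware_norm(low_res, 1.0, 0.0)
b = scale_aware_norm(high_res, 1.0, 0.0)
```

After normalization the two toy feature sets share the same distribution, which is the behavior the feature-statistics plots in Fig. 5 of the rebuttal PDF are meant to show; in the real model, `gamma` and `beta` are learned per scale.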
`Q2` **Compare with LDM-based and pixel-based methods**
Our method is fundamentally an LDM, performing the generative diffusion process in a latent space. The methods we compared, including Demofusion, ScaleCrafter, FouriScale, and Pixart-Sigma, are all LDMs adapted with various techniques for HR image generation. To the best of our knowledge, no open-source pixel-based synthesis method can directly generate HR images due to the extreme memory and computational demands of doing so in pixel space.
To demonstrate our method's superiority, we compare it with leading T2I products, DALLE-3 and MJ V6. Quantitative evaluation shows our method is favored by PickScore in 70% of cases against DALLE-3. Visual comparisons in Fig 4 of the rebuttal PDF show our images are of comparable quality to both products.
`Q3` **Why z1 over z0**
We do not use the z1 or z0 latents directly. Instead, we utilize intermediate multi-level model features obtained by forwarding z1 through the base model.
`Q4` **Why additional learnable tokens**
Unlike LR guidance tokens with a strong local bias, the learnable tokens can globally query and aggregate compact useful information from guidance tokens, enhancing the model’s comprehension capabilities. Additionally, learnable tokens have proven effective in adapting to various vision tasks, as evidenced by works [3-5]. While the LR guidance tokens primarily encode local patterns, the learnable tokens acquire auxiliary information from large datasets, improving the adaptability and performance of models.
[3] Vision transformers need registers. ICLR 2024
[4] End-to-end object detection with transformers. ECCV 2020
[5] Generalized decoding for pixel, image, and language. CVPR 2023
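A minimal sketch of how learnable tokens can globally aggregate information from guidance tokens via self-attention (toy scalar tokens and a single head; a hypothetical simplification, not the actual multi-head implementation):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    # Toy single-head self-attention where each token acts as its own
    # query, key, and value; every output is a convex combination of
    # all tokens, so learnable tokens can pool over all guidance tokens.
    out = []
    for q in tokens:
        weights = softmax([q * k for k in tokens])
        out.append(sum(w * v for w, v in zip(weights, tokens)))
    return out

# Guidance tokens carry local LR information; learnable tokens (free
# parameters, here toy constants) attend globally over all of them.
guidance = [0.2, 0.4, 0.6]
learnable = [0.0, 1.0]
out = self_attention(learnable + guidance)
```

Because attention weights sum to one, each output lies within the range spanned by the input tokens, which is the aggregation behavior the learnable tokens rely on.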
`Q5` **Steps involved in the INR**
After obtaining the LR feature, INR is involved in all HR generation sampling steps to forward the LR guidance to the HR branch.
`Q6` **What justify eq 3 4**
Eqs. (3) and (4) inject time and image-scale information into the model using AdaIN, which predicts scale and shift parameters from the input information to modulate model features. This technique is widely employed and proven effective for injecting style or time information into generative models [6-8].
[6] High-resolution image synthesis with latent diffusion models. CVPR 2022
[7] Sdxl: Improving latent diffusion models for high-resolution image synthesis. ICLR 2024
[8] A style-based generator architecture for generative adversarial networks. CVPR 2019
`Q7` **Clarification of t**
We have revised the notation to t=0.5T and t=0.05T. For example, when the number of sampling steps is 20 (T=20), t=0.5T refers to using the LR features at the 10th step to guide all HR sampling steps. As shown in Table 4 of the main paper and Fig 3 of the rebuttal PDF, t=0.05T is preferred, as the LR features at this time point provide clear semantic guidance for HR generation. We do not consider t=T, as the LR features at the first step of the LR sampling stage are too noisy to provide useful information.
`Q8` **Generalization concern**
As mentioned in lines 177-178, we build our model upon the well-trained 1024^2 StableCascade, which is trained on huge datasets and generalizes well across various scenarios. As described in lines 10-13 and 170-172, we train an additional 3% of the parameters specifically for high-resolution image generation, while keeping the other parameters frozen. This method preserves the model's generative power. As shown in Fig 1 of the main paper, and Fig 1 (“A photo of an astronaut riding a horse in the forest. There is a river in front of them with water lilies.”) and 2 (“Dogs sitting around a poker table”) in the rebuttal PDF, our method can generate less common content of high quality. Fig 8 in the rebuttal PDF also shows that our method can generate pleasing images of various styles (photo-realistic or oil-painting) and content (real or imaginary).
`Q9` **Model robustness**
Our model performs well in situations involving corruptions and perturbations. Fig 7 of the rebuttal PDF shows that even when the prompts are miswritten as "2014 brabus b63s 700 6x6 mercedes benz g class hd pictures" and " Ext for in ldg and sc gatlinburg cabin wahoo sale cabins rentals of american homes tn log city" our method accurately generates high-quality 4K images. Even if the prompt is incomplete, jumbled, or contains spelling errors, our method still can generate high-quality images. This demonstrates the robustness of our method in understanding and interpreting diverse and imperfect input.
---
Rebuttal 2:
Title: Response to z6xV by Authors
Comment: We would like to thank you again for the valuable time you devoted to reviewing our paper. Since the end of discussion period is getting close and we **have not heard back from you yet**, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying them.
Thank you once again for your contribution to our paper's refinement. | Summary: This paper presents a method for Ultra-High-Resolution image generation from text prompts. The method is based on StableCascade. The original StableCascade can generate 1024x1024 images. This paper proposes another HR latent diffusion model that can utilize the guidance from 1024 x 1024 images and generate 4096 x 4096 images. Unlike previous methods that directly use the low-resolution output, the method chooses to use the features of the base model as guidance and proposes an implicit-based method to upsample the low-res guidance features.
Strengths: - The idea of guidance feature and implicit-based upsampling is simple but effective.
- The paper reads well, and the presentation is clear.
- The results are very impressive.
- The proposed method only needs light-weight finetuning from StableCascade.
Weaknesses: - More validation and analysis are needed. In the comparison, a traditional image upsampler is used, but the traditional image upsampler is often smaller and also trained on much smaller datasets. For a fair comparison, it will be good to compare with the state-of-the-art generative image upsampler such as StableSR and Stable Diffusion Upscaler.
- A comparison of this baseline is missing: instead of using guidance features, the HR latent model can directly use the LR images / latents from the base model.
- It would be good to have visual results of the ablation on LR guidance timesteps.
- Ablation on scale-aware normalization is missing.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Is the base model frozen from StableCascade?
- Is the implicit model jointly trained with the HR latent model?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: It will be good to show some visual failure cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: `Q1` **Comparison with the SOTA generative upsampler**
Thank you for the valuable suggestion. We compare our method with SOTA diffusion-based SR methods, namely **SUPIR** and **StableSR**. The visual results in **Figure 1** of the rebuttal PDF demonstrate that our method produces more reasonable structures and richer details. As a T2I task, our objective is to generate a high-quality, high-resolution image aligned with the text input, rather than merely enlarging the synthesized low-resolution (LR) image. Notably, our method excels in refining images that contain artifacts in the LR version, enhancing visual quality, e.g., facial and hand details in 2nd example. Besides, processing a 4096x4096 image takes **12 minutes** for SUPIR and **11 minutes** for StableSR on an A100 GPU, while our model requires only approximately **78 seconds**. We also evaluate the PickScore, FID, patch-based FID,Inception-score (IS), patch-based IS, CLIP-score on resolution of 4096x4096, as shown below, where our method achieves superior results. These findings collectively underscore the effectiveness of our approach. We have added these comparisons in the revised manuscript.
| method | PickScore (vs StableSR) $\uparrow$ |PickScore (vs SUPIR)$\uparrow$| FID$\downarrow$| FID_p$\downarrow$| IS$\uparrow$| IS_p$\uparrow$| CLIP$\uparrow$| Latency(sec.)$\downarrow$|
|---|---|---|---|---|---|---|---|---|
| StableSR | 31.3% | - | 65.27| 48.18|27.55|9.25|32.49|728
| SUPIR | - | 34.7% |64.13|46.98|26.16|9.83|31.28|682
| Ours | **68.7%** | **65.3%**|**63.80**|**44.32**|**27.65**|**10.23**|**33.10**|**78**
`Q2` **Comparison with the baseline using latent of base model**
As suggested, we quantitatively and qualitatively compare our UltraPixel method with the baseline that **directly uses the low-resolution (LR) latent generated by the base model (an SR pipeline)**. We evaluate PickScore, FID, patch-based FID, Inception Score (IS), patch-based IS, and CLIP-score at a resolution of 2048x2048. As shown in the table below, our method consistently outperforms the baseline across all metrics, achieving a **71.1%** win-rate on PickScore. **Figure 3** in the rebuttal PDF illustrates that using LR latents causes the model to overly rely on the LR input, often failing to refine details further. In contrast, our approach, which utilizes multi-level semantics-rich features, provides semantic-rich guidance and allows for additional refinement. Consequently, our method produces higher-quality images with more reasonable structures and richer details. We have included these details in the revised manuscript.
| method | PickScore $\uparrow$| FID$\downarrow$| FID_p$\downarrow$| IS$\uparrow$| IS_p$\uparrow$| CLIP$\uparrow$
|---|---|---|---|---|---|---
| Baseline | 28.9% |63.32|48.19|26.73|11.88|30.38
| Ours | **71.1%** |**62.82**|**43.97**|**29.67**|**14.15**|**33.18**
`Q3` **Visual results of the ablation on LR guidance timesteps**
Thank you for the constructive advice. We include visual results of the ablation study on LR guidance timesteps in **Figure 2** of the rebuttal PDF. The figure illustrates the LR generation process and HR generation processes with different guidances, considering three cases: **(1) ‘t=t^H’** where the timestep of LR guidance is synchronized with the timestep of HR generation, **(2) ‘t=0.5T’** means LR features at the middle timestep, and **(3) ‘t=0.05T’** near the end. The HR generation process establishes the overall semantic plan in the early stage of the sampling process. However, synchronized or middle timestep guidance is too noisy to provide sufficient direction at this stage. In contrast, the final step of LR guidance delivers clear semantic information. With semantics-rich guidance, the HR image generation can produce coherent structures and fine-grained details. We have included this visualization in the revised manuscript.
`Q4` **Ablation on scale-aware normalization**
We have quantitatively evaluated the performance of our model with and without scale-aware normalization (SAN) in **Table 3 of the main paper**. The method labeled ‘INR’ refers to the model without SAN, while ‘INR+SAN’ incorporates SAN. The results indicate that SAN consistently enhances performance across different resolutions. Additionally, we compute feature statistics (mean and variance) for varying resolutions, with and without SAN, as shown in **Figure 5** of the rebuttal PDF. With SAN, the features of different resolutions exhibit similar distributions, while the mean values vary significantly across resolutions without SAN. This demonstrates that SAN enables our model to handle different resolutions stably, resulting in better visual results.
`Q5` **Is the base model frozen from StableCascade**
Yes. As described in lines **170-172**, the base model is frozen.
`Q6` **Is the implicit model jointly trained with the HR latent model**
Yes. As described in lines **170-172**, the implicit model is jointly trained with the HR branch.
---
Rebuttal 2:
Title: Response to rwnV by Authors
Comment: We would like to thank you again for the valuable time you devoted to reviewing our paper. Since the end of discussion period is getting close and we **have not heard back from you yet**, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying them.
Thank you once again for your contribution to our paper's refinement. | null | null | Rebuttal 1:
Rebuttal: **Response to AC and reviewers (with PDF)**
We sincerely appreciate your time and efforts in reviewing our paper. We are glad to find that reviewers recognized the following merits of our work:
- **Innovative and effective solution [rwnV, z6xV, UnLF]**: The proposed UltraPixel introduces a Low-Resolution (LR) guidance feature to reduce the complexity of High-Resolution (HR) image generation. Additionally, it employs an implicit function and scale-aware normalization to assist the network in generating images of varying resolutions. This approach is both innovative and novel.
- **Impressive results [rwnV, z6xV, UnLF]**: UltraPixel generates ultra-high-resolution images with impressive quality and rich details, effectively addressing a major challenge in image synthesis.
- **Clarity and Readability [rwnV, z6xV, UnLF]**: Our paper is well-motivated, clearly articulated, and easy to read.
We also thank all reviewers for their insightful and constructive suggestions, which help further improve our paper. In addition to the pointwise responses below, we summarize the major revision in the rebuttal according to the reviewers’ suggestions
- **Comparative study on advanced T2I and diffusion-based super-resolution methods**: We have incorporated extensive quantitative and qualitative comparisons with leading commercial T2I products, DALLE 3 and MidJourney V6, as well as state-of-the-art SR methods, SUPIR and StableSR. These results further demonstrate UltraPixel's impressive ability to generate ultra-high-resolution images and its superior efficiency.
- **Enhanced visual analysis and method clarification**: We have added visual comparisons of different timesteps for extracting LR guidance, feature distribution across different resolutions, and an in-depth analysis of the proposed method.
Best,
Authors
Pdf: /pdf/58f0255293f59ff63bcede9ccc5cb80c7598b104.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Remix-DiT: Mixing Diffusion Transformers for Multi-Expert Denoising | Accept (poster) | Summary: This paper proposes Remix-DiT, which creates multiple experts by mixing fewer basis diffusion transformers, allowing each expert to specialize in the denoising task for corresponding timestep intervals. It achieves performance improvements by having each expert responsible for a larger number of timestep intervals with fewer total trainable parameters than previous multi-expert methods. Also, the paper analyzes the coefficients of how much each expert uses bases, demonstrating the denoising task similarity for adjacent timesteps, as well as the use of specialized bases for lower timesteps.
Strengths: * The paper is structured well, making it easy to understand and follow.
* The proposed mixing basis strategy is interesting as it achieves better performance with fewer parameters compared to existing multi-expert methods.
* Ablation studies on mixing methods are comprehensive.
Weaknesses: * **Lack of experiments.** The authors have to validate the performance of Remix-DiT by reporting comparisons with previous methodologies on the FFHQ or MS-COCO datasets. It would make the manuscript more solid if Remix-DiT achieves consistent performance improvements on multiple datasets.
* **Lack of comparison.** There are two methods, DTR [1] and Switch-DiT [2], to address the multi-task learning aspect of diffusion training by designing distinct denoising paths for 1000 timesteps in a single model. These are more parameter-efficient methods where they use no additional parameters or 10%, respectively. The authors should analyze them with respect to Remix-DiT.
[1] Park et al., Denoising Task Routing for Diffusion Models, ICLR 2024.
[2] Park et al., Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Is the Exponential Moving Average (EMA) model used to further train a pre-trained diffusion model?
* It would be better that the authors provide an affinity matrix between 20 timestep clusters based on the learned mixing coefficients. I think the affinity matrix could explain the similarity between denoising tasks.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: Lack of experiments. The authors have to validate the performance of Remix-DiT by reporting comparisons with previous methodologies on the FFHQ or MS-COCO datasets. It would make the manuscript more solid if Remix-DiT achieves consistent performance improvements on multiple datasets.**
Thank you for the invaluable advice. We agree with the reviewer that incorporating more datasets is important to validate the robustness of RemixDiT and will add more datasets to polish this work. However, due to the limited rebuttal period, we are not able to present the results on large-scale datasets like COCO. We are currently working on FFHQ following DTR and should be able to update the results in the coming days once they are available.
> **Q2: Lack of comparison. There are two methods, DTR [1] and Switch-DiT [2], to address the multi-task learning aspect of diffusion training by designing distinct denoising paths for 1000 timesteps in a single model. These are more parameter-efficient methods where they use no additional parameters or 10\%, respectively. The authors should analyze them with respect to Remix-DiT.**
Thank you for the highly relevant pointers. We have supplemented the following results to compare RemixDiT to the mentioned baselines. We trained the proposed RemixDiT from scratch for 400K steps on ImageNet, which yielded an FID comparable to DTR. However, we also observed that the FID is higher than that of Switch-DiT-S, since training RemixDiT from scratch is less efficient (as discussed in Line 188 of our paper), because we have four basis models to build. Another reason for this might be non-optimal hyperparameters for from-scratch training. These baseline methods are very insightful and helpful. We will continue to improve this work to make our method more robust to different training configurations and will incorporate these results into the manuscript.
| **Model** | **Train Steps** | **FID** |
|--------------------|-----------------|----------------|
| DiT-S | 400K | 43.88 |
| DTR-S [1] | 400K | 37.43 |
| Switch-DiT-S [2] | 400K | 33.99 |
| RemixDiT-S | 400K | 36.68 |
[1] Park et al., Denoising Task Routing for Diffusion Models, ICLR 2024.
[2] Park et al., Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts, ECCV 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response. It well addressed my concerns. I will increase my rating to weak accept. | Summary: The paper introduces Remix-DiT, a modification to the diffusion transformer architecture that incorporates the multi-expert denoiser framework during both training and inference. Unlike traditional multi-expert methods that train $N$ separate individual experts independently for each time interval, Remix-DiT employs $K$ base models combined with $N$ mixing coefficients to dynamically compute time-specific experts. This approach enhances efficiency and leverages task similarities between adjacent intervals more effectively. Experiments on ImageNet demonstrate that Remix-DiT improves the performance of DiT across various model sizes.
Strengths: - The paper is well-motivated and represents a valuable step towards integrating the multi-expert denoising framework into standard diffusion models.
- The main idea of the paper (using global mixers to compute the final experts) is novel and interesting to me in this context.
- The method is simple and effective, making it more suitable for practical use cases.
- The experiments are well-designed, and the ablations clearly illustrate the impact of various aspects of Remix-DiT.
- The paper is well-written and generally easy to understand.
Weaknesses: - While the authors show the benefits of Remix-DiT on finetuning a pretrained DiT model, it would be interesting to see its effect when training all components from scratch. If the compute budget allows, I suggest that the authors also add this experiment for better insights into what happens if one uses the remixing scheme from the beginning of training (perhaps after a small warmup)
- The performance gain seems to diminish as the size of the base model increases. Hence, a more detailed discussion on this issue is needed for the final version. For example, the performance gain is almost 30% for DiT-S, while it drops to only 15% for DiT-L.
**Minor comments:**
Please fix the following issues in terms of writing in your draft:
- L114 "refer to" -> "refers to"
- L144 -> citation is missing
- L215 -> I assume 100M steps should be 100K steps
- L290 -> it seems that it should be written as N experts because K is the number of base models
- L295 -> "can found" should be "can find"
Please also cite GigaGAN [1] as the mixing part of the paper is related to their method of mixing different convolution kernels during training.
[1] Kang M, Zhu JY, Zhang R, Park J, Shechtman E, Paris S, Park T. Scaling up GANs for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2023 (pp. 10124-10134).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The EDM paper [1] suggests that for denoising, only the middle noise levels are important, while this paper suggests that the noise levels towards 0 are more crucial. Do you have an intuition on the difference between these two conclusions?
2. Is the performance of Remix-DiT more sensitive to the number of sampling steps compared to a normal DiT? In other words, how do the experts perform when using a deterministic sampler with low NFEs (<50)?
3. Can you also visualize some examples generated by DiT and Remix-DiT? While the metrics are valuable, a qualitative evaluation is interesting as well.
[1] Karras T, Aittala M, Aila T, Laine S. Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems. 2022 Dec 6;35:26565-77.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have mentioned this in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: While the authors show the benefits of Remix-DiT on finetuning a pretrained DiT model, it would be interesting to see its effect when training all components from scratch. If the compute budget allows, I suggest that the authors also add this experiment for better insights into what happens if one uses the remixing scheme from the beginning of training (perhaps after a small warmup)**
Thanks for the invaluable suggestion. We use pre-trained models to avoid training basis models independently, since they share some common ability across timesteps. We agree with the reviewer that it is valuable to explore training with our method from scratch. Due to the time limits of this short rebuttal period, we can only provide 400K-step results for from-scratch training. It can be observed that our method still achieves a competitive FID in the from-scratch setting.
| Model | FID @ 400K |
|--------------|------------|
| DiT-S | 43.88 |
| RemixDiT-S | 36.68 |
> **Q2: The performance gain seems to diminish as the size of the base model increases. Hence, a more detailed discussion on this issue is needed for the final version. For example, the performance gain is almost 30\% for DiT-S, while it drops to only 15% for DiT-L.**
Thank you for the insightful comment! The key motivation behind RemixDiT is that the capacity of a single model is insufficient for multi-step denoising tasks. Therefore, with a smaller model like DiT-S, our method can offer more significant benefits compared to relatively larger models like DiT-L. Additionally, training large models can be more challenging. We will include a more detailed discussion about this phenomenon in the revised version, following the reviewer's advice.
> **Q3: Please fix the following issues in terms of writing in your draft.**
We will revise the draft accordingly.
> **Q4: Please also cite GigaGAN [1] as the mixing part of the paper is related to their method of mixing different convolution kernels during training.**
It's indeed a highly related paper from the perspective of techniques. We will cite it properly.
> **Q5: The EDM paper [1] suggests that for denoising, only the middle noise levels are important, while this paper suggests that the noise levels towards 0 are more crucial. Do you have an intuition on the difference between these two conclusions?**
A large coefficient indicates that this slot is more unique and challenging, necessitating greater capacity to handle it. The segmented patterns in Figure 4(a) demonstrate the importance of both early and intermediate steps. Our method allocates more model capacity to the early stages (0-50) and the intermediate stages (150-500). So, the result is, to some extent, aligned with the observation in the EDM paper.
> **Q6: Is the performance of Remix-DiT more sensitive to the number of sampling steps compared to a normal DiT? In other words, how do the experts perform when using a deterministic sampler with low NFEs (<50)?**
In Table 2 of the submission, we present results for 100-step sampling, where our method continues to show positive results compared to the baseline DiT. In response to your question, we further reduced the number of steps to 25. The following table demonstrates that even with this lower number of NFEs, our method, RemixDiT-S, still outperforms the baseline DiT-S.
| Model | FID-10K (Steps=25, cfg=1.5) |
|--------------|-----------------------------|
| DiT-S | 49.82 |
| RemixDiT-S | 44.75 |
> **Q7: Can you also visualize some examples generated by DiT and Remix-DiT? While the metrics are valuable, a qualitative evaluation is interesting as well.**
Thanks for the suggestion. We supplement visualization results within the attached PDF, which compares the proposed RemixDiT-B to a DiT-B. It can be observed that the shape of objects can be improved using the proposed method. This is as expected since we allocate model capacity to the early and intermediate stages of denoising, which mainly contribute to the image contents rather than details. We will incorporate visualization results into the experiments section.
---
Rebuttal 2:
Title: Response to the rebuttal
Comment: I would like to thank the authors for taking the time to answer my questions in detail. Since I believe that the role of time step in diffusion networks has been less explored and my concerns have been resolved by the rebuttal, I would like to increase my score from 6 to 7.
---
Rebuttal Comment 2.1:
Comment: Thank you so much for the positive feedback! We will polish our draft with the above experiments. | Summary: The paper proposes Remix-DiT, a model architecture designed to enhance the capacity of a standard DiT model without significantly increasing inference costs. This is accomplished by training mixing coefficients to adaptively fuse multiple DiT models and developing specialized experts for multi-expert denosing. A key advantage highlighted in this paper is that Remix-DiT achieves better generation quality while maintaining inference speed comparable to that of a standard DiT. Experimental results on ImageNet-256 demonstrate favorable outcomes compared to baseline methods.
Strengths: 1. The visualization results in Figure 4 are very interesting. It seems that the model has a certain preference in allocating the capacity of basis models, with clear segmentation across the timesteps. Additionally, a high coefficient is observed at early timesteps, such as 0-150. Does this imply that those steps are more challenging for the diffusion model to learn?
2. The idea of mixing multiple basis models is clear and easy to implement. It does not require the expensive training of independent experts for different steps.
Weaknesses: 1. Using multiple base models may introduce more training costs. However, in Table 3, the GPU memory usage only slightly increases from 13G to 16G for DiT-B. Can the authors provide more details about the reason? Will Remix-DiT introduce a substantial backward and forward footprint?
2. This method utilizes the pre-trained model as the initialization. This might make the mixed experts always the same after mixing since they are working on the same basis model initially. Will this be a problem?
3. Why does the proposed method outperform naively training independent experts? In this method, the experts are crafted by mixing, which should theoretically be upper bounded by the naïve method mentioned above.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: This paper discusses limitations such as sparse gradients and the training difficulty associated with a large number of experts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: The visualization results in Figure 4 are very interesting. It seems that the model has a certain preference in allocating the capacity of basis models, with clear segmentation across the timesteps. Additionally, a high coefficient is observed at early timesteps, such as 0-150. Does this imply that those steps are more challenging for the diffusion model to learn?**
Thank you for the question. Allocating a high score to a certain slot indicates that this step is distinct from others and might be challenging during training. In the diffusion process, the shape of objects emerges quickly, followed by the incorporation of finer details in the later stages. The algorithm adaptively allocates model capacity to these specific steps, ensuring that critical stages receive the necessary resources for accurate denoising.
> **Q2: This method utilizes the pre-trained model as the initialization. This might make the mixed experts always the same after mixing since they are working on the same basis model initially. Will this be a problem?**
Thanks for the comment; this issue indeed exists at the early training steps. Therefore, we introduce the prior coefficients in Line 188 of the paper to enforce a prior allocation of model capacity. For example, we initialize the coefficients for the first timestep region with [1,0,0,0], which forces the first basis model to learn from the early steps. This trick leads to divergent basis models, which is important for our method.
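The prior-coefficient trick can be sketched as a one-hot initialization that ties each timestep region to a single basis so the bases diverge early in training; the contiguous-block assignment below is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def prior_coefficients(n_experts=20, k_basis=4):
    """One-hot prior over mixing coefficients: each timestep region starts
    tied to a single basis model, e.g. region 0 -> [1, 0, 0, 0].
    Contiguous regions are assumed to share one basis (hypothetical layout)."""
    coeffs = np.zeros((n_experts, k_basis))
    per_basis = n_experts // k_basis  # regions assigned per basis model
    for i in range(n_experts):
        coeffs[i, min(i // per_basis, k_basis - 1)] = 1.0
    return coeffs
```

With 20 experts and 4 bases, regions 0-4 start tied to basis 0, regions 5-9 to basis 1, and so on, before the coefficients become learnable.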
> **Q3: Using multiple base models may introduce more training costs. However, in Table 3, the GPU memory usage only slightly increases from 13G to 16G for DiT-B. Can the authors provide more details about the reason? Will Remix-DiT introduce a substantial backward and forward footprint?**
Thank you for the comment; this issue indeed exists in the early training steps. To address this, we introduce prior coefficients, as mentioned in Line 188 of the paper, to enforce a prior allocation of model capacity. For example, we initialize the coefficients for the first timestep region with [1,0,0,0], which forces the first basis model to learn from the early steps. This approach helps diverge the basis models, which is crucial for the effectiveness of our method.
> **Q4: Why does the proposed method outperform naively training independent experts? In this method, the experts are crafted by mixing, which should theoretically be upper bounded by the naïve method mentioned above.**
Thanks for the invaluable question. Our method is more parameter efficient than training $N$ independent experts. First, our method can support training 20 experts simultaneously by optimizing 4 basis models. Under the same budget, our method can be fully optimized while the naive expert training is still underfitted. In addition, our method is able to adaptively allocate model capacity to different timesteps, which improves the utilization of network parameters. | Summary: To improve the generation quality of diffusion transformers, Remix-DiT proposes to enhance output quality at a lower cost and aims to create N diffusion experts for different denoising timesteps without the need for expensive training of N independent models. Remix-DiT achieves this by employing K basis models (where K < N) and using learnable mixing coefficients to adaptively craft expert models. This approach offers two main advantages: although the total model size increases, the model produced by the mixing operation shares the same architecture as a plain model, maintaining efficiency comparable to a standard diffusion transformer. Additionally, the learnable mixing adaptively allocates model capacity across timesteps, effectively improving generation quality. Experiments on the ImageNet dataset show that Remix-DiT achieves promising results compared to standard diffusion transformers and other multiple-expert methods.
Strengths: Novelty: Model mixers for efficient multi-expert diffusion model training is innovative and unique.
Significance: Addressing the challenge of efficient training of multi-expert diffusion transformers is significant in the field of diffusion models.
Methodology: The proposed algorithm is well-formulated and clearly explained.
Results: Experimental results demonstrate promising improvements over existing methods such as DiT.
Weaknesses: 1. Lack of Visualization Results: The paper does not include any visualization results. Providing visual examples of generated outputs is crucial for qualitatively evaluating the effectiveness of the proposed method.
2. Insufficient Motivation for Multi-Expert Training: The rationale behind adopting a multi-expert training approach is not fully well-motivated, particularly in the context of quantitative comparisons. A more detailed explanation of why multi-expert training is beneficial and how it compares quantitatively to other methods would strengthen the argument. Clarifying the advantages and potential trade-offs in performance and efficiency would provide a more compelling case for this approach.
3. High Training Cost: The training cost associated with the proposed method is substantial. It would be beneficial to provide a thorough analysis of the computational resources, time, and energy required for training compared to other existing methods. Discussing potential ways to mitigate these costs or offering insights into why the increased training cost is justified by the performance gains would add valuable context for evaluating the practicality of the method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Performance Comparison Between Multi-Expert and Single Larger Models: Is it possible for the multi-expert small models to outperform a single, larger model? To fully validate the potential of the multi-expert approach, it is crucial to provide a thorough performance comparison. This should include quantitative metrics and benchmarks that demonstrate the advantages, if any, of using multiple experts over a single larger model in terms of both output quality and computational efficiency.
2. Scalability and Efficiency of Increasing the Number of Experts: If the number of experts is increased for the same basis models, how easily can the system be scaled, and does this lead to more efficient training? It would be important to discuss the scalability of the multi-expert framework, including any potential challenges or limitations in transferring the model to a larger number of experts. Additionally, insights into how the efficiency of training might be affected by increasing the number of experts would be valuable.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weakness and question part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1: Lack of Visualization Results: The paper does not include any visualization results. Providing visual examples of generated outputs is crucial for qualitatively evaluating the effectiveness of the proposed method.**
Thanks for the suggestion. We supplement visualization results in the attached PDF, which compares the proposed RemixDiT-B to a DiT-B. It can be observed that the shape of objects can be improved using the proposed method. This is as expected since RemixDiT allocates more model capacity to the early and intermediate stages of denoising, as illustrated in Figure 4 of the paper, which mainly contribute to the global shape rather than detailed patterns. We will incorporate visualization results into the experiments section and polish the draft following the advice.
> **Q2: Insufficient Motivation for Multi-Expert Training: The rationale behind adopting a multi-expert training approach is not fully well-motivated, particularly in the context of quantitative comparisons. A more detailed explanation of why multi-expert training is beneficial and how it compares quantitatively to other methods would strengthen the argument. Clarifying the advantages and potential trade-offs in performance and efficiency would provide a more compelling case for this approach.**
The key motivation for the multi-expert approach lies in the limitations of a single model's capacity for the 1000-step denoising tasks. Figure 4 (c) provides a quantitative perspective on this issue by illustrating the performance of experts in both their specialized and non-specialized regions. It can be observed that all experts achieve low losses within their designated time slots, but in the "non-professional" timesteps, an expert may yield relatively large losses. This disparity is due to the limited model capacity when faced with the extensive number of timesteps. To address this, we propose utilizing multi-expert strategies to enhance diffusion models without significantly increasing the number of parameters. We will incorporate the reviewer's advice to clarify the motivation in the revised version.
Based on this, the main advantage of RemixDiT lies in its ability to provide a flexible trade-off between the number of experts and the total training costs through the mixing of a few basis parameters. This allows training $N$ experts for different regions by optimizing $K (K<N)$ basis models.
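The mixing described above can be sketched as a weighted sum over the K basis parameter tensors, using a softmax over each region's learnable coefficients; the function name, array shapes, and softmax normalization below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mix_expert(basis_params, coeffs, region):
    """Craft one expert's weights for a timestep region by mixing K bases.

    basis_params: list of K weight arrays sharing one shape.
    coeffs: (N, K) learnable mixing-coefficient matrix (N regions, K bases).
    """
    w = np.exp(coeffs[region] - coeffs[region].max())
    w = w / w.sum()  # softmax over the K basis models
    # Weighted sum of basis tensors: same architecture as a plain model,
    # so inference cost matches a single DiT once the expert is formed.
    return sum(wk * pk for wk, pk in zip(w, basis_params))
```

Because the mixed expert has the same shape as one basis model, N experts can be materialized from only K sets of trainable parameters.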
> **Q3: High Training Cost: The training cost associated with the proposed method is substantial. It would be beneficial to provide a thorough analysis of the computational resources, time, and energy required for training compared to other existing methods. Discussing potential ways to mitigate these costs or offering insights into why the increased training cost is justified by the performance gains would add valuable context for evaluating the practicality of the method.**
The training cost of RemixDiT can be found in Table 3, which measures the training throughput and memory usage. To obtain 20 experts for different regions, our method incurs 5\% to 20\% additional cost compared to a standard DiT. Notably, the training cost does not increase linearly with the number of experts or basis models, as the additional computation primarily arises from the lightweight mixing operation, which introduces little extra forward or backward computation during training. We will revise lines 298-303 of the paper to include a more comprehensive discussion on training and inference efficiency.
> **Q4: Performance Comparison Between Multi-Expert and Single Larger Models: Is it possible for the multi-expert small models to outperform a single, larger model? To fully validate the potential of the multi-expert approach, it is crucial to provide a thorough performance comparison. This should include quantitative metrics and benchmarks that demonstrate the advantages, if any, of using multiple experts over a single larger model in terms of both output quality and computational efficiency.**
Based on our results in the submission, it is challenging for RemixDiT-B with four basis models (FID=9.02) to outperform a DiT-L that is four times larger (FID=3.73), with around 1M training steps. This limitation arises because the number of effective parameters in RemixDiT-B remains equivalent to that of a single DiT-B. Currently, our proposed RemixDiT can ensure better performance under a comparable inference cost. We appreciate the suggestion to explore the upper bound of this method and are working on training DiT baselines of different sizes.
> **Q5: Scalability and Efficiency of Increasing the Number of Experts: If the number of experts is increased for the same basis models, how easily can the system be scaled, and does this lead to more efficient training? It would be important to discuss the scalability of the multi-expert framework, including any potential challenges or limitations in transferring the model to a larger number of experts. Additionally, insights into how the efficiency of training might be affected by increasing the number of experts would be valuable.**
We agree that scalability is important for this method. As shown in Table 2, our key observation is that increasing the number of experts is not always beneficial. There are three reasons for this: 1) As illustrated in Figure 4 (a,b), some denoising steps share similar mixing coefficients, which limits the maximal number of effective experts in our method; 2) The total capacity is also bounded by the number of basis models; 3) As discussed in the limitations (Line 310), optimizing a large number of experts can lead to sparse gradients in the mixing coefficients, making optimization more difficult as the number of experts increases.
Therefore, we chose to use 20 experts in our experiments, which we found to be a good balance between performance and optimization difficulty. | Rebuttal 1:
Rebuttal: We would like to extend our sincere gratitude to all the reviewers for their time, effort, and insightful feedback on our submission. In response to reviewers' questions, we included some visualization results in the attached PDF file to compare the RemixDiT-B to a standard DiT-B, where our method is able to improve the object shape by allocating more model capacity to early and intermediate timesteps.
Pdf: /pdf/fe8942b13a7a7de0f04af883b02b86333dea24d3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
PaDeLLM-NER: Parallel Decoding in Large Language Models for Named Entity Recognition | Accept (poster) | Summary: This paper introduces a novel approach to reduce generation latency in Named Entity Recognition (NER) using Large Language Models (LLMs). The primary issue addressed is the high latency caused by the sequential decoding process in LLMs, which significantly lengthens the sequence by autoregressively generating all labels and mentions for NER. To tackle this, the authors propose Parallel Decoding in LLM for NER (PaDeLLM-NER), which integrates into existing generative model frameworks without requiring additional modules or architectural changes. PaDeLLM-NER enables simultaneous decoding of all mentions, effectively reducing generation latency. Experimental results show that PaDeLLM-NER can improve the inference speed than the traditional autoregressive approach.
Strengths: - The parallel decoding strategy is well-designed and experimental results prove the effectiveness.
- The authors provide comprehensive experiments with different settings and further analysis.
- The paper is easy to follow.
Weaknesses: - The proposed method cannot improve the inference speed in scenarios where only one type of entity is predicted.
- Since the method focuses on the inference efficiency of LLMs-based NER, it is better to report both inference speed and performance compared to zero-shot (Table 3) and supervised (Tables 4 and 5) methods. Notably, Table 3 only reports performance without considering the efficiency of different LLMs. Furthermore, why not report the performance of AutoReg_aug and AutoReg_struct in Table 3?
- For better understanding of the training resource usage when compared with other methods, it is better to report the base language models used (SOTA methods) in Tables 4 and 5.
- The writing of this paper could be further improved. For example, Line 219, “As per Ning et al...” appears to be a typo; the meanings of the underline (second performance) and bold (best performance) are not provided; and there is no explanation for why “*” indicates that results are not directly comparable in the Table 5 caption.
- Comparing with fixed few-shot in-context learning of LLMs may also be worth considering, as caching the fixed prompt could improve the inference speed of LLMs.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weakness.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors provide one limitation section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s novelty and writing. We are grateful for the opportunity to address the concerns raised.
**W1-speedup is weak when only one entity type**: Even if there is only one entity type, PaDeLLM can still improve inference speed through Step 2 when that type has many mentions, because we predict all label-mention pairs in parallel. As shown in the table, each string decoded by PaDeLLM is still shorter than the AR method's output (`<mention x>` is a template we add; it is not generated by the model).
| | Prediction |
|----------------|--------------------------------------------|
| Autoregressive | str1: "LOC: England, India, China, the US" |
| PaDeLLM | str1: "<mention 1>England" |
| | str2: "<mention 2>India" |
| | str3: "<mention 3>China" |
| | str4: "<mention 4>the US" |
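The two-step decoding above can be sketched as follows; `predict_count` and `decode_mention` are hypothetical callables standing in for the model's mention-count and mention generation, not PaDeLLM's actual API, and the thread pool simply stands in for batched parallel generation.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_ner_decode(text, labels, predict_count, decode_mention):
    """Sketch of PaDeLLM-style two-step decoding.
    Step 1: predict how many mentions each label has.
    Step 2: decode every (label, mention-index) pair independently, so the
    pairs can run as one batch instead of one long sequential generation."""
    pairs = [(label, i)
             for label in labels
             for i in range(predict_count(text, label))]
    with ThreadPoolExecutor() as pool:
        mentions = pool.map(lambda p: decode_mention(text, p[0], p[1]), pairs)
    return [(label, i, m) for (label, i), m in zip(pairs, mentions)]
```

Each decoded string covers a single mention, which is why the per-sequence length stays short even when one label has many mentions.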
**W2-reporting other method latency**:
- Our primary comparison for inference efficiency is between two typical AR methods, AutoReg_aug and AutoReg_struct. Other methods, such as BINDER, are encoder-only models and are not comparable. Methods like GoLLIE and DeepStruct use larger LLMs and different prompt engineering, making direct inference speed comparisons unfair.
- We did not report the results for AutoReg_aug and AutoReg_struct in Table 3 because these methods are not immediately suitable for the zero-shot setting. Our preliminary experiments show that they have significantly higher latency compared to PaDeLLM, and their F-score is quite limited. We will include these experimental results in the updated version.
| Latency | AI | Literature | Music | Politics | Science | Avg |
|----------|--------|------------|--------|----------|---------|--------|
| PaDeLLM | 398.37 | 357.45 | 352.85 | 366.76 | 375.02 | 370.09 |
| Auto_Aug | 1529.95| 2096.08 | 2545.20| 2364.87 | 2334.05 | 2174.03|
| F-score | AI | Literature | Music | Politics | Science | Avg |
|----------|------|------------|-------|----------|---------|-------|
| PaDeLLM | 60.7 | 66.1 | 67.6 | 68.1 | 64.4 | 65.38 |
| Auto_Aug | 0.19 | 0.15 | 0.94 | 0.13 | 0.21 | 0.324 |
**W3-report base model**: We have added a column specifying the base language model used by each method and will update these tables in the next version. Except for GoLLIE and DeepStruct, all other methods are encoder-only, which results in a smaller number of parameters. GoLLIE and DeepStruct use backbones of 34B and 10B parameters, respectively, which are larger than PaDeLLM based on LLaMA-2 with 7B parameters.
| Method | Base Language Model |
|---------------|---------------------|
| BINDER | BERT-base 110M |
| Gollie | Code-llama 34B |
| DeepStruct | GLM10B |
| AutoRegAug | LLaMA-2-7B |
| AutoRegStruct | LLaMA-2-7B |
| PaDeLLM-NER | LLaMA-2-7B |
| Method | Base Language Model |
|---------------|-------------------------|
| NEZHA-BC | NEZHA-base 110M |
| SSCNN | not reported |
| W2NER | Transformer-based 110M |
| AutoRegAug | Baichuan2-7B |
| AutoRegStruct | Baichuan2-7B |
| PaDeLLM-NER | Baichuan2-7B |
**W4-writing improvement**
- Thank you for pointing out the typos. We will correct them in the updated version.
- The meanings of underline and bold formatting will be clarified in the updated version.
- The reason for the asterisk (*) is explained in the caption of Table 9. We will provide a more explicit explanation in the updated version.
**W5-considering ICL**: This is a good suggestion. We have acknowledged it in lines 287-289 and will rephrase the wording to make it clearer. We will also list the caching mechanism as a potential area for future exploration (line 310).
---
Rebuttal 2:
Title: To Authors
Comment: Thanks for your responses.
> W1-speedup is weak when only one entity type...
This may need to be acknowledged in your paper, since having only one mention also occurs in many scenarios.
> W3-report base model...
Thanks for adding this column. This is very important for readers to understand the cost of reaching the corresponding performance.
Though the decoding techniques could improve efficiency, I still believe that using caching techniques and long-context ICL, i.e., a fixed 1000 shots, may reach better performance with good efficiency. But I acknowledge that these techniques could also be used with the decoding methods proposed in this paper.
Since most of my concerns have been addressed, I will increase my soundness score to 3 and my overall assessment score to 6.
---
Rebuttal Comment 2.1:
Title: Thanks
Comment: Thank you for the thoughtful review. We will address the speedup weakness and other necessary modifications in the revised version. | Summary: They create an NER system where an LLM first outputs the number of mentions there are of a given type (for all possible types). Then all mentions can be generated in parallel.
This results in faster inference times as each generation is short, and they can be done in parallel.
Strengths: They compare to several different baselines on multiple NER datasets in multiple different settings.
Their method is much faster than others.
Weaknesses: Their reformulation of NER as predicting (label, mention) pairs removes a critical component of classical NER, the actual alignment of the mention to the tokens. Polysemous words are often mentions in some contexts and not in others, and it is often important to know which occurrence was the actual mention, especially if the output is used for things like editing downstream.
The deduplication strategy is very aggressive and removes the possibility that some surface text is a mention of multiple types in a single sentence. For example, in "It is England vs. Italy on this sunny day in England", England is both a place (LOC) and a sports team (ORG); this would get filtered by their setup.
The prose's definition of "prediction quality [...] that is on par" is rather loose, with their model being 6 points behind on average for zero-shot (Table 3) and behind by a point or two on most supervised datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: How do you expect this to scale to NER datasets like OntoNotes, where there are 20+ different mention categories? Similarly, what about long documents that could have 10-30+ mentions of a given type?
Did you see inconsistencies in the model outputs? For example, a model that outputs that there is `1` person but then generates `<mention 2> ${person name}`?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions. We are grateful for the opportunity to address the concerns raised.
**W1-loss of alignment of mentions to tokens**: We acknowledge that PaDeLLM loses token position information, and we recognize that incorporating token locations (such as start and end indices) might enhance performance. We will consider this in future work, as suggested in [1].
[1] A Unified Generative Framework for Various NER Subtasks (Yan et al., ACL-IJCNLP 2021)
**W2-deduplication is aggressive**: Yes, we acknowledge that the de-duplication strategy is aggressive. In our datasets, instances where a mention appears under multiple labels are rare, as evidenced by our statistics. This is why we propose the de-duplication mechanism. However, in real-world applications where a mention may be allowed to appear under multiple labels, we can choose not to use the de-duplication mechanism. This decision is a trade-off.
| Dataset | Count | Percentage |
|---------|-------|------------|
| ACE05 | 1 | 0.00034 |
| ConLL03 | 1 | 0.00017 |
| Genia | 0 | 0 |
| ecom | 0 | 0 |
| msra | 1 | 0.00013 |
| weibo | 0 | 0 |
| youku | 2 | 0.0012 |
| resume | 0 | 0 |
**W3-improper wording**: We will tone down the wording in the updated version.
**Q1-large number of entity types or mentions**:
- Theoretically, this method is effective regardless of the number of mention categories, because each label-mention pair is handled by a different string. If there are more than 20 different mention categories, they can be processed in parallel using over 20 different sequences to predict the number of mentions. Once that is complete, Step 2 proceeds to predict the label-mention pairs.
- In our in-house NER tasks, where there are often numerous mentions of a given type, we found that the method can manage tens of mentions without significantly losing accuracy. However, in practice, we typically limit the input to 512 tokens to prevent potential issues.
**Q2-generating inconsistency**: During inference, `<mention x>` is added to the prompt; it is not generated by the model. For instance, if the model outputs that there is `1` person at Step 1, we include `<mention 1>` in the prompt. If the model indicates there are `2` persons at Step 1, we duplicate the input string and add `<mention 1>` and `<mention 2>` to the two strings, respectively. This approach effectively avoids inconsistencies. See the Figure 2 caption (the dashed box vs. the solid box) and lines 139-144. | Summary: This paper presents PaDeLLM-NER, a novel approach for accelerating Named Entity Recognition (NER) inference in Large Language Models (LLMs) through parallel decoding. It reformulates the NER task to enable parallel generation of label-mention pairs, significantly reducing inference latency, via a two-step inference process involving mention count prediction and parallel mention generation.
Extensive experiments demonstrated significant speedups (1.76x to 10.22x faster) compared to autoregressive approaches while maintaining or improving prediction quality across multiple datasets and two languages.
Strengths: The parallel decoding strategy for NER is innovative and addresses a significant bottleneck in LLM inference speed, which is important in some speed-sensitive applications. The authors conduct extensive experiments across multiple datasets, languages, and settings (zero-shot and supervised), proving the method's effectiveness. The reported speedups are substantial and could have a meaningful, practical impact on NER applications. The method is compatible with existing LLM architectures and can be integrated with other acceleration techniques. The methodology is well-explained with helpful diagrams and examples.
Weaknesses: Some details and corner cases are not well explained. For example, I didn't see the token location information in Figure 2. If the input has multiple and same mentions (e.g., "Donald Trump owns the Trump Organization" ), how does this framework distinguish with the same mentions? (e.g. Trump in the above example)
In addition, it is not clear how the de-duplicate model processes the partially duplicated mentions. For example, in the above case, the "Trump organization" was recognized as ORG, and what if the person module predicted the "Trump" in the "Trump organization" as a person? Will the de-duplicate model filter this case?
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions, writing and novelty. We are grateful for the opportunity to address the concerns raised.
**W1-token location information**:
- NER traditionally relies on sequence labeling, where each token is assigned a class based on its position, which requires token location information. However, in seq2seq approaches such as PaDeLLM, Autoregressive_struct, and Autoregressive_aug, token location information is not mandatory.
- For the sentence "Donald Trump owns the Trump Organization," PaDeLLM can differentiate and label the entities as "PER: Trump" and "ORG: Trump" if the de-duplication mechanism is not employed, but the token location information is lost. We recognize that including token locations might improve performance and will consider this in future work, as suggested in [1].
[1] A Unified Generative Framework for Various NER Subtasks (Yan et al., ACL-IJCNLP 2021)
**W2-drawback of de-duplication strategy**: Yes, the de-duplication mechanism will filter this case. In our datasets, instances where a mention appears under multiple labels are rare, as evidenced by our statistics. This is why we propose the de-duplication mechanism. However, in real-world applications where a mention may be allowed to appear under multiple labels, we can choose not to use the de-duplication strategy. This decision is a trade-off.
| Dataset | Count | Ratio|
|---------|-------|------------|
| ACE05 | 1 | 0.00034 |
| ConLL03 | 1 | 0.00017 |
| Genia | 0 | 0 |
| ecom | 0 | 0 |
| msra | 1 | 0.00013 |
| weibo | 0 | 0 |
| youku | 2 | 0.0012 |
| resume | 0 | 0 | | Summary: This paper proposes an interesting extension of the parallel text generation paradigm, where the authors tackle the NER task and propose to generate the labels independently. For each label prediction, the proposed method first predicts the number of mentions and then predicts the exact entity. The results show that the proposed model performs reasonably well, while achieving faster inference.
Strengths: 1. The proposed method is a pioneer work to accelerate LLM generation following the parallel generation paradigm.
2. We do observe significant speed-ups empirically, which suggests the proposed method may be of value in real-world applications.
Weaknesses: 1. The importance of the two-step prediction for each entity is not justified. I feel there should be a baseline such that the multiple mentions can be predicted together in an autoregressive fashion. For example, I can predict "entity type: LOC Italy English" as a whole.
2. Fundamentally, parallel predictions should be weaker than autoregressive predictions due to the drop in dependency capturing. However, we observe from Table 4 that AR models are noticeably worse than the parallel approach. Since these results contradict common wisdom, there needs more effort to justify them. For example, the authors may need to reveal the full training/testing configurations of both the AR and parallel models, and there could be some more detailed error analysis to show how AR models are making more mistakes than the parallel approach.
3. The proposed approach may face difficulty when a word is used multiple times with different types. For example, in "Washington lives in Washington," the proposed approach may predict "LOC" and "PER" for both occurrences of "Washington"; however, it cannot align them because the parallel approach is order-agnostic among entities.
4. The proposed method needs finetuning to adjust the LLMs, which can be difficult when it comes to very large LLMs.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Is the duplication issue, as mentioned in Figure 2, very common? Do you have statistics for this?
2. Do you know if the testing datasets are in the training data of the LLM?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors make earnest efforts to discuss the limitations. In addition, the previously mentioned Weakness 3 is another potential limitation. The authors may include further discussion in this regard.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thoughtful feedback and appreciate the recognition of our paper’s contributions. We are grateful for the opportunity to address the concerns raised.
**W1-Justification of the importance of two-step prediction**:
We conducted an additional experiment using one-step prediction, where all mentions of the same label are predicted in a single sequence. As shown in the results below, the inference speed of this approach falls between the two-step prediction and the purely AR model, as expected. However, its prediction quality is lower than that of the two-step prediction. In other words, two-step prediction outperforms one-step prediction in both inference speed and prediction quality. We will include these new results in the updated version as strong justification for the two-step prediction.
| Latency | ace05 | conll03 | genia |
|----------------|-------|---------|-------|
| PaDeLLM_multi | **255.53**| **229.74** | **316.90**|
| OneStep_multi | 386.93| 272.22 | 513.63|
| f-score | ace05 | conll03 | genia |
|----------------|-------|---------|-------|
| PaDeLLM_multi | **85.02** | **92.52** | **77.66** |
| OneStep_multi | 80.98 | 91.36 | 76.27 |
**W2-Reason that AR model is worse**: We acknowledge that AR methods, which model dependencies between tokens, offer certain advantages. However, they also encounter challenges in specific scenarios:
- Nested NER: For example, in the ACE dataset, the term "human" is nested within the "Human rights group." All AR methods struggle to identify this nested structure. In contrast, PaDeLLM, which decomposes from dependencies, effectively addresses this issue.
||||
|-------|---------------------------------------------------------------------------------------------------------------------------------|---------|
| **Input** | Human rights group Amnesty International said Friday 's verdict `` represents another step in the further deterioration in the human rights situation in the country . | f-score |
| **GT** | "PER": ["Human", "human"], "ORG": ["Human rights group", "Human rights group Amnesty International"], "GPE": ["the country"] | 1.0 |
| **PaDeLLM** | "GPE": ["the country"], "PER": ["human"], "ORG": ["Human rights group", "Human rights group Amnesty International"] | 0.88 |
| **AutoStruct** | ((ORG:Human rights group), (ORG:Human rights group Amnesty International), (GPE:the country), (VEH:null), (FAC:null), (LOC:null), (PER:null), (WEA:null)) | 0.74 |
| **AutoAug** | [ [ Human rights group /ORG] Amnesty International /ORG] said Friday 's verdict `` represents another step in the further deterioration in the human rights situation in [ the country /GPE] . | 0.74 |
- Parse Errors in AR Output: Additionally, AR models can encounter format errors as shown below.
- Overlong Example (MSRA): ((组织:邓小平同志治丧委员会),(名称:江泽民),...,(名称
- This is due to the prediction being cut off mid-output.
- Error Format (Resume): ((公司:环三)+...+(名称:null)+(籍贯:null)+(学历:null)+(国籍:null))
- Incorrect use of "+" causes parsing issues.
- Invalid Null Entries: ((组织:亚协会长),...,(组织:庆祝东方新年),(组织:null),(null))
- Entries with "(null)" lacking an entity type lead to parsing failures.
- Training/test configuration: for AR models and PaDeLLM, we use the same training and testing configuration as reported in Appendix D (i.e., line 581-586)
- We will release all resources, including inference results, for the community to replicate and analyze. We will also add all of this analysis to the updated version.
**W3-same mention under multiple labels**:
- We acknowledge the method loses token location information, which will be explored in future work.
- We have identified instances where the same mention is assigned different entity types in the ground truth, as shown in the table. Although such cases are rare, addressing this issue is important for real-world applications. In these situations, we can disable the de-duplication, allowing 'Washington' to be predicted as both 'LOC' and 'PER' simultaneously.
| Dataset | Count | Ratio|
|---------|-------|------------|
| ACE05 | 1 | 0.00034 |
| ConLL03 | 1 | 0.00017 |
| Genia | 0 | 0 |
| ecom | 0 | 0 |
| msra | 1 | 0.00013 |
| weibo | 0 | 0 |
| youku | 2 | 0.0012 |
| resume | 0 | 0 |
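The statistic in the table above (mentions assigned to more than one entity type) can be computed with a short sketch; the per-example format, a dict mapping each label to its list of mention strings, is an assumption made here for illustration:

```python
from collections import defaultdict

def multi_label_mentions(examples):
    """Count mentions that appear under more than one entity type.

    `examples` is a list of dicts mapping label -> list of mention strings
    (a hypothetical ground-truth format for illustration).
    """
    count = 0
    for ex in examples:
        labels_per_mention = defaultdict(set)
        for label, mentions in ex.items():
            for m in mentions:
                labels_per_mention[m].add(label)
        # A mention is counted once if it carries two or more labels.
        count += sum(1 for labs in labels_per_mention.values() if len(labs) > 1)
    return count

example = {"LOC": ["England", "Italy", "England"], "ORG": ["England"]}
print(multi_label_mentions([example]))  # 1: "England" is both LOC and ORG
```

These are exactly the cases the de-duplication mechanism would filter, which is why their rarity matters for the trade-off discussed above.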
**W4-need fine-tune**:
- First, we tested the zero-shot generalizability of PaDeLLM, as detailed in lines 249-260, demonstrating its efficacy without further SFT to handle out-of-domain entity types, as appreciated by Reviewer XzjF.
- Additionally, we can employ resource-friendly fine-tuning techniques such as LoRA, P-tuning, and others to achieve the SFT.
- For larger LLMs, it is feasible to use training-free methods like in-context learning to enable the LLM to make the two-step prediction, as highlighted in line 287-289. This approach will be explored in future work. We will add this discussion to the updated version.
**Q1**: Such situations are very rare; statistics over the model predictions are shown below:
| Dataset | Count | Ratio|
|---------|-------|------------|
| ACE05 | 22 | 0.0074 |
| ConLL03 | 10 | 0.0017|
| Genia | 18 | 0.0034 |
| ecom | 2 | 0.0012|
| msra | 5 | 0.00089|
| weibo | 0 | 0 |
| youku | 3 | 0.0019 |
| resume | 0 | 0 |
**Q2**: This is discussed in Section 6. The primary focus of our experiments is the comparison of our proposed method with the baseline methods (Auto_Struct and Auto_Aug). Given that these methods employ the same LLM as the base model, data contamination is unlikely to significantly affect the comparison.
We kindly request your acknowledgement of our reply and welcome further discussion of your questions and concerns. We would greatly appreciate it if you would consider improving the **rating**. We look forward to your response.
---
Rebuttal 2:
Comment: Thanks for the responses. I have some follow-up comments/questions.
**W1**
Which decoding strategies were used in the "one-step" baselines? How do you handle the order, i.e., "LOC Italy England" vs "LOC English Italy."
**W2**
Can you elaborate more on "All AR methods struggle to identify this nested structure? In contrast, PaDeLLM, which decomposes from dependencies?" Why is dropping dependency free from errors for the nested structure? (Additionally, can you provide a more formal definition for this nested structure?)
**W3**
Thanks for providing the statistics. However, I feel this is a fundamental limitation (in terms of capability) of the proposed method and should be addressed explicitly in the paper (at least in the Limitation section). Having post hoc refinements could be a solution, but they may also be applied to other methods, which would not be a fair comparison.
**W4**
Just to confirm: the "zero-shot generalizability of PaDeLLM" refers to the zero-shot datasets, right? The PaDeLLM model itself needs to be trained. Do you think you can make PaDeLLM a prompt-only LLM-based method?
---
Rebuttal 3:
Title: Response for comment
Comment: Thanks for the further discussion and we are grateful for the opportunity to address the concerns.
**W1-one-step pred** :
- In the "one-step baselines", all mentions under the same label are predicted in one single sequence. If a label has no related mention in the example, the result is empty. Please refer to the exact prediction example below. The latency of the slowest sequence is reported as the overall latency of one example.
| Entity | Text | NER Result |
|--------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------|
| ORG | \<entity>ORG\<text>2004-12-20T15:37:00 Microscopic microcap Everlast, mainly a maker of boxing equipment, has soared over the last several days thanks to a licensing deal with Jacques Moret allowing Moret to buy out their women's apparel license for $$ 30 million, on top of a $$ 12.5 million payment now. | NER result: ["Microscopic microcap Everlast", "a maker of boxing equipment", "their"] |
| PER | \<entity>PER\<text>2004-12-20T15:37:00 ... million payment now. | NER result: ["Jacques Moret", "Moret", "their", "their women"] |
| GPE | \<entity>GPE\<text>2004-12-20T15:37:00 ... million payment now. | NER result: [] |
| LOC | \<entity>LOC\<text>2004-12-20T15:37:00 ... million payment now. | NER result: [] |
- The order of mentions is kept the same as in the ground truth, aligning with the data provided by the respective dataset.
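The latency accounting described above (the slowest of the parallel label sequences determines the example-level latency) can be sketched as follows; the numbers are hypothetical:

```python
def example_latency(per_sequence_latency_ms):
    # All label sequences are decoded in parallel, so the example-level
    # latency is that of the slowest sequence, not their sum.
    return max(per_sequence_latency_ms)

# Hypothetical per-label sequence latencies (ms), e.g. ORG, PER, GPE, LOC:
print(example_latency([398.4, 120.0, 57.3, 55.1]))  # 398.4
```

Under this accounting, adding more labels (or more mentions per label) only slows inference to the extent that the single slowest sequence grows.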
**W2-nested NER elaborate**:
- Definition of nested structure:
In the table (ground truth), the phrase 'highly diverged [ Drosophila [ homeodomain /DNA] /DNA]' shows a nested structure where 'homeodomain' (DNA) is nested within 'Drosophila homeodomain' (another DNA entity).
- Elaboration
AR methods struggle with such hierarchies due to their reliance on linear dependencies. The table's results (Auto_Aug, Auto_struct) highlight that AR models must both **understand the input and effectively model long-range dependencies to produce an output format that can be parsed correctly.**
In contrast, PaDeLLM sidesteps nested dependencies by breaking the task down in its two-step prediction, allowing it to maintain a shorter output format and making it better suited for handling nested structures than AR methods.
We will add this discussion to the Appendix in the updated version.
| | |
| --- | --- |
| **Input** | When the homeodomain from HB24 was compared to known mammalian and Drosophila homeodomains it was found to be only moderately conserved, but when it was compared to a highly diverged Drosophila homeodomain, H2.0, it was found to be 80% identical. |
| **Ground truth** | When the [ homeodomain /DNA] from [ HB24 /DNA] was compared to known mammalian and Drosophila homeodomains it was found to be only moderately conserved, but when it was compared to a highly diverged [ Drosophila [ homeodomain /DNA] /DNA] , H2.0, it was found to be 80% identical. |
| **Auto_Aug** | When the [ homeodomain /DNA] from [ HB24/DNA] was compared to known mammalian and Drosophila homeodomains it was found to be only moderately conserved, but when it was compared to a highly diverged [ Drosophila [ homeodomain /DNA] , [ H2.0, /DNA] it was found to be 80% identical." |
| **Auto_struct** | ((DNA:homeodomain),(DNA:HB24),(DNA:homeodomains),(DNA:Drosophila homeodomain),(DNA:H2.0),(RNA:null),(cell_line:null),(protein:null),(cell_type:null)) |
| **PaDeLLM** | str1: entity type:\nDNA\n\n<num>3\n<mention 1>homeodomain |
| | str2: entity type:\nDNA\n\n<num>3\n<mention 2>HB24 |
| | str3: entity type:\nDNA\n\n<num>3\n<mention 3>Drosophila homeodomain |
**W3**: We agree that this is a fundamental limitation of our proposed method. We will explicitly discuss it in the Limitations section. Additionally, we will include the experimental results without de-duplication in Table 4 (currently only presented in Table 6) to ensure a fair comparison for readers in the updated version.
**W4**: Yes, the "zero-shot generalizability of PaDeLLM" refers to the zero-shot datasets; the PaDeLLM model itself needs to be trained. We believe that the strong instruction-following capabilities of LLMs (especially much larger ones) make it feasible to implement PaDeLLM through in-context learning or other prompt-engineering techniques without additional training.
If this addresses your concern, we would greatly appreciate it if you would consider improving the rating.
---
Rebuttal Comment 3.1:
Comment: Thanks for clarifying. I'll raise my overall recommendation to 6, and I hope our discussion can help the revision.
---
Reply to Comment 3.1.1:
Title: Thanks
Comment: Thank you for the thoughtful discussion and for raising the rating. We will continue to improve the revision by taking the feedback into account. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
$\epsilon$-Softmax: Approximating One-Hot Vectors for Mitigating Label Noise | Accept (poster) | Summary: This paper proposes $\epsilon$-softmax to deal with label noise. $\epsilon$-softmax modifies the outputs of the softmax layer to approximate one-hot vectors with a controllable error $\epsilon$. Both theoretical and empirical studies show the effectiveness of the proposed method.
Strengths: 1. The writing of this paper is good.
2. The robustness of the proposed loss is theoretically proved.
Weaknesses: 1. I think the motivation or the underlying reason for the effectiveness needs further explanation.
2. In experiment, the advantage of the proposed method over the competitors is probably not statistically significant.
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. The term "symmetric condition" in abstract needs further explanation.
2. In the implementation steps in Line 61, is $\mathbf{p}(\cdot)$ the same as ${p}(\cdot)$? Or what is the relationship between these two notations? I think the mathematical notations should be strictly used and defined.
3. The robustness of the proposed $\epsilon$-softmax loss is theoretically justified. However, I'm curious to know the insight, or the underlying reason for its robustness. The authors wrote that "The distinctive attribute of $\epsilon$-softmax lies in its guarantee to possess a controllable approximation error $\epsilon$ to one-hot vectors, thus achieving perfect constraint for the hypothesis class." But, I cannot figure out why controlling approximation error $\epsilon$ to one-hot vectors and achieving perfect constraint for the hypothesis class are useful for handling label noise? What is the direct reason? It would be better if the authors can provide some intuitive explanations.
4. From the experiments, I note that the performance is a bit sensitive to different selection of m. Therefore, is it possible to give some guidance in choosing m for practical use? Besides, will better performance be obtained if we add the $\epsilon$-softmax loss functions with different m?
5. From the experimental results in Table 2, 4, I can see that the improvement of the proposed method over other methods is quite marginal. I guess if we do statistical significance test, maybe such improvement will not be statistically significant.
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: I think this work will not have negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your valuable comments. We would like to offer the following responses to your concerns.
**1. Response to Weakness 1 and Question 3**
Thanks for your kind comment.
Previous work indicates that, for a fixed vector $\mathbf{v}$ and $\forall L \in \mathcal{L}$, we have
$$\sum _{k=1}^K L(\mathbf{u}, k) = C, \forall \mathbf{u} \in \mathcal{P} _{\mathbf{v}},$$
where $k \in [K]$ denotes the label corresponding to each class, $C$ is a constant. This lemma suggests that when the network output $\mathbf{u}$ is restricted to a permutation set $\mathcal{P}_{\mathbf{v}}$ of a fixed vector $\mathbf{v}$, any loss function will satisfy the symmetric condition.
In this paper, we consider the fixed vector as a one-hot vector $\mathbf{e} _1$. The key challenge is that directly mapping outputs to a permutation set $\mathcal{P} _{\mathbf{v}}$ is a non-differentiable operation. Based on these motivations, we propose a simple yet effective scheme, $\epsilon$-softmax, to approximate one-hot vectors $\mathcal{P} _{\mathbf{e}_1}$. Thus, any loss function using $\epsilon$-softmax can mitigate label noise with a theoretical guarantee. We will include this detailed explanation in the revised version.
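The lemma above can be checked numerically: for any loss that depends only on the predicted probability of the labeled class, summing the loss over all labels is permutation-invariant, so it is constant over the permutation set $\mathcal{P}_{\mathbf{v}}$. A minimal sketch (the cross-entropy and MAE forms below are standard textbook ones, not necessarily the paper's exact implementations):

```python
import itertools
import math

def ce(u, k, eps=1e-12):
    # Cross-entropy against class k: depends only on the k-th probability.
    return -math.log(u[k] + eps)

def mae(u, k):
    # MAE against the one-hot target for class k reduces to 2 * (1 - u_k).
    return 2.0 * (1.0 - u[k])

# An approximate one-hot vector (epsilon-softmax style): close to e_1,
# with the small approximation error spread over the other classes.
err, K = 0.01, 3
v = [1.0 - err] + [err / (K - 1)] * (K - 1)

# Over the permutation set P_v, sum_k L(u, k) is the same constant C for
# every u -- i.e., the symmetric condition holds for both losses.
for loss in (ce, mae):
    sums = [sum(loss(list(u), k) for k in range(K))
            for u in itertools.permutations(v)]
    assert max(sums) - min(sums) < 1e-9
```

The invariance follows because each per-label sum ranges over the same multiset of probability values, regardless of how the vector is permuted.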
**2. Response to Weakness 2 and Question 5**
Thanks for your feedback. We would like to emphasize that our experiments include dozens of baselines and follow the same settings as previous works. On difficult noise and real-world noise, our method achieves remarkable results. These comprehensive evaluations aim to provide a robust assessment of our method.
For Table 2, as suggested by another reviewer, we searched the hyperparameters more carefully and obtained better performance. More details are available in the global rebuttal (see response to Q2). The new results are provided as follows, where "S" is symmetric noise, "AS" is asymmetric noise, and * denotes using a better learning rate and weight decay.
|CIFAR-10|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|NCE+AGCE|91.13±0.11|89.00±0.29|85.91±0.15|80.36±0.36|49.98±4.81|89.90±0.09|88.36±0.11|85.73±0.12|79.28±0.37|
|CE$_\epsilon$+MAE*|**91.94±0.18**|**89.76±0.08**|**86.77±0.16**|**80.47±0.87**|**58.96±0.70**|**91.06±0.13**|**89.58±0.29**|**87.78±0.23**|**82.47±0.56**|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|NCE+AGCE|68.78±0.24|65.30±0.46|59.95±0.15|47.63±0.94|24.13±0.06|67.15±0.40|64.21±0.17|56.18±0.24|44.15±0.08|
|CE$_\epsilon$+MAE*|**75.62±0.23**|**70.96±0.20**|**64.22±0.50**|**50.69±0.25**|**26.30±0.46**|**72.86±0.06**|**66.70±0.20**|**58.47±0.12**|**48.51±0.36**|
As can be seen, we significantly outperform the SOTA approaches.
For Table 4, we used a t-test to determine whether our CE$_\epsilon$+MAE (Semi) statistically significantly exceeds the SOTA approaches. The p-values are as follows:
|CIFAR-N|Aggregate|Random 1|Random 2|Random 3|Worst|Noisy100|
|---|---|---|---|---|---|---|
|Divide-Mix|0.018|0.000|0.000|0.000|0.000|0.006|
|SOP+|0.001|0.000|0.000|0.000|0.000|0.000|
|Proto-semi|0.000|0.012|0.002|0.001|0.000|0.000|
As can be seen, in all cases our approach statistically significantly exceeds the previous SOTA approaches (p-value $<$ 0.05). These additional analyses will be included in the revised manuscript.
**2. Response to Question 1**
A loss function $L$ satisfies the symmetric condition is defined as (Eq. 1.1 in our Introduction):
$$
\sum_{k=1}^K L(f(\mathbf{x}), k) = C, \forall \mathbf{x} \in \mathcal{X}, \forall f \in \mathcal{H},
$$
where $k \in [K]$ denotes the label corresponding to each class, $C$ is a constant, and $\mathcal{H}$ is the hypothesis class. We will include this explanation in the abstract of the revised version to clarify this detail.
**3. Response to Question 2**
Thanks for your comment. $\mathbf{p(\cdot|x)}$ denotes the prediction probability **vector** for the sample $\mathbf{x}$. $p_k$ denotes the prediction probability **value** for the $k$-th class, i.e., the $k$-th element of the vector $\mathbf{p(\cdot|x)}$. We have followed the common notation conventions in academic papers, i.e., vectors are represented in boldface while scalars are represented in italics.
**4. Response to Question 4**
Thanks for your suggestions.
--- Guidance on Choosing $m$
An effective guideline is to use a larger $m$ for symmetric noise and a smaller $m$ for asymmetric noise. A larger $m$ imposes tighter symmetric constraints, which enhances robustness. However, if $m$ is too large, optimization under asymmetric noise can become challenging. For instance, on CIFAR-10 under 0.8 symmetric noise, the symmetric MAE and CE achieve 45.36\% and 18.95\% accuracy, respectively; conversely, under 0.4 asymmetric noise, MAE becomes difficult to optimize, with MAE and CE achieving 55.88\% and 75.28\% accuracy, respectively.
--- More experiments on different $m$
We conducted further experiments on CIFAR-10 with various values of $m$. The results are summarized below, where "S" is symmetric noise and "AS" is asymmetric noise. We use m=1e5 for symmetric noise and m=1e3 for asymmetric noise in the paper.
|CIFAR-10|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|m=1e2|91.24±0.13|**89.33±0.09**|86.06±0.06|78.92±0.53|52.28±0.86|90.33±0.10|8.44±0.41 |85.16±0.18|78.36±0.16|
|m=1e3|**91.49±0.22**|89.08±0.36|85.99±0.02|79.29±0.29|54.24±1.24|90.30±0.11|**88.62±0.18**|**85.56±0.12**|**78.91±0.25**|
|m=1e4|91.02±0.21|89.32±0.40|**86.14±0.40**|**79.72±0.06**|57.41±1.32|90.20±0.04|88.49±0.39|82.73±4.26|71.65±0.66|
|m=1e5|91.40±0.12|89.29±0.10|85.93±0.19|79.52±0.14|**58.96±0.70**|**90.44±0.10**|80.53±0.91|68.27±3.30|56.52±0.11|
As shown above, using different values of $m$ can sometimes yield better performance, while the differences are not substantial.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for providing the additional results. I will increase my score. However, I still think the math should be rigorously defined and *explained* if the paper is finally accepted.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Thanks for your feedback! We are pleased to address your concerns. A strict definition and detailed explanation of the mathematical symbols are indeed necessary, and we will rigorously define them in the final version. We truly appreciate the pivotal role that reviewers like yourself play in enhancing the quality of our work. | Summary: This submission proposes an enhanced softmax layer for label-noise learning, namely $\epsilon$-softmax. By incorporating the well-known $\epsilon$-relaxation, the proposed $\epsilon$-softmax can regularize the outputs of the model and avoid fitting label-noise samples. This simple and plug-and-play method theoretically bounds the output logits to be an approximate one-hot vector. Extensive experiments demonstrate the effectiveness of the proposed method.
Strengths: - The proposed method is simple, plug-and-play, and effective. Unlike other label-noise robust losses, the proposed method not only works well by itself, but can also be integrated with other label-noise robust methods such as DivideMix. To the best of my knowledge, this could be one of the first works to endow such a property.
- The theoretical analysis is comprehensive and makes sense. The theoretical results suggest that the proposed method possesses Top-K error consistency and label-noise robustness.
- The analysis of the most closely related previous works is reasonable. The basic idea of balancing label-noise robustness and learning effectiveness has been researched for a long time, e.g., GCE. This submission clearly presents the connection between the proposed method and other symmetric losses.
- The empirical results are convincing and sufficient. This submission presents comparison results against many label-noise robust losses, and the proposed method achieves the best performance in most cases. Additionally, this submission provides experimental results demonstrating the plug-and-play property of the proposed method on sample-selection-based and loss-correction-based methods.
Weaknesses: - An ablation study on gradient clipping should be conducted, and providing experimental results with different backbones would be better.
- It is exhaustive and labor-expensive to find the optimal $m$ for diverse datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why choose MAE as the loss to incorporate with ${CE_\epsilon}$? Following GCE and SCE, MAE is indeed a practical and well-evaluated choice. Are there any alternatives?
- What is the connection or relationship between logit adjustment and $\epsilon$-softmax?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No obvious limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns.
**1. Response to Weakness 1**
Thanks for your kind comment.
--- About ablation study on gradient clipping
We fully followed the experimental setup of previous work [1], and we used gradient clipping because it was used in [1]. This PyTorch utility stabilizes training without affecting the method comparison. We report the results without gradient clipping as follows, where "S" is symmetric noise and "AS" is asymmetric noise.
|CIFAR-10 w/o|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|CE|90.63±0.18|74.67±0.25|57.77±0.69|38.84±0.66|19.41±0.49|87.17±0.23|83.58±0.56|79.69±0.07|74.76±0.11|
|NCE+AGCE|90.85±0.24|88.91±0.11|85.96±0.14|**79.97±0.12**|43.85±3.98 |90.13±0.09|88.40±0.16|85.27±0.36|**79.79±0.37**|
|LDR-KL|91.07±0.11|89.15±0.12|84.98±0.52|74.77±0.31|32.20±0.78|90.22±0.27|88.50±0.47|85.43±0.17|77.86±0.07|
|CE$_\epsilon$+MAE|91.20±0.12|**89.22±0.25**|**86.10±0.21**|79.75±0.24|**57.43±0.48**|**90.34±0.04**|**88.59±0.29**|**85.84±0.22**|78.72±1.09|
|**CIFAR-100 w/o**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|CE|69.11±1.11|53.63±0.47|35.23±2.35|20.04±1.43|7.25±0.35|58.46±3.88|56.45±2.19|47.72±2.78|39.46±0.55|
|NCE+AGCE|69.08±0.34|65.36±0.20|58.65±0.63|46.19±1.03|23.89±0.79|67.13±0.21|63.82±0.23|55.90±0.87|43.68±0.45|
|LDR-KL|71.67±0.30|56.54±0.52|40.17±0.87|22.18±0.44|7.02±0.28|65.42±0.15|58.20±0.24|50.24±0.28|41.23±0.09|
|CE$_\epsilon$+MAE|70.20±0.78|**65.90±0.18**|**59.09±0.66**|**47.03±1.04**|**26.47±0.62**|**67.48±0.62**|**64.19±0.31**|**58.12±1.11**|**48.24±0.36**|
As can be seen, whether gradient clipping is used or not does not affect the comparison of methods, and our method still shows excellent performance.
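For reference, the gradient clipping used here is global-norm clipping (what `torch.nn.utils.clip_grad_norm_` does in PyTorch); a NumPy sketch of its behavior:

```python
import numpy as np

def clip_grad_norm(grads, max_norm, eps=1e-6):
    """Scale a list of gradient arrays so that their global L2 norm is at
    most max_norm -- a NumPy sketch of torch.nn.utils.clip_grad_norm_."""
    total_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    scale = min(1.0, max_norm / (total_norm + eps))
    return [g * scale for g in grads], total_norm

# Toy gradients with global norm sqrt(36 + 64) = 10.
grads = [np.full((2, 2), 3.0), np.full(4, 4.0)]
clipped, norm = clip_grad_norm(grads, max_norm=5.0)
clipped_norm = np.sqrt(sum((g ** 2).sum() for g in clipped))
print(norm, clipped_norm)  # 10.0 and ~5.0
```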
--- About different backbone
We conduct more experiments using VGG-13.
The training settings are listed as follows: batch size 256, learning rate 0.1 with cosine annealing, weight decay 5e-4, and 200 epochs. The results with 3 random trials are as follows:
|CIFAR-10|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|CE|93.91±0.32|83.21±0.08 |65.96±0.53 |43.30±0.60|21.46±0.82 |91.53±0.08|87.75±0.14|83.11±0.38|77.18±0.31|
|GCE|93.90±0.09|91.15±0.19|82.57±0.21 |57.75±0.65|24.67±0.94|92.28±0.10|87.87±0.32|81.87±0.30|76.99±0.51 |
|NCE+AGCE|90.42±0.09|88.24±0.33|83.61±0.86|50.63±5.62|11.68±2.92|89.98±0.14|88.03±0.45|86.46±0.53|69.02±0.45|
|LDR-KL|92.51±0.13|90.53±0.19|87.91±0.03|79.24±0.13|39.49±0.58|91.64±0.17|89.86±0.35|87.50±0.40|67.63±0.24|
|CE$\epsilon$+MAE|93.61±0.05|**91.62±0.24**|**88.38±0.39**|**80.31±0.47**|**48.05±1.50**|**92.67±0.38**|**90.13±0.19**|**87.99±0.07**|**81.69±0.48**|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|CE|71.71±0.18|59.96±0.47|47.26±0.40|30.50±0.14|9.68±0.53|66.62±0.59|60.47±0.33|53.35±1.09|43.26±0.24|
|GCE|70.59±0.30|66.79±0.08|58.90±0.08|41.06±0.42|10.89±0.09|68.99±0.16|59.85±0.17|48.44±0.97|38.68±0.94|
|NCE+AGCE|71.40±0.41|67.25±0.26|59.80±0.23|30.82±2.42|4.90±1.07|**69.85±0.01**|65.63±0.38|53.27±0.14|41.83±0.80|
|LDR-KL|68.68±0.21|54.83±0.33|40.84±0.59|24.71±0.34|8.11±0.07|62.73±0.62|55.65±0.19|48.03±0.53|40.01±0.18|
|CE$\epsilon$+MAE|71.07±0.21|**67.53±0.13**|**60.76±0.24**|**44.49±0.19**|**23.29±0.68**|69.28±0.09|**65.84±0.26**|**57.80±0.10**|**47.09±0.80**|
As can be seen, our approach still has outstanding performance.
**2. Response to Weakness 2**
Thanks for your kind comment. Our method, like most robust loss functions, requires different parameters for different datasets. Many approaches have been proposed to find optimal parameters quickly and economically. One efficient approach is to randomly select a subset of the training set as a smaller training set, for instance, 1/5 of the original set. We can then train on this smaller set to search for parameters efficiently.
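The subset-based search strategy described above can be sketched as follows (the 1/5 fraction and dataset size are illustrative):

```python
import numpy as np

def subsample_for_search(n_train, frac=0.2, seed=0):
    """Randomly select a fraction of training indices to form a smaller
    training set for cheap hyperparameter search."""
    rng = np.random.default_rng(seed)
    n_sub = int(n_train * frac)
    return rng.choice(n_train, size=n_sub, replace=False)

idx = subsample_for_search(50000)  # e.g. a CIFAR-scale training set
print(len(idx))  # 10000
```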
**3. Response to Question 1**
Thanks for your insightful question. We chose MAE because it is the most classic symmetric loss; there is no other particular reason. We recommend combining CE$_\epsilon$ with a symmetric loss rather than a non-symmetric loss, because combining with a symmetric loss does not increase the excess risk bound, as demonstrated by Lemma 2.
**4. Response to Question 2**
Thanks for your insightful question. Logit adjustment [2] modifies logits to encourage a large margin between rare and dominant labels for long-tail learning. However, logit adjustment requires prior knowledge of label distributions, which is not available for noisy labels. Therefore, it is not suitable for learning with noisy labels.
$\epsilon$-Softmax mitigates overfitting to noisy labels by adjusting the probabilities. We find that $\epsilon$-softmax is also effective for long-tail learning, where we can use larger $m$ for dominant labels. Specifically, we set $m_k$ for the k-th class as $m \cdot p_k^{label}$, where $p_k^{label}$ is the frequency of the k-th class labels among all labels. We use ResNet-32 for CIFAR-10-Lt and CIFAR-100-LT, following the setting in [2] with long-tail imbalance ratio 100. The average class accuracies with 3 random trials are as follows:
|Method|CIFAR-10-LT|CIFAR-100-LT|
|---|---|---|
|CE|70.21±0.34|39.18±0.27|
|CE$_\epsilon$ (m=0.1)|71.75±0.46|40.08±0.25|
|CE$_\epsilon$ (m=0.5)|**72.53±0.69**|40.32±0.06|
|CE$_\epsilon$ (m=1)|72.50±0.40|**40.42±0.26**|
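The per-class scheme $m_k = m \cdot p_k^{label}$ described above can be computed from label frequencies as follows (a sketch with hypothetical variable names):

```python
import numpy as np

def per_class_m(labels, base_m):
    """m_k = base_m * p_k^label, where p_k^label is the empirical frequency
    of class k among the (possibly long-tailed) training labels."""
    counts = np.bincount(labels)
    return base_m * counts / counts.sum()

labels = np.array([0] * 90 + [1] * 9 + [2] * 1)  # toy long-tailed labels
print(per_class_m(labels, base_m=1.0))
```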
Thanks again for your kind comment, which has inspired us to explore the application of $\epsilon$-softmax in other areas. We are pleased to see the flexibility of our plug-and-play method.
[1] Asymmetric loss functions for noise-tolerant learning: Theory and applications, TPAMI, 2023.
[2] Long-tail learning via logit adjustment, ICLR, 2021.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my questions and concerns with additional results. I will keep my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback!
Comment: Dear Reviewer MvPF
Thanks for your feedback! We are pleased to address your concerns and greatly appreciate your reviews, which play a crucial role in improving our work.
Best regards,
The authors | Summary: This manuscript proposes a novel method to approximate the symmetric condition of the loss function, which is necessary for robustness to label noise. Specifically, the proposed method, named \\( \\epsilon \\)-softmax, can adjust the model output to approximate one-hot vector. However, the proposed method alone suffers from underfitting, so the authors combined it with MAE to achieve better performance. The authors evaluated the proposed method on datasets with different noise types and rates.
Strengths: 1. This manuscript proposes a novel and simple method to approximate the symmetric condition of loss function.
2. The theoretical analysis focuses not only on robustness to label noise but also on the top-k consistency of the loss function.
3. The proposed method was evaluated on various noise types and rates, including class-dependent noise and real-world noise.
4. It is good to see the authors compare their proposed method with temperature-dependent softmax combined with MAE. The experimental results demonstrate the superiority of their proposed method compared to temperature-dependent softmax.
Weaknesses: 1. The theoretical discussion with temperature-dependent softmax is missing. As the authors mentioned in L42 to L53, there are other output restriction-based methods in the literature, such as temperature-dependent softmax. Although the authors claim that these methods “lack predictability, fail to achieve a quantitative approximation to one-hot vectors, and exhibit limited effectiveness,” there is no detailed discussion on why the proposed \\( \\epsilon \\)-softmax has superior properties.
2. A direct comparison with sparse regularization [1] is missing. Sparse regularization utilizes temperature- dependent softmax, which this manuscript has already compared, to approximate one-hot vector output. However, sparse regularization also employs an additional regularization term, \\( \\ell_p \\)-norm \\( \\| p(\\cdot | x) \\|^p_p \\) to enhance performance, and this regularization term is equivalent to MAE only if \\( p = 1 \\). It’s necessary to highlight the advantages of the proposed method compared to this highly relevant approach.
3. There is no ablation study on \\( \\alpha \\) and \\( \\beta \\). As the authors mentioned in L218, \\( \\epsilon \\)-softmax alone suffers from a loss in fitting ability, and they combined it with MAE to balance the robustness and effective learning. However, without the relevant ablation study, it’s unclear how this “trade-off” is achieved.
4. The theoretical discussions and experiments regarding instance-dependent label noise are overlooked. In recent years, the instance-dependent label noise has attracted increasing attention [2,3,4]. Experimenting the proposed method on instance-dependent label noise can provide a better understanding of how the proposed method performs with different types of label noise. I encourage the authors to include related discussion in the revised manuscript.
[1] Learning with noisy labels via sparse regularization, ICCV, 2021.
[2] Part-dependent Label Noise: Towards Instance-dependent Label Noise, NeurIPS, 2020.
[3] Learning with Instance-Dependent Label Noise: A Sample Sieve Approach, ICLR, 2021.
[4] Instance-Dependent Label-Noise Learning With Manifold-Regularized Transition Matrix Estimation, CVPR, 2022.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is \\( \\epsilon \\)-softmax + MAE still All-\\( k \\) calibrated and All-\\( k \\) consistency?
2. Can the proposed method perform better compared to sparse regularization?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have acknowledged the limitations and societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns.
**1. Response to Weakness 1**
Thanks for your insightful comment.
In the following, we give a theoretical discussion of temperature-dependent softmax.
For a model with temperature-dependent softmax, i.e., $f(\mathbf{x})= \tau$-softmax$ \circ h(\mathbf x)$, we have:
$$\min _{\mathbf u \in \mathcal{P} _{\mathbf e_1}} \\|f(\mathbf x)-\mathbf u \\|_2=\sqrt{1-2p _{t}+\sum _{k=1}^K p_k^2} = \sqrt{1-p _{t}-\sum _{k=1}^K p_k(p_t-p_k)} \le \sqrt{1-p_t}\le \sqrt{1-1/K},$$
where $\mathbf p(\cdot|\mathbf x) = f(\mathbf{x})$ and $t=\arg\max _{k\in[K]}p _k$. The equality holds when all entries of $h(\mathbf x)$ are equal: in that case, no matter what value the temperature $\tau$ takes, we have $p_k = 1/K, \forall k$.
For $\epsilon$-softmax, we have $\min_{\mathbf u\in\mathcal{P}_{\mathbf e_1}}\\|f(\mathbf x)-\mathbf u\\|_2\le \epsilon = \tfrac{\sqrt{1 - 1/K}}{m+1}$ (Lemma 1). Our $\epsilon$-softmax makes every output approximate a one-hot vector with a smaller error $\epsilon$. Thus, a smaller excess risk bound can be derived.
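Lemma 1's bound can be checked numerically. The mixing form below, $f(\mathbf x) = (\mathrm{softmax}(h(\mathbf x)) + m\,\mathbf e_t)/(m+1)$ with $t$ the argmax, is an assumed instantiation that is consistent with $\epsilon = \sqrt{1-1/K}/(m+1)$; it is a sketch, not necessarily the paper's exact definition:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def eps_softmax(z, m):
    # Assumed form: mix the softmax output with the one-hot vector of its
    # argmax, p_eps = (p + m * e_t) / (m + 1).
    p = softmax(z)
    e_t = np.eye(len(p))[p.argmax()]
    return (p + m * e_t) / (m + 1.0)

rng = np.random.default_rng(0)
K, m = 10, 100.0
eps = np.sqrt(1 - 1 / K) / (m + 1)  # Lemma 1's epsilon

worst = 0.0
for _ in range(1000):
    p_eps = eps_softmax(rng.normal(size=K), m)
    dist = min(np.linalg.norm(p_eps - np.eye(K)[k]) for k in range(K))
    worst = max(worst, dist)

print(worst <= eps)  # the distance to the nearest one-hot vector stays below eps
```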
**2. Response to Weakness 2 and Question 2**
Thanks for your kind suggestion.
Sparse regularization (SR) is a regularization method on the output probabilities, independent of the labels and distinct from a loss function; we therefore focus on comparisons with robust loss functions in this paper.
Following the suggestion, we compared our method with SR. The results for 0.8 symmetric noise and real-world noise are provided below.
|Method|CIFAR-10|CIFAR-100|WebVision|
|---|---|---|---|
|CE+SR|51.13±0.51|17.35±0.13|69.12|
|CE$_\epsilon$+MAE|**58.96±0.70**|**26.30±0.46**|**71.32**|
As can be seen, our approach significantly outperforms SR in difficult and real-world situations. In addition, SR requires dynamic adjustment of hyperparameters at each epoch, making it difficult to train.
**3. Response to Weakness 3**
Thanks for your kind suggestion.
--- About ablation study on $\alpha$ and $\beta$.
We offer the ablation experiments. For simplicity, we fix $\alpha$ and adjust $\beta$; $\beta=5$ for CIFAR-10 and $\beta=1$ for CIFAR-100 are used in the paper. "S" is symmetric noise, and "AS" is asymmetric noise.
|CIFAR-10|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|$\beta=1$|90.19±0.13|87.89±0.25|84.71±0.09|76.90±0.30|44.26±0.78|89.20±0.17|86.85±0.17|82.74±0.32|75.75±0.42|
|$\beta=5$|**91.40±0.12**|**89.29±0.10**|85.93±0.19|**79.52±0.14**|**58.96±0.70**|**90.30±0.11**|**88.62±0.18**|**85.56±0.12**|**78.91±0.25**|
|$\beta=10$|91.31±0.21|89.08±0.19|**86.06±0.08**|77.78±3.32|43.00±3.86|90.14±0.11|88.58±0.39|83.42±4.61|72.87±0.55|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|$\beta=0.5$|**71.01±0.05**|64.31±0.32|52.07±0.76|43.32±0.76|15.42±0.88|65.50±0.84|62.22±0.24|53.27±0.31|41.48±0.32|
|$\beta=1$|70.83±0.18 |**65.45±0.31**|**59.20±0.42**|**48.15±0.79**|**26.30±0.46**|**67.58±0.04**|**64.52±0.18**|**58.47±0.12**|**48.51±0.36**|
|$\beta=5$|67.87±0.88|60.05±0.84|51.19±1.62|26.93±1.80|10.30±2.83|63.22±0.46|51.67±0.66|40.34±5.45|25.69±0.77|
--- About the better trade-off
The better trade-off means that we achieve better performance on both fitting ability and robustness compared to previous works.
Previous works like GCE and SCE increase fitting ability but reduce robustness due to the CE term. In contrast, our combination retains robustness, as demonstrated by Lemma 2, while also improving fitting ability. To verify this, we present comparison results on CIFAR-10 with symmetric noise:
|CIFAR-10|Clean|0.2|0.4|0.6|0.8|
|:---:|:---:|:---:|:---:|:---:|:---:|
|GCE|89.42±0.21|86.87±0.06|82.24±0.25|68.43±0.26|25.82±1.03|
|SCE|91.30±0.08|87.58±0.05|79.47±0.48|59.14±0.07|25.88±0.49|
|CE$_\epsilon$+MAE|**91.40±0.12**|**89.29±0.10**|**85.93±0.19**|**79.52±0.14**|**58.96±0.70**|
As can be seen, CE$_\epsilon$+MAE achieves better results on both the clean and noisy cases.
**4. Response to Weakness 4**
Thanks for your comment. The results for instance-dependent noise are available in the global rebuttal (please see response to Q1).
Furthermore, we provide the excess risk bound for instance-dependent noise:
$$\mathcal{R} _L(f _\eta^*)\le 2\delta + \frac{2c\delta}{a},$$
where $c = \mathbb{E} _\mathcal D\left(1-\eta _{\mathbf{x}}\right)$, $a=\min _{\mathbf x,k}(1-\eta _y-\eta _{\mathbf x,k})$, $f^* _\eta$ and $f^*$ denote the global minimum of $\mathcal R_L^\eta(f)$ and $\mathcal R_L(f)$, respectively.
The proof is similar to that for asymmetric noise. We will add the proof in the revised version, as there is not enough space.
**5. Response to Question 1**
Thanks for your kind comment. The answer is yes.
We rename $\alpha, \beta$ for the CE$_\epsilon$+MAE loss as $1-\lambda$ and $\lambda$ for better presentation, where $0 < 1 - \lambda \le 1$.
For $L=$ CE$_\epsilon$+MAE, label $\mathbf q$, and $f(\mathbf x)=\epsilon$-softmax$ \circ h(\mathbf x)$, we have
$L = \sum _{k=1}^K q _k\left(-(1-\lambda)\log f(\mathbf x) _k + \lambda(1-f(\mathbf x) _k)\right).$ By the method of Lagrange multipliers, we have
$$\min _{f(\mathbf x)} \max _{\alpha, \beta _k \mathcal \geq 0} \sum _{k=1}^K q _k \left( -(1-\lambda) \log f(\mathbf x) _k + \lambda (1-f(\mathbf x) _k) \right) + \alpha \left( \sum _{k=1}^K f(\mathbf x) _k - 1 \right) - \sum _{k=1}^K \beta_k f(\mathbf x)_k$$
Considering the stationarity condition for $f(\mathbf x)$, we have $\frac{1}{f(\mathbf x)_k^*} + \frac{\lambda}{1-\lambda} = \frac{\alpha - \beta_k}{q_k(1-\lambda)}$ with $0<f(\mathbf x)_k <1$. By complementary slackness, we get $\beta_k^* =0$ and $\alpha >0$. Hence, we have $\left( \frac{1}{f(\mathbf x)_k^*} + \frac{\lambda}{1-\lambda} \right) \propto \frac{1}{q_k}$, and consequently $f(\mathbf x)^*$ is rank consistent with $\mathbf q$, i.e., $L$ is All-$k$ calibrated. Since $L$ is non-negative, $L$ is also All-$k$ consistent.
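As a numerical sanity check of the derivation above (a sketch, not code from the paper), one can solve the stationarity condition, rearranged as $f(\mathbf x)_k^* = (1-\lambda)q_k/(\alpha - \lambda q_k)$, by bisection on $\alpha$ so that the entries sum to one, and verify rank consistency with $\mathbf q$:

```python
import numpy as np

def minimizer(q, lam):
    """Solve f_k = (1 - lam) * q_k / (alpha - lam * q_k) with alpha chosen
    by bisection so that sum_k f_k = 1 (the sum is decreasing in alpha)."""
    lo, hi = lam * q.max() + 1e-9, 1e6   # keep every f_k positive
    for _ in range(200):
        alpha = (lo + hi) / 2
        f = (1 - lam) * q / (alpha - lam * q)
        if f.sum() > 1:
            lo = alpha
        else:
            hi = alpha
    return f

q = np.array([0.5, 0.3, 0.15, 0.05])  # a soft label
f = minimizer(q, lam=0.4)

# The minimizer sums to one and preserves the ranking of q (All-k calibration).
print(np.isclose(f.sum(), 1.0), np.array_equal(np.argsort(f), np.argsort(q)))
```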
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. The authors have thoroughly addressed my questions and concerns. I will maintain my original rating.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer NkMU
Thanks for your feedback! We are pleased to address your concerns and greatly appreciate your reviews, which play a crucial role in improving our work.
Best regards,
The authors | Summary: The paper introduces “ϵ-softmax,” a method to adjust softmax outputs for better approximation to one-hot vectors, thereby mitigating the impact of label noise in classification tasks. The approach modifies the softmax layer outputs to include a controllable error term ϵ, aiming to improve noise robustness without extensive alteration to the network architecture. The authors provide theoretical backing for the effectiveness of ϵ-softmax in achieving noise-tolerant learning across various loss functions. Extensive experiments with both synthetic and real-world noisy datasets are conducted to validate the claims.
Strengths: 1.ϵ-softmax is presented as a plug-and-play module compatible with any existing classifier that uses a softmax layer, enhancing its practical utility.
2.The paper proves that ϵ-softmax can achieve a controlled approximation to one-hot vectors, which is significant for learning with noisy labels.
3.The methodology is backed by extensive experimental results showing its superiority over existing methods in handling label noise, with detailed results across multiple datasets and noise configurations.
Weaknesses: This paper should pay attention to the axis labels of its figures. In Figure 1, the x-label is Epoch and y-label is Test Accuracy. In Figures 2 and 3, the axis labels are missing.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1.This paper seems to focus on classification tasks. Does ε-Softmax also work well for regression tasks?
2.Is ε-Softmax computationally efficient in terms of training time compared with baseline methods?
3.This paper shows the excellent performance of ε-Softmax for label noise. Does ε-Softmax work for input (features) noise as well?
4.I assume that in this paper, for the label noise models, both training and testing labels are noisy. I am curious about the performance when the training labels are clean and the testing labels are noisy.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns.
**1. Response to Weakness**
Thanks for your kind comment. For Figures 2 and 3, we extract the high-dimensional features of the test set at the second-to-last fully connected layer, then project them into 2D embeddings using t-SNE. The x-label and y-label represent the first and second dimensions of the 2D embeddings, respectively. We will add the x-label and y-label in Figures 2 and 3 in the revised version.
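The visualization pipeline described above can be sketched with scikit-learn's t-SNE (random features stand in here for the real penultimate-layer activations):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 512)).astype(np.float32)  # stand-in features

# Project the high-dimensional features to 2-D embeddings; the two plot
# axes are simply the two embedding dimensions.
emb = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(emb.shape)  # (100, 2)
```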
**2. Response to Question 1**
Thanks for your kind comment. For regression tasks, the goal is to predict a continuous numerical output rather than a categorical probability. Therefore, the softmax function and our $\epsilon$-softmax are not suitable for regression tasks.
**3. Response to Question 2**
Thanks for your insightful comment. We evaluate the training time of different loss functions on CIFAR100 using ResNet34, and record the average training time of an epoch on NVIDIA RTX 4090. The results are summarized in the table below.
| Loss | Train time (s) |
|:--------:|:--------------:|
| CE | 10.17|
| GCE | 10.24|
| SCE | 10.29|
| NCE+MAE |10.39 |
| NCE+AGCE|10.46 |
| LDR-KL |10.38 |
|CE$_\epsilon$|10.26|
|CE$_\epsilon$+MAE|10.38|
As can be seen, the training time required for all methods is almost the same.
**4. Response to Question 3**
Thanks for your insightful comment. We use random masking, Gaussian blur and solarisation to synthesize symmetric feature noise. The experimental results with 3 random trials are as follows:
| CIFAR-10 | 0.2 | 0.4 | 0.6 | 0.8 |
|---|---|---|---|---|
| CE | 90.76±0.21 | 89.42±0.30 | 85.86±0.29 | 77.80±0.49 |
| GCE | 88.78±0.25 | 86.52±0.05 | 82.12±0.36 | 71.37±0.29 |
| NCE+RCE | 89.45±0.30 | 85.93±0.08 | 79.11±0.39 | 61.06±1.09 |
| NCE+AGCE | 89.26±0.03 | 85.22±0.10 | 77.53±0.51 | 57.76±1.61 |
| LDR-KL | 90.17±0.10 | 87.50±0.20 | 82.62±0.19 | 71.22±0.30 |
|CE$_\epsilon$+MAE|**91.41±0.29**|**90.27±0.15**|**87.30±0.04**|**80.03±0.23**|
| **CIFAR-100**|**0.2** | **0.4** | **0.6** | **0.8** |
| CE | 69.82±0.36 | 66.29±0.17 | 63.47±0.39 | 57.69±1.11 |
| GCE | 67.49±0.79 | 65.82±0.19 | 59.70±0.54 | 48.35±0.86 |
| NCE+RCE | 62.63±0.54 | 52.96±0.40 | 38.42±0.21 | 23.73±1.01 |
| NCE+AGCE | 63.94±0.40 | 55.66±0.50 | 43.13±0.41 | 26.68±0.36 |
| LDR-KL |**70.47±0.32**|**67.83±0.67**| 63.61±0.34 | 57.93±0.27 |
|CE$_\epsilon$+MAE|70.12±0.27 | 67.11±0.20 |**64.06±0.56**|**58.34±0.56**|
We believe that the feature noise primarily tests the fitting ability of the loss function, and we can see that our CE$_\epsilon$+MAE also performs very well for feature noise.
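A hedged sketch of the feature-noise synthesis described above (Gaussian blur is omitted to stay NumPy-only; the masking ratio and solarization threshold are assumptions):

```python
import numpy as np

def add_feature_noise(images, rate, rng):
    """With probability `rate`, corrupt each image by random masking or
    solarization -- a sketch of symmetric feature noise."""
    out = images.copy()
    for img in out:                           # each img is a view into out
        if rng.random() < rate:
            if rng.random() < 0.5:
                mask = rng.random(img.shape) < 0.3   # random masking
                img[mask] = 0
            else:
                high = img > 128                     # solarization
                img[high] = 255 - img[high]
    return out

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(8, 32, 32))
noisy = add_feature_noise(imgs, rate=0.4, rng=rng)
print(noisy.shape)  # (8, 32, 32)
```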
**5. Response to Question 4**
Thanks for your kind comment. In the area of learning with noisy labels, it is commonly assumed that the training labels are noisy and the test labels are clean, which aligns with real-world applications of machine learning models. Having clean training labels and noisy test labels doesn't make sense in practical scenarios, and we don't recommend considering this scenario.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for the response. The authors have addressed my concerns and I do not have other questions. I will keep my rating at 7.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer d71F,
Thanks for your feedback! We are pleased to address your concerns and greatly appreciate your reviews, which play a crucial role in improving our work.
Best regards,
The authors | Rebuttal 1:
Rebuttal: We appreciate all reviewers for their valuable time and insightful comments. We have carefully considered the suggestions and will revise our manuscript accordingly. We conducted some additional experiments in this global rebuttal space. Responses to other specific comments can be found in the individual rebuttal sections for each reviewer.
**Q1: Evaluation on Instance-Dependent Noise**
Following the previous work [1], we generate noise and set noise rates for instance-dependent noise. We use the same experimental settings and parameters as those described in our paper. The best results are highlighted in bold.
| CIFAR-10 IDN | 0.2 | 0.4 | 0.6 |
|---|:---:|:---:|:---:|
| CE | 75.05±0.31 | 57.27±0.96 | 37.62±0.02 |
| GCE| 86.95±0.38 | 79.35±0.30 | 52.30±0.12 |
| SCE| 86.79±0.17 | 74.56±0.49 | 49.63±0.14 |
| NCE+RCE | 89.06±0.31 | 85.07±0.17 | 70.45±0.26 |
| NCE+AGCE | 88.90±0.22 |85.16±0.26| 72.68±0.21 |
| LDR-KL | 88.99±0.15 | 84.10±0.24 | 63.11±0.23 |
|CE$_\epsilon$+MAE|**89.27±0.42**|**85.26±0.29**|**74.32±0.89**|
| **CIFAR-100 IDN**|**0.2**|**0.4**|**0.6**|
| CE | 54.46±1.73 | 40.81±0.25 | 25.57±0.03 |
| GCE | 61.95±1.37 | 56.99±0.42 | 44.19±0.36 |
| SCE | 55.58±0.74 | 39.71±0.39 | 25.63±0.76 |
| NCE+RCE | 64.13±0.49 | 57.15±0.24 | 43.22±2.31 |
| NCE+AGCE | 65.33±0.18 | 58.59±0.68 | 43.42±0.24 |
| LDR-KL | 59.19±0.34 | 43.74±0.12 | 26.10±0.16 |
|CE$_\epsilon$+MAE|**67.44±0.19**|**60.80±0.20**|**46.53±0.54**|
As can be seen, our method achieves the best performance among compared methods for instance-dependent noise.
**Q2: More Hyperparameter Search**
We conduct more hyperparameter search about learning rate and weight decay for our method. We search for learning rates (lr) in [0.01, 0.1] and weight decays (wd) in [1e-5, 1e-4, 5e-4]. lr=0.01, wd=1e-4 for CIFAR-10 and lr=0.1, wd=1e-5 for CIFAR-100 are used in the paper. The best results are highlighted in bold.
|CIFAR-10|Clean|S(0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|lr=0.01, wd=1e-5|90.81±0.29|89.05±0.32|85.59±0.54|80.24±0.43|57.20±0.23|90.03±0.08|88.24±0.13|85.38±0.07|74.82±3.79|
|**lr=0.01, wd=1e-4**|91.40±0.12|89.29±0.10|85.93±0.19|79.52±0.14|**58.96±0.70**|90.30±0.11|88.62±0.18|85.56±0.12|78.91±0.25|
|lr=0.01, wd=5e-4|**91.94±0.18**|**89.76±0.08**|86.01±0.30|78.87±0.18|56.54±0.71|90.68±0.21|88.76±0.36|85.02±0.37|77.78±0.29|
|lr=0.1, wd=1e-5|61.47±1.08|57.09±5.70|50.60±10.72|35.75±1.47|27.88±10.13|80.38±3.68|74.63±7.77|76.36±3.62|59.72±10.09|
|lr=0.1, wd=1e-4|89.00±0.49|87.96±0.31|80.03±4.28|58.34±1.87|42.41±2.65|89.28±0.18|88.27±0.23|86.61±0.30|74.49±0.30|
|lr=0.1, wd=5e-4|91.11±0.36|89.44±0.29|**86.77±0.16**|**80.47±0.87**|35.56±5.59|**91.06±0.13**|**89.58±0.29**|**87.78±0.23**|**82.47±0.56**|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|lr=0.01, wd=1e-5|67.62±0.50|61.21±0.23|53.00±0.01|42.29±1.25|20.03±0.93|64.62±0.71|61.69±0.13|55.10±0.52|41.98±1.12|
|lr=0.01, wd=1e-4|69.31±0.49|62.66±0.19|54.69±0.35|43.67±0.22|18.49±0.64|65.13±0.28|63.05±0.30|55.45±0.33|41.52±1.27|
|lr=0.01, wd=5e-4|72.89±0.11|64.62±0.24|55.81±0.11|37.43±1.01|10.57±0.57|70.08±0.13|64.85±0.18|50.25±0.71|31.05±0.42|
|**lr=0.1, wd=1e-5**|70.83±0.18|65.45±0.31|59.20±0.42|48.15±0.79|**26.30±0.46**|67.58±0.04|64.52±0.18|**58.47±0.12**|**48.51±0.36**|
|lr=0.1, wd=1e-4|73.63±0.12|66.89±0.22|59.69±0.42|**50.69±0.25**|13.53±1.28|70.62±0.26|**66.70±0.20**|57.97±0.60|45.78±0.61|
|lr=0.1, wd=5e-4|**75.62±0.23**|**70.96±0.20**|**64.22±0.50**|14.07±0.51|1.10±0.00|**72.86±0.06**| 52.45±0.26|23.77±1.95|12.93±1.41|
As can be seen, we achieve better results in many cases.
[1] Part-dependent Label Noise: Towards Instance-dependent Label Noise, NeurIPS, 2020. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The author proposes the epsilon-softmax technique as a method to address label noise. Epsilon-softmax facilitates peaky predictions by increasing the value of the highest prediction, and it also functions to reduce the magnitude of the gradient when the prediction aligns with the given label. The author introduces the concept of All-k consistency to interpret this paradigm and presents experiments on prominent real-world benchmark datasets in the field of label noise learning, specifically WebVision and Clothing1M.
Strengths: The proposed epsilon softmax by the author takes a different approach compared to existing symmetric-like functions, which aim to reduce the gradient value for entirely incorrect predictions. From my understanding, the author's approach reduces the gradient value for predictions that match the given label. By providing the value and interpretation of this novel approach, the author has significantly broadened the scope of the label noise learning field with their straightforward yet impactful idea. The proposed method has the advantage of simple gradient computation without requiring additional high-cost operations, making it applicable to other Label Noise Learning (LNL) methods. The author demonstrates the experimental value of this approach by applying it to both the cross-entropy loss function and the focal loss function.
Weaknesses: This section addresses two major concerns. For minor concerns, please refer to the "Questions" part.
1. The author mentions in line 40 the necessity for a method that can achieve both effective learning and robustness. While the proposed method offers a different perspective compared to symmetric-like loss methods, it is challenging to assert that it fully meets this necessity. Ironically, to balance the trade-off between effective learning and robustness, the author combines CE_(epsilon) loss and MAE loss. This ability to manage trade-offs is also found in other symmetric-like loss-based methods. In this context, I am interested in understanding why the proposed method might offer a better trade-off and whether it truly provides a better trade-off. I attempted to verify this through experimental comparison, but several issues arose: (1) There are no experiments that allow for a comparison of trade-offs. Experiments demonstrating the trade-off by varying alpha and beta are necessary. (2) The performance of existing methods is reported to be lower. For example, refer to the SCE paper.
2. The proposed method introduces two additional hyperparameters: m and alpha / beta. Unfortunately, based on the recorded experimental results, the proposed method appears to be sensitive to these hyperparameters. If this is not true, providing comprehensive experimental results that show the effects of varying these hyperparameters would enhance the perceived value of the proposed method.
And if the proposed method is indeed sensitive to changes in hyperparameters, I would like to see evidence that it is not sensitive to hyperparameter variations within the in-distribution domain. I recommend performing validation and test processes to identify optimal hyperparameters (refer to the processes outlined in the GJS and JS papers).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Based on the gradient analysis, it appears that when m is sufficiently large, the sensitivity of performance to m would not be significant. However, as shown in Table 3, the difference in performance is quite notable. Does the author have an explanation for this discrepancy? Additionally, has the author investigated the results when m is infinite, meaning the CE loss is not used when the prediction matches the label?
2. Has the author ever checked the results of using only the CE_(epsilon) loss function? Sections 3.1 to 3.3 and Lemma 2 suggest the importance of the single CE_(epsilon) loss function.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for your positive comments. We would like to offer the following responses to your concerns.
**1. Response to Weakness 1**
Thanks for your insightful comments.
--- About the better trade-off
The better trade-off means that we achieve better performance on both fitting ability and robustness compared to previous works.
Previous works like GCE and SCE [1] increase fitting ability but reduce robustness due to the CE term. In contrast, our combination retains robustness, as demonstrated by Lemma 2, and also improves fitting ability. To verify this, we present comparison results on CIFAR-10 symmetric noise:
|CIFAR-10|Clean|0.2|0.4|0.6|0.8|
|:---:|:---:|:---:|:---:|:---:|:---:|
|GCE|89.42±0.21|86.87±0.06|82.24±0.25|68.43±0.26|25.82±1.03|
|SCE|91.30±0.08|87.58±0.05|79.47±0.48|59.14±0.07|25.88±0.49|
|CE$_\epsilon$+MAE|**91.40±0.12**|**89.29±0.10**|**85.93±0.19**|**79.52±0.14**|**58.96±0.70**|
As can be seen, CE$_\epsilon$+MAE achieves better results on both the clean and noisy cases.
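As a toy numeric sketch of this combination, the snippet below implements a weighted $\alpha\cdot$CE$_\epsilon$ + $\beta\cdot$MAE loss on a single example, with epsilon-softmax approximated by boosting the largest logit by a large constant $m$. This reading of epsilon-softmax, and the `alpha`/`beta`/`m` defaults, are illustrative assumptions rather than the paper's exact formulation:

```python
import math

def eps_softmax(logits, m=1e4):
    # Hypothetical epsilon-softmax: boost the largest logit by a large
    # constant m before normalizing, pushing the output toward one-hot
    # (a simplified reading; the paper's exact form may differ).
    z = list(logits)
    z[z.index(max(z))] += m
    top = max(z)
    exps = [math.exp(v - top) for v in z]   # subtract max for stability
    total = sum(exps)
    return [v / total for v in exps]

def ce_eps_mae(logits, label, alpha=1.0, beta=5.0, m=1e4, eps=1e-12):
    # Weighted combination alpha * CE_eps + beta * MAE on one example;
    # alpha/beta/m values here are illustrative defaults.
    p = eps_softmax(logits, m)
    ce = -math.log(p[label] + eps)          # CE on the epsilon-softmax output
    mae = sum(abs(pi - (1.0 if i == label else 0.0)) for i, pi in enumerate(p))
    return alpha * ce + beta * mae
```

When the prediction already agrees with the label, the epsilon-softmax output is nearly one-hot, so both terms (and their gradients) are close to zero, matching the intuition that the gradient shrinks for predictions matching the given label.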
--- About issue (1)
We offer ablation experiments for $\alpha$ and $\beta$. For simplicity, we fix $\alpha$ and then adjust $\beta$. $\beta=5$ for CIFAR-10 and $\beta=1$ for CIFAR-100 are used in the paper. "S" is symmetric noise, and "AS" is asymmetric noise.
|CIFAR-10|Clean|S (0.2)|S (0.4)|S (0.6)|S (0.8)|AS (0.1)|AS (0.2)|AS (0.3)|AS (0.4)|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|$\beta=1$|90.19±0.13|87.89±0.25|84.71±0.09|76.90±0.30|44.26±0.78|89.20±0.17|86.85±0.17|82.74±0.32|75.75±0.42|
|$\beta=5$|**91.40±0.12**|**89.29±0.10**|85.93±0.19|**79.52±0.14**|**58.96±0.70**|**90.30±0.11**|**88.62±0.18**|**85.56±0.12**|**78.91±0.25**|
|$\beta=10$|91.31±0.21|89.08±0.19|**86.06±0.08**|77.78±3.32|43.00±3.86|90.14±0.11|88.58±0.39|83.42±4.61|72.87±0.55|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**|**AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
|$\beta=0.5$|**71.01±0.05**|64.31±0.32|52.07±0.76|43.32±0.76|15.42±0.88|65.50±0.84|62.22±0.24|53.27±0.31|41.48±0.32|
|$\beta=1$|70.83±0.18 |**65.45±0.31**|**59.20±0.42**|**48.15±0.79**|**26.30±0.46**|**67.58±0.04**|**64.52±0.18**|**58.47±0.12**|**48.51±0.36**|
|$\beta=5$|67.87±0.88|60.05±0.84|51.19±1.62|26.93±1.80|10.30±2.83|63.22±0.46|51.67±0.66|40.34±5.45|25.69±0.77|
--- About issue (2)
Our experimental setup precisely follows that of previous work [2], ensuring a completely fair comparison. The results we reproduced are consistent with [2]. We note that the key difference between our results and those in the SCE paper [1] lies in how epoch accuracy is recorded: [1] recorded the best-epoch accuracy, while we recorded the last-epoch accuracy. Consequently, [1] avoided overfitting noisy labels in the later stage of training and reported better results. For example, for CIFAR-10 with 0.8 symmetric noise, CE achieved about 39% accuracy at the best epoch and about 18% at the last epoch.
**2. Response to Weakness 2**
Thanks for your kind comment. Our method, like most robust loss functions, is indeed somewhat sensitive to hyperparameters. For instance, NCE+AGCE [2] involves four hyperparameters.
As suggested and in line with [3], we provide a more extensive hyperparameter search for learning rate and weight decay. The results are available in the global rebuttal (please see response to Q2). As can be seen, our method achieves better results in many cases.
**3. Response to Question 1**
Thanks for your insightful comment. Due to the excellent fitting ability of neural networks, even if $m$ is about $10^4$, the gradient will still play a guiding role in optimization.
Following your suggestion, we conducted experiments with sufficiently large and infinite $m$. The results on CIFAR-100 symmetric noise are as follows:
| CIFAR-100 | 0.2 | 0.4 | 0.6 | 0.8 |
|---|:---:|:---:|:---:|:---:|
| m=1e10 | 53.63±0.41 |36.94±0.16|24.56±1.31|11.24±0.46|
| m=1e15 | 53.45±0.53 |36.54±0.47|24.51±0.71|11.42±0.53|
| m=1e20 | 53.64±0.94 |36.75±0.01|25.47±0.60|10.95±0.17|
|m=$\infty$|54.07±0.59|36.54±1.44|24.94±0.98|11.12±0.76|
As can be seen, when $m$ is sufficiently large, there is no obvious difference between the results.
**4. Response to Question 2**
Thanks for your kind comment. As suggested, the results of CE$_\epsilon$ are offered as follows, where "S" is symmetric noise and "AS" is asymmetric noise:
| CIFAR-10 | Clean | S (0.2) | S (0.4) | S (0.6) | S (0.8) | AS (0.1) | AS (0.2) | AS (0.3) | AS (0.4) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| CE | 90.50±0.35 | 75.47±0.27 | 58.46±0.21 | 39.16±0.50 | 18.95±0.38| 86.98±0.31|83.82±0.04 | 79.35±0.66 | 75.28±0.58 |
|CE$_\epsilon$|88.10±0.06 |**86.88±0.18**|**83.51±0.39**|**74.68±0.40**|**29.54±0.89**|**88.06±0.30**|**85.57±0.15**|**80.82±0.41**|**76.74±0.22**|
|**CIFAR-100**|**Clean**|**S (0.2)**|**S (0.4)**|**S (0.6)**|**S (0.8)**|**AS (0.1)**| **AS (0.2)**|**AS (0.3)**|**AS (0.4)**|
| CE | 70.79±0.58 | 56.21±2.04 | 39.31±0.74 | 22.38±0.74 | 7.33±0.10 | 65.10±0.74 | 58.26±0.31 |49.99±0.54|41.15±1.04 |
|CE$_\epsilon$|69.28±0.09 |**64.35±0.20**|**58.21±0.24**|**40.27±1.62**|**19.32±1.09**|**65.89±0.12**|**60.11±0.13**|**52.74±0.11**|**42.05±1.72**|
[1] Symmetric cross entropy for robust learning with noisy labels, CVPR, 2019.
[2] Asymmetric loss functions for noise-tolerant learning: Theory and applications, TPAMI, 2023.
[3] Generalized jensen-shannon divergence loss for learning with noisy labels, NeurIPS, 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your additional interpretation and experimentation. My main concerns have been largely addressed. The answer to Q1 provides a good guideline regarding the size of m, and the answer to Q2 presents results that reflect a fundamental extension of the epsilon-softmax method. Integrating these points into the main text will likely enrich the overall content. I will maintain my approval decision.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer 5cpw,
Thanks for approving our responses! We will incorporate the additional experimental results in the final version. Your guidance has been instrumental in enhancing the quality of our work.
Best regards,
The authors | null | null | null | null | null | null |
Bileve: Securing Text Provenance in Large Language Models Against Spoofing with Bi-level Signature | Accept (poster) | Summary: The robustness of previous watermark algorithms enables a type of spoofing attack in which an attacker modifies the watermarked text to contain harmful content while ensuring the watermark can still be detected. This paper introduces a bi-level signature scheme called Bileve to mitigate spoofing attacks and enhance detectability. Bileve can distinguish 5 scenarios during detection, compared to only 2 for previous methods. Using Bileve, LLM owners can verify the source of given texts. Experiments validate the effectiveness of Bileve against spoofing attacks.
Strengths: 1. The 3 types of spoofing attacks are clearly listed in this paper, with exploited vulnerabilities explained, which makes the motivation reasonable.
2. The paper provides a comparison between the single-level signature and Bileve, which aids understanding.
3. Bileve can produce a total of 5 different detection results, which matches real-world cases.
Weaknesses: 1. Although the proposed WRA improves text quality compared to SLS, the difference between Bileve and Unigram is still noticeable.
2. For cases 4 & 5, if the watermarked text is inserted into a long document (a copy-paste attack), the global alignment test would not produce a small p-value, even though the document does contain watermarked text.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. what is the generation/detection complexity? The statistical test during detection seems quite time-consuming.
2. Are all 5 cases tested during evaluation? Details of how local alignment test is conducted, e.g., chunk size, can be clarified in the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper could provide more info on the complexity of the generation and detection process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions and feedback. We appreciate the opportunity to address your concerns here.
**W1: Quality**
The key baseline of this work is SLS, as it also focuses on defeating spoofing attacks. Bileve has already shown improvements over SLS, indicating potential for further enhancements. This highlights the feasibility of balancing security and quality in future developments. It is important to note that Bileve is primarily focused on establishing a secure foundation. We plan to enhance text quality while maintaining these security features in future work, with potential optimizations discussed in Section 6.
Additionally, the lower perplexity of Unigram may result from repetitive text, since model perplexity favors repetition, as discussed in recent research [1].
For example, for text completion tasks with prompt
>The lava dome was created by volcanic eruptions in the early Miocene. A total of five large ash flows were produced along with a large rhyolite dome structure. The caldera formed when the dome collapsed about 16 million years ago.
Generation of Unigram using OPT-1.3B:
>A mill eruption approximately 3 million years ago produced many fine-grained lavas, plus fly ash. Over 1,600 more fine-grained lavas were produced around 3.6 Ma. A 6.8 Ma eruption produced a voluminous eruption with numerous fine-grained lavas. The 6.8 Ma eruption also produced numerous small diorites. The most intense eruption from the 6.8 Ma eruption produced more fine-grained lavas, with many made of pyroxene, basalt, and calcrete...
Generation of Bileve using OPT-1.3B:
>During the late Triassic period, Steens mountain began to rise eastward from the Cocoon Valley, and in their place, erupted a series of glacial-related geologic structures. In the early Jurassic period, lava flow-induced tectonic activity in the upper section caused Steen and its outflow area on this portion west to the Canadian National Mountains...
Thus, we plan to incorporate other evaluation metrics, such as human evaluation, in the revised paper to provide a more comprehensive assessment of text quality.
**W2: Potential Attack**
Thanks for pointing out this potential attack. Our work aims to defeat spoofing attacks. In the copy-paste attack you mentioned, it cannot achieve spoofing attacks, i.e., faking the long document as being generated by the target LLM. If a long document with only a small portion of watermarked content is fed into our detector, our detector would not determine that this long document was generated by the target LLM, which makes sense.
Our scheme currently does not differentiate between fully watermarked documents and those with only partial watermarked content. However, in cases where this differentiation is desired, we can perform a linear search along the document. For documents longer than the key sequence, we can check if there is a segment of the document that aligns well with the key sequence. For documents shorter than the key sequence, we can run a local alignment. Unlike case 2 and the results in Figure 4, this approach would result in the p-value being small for the watermarked segment while being large for other parts of the document.
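As a rough sketch of this linear search, the function below slides a fixed-size window over assumed per-token alignment scores (`doc_scores[i]`, higher meaning better alignment with the key sequence); the alignment statistic itself is not reproduced here, and all names are illustrative:

```python
def locate_watermarked_segment(doc_scores, window, threshold):
    # Scan the document with a fixed-size window and return the span of
    # the best-aligned segment, or None if no window's mean alignment
    # score exceeds the detection threshold.
    best_start, best_mean = None, float("-inf")
    for start in range(len(doc_scores) - window + 1):
        mean = sum(doc_scores[start:start + window]) / window
        if mean > best_mean:
            best_start, best_mean = start, mean
    if best_start is not None and best_mean >= threshold:
        return (best_start, best_start + window)
    return None
```

This keeps detection linear in the document length while localizing which portion of a long document carries the watermark.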
**Q1: Generation/Detection Complexity**
Generation complexity is linear in the length of the input text. During token generation, we need to sample a token that matches the signature bit, which can increase the runtime by up to three times compared to generating tokens without matching signature bits.
The statistical test during detection follows the design in [1], and its computational complexity is linear in the length of the watermark key sequence. However, these statistical tests are only employed when the signature is invalid (i.e., in the event of an attack). Under normal circumstances, we simply verify the validity of the signature.
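In pseudocode, this two-stage detection (cheap signature verification first, with the statistical test reserved for invalid signatures) might look like the sketch below; `verify_signature` and `alignment_p_value` are placeholder callables, and the returned labels are illustrative:

```python
def detect(tokens, signature, verify_signature, alignment_p_value, alpha=0.01):
    # Stage 1: cheap cryptographic check; passes only for unmodified
    # output of the target LLM, so no statistical test is needed then.
    if verify_signature(tokens, signature):
        return "valid signature: unmodified output of the target LLM"
    # Stage 2: statistical alignment test, linear in the length of the
    # watermark key sequence; run only when the signature is invalid.
    if alignment_p_value(tokens) < alpha:
        return "invalid signature but watermark present: modified output"
    return "no evidence the text came from the target LLM"
```

Under normal (non-attack) circumstances only Stage 1 runs, so the detection overhead stays low.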
**Q2: Evaluation**
The results of case 1 and case 4 can be found in Table 3, and case 2 in Figure 4. Case 3 involves unexpected content generated by the target LLM, potentially due to jailbreaking, and is discussed in the discussion section and Appendix B. Case 5 indicates content generated by another source, as it does not use the secret key or key sequence during generation.
For the local alignment test, the number of segments is a hyperparameter set to 5 in our experiments. This setting involves a trade-off: a larger number of segments may overlook local realignment, while a smaller number increases latency overhead. We will add a detailed discussion of this trade-off in the revised version of the paper.
---
[1] Publicly Detectable Watermarking for Language Models, arxiv 2024.
[2] Robust Distortion-free Watermarks for Language Models, TMLR 2024.
---
Rebuttal Comment 1.1:
Title: reply
Comment: Thank you for your reply and explanation, I will maintain my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our rebuttal. If you have any further concerns, please let us know. We would like to address them before the discussion period ends. | Summary: The paper presents a novel approach to secure the provenance of texts generated by large language models (LLMs) through a bi-level signature scheme. This method aims to mitigate spoofing attacks—where malicious actors alter the content generated by LLMs to forge harmful content or misattribute blame—by integrating fine-grained signature bits for integrity checks and a coarse-grained signal for source tracing when signatures are invalidated.
Strengths: 1. This paper reveals a spoofing attack that takes advantage of the robustness features of state-of-the-art watermarking schemes.
2. This paper improves the ability to trace the provenance of text and regulate LLMs by differentiating between five detection scenarios.
3. This paper introduces a novel bi-level signature scheme that enhances text provenance security for large language models (LLMs). It combines fine-grained signatures for integrity checks with coarse-grained signals for source tracing.
Weaknesses: 1. While the experiments demonstrate effectiveness in specific settings with OPT-1.3B and LLaMA-7B models, the generalizability and scalability of the Bileve scheme to other models are somewhat uncertain. The authors could consider using larger or more powerful LLMs to demonstrate the effectiveness of the proposed algorithm.
2. The authors could consider using a more powerful LLM to measure the perplexity, like GPT-3/GPT-4.
3. I suggest reporting TPR scores at fixed low FPR (FPR = 10% or 1%).
4. This paper demonstrates detectability by modifying 10% of the tokens. It would be good to test with a higher rate of token modification, like 20%, 30%, to further validate the detectability.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback, and we would like to address the weakness (**W**) below.
**W1: Generalizability**
The primary goal of our experiments was to establish a proof of concept for the Bileve scheme. We believe demonstrating effectiveness with models like OPT-1.3B and LLaMA-7B would provide a solid foundation for further research. **It is worth noting that Bileve is designed to be generalizable to any auto-regressive LLMs, as the core mechanism is not inherently tied to specific models or affected by model size.** Therefore, we anticipate similar results regarding security with larger models. However, we plan to explore the generation quality when this scheme is applied to more powerful LLMs in future work.
**W2: Perplexity Measured by More Powerful LLMs**
We used LLaMA-13B to measure perplexity since it is a powerful open-source LLM that outperforms GPT-3 on most benchmarks [1]. Moreover, it is worth noting that **closed-source models like GPT-4 cannot be used to evaluate the perplexity of watermarked text since they do not return the probability of watermarked tokens.**
However, we would like to introduce other evaluation metrics that can more comprehensively evaluate the generation quality, like zero-shot quality measurements with GPT-4 Turbo, as suggested in recent research [2]. This metric uses GPT-4 to evaluate the coherence and relevance of generated text, and can be used as a supplemental metric to perplexity.
**W3: Fixed FPR**
In our work, we provided a solution to "how to avoid an LLM being wrongly blamed" by adapting digital signatures, a cryptographic mechanism that ensures an FPR of 0 (as reported in Table 3). As shown in L174 and Figure 1(b), verification only outputs True if the decrypted result using the public key matches the message digest. This verification process relies on computational matching, and digital signatures make it computationally infeasible for unauthorized entities to produce a valid message-signature pair without the private key, thus eliminating false positives.
As a result, we cannot set or report FPR values of 10% or 1%, as our scheme's design inherently prevents false positives. This differentiates it from other methods [3,4] where FPR can be adjusted or measured independently.
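To illustrate why verification admits no false positives, here is a toy "textbook RSA" signature using the classic small-prime example. This is pedagogy only, not the paper's actual construction; real schemes use keys of at least 2048 bits with proper padding:

```python
import hashlib

# Classic toy RSA parameters: n = 61 * 53, with e * d = 1 (mod (p-1)(q-1)).
n, e, d = 3233, 17, 2753

def _digest(message: str) -> int:
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    # Only the holder of the private exponent d can produce this value.
    return pow(_digest(message), d, n)

def verify(message: str, sig: int) -> bool:
    # Verification succeeds only on an exact computational match, which is
    # why a valid pair cannot arise for text the key holder never signed.
    return pow(sig, e, n) == _digest(message)
```

Any edit to the message changes its digest, so the stored signature no longer matches; it is this exact-match property that makes false positives computationally infeasible rather than merely improbable.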
**W4: Higher Rate of Token Modification**
We provide more results with higher rates of token modification below on OPT-1.3B with two tasks, and we report the performance using the true positive rate (TPR), where a higher TPR indicates better robustness against removal attacks.
When the editing ratio is 20%:
| | OpenGen | LFQA |
|----------|----------|---------|
| Unigram | 0.979 | 0.971 |
| Bileve | 0.989 | 0.987 |
When the editing ratio is 30%:
| | OpenGen | LFQA |
|----------|----------|---------|
| Unigram | 0.958 | 0.925 |
| Bileve | 0.960 | 0.970 |
From the above tables, we can see that although the TPR decreases as the editing ratio increases, Bileve still outperforms Unigram, which can be attributed to the introduced alignment with key sequences. We will conduct a more thorough evaluation with varying edit ratios in the revision.
---
[1] LLaMA: Open and Efficient Foundation Language Models, arXiv, 2023
[2] Publicly Detectable Watermarking for Language Models, arxiv 2024.
[3] A Watermark for Large Language Models, ICML 2023.
[4] Provable Robust Watermarking for AI-Generated Text, ICLR 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarification. I have raised the score to 7. If possible, please incorporate the extra experimental results into the next version of the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and valuable feedback on our work. We will incorporate additional experimental results in the revised version. | Summary: This paper proposes to consider spoofing attack, where an attacker wants to prove the proposition like "The person holding this watermark private key used an LLM to write this text A." where text A is constructed by the attacker. The paper proposes a defense against spoofing attacks.
Strengths: This paper points out the fundamental trade-off between defending against removal attacks and spoofing attacks.
Weaknesses: I have doubts about the significance of the spoofing attack. It is important to first clarify a potential misunderstanding. The authors may believe that a watermark in the text proves "the person holding this watermark private key used an LLM to write this text A." But that's not accurate.
However, the watermark only proves that the probability of text A being generated by a process independent of the watermark key holder is very low. It does not conclusively prove the key holder generated that specific text A.
Therefore, I believe the spoofing attack lacks real significance from the outset. If someone wants to prove they said certain things and nothing else, they can just use a traditional digital signature.
The problem is also framed as "How to avoid an LLM being wrongly blamed?" But what can we really blame an LLM for? Sure, there may be instances where a single LLM inference generates a token sequence that is interpreted as harmful by humans.
However, LLMs are probabilistic models that can potentially generate any harmful content given enough inferences. We can only blame an LLM for having a high average probability of generating harmful content, not for the existence of individual harmful inferences.
Moreover, the paper is hard for me to read. For example, "instead of ranking them based on probability like conventional methods [13]" does not specify what "conventional methods" means in paper [13], Pre-trained language models for text generation: A survey.
Furthermore, it's unclear if the signature preservation attack requires constructing two messages with the same hash, as implied by "replaced token hashes to the same signature bit." If so, that would be extremely difficult for modern hash functions.
More importantly, the paper does not provide any rigorous theoretical guarantees that Bileve actually solves the spoofing attack issue as claimed. The key assertion is that "it is less likely to simultaneously align well with $\Xi$ sequences, thereby effectively mitigating such attacks." However, this statement is quite vague and unconvincing on its own.
What does it mean for a method to be "less likely to simultaneously align well with $\Xi$ sequences"? How much less likely is it quantitatively? Under what assumptions or conditions does this property hold? The paper does not provide clear answers to these crucial questions.
Technical Quality: 1
Clarity: 1
Questions for Authors: In the definition of "signature preservation attack", I saw that "replaced token hashes to the same signature bit". Does it mean that signature preservation attack requires constructing two messages with the same hash? If so, that would be extremely difficult for modern hash functions.
Confidence: 4
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: This paper is difficult to read, e.g. the reference to "conventional methods" in "[13]" is unclear.
Does not provide clear theoretical guarantees that the Bileve method effectively mitigates spoofing attacks as claimed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and we would like to address the weaknesses (**W**) and questions (**Q**) individually.
**W1**
>I have doubt ... specific text A.
First, we understand that you are emphasizing the possibility of false positives in watermark detection, suggesting that non-watermarked text could be incorrectly identified as watermarked. Thus, you question the necessity of conducting spoofing attacks and our motivation to counter them. However, researchers in this field have acknowledged the importance of maintaining a low false positive rate (FPR), which is why learning-based detectors are criticized for their high false positive rates and why OpenAI shut down their official detector [1]. **In contrast, watermarking has been envisioned as a more reliable and promising method to identify the source of LLM-generated text [2,3]. When no attacks happen, false positive detections are statistically improbable [2].** As a result, people are prone to believe that the text is from a specific LLM (associated with a watermark key) if the watermark is detected. Leading companies like Google have already deployed watermarks in their products to identify AI-generated content [4].
**However, spoofing attacks have unveiled the vulnerability that existing watermarks are not as reliable as previously thought.** In such attacks, attackers can create content that makes people believe it is from the target victim LLM. When done on a large scale, this invalidates the value of the watermark and may cause reputational damage to the model owner (e.g., if inappropriate text is incorrectly attributed to them). **Spoofing attacks have been identified in several works (listed in Table 2), and their significance is acknowledged by many [3], including the rest of the reviewers.**
**W2**
>If someone wants to prove they said certain things and nothing else, they can just use a traditional digital signature.
We would like to clarify that **the scope of this work is focused on reliable LLM detection, rather than verifying if someone said certain things.** Our method adapts digital signatures into LLM generation to achieve reliable detection. We would appreciate more details on how a traditional digital signature can be used to achieve this goal within the context of LLM-generated content, if not through our approach.
**W3**
>The problem is also framed ... harmful inferences.
When we talk about blaming an LLM, we are essentially addressing the responsibility of the model owner in preventing the LLM from producing abusive or harmful content. If individual harmful inferences were not a concern, there would be no need for red-teaming [5] to ensure the safety of LLMs. These practices are employed precisely because preventing the generation of harmful content is crucial for the responsible deployment of LLMs.
**W4**
>More importantly, ...crucial questions.
**It is important to note that Bileve adopts digital signatures to combat spoofing attacks.** Specifically, it is computationally infeasible for an attacker who only knows the public key to infer the secret key or produce a valid message-signature pair. For example, the security of RSA digital signatures is based on the difficulty of factoring large composite numbers, with the factorization of a 1024-bit RSA key requiring several years with distributed computing resources. The theoretical foundation of digital signatures is well-established, as discussed in Section 3.2 of [6].
The phrase "less likely to simultaneously align well with sequences" is associated with the case of "signature preservation attacks," which are adaptive attacks against Bileve. We use the term "less likely" because, while our experiments (see Section 5.4) demonstrate that our method can effectively defeat these adaptive attacks, this is empirical evidence. Also, we repeated this attack 100 times, and none of the attempts simultaneously aligned well with the $\Xi$ sequences. Thus, this term reflects the current state of our findings and the inherent challenges in providing absolute theoretical guarantees against all potential future attacks. We will rephrase it as "it is resistant to simultaneously aligning well with $\Xi$ sequences".
**Q: Signature Preservation Attack**
Here we do not mean that attackers would construct two messages with the same hash. In our scheme, the watermarked content consists of a message-signature pair. This attack considers the scenario where attackers only modify the signature part. Since signature bits are associated with tokens (mapped by function h, as shown in Fig 1), it is possible that the replaced token results in the same signature bit as before. This is why we refer to it as a signature preservation attack.
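For concreteness, a minimal sketch of the bit-preservation condition: here `h` is a hypothetical stand-in for the token-to-signature-bit mapping (the real mapping is not reproduced here), and a replacement token preserves the bit exactly when it maps to the same value, a 1-bit match that is far easier than a full hash collision:

```python
import hashlib

def h(token: str) -> int:
    # Hypothetical stand-in for the token-to-signature-bit mapping:
    # derive a single bit from a hash of the token.
    return hashlib.sha256(token.encode()).digest()[0] & 1

def preserves_signature_bit(original: str, replacement: str) -> bool:
    # The embedded signature bit stays intact exactly when both tokens
    # map to the same bit value; roughly half of candidate replacements
    # would satisfy this for a 1-bit mapping.
    return h(original) == h(replacement)
```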
---
[1] https://openai.com/index/new-ai-classifier-for-indicating-ai-written-text/
[2] A Watermark for Large Language Models, ICML 2023.
[3] Watermark Stealing in Large Language Models, ICML 2024.
[4] https://deepmind.google/discover/blog/watermarking-ai-generated-text-and-video-with-synthid/.
[5] Red Teaming Language Models with Language Models, EMNLP 2022.
[6] An Efficient Signature Scheme from Bilinear Pairings and Its Applications, PKC 2004.
[7] Publicly Detectable Watermarking for Language Models, arxiv 2024.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed rebuttal.
I wish to first focus on the definition of "non-watermarked text" and "false positives".
Suppose author A writes an announcement of length 1000 with a watermark. Then attacker B adds/deletes 10 occurrences of the word "not" to invert the meaning of the announcement. Is the changed announcement "watermarked text" or "non-watermarked text"? If the watermark is detected, is it a "false positive" or a "true positive"?
I would consider the changed announcement "watermarked text," as 99% of the text is the same as the original watermarked text, even though the semantics are changed.
Therefore, my previous comment is not `emphasizing the possibility of false positives in watermark detection`. Actually, I didn't mention the term "false positive" at all, and I regard the above example as a "true positive".
---
Reply to Comment 1.1.1:
Comment: Thanks for your clarification. **What you mentioned highlights a significant flaw in prior watermarking schemes**: the potential for attackers to exploit the robustness of watermarks to achieve spoofing attacks, a vulnerability newly identified in a recent work [1] pointed out by Reviewer ggNL and in our work. **This issue is exactly what our scheme aims to address.** Our proposed method can differentiate between five cases, including whether the content is originally from the target LLM or has been modified.
In particular, in the case you mentioned, other watermark schemes would identify the text as watermarked. In contrast, our detection first verifies the validity of the signature. In this instance, the signature would be invalid due to the perturbations, so we move to the next-level detection, i.e., checking if the text aligns with the key sequence. As reported in Table 3, the alignment with the key sequence is designed to be robust to perturbations. Thus, we can still detect the presence of the watermark in the text. **Ultimately, the combination of an invalid signature and good alignment with the key sequence indicates that the text originated from the target watermark but has been modified by others.**
Thank you for your prompt response and the effort you put into reviewing our work. Please let us know if you have any further concerns.
---
[1] Attacking LLM Watermarks by Exploiting Their Strengths, Pang et al. arXiv 2402.16187
---
Rebuttal 2:
Comment: I still have doubts about the significance of the spoofing attack. It slightly modifies a watermarked content into another watermarked content, and I would expect the new content to still be detected rather than not detected.
If an author wants to prove to others 1. whether an LLM was used to generate the content, and 2. whether the content has been modified by others, can they simply use 1. a watermark during generation and 2. a traditional digital signature to sign the generated content? It seems watermark detection can answer the first question for reliable LLM detection, and a traditional digital signature can answer the second.
> Ultimately, the combination of an invalid signature and good alignment with the key sequence indicates that the text originated from the target watermark but has been modified by others.
The above scheme can do the same thing. A positive detection result together with an invalid signature indicates that the text originated from the target watermark but has been modified by others.
---
Rebuttal 3:
Comment: Thank you for your timely response again! We now understand the source of the misunderstanding.
**The spoofing attack described in our work is NOT as you proposed above or described in the summary**, i.e., `an attacker wants to prove the proposition like "The person holding this watermark private key used an LLM to write this text A." where text A is constructed by the attacker. `
Instead, the attack scenario is: when model owners deploy watermarks in their models, someone could forge the watermark in content that is not generated by the models or manipulate the content without removing the watermark. If the content is bad and the watermark's existence is detected, people would believe this was generated by the victim model. Then, the model owner could be blamed for not achieving safe deployment. We will clarify our threat model in the revision.
**Our work aims to help LLM model owners deploy a reliable watermark that cannot be spoofed by attackers.** The application scenario would be, for example, OpenAI deploying a watermark in ChatGPT, and ensuring that when someone claims content is generated by ChatGPT, it is indeed fully generated by it and not crafted by others or modified to twist its meaning. **This is different from someone wanting to demonstrate that they used an LLM to generate the entire content.**
**To achieve our objective, your proposed method is invalid**, i.e., `they simply use 1. watermark during generation 2. traditional digital signature to sign the generated content.` The reason is that if the model first embeds an existing watermark and attaches the signature after generation, the output from the model would be the response to the prompt followed by a meaningless-looking signature string. Users would simply discard the signature when they copy and paste the response somewhere. As a result, no one could test whether the content is truly from ChatGPT, due to the missing signature.
Therefore, our objective is to make the signature self-contained in the generated text. This way, if some generated text is given, OpenAI can extract the signature and verify if it is from ChatGPT. **Our work provides a reliable detection scheme for model owners, allowing them to protect their own interests by preventing spoofing attacks where attackers may mislead the attribution of bad content to the victim model.**
---
Rebuttal Comment 3.1:
Comment: We appreciate your thoughtful review. As the rebuttal deadline approaches, we kindly ask if our responses have sufficiently addressed your concerns. Should you require further clarification, we are prepared to provide additional information.
Sincerely,
Authors | Summary: The submission proposes a spoofing attack on LLM watermarks and a new bi-level scheme meant to protect against spoofing by distinguishing five possible scenarios. The scheme is based on signature bits for integrity checks and rank-based sampling on top of a Kuditipudi-like random key sequence.
Strengths: - The paper takes a somewhat original approach compared to most contemporary methods.
- On a high-level, the problem of preventing spoofing is well-motivated and important for the community.
Weaknesses: - Weak experimental results, bringing the practical value of the defense into question:
- The provided quality evaluation, despite its limitations (see below), clearly shows an order-of-magnitude increase in perplexity, which strongly suggests that the produced texts are of impractically bad quality; there is no evaluation that would test this. This is the most important weakness in my opinion.
- Limited experimental evaluation, in ways that make it hard to evaluate the merit:
- Text quality is measured only as PPL of Llama-13B and only on one small 1.3B model; there is no qualitative evaluation of text quality so the negative effect on text quality can't be well understood.
- Only Unigram and SLS are considered as baselines, while self-hash and other variants of the KGW scheme are generally considered more promising, esp. from the perspective of spoofing.
- Watermark removal is evaluated only as 10% editing attack which ruins text quality, no paraphrasing attack is evaluated.
- Bigger framing issues around Table 2 and the attack:
- The framing of Table 2 seems inappropriate. "Knowing the secret key" is not a spoofing attack but simply an application of the watermark, this seems to be introduced as a way to suggest that symmetric schemes are flawed by design, which is not necessarily true in cases where there is no detector access.
- The attack is framed as a "novel advanced spoofing attack" while it is (1) in the opinion of this reviewer a direct result of scheme robustness and very limited in scope and thus hardly advanced (2) more importantly, already proposed in a different form in prior work [1] which was not cited, making this an overclaim. To elaborate on (1), for example, [7, 9] would be able to produce a detailed watermarked response to a harmful query such as "Teach me how to steal someone's identity" while there is no way to produce such a response by a few token modifications of a non-harmful response.
- This attack type is used as a key motivation, setting aside the true spoofing attacks from [7,9], which are much more relevant. This is evident in claims such as "anti-spoofing requires perturbation-sensitivity". Further, the robustness of Bileve to such approaches based on learnability is claimed but not substantiated.
- Poor writing: The paper is often quite hard to read and understand. On top of that there is a very large amount of typos. I advise the authors to work on improving the writing for the next version. Here is a list of some examples that I found, in hopes this helps.
- "Symmetric characteristic" and "learnability" in Introduction are unclear without being defined
- Paper keywords typo: "provence"
- L50: unforgettable
- L285 L325 L50: tempering / temper-evident
- Table 2: model'
- L87: simply
- Algo1: $h$ is undefined, although $H$ (a different symbol) is defined outside in the main text
- L211: "associate"
- L283: resulted
- L284: "the source are"
- L284: the failure verification
- L308: "tokens also"
- L311: "return"
- L312: "the rest segments"
- L314: "shows"
- L315: "t0"
- L316: "cause"
- L327: "limitaition"
- L456: "neucles"
[1] Attacking LLM Watermarks by Exploiting Their Strengths, Pang et al. arXiv 2402.16187
Technical Quality: 2
Clarity: 1
Questions for Authors: - Can the authors provide evidence of practical text quality of Bileve texts?
- Can the authors include the missing experiments discussed above?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: The authors include a discussion of limitations and societal impact in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your input. We address the weaknesses (**W**) and questions (**Q**) below.
**W1: Experimental Results**
It is important to clarify that the increase in perplexity observed is **not an order-of-magnitude increase**. Additionally, **high perplexity does not necessarily indicate bad quality**. Perplexity is a measure of how well a probability model predicts a sample, and while it can be correlated with quality, it is not a direct measure of it.
Moreover, perplexity does not always capture the nuanced aspects of human-readable text quality, such as relevance. This is why we are planning to incorporate other evaluation metrics, such as human evaluation, to provide a more comprehensive assessment of text quality. **It is noted that Bileve aims to establish a secure foundation against spoofing attacks. Compared to SLS, the perplexity is improved and shows potential for further enhancements as discussed in Section 6.**
**W2: Experimental Evaluation**
* More results have already been provided in Appendix E. To provide a qualitative evaluation, we used zero-shot quality measurements with GPT-4 Turbo following [1], where a higher score indicates better quality. For the question-answering task using OPT-1.3B, the score of Unigram is 16±6.52 and 16±9.62. We would like to extend this evaluation to all tasks and models in our work in the revision.
* Unigram is a variant of KGW and retains the key merits of KGW. Its vulnerabilities are inherited from the robustness of KGW. As long as other variants like the self-hash of KGW are also robust against perturbation (as discussed in the KGW paper), they suffer from the same vulnerability. Could you provide evidence that self-hash is more promising in terms of defeating spoofing?
* The objective of the evaluation on watermark removal is to show that even at 10% editing, a stronger attack than <10% editing, detectability is still well preserved. Therefore, <10% editing would not change the conclusion obtained from Table 3. Additionally, a semantic manipulation attack, a variant of paraphrasing attacks, is evaluated, and the results are provided in Table 4.
**W3: Framing Issue**
* **We did not claim that "Knowing the secret key" is an attack; instead, it is an attacker’s capability**. We did suggest that symmetric schemes fall short in the case of using watermarks for transparent detection (instead of black-box API) or regulation.
If "no detector access" refers to only providing a black-box API, then these watermarks cannot advance the societal goals of ensuring safe, transparent, and accountable LLM regulation as Bileve does. If there is no access at all, then the objective of deploying watermarks is unclear.
* This spoofing attack does not aim to produce responses similar to jailbreaking (i.e., your example). Instead, if it can generate harmful content and cause damage to the model owner's reputation, it is considered a successful attack, and the consequence is also discussed in [2]. The term "advanced" is used to emphasize that attackers can achieve spoofing attacks with minimal capabilities.
Also, thanks for pointing out this preprint work [1]. We acknowledge that it also proposes a method to achieve spoofing attacks, but our methods are different. Moreover, their work further highlights the vulnerability of spoofing attacks and underscores the need for effective defenses. While they suggest compromising watermark robustness to mitigate spoofing attacks, our work demonstrates how to achieve effective mitigation without sacrificing robustness. We will include a detailed discussion in the revision.
* The claim of unlearnability in our work is based on cryptographic security principles, particularly those of digital signatures. This ensures that the watermark cannot be spoofed or tampered with [3], thereby maintaining the integrity and robustness of the system.
**W4: Writing**
* We clearly state in L30 that the explanations of `symmetric characteristic` and `learnability` are detailed in sec 2.3.
* Thanks for your thorough reading and pointing out typos. We will fix them in the revision.
**Q1: Practical Text Quality**
In addition to examples in response to Reviewer 6UZB, we provide examples of LFQA tasks using LLaMa-7B, which show that **the higher PPL does not indicate impractically bad quality**.
Ex1- Prompt:
>Q: What does a Mayor even do?
Unigram:
>Uhhhhhhhhhhhhhhhhhhhhhhh...
Bileve:
>Most of the problems being experienced by our City are a result of bad planning, decisions, and practices of the City Council. Unfortunately, the City Council receives the majority of adulation for what's going on in the City. The mayor is the City Manager...
Ex2- Prompt:
>Q: Mandatory arbitration
Unigram:
>I am sorry for this but I am out of answers. I will ask others for a solution. Thank you. Please ask more questions later on. If you wait 2 hours I will be back...
Bileve:
>Mandatory arbitration is a means for eliminating affected commerce and eliminating employees' rights to sue as private citizens. Unions do not like the term...
Ex3- Prompt:
>Q: when does a case need jurors?
Unigram:
>a court can order a jury as a court order. If a party asks for a jury. If a party appeals a court. If a party files a law suit (which must be done before a court can be held)...
Bileve:
>Tuesday at all times and Thursdays at 9:00 am. If you are qualified, you may be called for a case or cases may be filled from qualified jurors already on the list...
We would like to include additional metrics, like the one in **W2** and human scoring, to evaluate quality more comprehensively in the revision.
**Q2: Additional Experiments**
When the editing ratio is 5% (measured by TPR on OPT-1.3B):
| | OpenGen | LFQA |
|----------|----------|---------|
| Unigram | 0.993 | 0.998 |
| Bileve | 0.999 | 0.999 |
---
[2] Watermark Stealing in Large Language Models, ICML 2024.
[3] Publicly Detectable Watermarking for Language Models, arXiv 2024.
---
Rebuttal Comment 1.1:
Comment: As the discussion period is about to conclude, we would appreciate it if you could review our above response and let us know if our rebuttal addresses your concerns. Thanks! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms | Accept (poster) | Summary: The paper presents an alternative backpropagation scheme for deep learning with algorithmic losses that combines a preconditioned step on the loss with a gradient step on a least-squares objective. Two preconditioning methods are investigated: using the Hessian, or the empirical Fisher. Experiments demonstrate that the proposed plugin consistently improves the performance of algorithms across architectures and losses on two benchmarks. An ablation study of the potential additional Tikhonov regularization is given, as well as a discussion of runtime comparisons.
Strengths: - The main strength of the paper is its experimental evaluation. Two relevant benchmarks are analyzed: 4 losses are considered in the first benchmark, 3 in the second. Ablation studies and runtime comparisons provide a rather full picture of the algorithm.
- Overall the proposed method clearly provides gains across settings. Its theoretical motivation may be unclear but such experimental evidence invites for further research on the subject.
Weaknesses: - The soundness of the approach from a theoretical viewpoint is lacking. However, it is probably better to have clear experimental evaluations than wobbly theoretical explanations. And theoretical explanations can be given later.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why do the authors multiply an inverse of the Hessian with the gradients? Such operation amounts to solve a linear system and is always best handled with matrix-free linear algebra solvers that can make use of Hessian vector products.
- Can the approach be cast as appropriate block-coordinate minimization of the Lagrangian associated to the composition of the loss and the network? As the authors acknowledge if both steps in 2.a, 2.b are gradient steps, we retrieve the classical gradient back-propagation as demonstrated in [57]. Such a setting may help uncover the principles underlying the improved performance.
- I don't see how the learning rates were tuned in the experiments. Since it is generally a crucial hyperparameter for optimization methods, can the authors comment on this?
- The authors keep on mentioning vanishing/exploding gradient phenomena. Could the authors actually demonstrate this issue numerically? I suppose the vanishing/exploding gradient phenomena depends on the number of steps taken by the algorithm defining the loss. Maybe one could correlate the number of steps taken by the underlying algorithm and the improved performance of the proposed approach.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations (or rather scope) of the approach are discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments.
**Weaknesses**
> The soundness of the approach from a theoretical viewpoint is lacking. However, it is probably better to have clear experimental evaluations than wobbly theoretical explanations. And theoretical explanations can be given later.
Indeed, the focus of our paper is on the experimental part rather than on a detailed theoretical analysis (4 pages of experiments compared to 2 pages for the derivation of the method).
We consider Section 3.2 to be rather a derivation of the method than a theory section with theoretical guarantees.
**Questions**
> Why do the authors multiply an inverse of the Hessian with the gradients? Such operation amounts to solve a linear system and is always best handled with matrix-free linear algebra solvers that can make use of Hessian vector products.
In all our experiments, the cost of inverting the Hessian and performing the multiplication is negligible due to the sizes of the involved spaces. We will add a remark mentioning the solver-based alternative.
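As a sketch of the matrix-free alternative the reviewer mentions (not part of the paper's implementation; all names here are ours), a plain conjugate-gradient solve needs only Hessian-vector products and never materializes the inverse:

```python
import numpy as np

def cg_solve(hvp, g, tol=1e-12, max_iter=200):
    """Solve H x = g by conjugate gradients, given only a
    Hessian-vector-product callable hvp(v) = H @ v (H assumed SPD,
    e.g., after Tikhonov regularization)."""
    x = np.zeros_like(g)
    r = g - hvp(x)          # residual
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: a small SPD "Hessian" accessed only through products.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 1.0 * np.eye(5)   # Tikhonov-style regularization
g = rng.standard_normal(5)
x = cg_solve(lambda v: H @ v, g)
```

For the small loss-space dimensions in the paper's experiments an explicit inverse is cheap, but for larger spaces this matrix-free route is the standard choice.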
> Can the approach be cast as appropriate block-coordinate minimization of the Lagrangian associated to the composition of the loss and the network? As the authors acknowledge if both steps in 2.a, 2.b are gradient steps, we retrieve the classical gradient back-propagation as demonstrated in [57]. Such a setting may help uncover the principles underlying the improved performance.
This is an interesting question. As the reviewer correctly points out, if both steps (2a) and (2b) are gradient steps, this is equivalent to using a gradient descent step for (1), as shown in Lemma 2.
Beyond that special case, we are not aware if the combination of steps (2a) and (2b) can recover any other classical approaches, such as block-coordinate minimization. Our notation with the $\phi$-function may suggest a similarity to the (block)-coordinate descent method. However, unlike coordinate descent methods, we do not split our variables $x$ or $\theta$ into sub-coordinates.
> I don't see how the learning rates were tuned in the experiments. Since it is generally a crucial hyperparameter for optimization methods, can the authors comment on this?
For the sorting and ranking experiments, we use a learning rate of 0.001, which is an established value for 100,000-step training on the benchmark. We want to point out that, as we use Adam, the optimizer is (apart from numerical precision) agnostic to scalar factors of the gradient magnitude.
For the shortest path experiments, we determine the optimal initial learning rate for the baseline and then apply the same learning rate for Newton Losses; the values for SS / PO methods and the grid from AlgoVision is provided on page 15. The learning rate decay schedule is predefined by the benchmark.
> The authors keep on mentioning vanishing/exploding gradient phenomena. Could the authors actually demonstrate this issue numerically? I suppose the vanishing/exploding gradient phenomena depends on the number of steps taken by the algorithm defining the loss.
To illustrate the characteristics of the losses and their gradients, we provide 6 figures in the supplemental author response PDF page.
We illustrate the gradient of the NeuralSort and logistic DSN losses in dependence of one of the five input dimensions for the $n=5$ case.
In the illustrated example one can observe that both algorithms experience exploding gradients when the inputs are too far away from each other (which is also controllable via steepness / tau), see Figures (c) / (d).
Further, we can observe that the gradients themselves can be quite chaotic, making the optimization of the loss rather challenging.
In Figures (e) / (f), we illustrate how using the Fisher Newton Loss recovers from the exploding gradients experienced in (c) / (d).
> Maybe one could correlate the number of steps taken by the underlying algorithm and the improved performance of the proposed approach.
We agree; from Table 1, we can derive such a comparison: using a DSN with 10 inputs has twice the number of steps of a DSN with 5 inputs. Here, we can see that the performance improvements with 10 inputs are greater than the performance improvements in the 5 input case.
Please let us know whether you have any other questions or concerns regarding our paper and response, or whether we have successfully answered and resolved all of your questions and concerns.
---
Rebuttal Comment 1.1:
Title: Acknowledging rebuttal
Comment: I thank the authors for their answers to my questions and comments.
- Surely, the paper would benefit from a better theoretical understanding.
- That said, the paper presents an extensive set of experiments and clear improvements with the proposed method. These experimental results may be a base for a better understanding of the approach from a mathematical viewpoint. Numerous experiments diagnose the approach to build upon it. Though I would not argue for a strong accept, I believe the value of this experimental work should not be completely dismissed and I maintain my score.
- Regarding the regularization comments of reviewer tok2, some recent works have analyzed the use of quadratic regularizations of Newton method with proven convergence guarantees (see [1] and further references of [1]). Most importantly, cubic regularizations can be expensive to compute and so not practical. Given that this paper is interested in practical improvements, analyzing the performance with either empirical Fisher* or Newton's method with quadratic regularization is valuable empirically.
[1] Super-Universal Regularized Newton Method, https://arxiv.org/pdf/2208.05888v1
* To avoid further confusion about the Fisher matrix and its "empirical part", I would suggest the authors to rather speak about the second (uncentered) moments of the gradients.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer nJzH,
thank you for acknowledging our rebuttal and your positive remarks.
Thanks for the reference to [1], we will add a discussion to the camera-ready.
Also, thanks for the "second (uncentered) moments" suggestion; we will adjust the paper accordingly.
---
(Meta: Regarding your "Though I would not argue for a strong accept", we just wanted to make sure that you are aware that between the currently selected "weak accept" and a "strong accept", there is also the score "accept" this year.) | Summary: The paper proposes second-order optimization with splitting for hard objectives that arise as smoothing of such hard problems as sorting and ranking to address the problem of vanishing/exploding gradients.
Strengths: It is a well-written and very complete description of algorithms for reproducibility, which is a very good thing in itself.
Weaknesses: 1. Insufficient experiments. I'd appreciate adding a comparison here with the SFA technique from there, as it will rely only on first-order information: https://arxiv.org/pdf/2003.02122
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Considering Newton's method in the case of non-convex objectives is a mistake, no matter how much it is regularized, as long as the regularization is L2. Had the authors considered cubic regularization? E.g., https://link.springer.com/article/10.1007/s10107-006-0706-8
2. Adding to the first question, for ranking objectives, Hessians are expected to converge to zero. Have you considered an increasing learning rate schedule? I am somewhat sure that this hessian/fisher type of method, due to vanishing gradients, also vanishes, resulting in effectively increasing the learning rate up to $\propto \lambda^{-1}$. I will appreciate experiments against the first-order method with the learning rate scheduler growing up to that value.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments.
**Weaknesses**
> 1. Insufficient experiments. I'd appreciate adding a comparison here with the SFA technique from there, as it will rely only on first-order information: https://arxiv.org/pdf/2003.02122
Thank you for this suggestion. We will include this paper in the related work.
However, we were not able to find any source code for this work, and reimplementing it from scratch within the author response period is not feasible.
We will contact the authors regarding the source code, and if possible include these results in the camera-ready.
Nevertheless, we would like to point out the extensiveness of our experimental results, as we apply Newton losses to 11 algorithms, which comprises direct comparisons to the methods of 6 published articles (and on the benchmarks that these works utilize).
**Questions**
> Considering Newton's method in the case of non-convex objectives is a mistake, no matter how much it is regularized, as long as the regularization is L2. Had the authors considered cubic regularization? E.g., https://link.springer.com/article/10.1007/s10107-006-0706-8
We thank the reviewer for highlighting Nesterov's work on cubic regularization. As noted in the mentioned paper, various regularization techniques can be applied when using the Newton method for non-convex or ill-conditioned problems. The most classical technique is Tikhonov regularization (or Levenberg-Marquardt regularization). Given that our Tikhonov-regularized version already demonstrates good empirical performance, we did not explore more sophisticated regularization techniques such as cubic regularization. However, this is indeed an interesting direction for future research.
We also want to point out that we always include results with the empirical Fisher, which, by definition, is always positive semi-definite.
> Adding to the first question, for ranking objectives, Hessians are expected to converge to zero. Have you considered an increasing learning rate schedule? I am somewhat sure that this hessian/fisher type of method, due to vanishing gradients, also vanishes, resulting in effectively increasing the learning rate up to $\propto \lambda^{-1}$. I will appreciate experiments against the first-order method with the learning rate scheduler growing up to that value.
As the optimizer for all experiments in this paper is the Adam optimizer, such effects can largely be excluded, as multiplying the loss by a factor does not affect the updates (apart from numerical precision effects).
Using the Adam optimizer is the established experimental setting on each of these benchmarks.
In Adam, the sign of the gradient and its magnitude relative to previous steps are the important factors.
Thus, if the gradient magnitudes increase or decrease over time by a certain factor, it should not significantly affect training, as Adam normalizes for gradient magnitudes over time (via the square root of the uncentered second-moment component).
As an ablation study prior to writing the paper (not included in the paper), we actually multiplied the loss value (and thus also the gradient) by various factors, and observed that this had no effect on optimization.
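This scale-invariance can be checked directly with a minimal Adam implementation (a sketch of ours, not the paper's code): feeding the same gradient stream scaled by 10 produces updates that coincide up to the tiny epsilon term.

```python
import numpy as np

def adam_updates(grads, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """Return the sequence of Adam parameter updates for a 1-D gradient stream."""
    m = v = 0.0
    out = []
    for t, g in enumerate(grads, start=1):
        m = b1 * m + (1 - b1) * g          # first moment
        v = b2 * v + (1 - b2) * g * g      # uncentered second moment
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        out.append(lr * m_hat / (np.sqrt(v_hat) + eps))
    return np.array(out)

grads = np.array([0.5, -0.2, 0.8, 0.1])
u1 = adam_updates(grads)
u2 = adam_updates(10.0 * grads)   # loss (and hence gradient) scaled by 10
```

Scaling the gradient by a constant c scales m_hat by c and sqrt(v_hat) by c, so the ratio, and thus the update, is unchanged apart from the eps in the denominator.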
Please let us know whether you have any other questions or concerns regarding our paper and response, or whether we have successfully answered and resolved all of your questions and concerns.
---
Rebuttal Comment 1.1:
Comment: >We thank the reviewer for highlighting Nesterov's work on cubic regularization. As noted in the mentioned paper, various regularization techniques can be applied when using the Newton method for non-convex or ill-conditioned problems. The most classical technique is Tikhonov regularization (or Levenberg-Marquardt regularization). Given that our Tikhonov-regularized version already demonstrates good empirical performance, we did not explore more sophisticated regularization techniques such as cubic regularization. However, this is indeed an interesting direction for future research.
That is the point: you'll need to over-regularize, offsetting your negative spectrum entirely, which happens to be as large as the positive one, since smoothed objectives are necessarily hyperbolic in nature. So while "you can", it's an absolutely meaningless type of regularization in this case.
>As the optimizer for all experiments in this paper is the Adam optimizer, such effects can be rather excluded, as multiplying the loss by a factor does not affect updates (apart from numerical precision effects).
It's about loss <-> lr scaling, not about the fact that it actually will have an effect if you make learning rate scheduled
I'll keep my score as is.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer tok2,
thank you very much for responding to our rebuttal, and for your clarification.
> That is the point that you'll need to overregularize offsetting entirely your negative spectrum, which happens to be as large as positive one, as smoothed objectives are necessarily hyperbolic in nature. So while "you can" it's absolutely meaningless type of regularisation in this case.
We would kindly like to point out that the empirical Fisher is always PSD, which resolves this concern for the Newton Loss (Fisher).
For the Hessian case, you are right that it could require overregularizing, but the empirics show that using the Hessian still works very well, in most cases even better than the Fisher variant.
> [...] resulting in effectively increasing the learning rate up to $\propto \lambda^{-1}$. I will appreciate experiments against the first-order method with the learning rate scheduler growing up to that value.
>
> [...] about the fact that it actually will have an effect if you make learning rate scheduled
We apologize as we might have misunderstood you earlier.
To address this concern with concrete experiments, we have now run a set of experiments with learning rate scheduling.
In particular, we schedule the learning rate, as suggested to grow the learning rate to initial learning rate $\cdot\lambda^{-1}$.
We use both linear and exponential growth for the learning rate. Further, we use NeuralSort (n=5), and ran experiments for both $\lambda=0.1$ and $\lambda=0.01$, covering the lambdas of both the Hessian Newton Loss and the Fisher Newton Loss.
We ran each experiment for 5 seeds.
| Method | Full Rankings Acc. | (Individual Ranks Acc.) |
|--|--|--|
| Baseline | 71.33±2.05 | 87.10±0.96 |
| Hessian Newton Loss (ours) | 83.31±1.70 | 92.54±0.73 |
| Fisher Newton Loss (ours) | 83.93±0.62 | 92.80±0.30 |
| GrowLR $\propto\lambda^{-1}$ ($\lambda=0.1$, linear) | 69.99±3.17 | 86.39±1.58 |
| GrowLR $\propto\lambda^{-1}$ ($\lambda=0.1$, exponential) | 70.66±1.21 | 86.73±0.58 |
| GrowLR $\propto\lambda^{-1}$ ($\lambda=0.01$, linear) | 68.44±3.40 | 85.68±1.64 |
| GrowLR $\propto\lambda^{-1}$ ($\lambda=0.01$, exponential) | 68.19±1.73 | 85.48±0.86 |
This shows that introducing a learning rate scheduler as suggested does not improve performance, and that the method is not "resulting in effectively increasing the learning rate up to $\propto \lambda^{-1}$".
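For concreteness, the two growth schedules used in the experiments above can be sketched as follows; the function name and signature are illustrative and not from our codebase:

```python
def grow_lr(step, total_steps, base_lr, lam, mode="linear"):
    """Grow the learning rate from base_lr up to base_lr / lam over training.

    Illustrative sketch: `lam` plays the role of the regularization strength
    lambda, so the final learning rate equals base_lr * lam**-1.
    """
    factor = 1.0 / lam                  # target multiplier, e.g., 10x for lam = 0.1
    t = step / max(total_steps - 1, 1)  # training progress in [0, 1]
    if mode == "linear":
        return base_lr * (1.0 + t * (factor - 1.0))
    if mode == "exponential":
        return base_lr * factor ** t
    raise ValueError(mode)
```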
In case it is relevant to you, we also extend the table with loss factors (multiplying the loss by 0.1 or 10, and thus effectively multiplying the gradient by the same factor), which shows that loss factors do not have any significant effect.
| Method | Full Rankings Acc. | (Individual Ranks Acc.) |
|--|--|--|
| Loss factor 0.1 | 72.04±1.24 | 87.43±0.61 |
| Baseline (loss factor 1.0) | 71.33±2.05 | 87.10±0.96 |
| Loss factor 10.0 | 71.43±1.25 | 87.17±0.60 | | Summary: The paper proposes a new method to optimize complex, possibly non-smooth, algorithmic losses for neural networks. The approach is based on splitting the problem into a two-step procedure: in the first step, the so-called Newton loss is constructed and optimized, and the second step is an SGD-type procedure for an MSE loss with respect to the first step. The authors present a wide experimental comparison of the proposed Fisher and Newton approaches with existing methods.
Strengths: The paper has a strong literature review and motivation for solving different applications. The experimental section is well described, it contains 4 different non-trivial problems to solve. For the presented cases, the proposed methods outperform the baselines.
Weaknesses: The paper does not contain any proofs or convergence guarantees. The mathematical formulation of the main problem is also quite confusing for me.
For example, is vector $x$ fixed for all steps or is it a batch of data? Is it a sum-type problem or an expectation problem? What are the properties of $l(\cdot)$? Is it differentiable, smooth? Some parts of the text say that the loss is non-smooth, yet later the Hessian of such a function is calculated.
In Formulas 1 and 2, it is not clear what are the fixed parameters or data. Should $\theta$ in 2a be $\theta_{t-1}$? Also, I think the mention of Lemma 2 in the main text could be very helpful.
For the experimental section, personally, it feels that most of the space is taken by the description of the problems and the setup rather than the actual comparison. As the paper is mostly experimental and empirical, one would expect a broader comparison of the proposed methods across multiple benchmarks. There are no convergence figures with per-iteration or per-gradient performance. As the authors claim, the main issues in the existing approaches are vanishing and exploding gradients. However, I didn’t find any clipping method in the comparison, which is a possible solution for exploding gradients.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weaknesses
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer for dedicating their time to evaluate our work and helping us improve it further. Below, we have addressed your insightful comments.
**Weaknesses**
> The paper does not contain any proofs or convergence guarantees.
It is important to keep in mind that the methods to which we apply Newton Losses do not come with any theoretical convergence guarantees by themselves, even for plain SGD optimization; this stems from the non-convexity and complexity of such algorithmic loss functions.
Providing convergence guarantees for a simplification that lies outside the space of our considered losses (e.g., a convex original loss function), while trivial, would in our opinion be rather misleading.
> For example, is vector $x$ fixed for all steps or is it a batch of data?
We apologize for any confusion. The input vector $x$ throughout our paper refers to a batch of data. We will clarify this in the revised version.
> Is it a sum-type problem or an expectation problem?
We generally consider non-separable loss functions. For example, for the ranking supervision experiments, the relation between individual samples and the sorted population is considered. Thus, these ranking losses cannot be expressed as a sum over individual samples. Between independent sets, however, the loss can be expressed as an expectation.
> What are the properties for $l(.)$? Is it differentiable, smooth? Because some parts of the text said that the loss is non-smooth and later we calculate the Hessian of such a function.
We apologize for the confusion. We assume that the loss function is twice differentiable, ensuring the existence of a Hessian, or at least differentiable for ensuring the existence of the gradient and empirical Fisher matrix.
For settings with non-smooth objectives (e.g., shortest-path problem), we apply smoothing techniques to approximate the loss with a smooth counterpart, such as stochastic smoothing (see Section 4.2.2), such that the loss function $\ell(\cdot)$ to which we apply Newton Losses is smooth.
We will clarify this in the revised version.
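To illustrate the kind of stochastic smoothing we mean, here is a Monte-Carlo estimate of the Gaussian-smoothed loss $\ell_\sigma(x)=\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}[\ell(x+\sigma\epsilon)]$; the function name and parameters are illustrative, not from our implementation:

```python
import numpy as np

def smoothed_loss(loss_fn, x, sigma=0.1, n_samples=1000, seed=0):
    """Monte-Carlo estimate of the Gaussian-smoothed loss
    E_eps[ loss_fn(x + sigma * eps) ], which is smooth in x
    even when loss_fn itself is non-smooth."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples,) + np.shape(x))
    return float(np.mean([loss_fn(x + sigma * e) for e in eps]))
```

For example, smoothing the non-differentiable $|x|$ at $x=0$ yields approximately $\sigma\sqrt{2/\pi}$.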
> In Formulas 1 and 2, it is not clear what are the fixed parameters or data. Should $\theta$ in 2a be $\theta_{t-1}?
Thank you for spotting this typo.
Indeed, in Equation (2a), $\theta$ should be $\theta_{t-1}$. We will fix it.
> Also, I think the mention of Lemma 2 in the main text could be very helpful.
Thank you for the suggestion. We will provide a discussion of Lemma 2 in the main text in the revised version.
> For the experimental section, personally, it feels that the most of space is taken by the description of the problems and the setup and not the actual comparison. As the paper is mostly experimental and empirical, one would expect a better comparison of the proposed methods with the multiple benchmarks.
Thank you for this suggestion.
As we have an additional page for the camera-ready, we will extend the discussion of the results and actual comparisons in the camera-ready; however, we feel that it remains important to maintain the detailed discussions of the benchmarks as well as continuous relaxations that we apply Newton Losses to.
> There are no convergence figures with the per-iteration or per-gradient performance.
Could you please clarify what precisely you would like us to plot, considering that we show a per-epoch convergence plot in Figure 3?
> As the authors claim, the main issues in the existing approaches are vanishing and exploding gradients. However, I didn’t find any clipping method for the comparison, which are the possible solutions for exploding gradients.
Thank you for this suggestion.
On these types of algorithmic loss benchmarks, to our knowledge, gradient clipping has not been previously considered in the literature.
If you do have a respective reference, we would greatly appreciate it.
Nevertheless, we have implemented gradient clipping for Table 1:
| n | NeuralSort | SoftSort | Logistic DSN | Cauchy DSN |
|--|--|--|--|--|
| n=5 | $73.79\pm3.79 (88.23\pm1.84)$ | $71.76\pm1.39 (87.22\pm0.64)$ | $61.28\pm16.67 (81.54\pm8.97)$ | $84.75\pm0.54 (93.13\pm0.30)$ |
| n=10 | $25.02\pm6.17 (74.56\pm3.29)$ | $28.85\pm6.20 (76.68\pm3.05)$ | $22.60\pm1.54 (73.51\pm0.87)$ | $54.30\pm2.12 (86.65\pm0.80)$ |
The results show consistent improvements for the first 3 algorithmic losses, but unfortunately clipping degrades performance for Cauchy DSNs, even compared to the vanilla method.
These results are averaged over 5 seeds.
**In each case, Newton Losses performs substantially better than gradient clipping.**
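For reference, the clipping we applied is the standard global-norm rule (as in, e.g., `torch.nn.utils.clip_grad_norm_`); a minimal numpy sketch, with an illustrative `max_norm` argument, might look like:

```python
import numpy as np

def clip_grad_norm(grads, max_norm):
    """Rescale a list of gradient arrays so that their global L2 norm
    is at most max_norm; gradients below the threshold pass unchanged."""
    total_norm = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    scale = min(1.0, max_norm / (total_norm + 1e-12))
    return [g * scale for g in grads]
```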
Please let us know whether you have any other questions or concerns regarding our paper and response, or whether we have successfully answered and resolved all of your questions and concerns.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
thank you for the detailed answers and comments.
*Could you please clarify what precisely you would like us to plot, considering that we show a per-epoch convergence plot in Figure 3?*
The same as Figure 3, but for all problems. They can be presented in the Appendix for the lack of space. Also, the dependence on gradient/Hessian-vector products or time performance could be interesting to have.
Regarding the rebuttals, I would like to increase my score from 4 to 5.
---
Reply to Comment 1.1.1:
Title: Thank you for your response and increasing your score.
Comment: Dear Reviewer qoSU,
Thank you very much for responding to our rebuttal and for increasing your score.
To address your clarification regarding the plot request, we will include plots analogous to Figure 3 for all problems in the appendix of the camera-ready version.
We can also make additional versions with time on the x-axis. Regarding plotting the "dependence on gradient/Hessian-vector products", could you clarify what you mean so that we can also include it in the appendix? Was "dependence on gradient/Hessian-vector products" meant as equivalent to "time performance", or did it refer to, e.g., the number of floating-point operations?
Best regards,
The Authors
---
Rebuttal 2:
Comment: Dear Reviewer qoSU,
Thank you very much for the clarification! Accordingly, we will include both per-epoch and per-time plots in the appendix of the camera-ready version.
Please let us know in case any further questions or concerns come up. | null | null | Rebuttal 1:
Rebuttal: 1 page rebuttal attachment: Illustrations of the gradient of the NeuralSort and logistic DSN losses.
Pdf: /pdf/2fd9c91b7402bb975771152d2e88e3c2fa1ebdae.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improved off-policy training of diffusion samplers | Accept (poster) | Summary: The paper studies the problem of training diffusion models to sample from a target distribution. The contributions are summarized as follows:
1. A codebase is provided for the study of diffusion-based samplers, due to the issue of inconsistent experimental settings in previous research;
2. Exploration in the target space can be enhanced by GFlowNet-based off-policy training objectives and local search with the use of replay buffer.
3. Experimental results validate the effectiveness of the proposed approach.
Strengths: Sampling from a target distribution can be challenging in high-dimensional spaces, especially when the distribution of interest has many separated modes. This paper explores diffusion models to address this challenge. Unlike existing reverse KLD-based methods, such as PIS and DDS, this paper considers GFlowNet-based training objectives (e.g., trajectory balance, sub-trajectory balance), which enable off-policy training. This means that training trajectories are not necessarily from the current forward process, thus enhancing exploratory capability. Additionally, local search using a replay buffer can further enhance exploration in the target space. In general, the paper is well-written and well-organized.
---
**After rebuttal:** I will increase my score to 7. Typos or incorrect writing should be corrected upon acceptance.
---
Weaknesses: Please see the below questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: - In terms of Table 1, which proposed methods perform best according to $\log Z^{LB}$ and $\log Z^{RW}$? Both evaluation metrics reflect different aspects of performance. For example, $\log Z^{LB}$ should indicate mode collapse behaviour, with lower values suggesting more serious mode collapse (e.g., PIS+LP: 13.19 vs. TB+Expl: 4.01, as illustrated in Figure 1)? In contrast, lower $\log Z^{RW}$ values suggest a closer approximation to the ground-truth $\log Z$.
- In terms of C2 variance-preserving noising process,
- As far as I understand from the paper, GFlowNet uses PIS architectures, where the initial state is a point mass at 0. How did you design it under VP settings, where the marginal distribution $p_{t}^{ref}$ is an invariant Gaussian distribution, i.e., the initial state is Gaussian distributed, not a point mass at 0?
- In VP settings, we gradually add more and more noises to data (data --> Gaussian: $\beta$ increases). However, the sampling process starts with simple Gaussians and ends with samples, so should $\beta$ decrease, when Gaussian --> data?
- I am curious if the authors have ever tried the detailed balance objective, due to its superior generalization capability (https://arxiv.org/pdf/2407.03105, different settings though)?
- The results for DIS and DDS over Manywell are still missing compared to the previous version. However, the DIS paper does include the results. Any reasons?
- Missing reference - Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling. Both papers share the same goal. It would be better to discuss it in the paper.
Potential typos:
- Eq.7: To define the reverse KL, a distribution is missing between $\int$ and $\log$? Why does $\mu_{0}(x_{0})$ appear before $d x_{\Delta t}$?
- Eq.8: $d x_{0}$ is missing?
- Eq.10 & 11: $p_{target}(x_{1})$ --> $R(x_{1})$?
- Line 181: $Z_{\theta}$ is missing in the equation?
- Eq.12: $f(x_{m})$ --> $f(x_{m \Delta t})$, as well as $f(x_{n})$?
- Line 209: $I_{d}$ is missing in $p_{t}(x_{t})$?
- Line 290 & 294: $\log$ is missing before $\int$, and $P_{F}(\tau | x_{1})$ --> $P_{F}(\tau | x_{0})$?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. Limitations were included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and positive assessment of the paper.
### Evaluation metrics
Thank you for your question regarding the different aspects of the methods' behaviours, as measured by the various evaluation metrics (here, $\log \hat Z^{\rm RW}$ and $\log \hat Z$). We agree with your suggestion that one of them reflects mode-collapse behaviour, while the other reflects the quality of the partition-function approximation.
However, for measuring both mode collapse and mode coverage, we also have a few other options. If we have access to ground-truth samples, sample-based metrics (such as those typically used for assessing generative models trained from data) are available. One of them is $\mathcal{W}_2^2$, which we already included in Appendix C.1.
Other options are, e.g., EUBO, ESS, MMD, and EMC, which are presented in the very interesting "Beyond ELBOs" paper you mentioned (which appeared after the submission deadline, but of which we were already aware). The question of which approach is best under each metric is already a hard one. However, we noticed that $\log\hat Z^{\rm RW}$ results are better for the methods using the Langevin Parametrization (e.g., together with Local Search), whereas $\log \hat Z$ performance is better when the method utilizes Local Search.
### VP setting
Good question and observations! As you noticed, we used the PIS model architectures, but changed the noising (backward) process to add increasing levels of noise and made the initial state Gaussian-distributed and not a point mass at 0 (which can be thought of as a fixed first transition from an abstract initial state to a sample from the Gaussian). The exact procedure is presented in Appendix C.2.
### DB objective
The DB objective fails to discover multiple modes even on the simplest task (25GMM) and thus performs significantly worse than SubTB, which is in turn worse than TB. We guess that this is due to poor credit assignment over long trajectories, as the learning signal to the policy is mediated by learning of the intermediate flow functions (densities).
The preprint you mention tests generalization abilities only in a single environment, which (1) is discrete, (2) has much shorter and variable-length trajectories, (3) is simple enough that all learning algorithms converge to the optimum when modes are not masked. We guess that all three of these limit the generalizability of those results to the diffusion case.
However, it would indeed be interesting, in future work:
- to study the regularizing effect of local-in-time objectives like DB in the diffusion sampler setting, as has been done in the standard diffusion setting in [Lai et al., "FP-Diffusion", ICML'23];
- to understand the meaning of these objectives in the continuous-time limit.
### Missing results
We will include the missing results for DIS and DDS on Manywell. The main reason is that our (and, e.g., PIS's) Manywell setting is slightly different than the one used in the DIS paper. We are currently working on making DDS and DIS work on our Manywell version and plan to share those results before the end of the discussion period.
### Missing reference
Thank you for pointing out this paper. We are aware of it and will happily include it in the revised version of our paper. (Note that that paper appeared only on June 11, after the NeurIPS submission date.)
### Potential typos
Thank you for reading carefully! We fixed the small mistakes.
**Thank you again, and please let us know if we can provide any further information during the discussion period.**
---
Rebuttal Comment 1.1:
Title: Additional DDS+DIS results
Comment: As promised, here are the results for DDS and DIS on Manywell.
First, we clarify the difference between the task as considered in DIS and in this work, noting that both have been studied in other papers as well and neither choice seems obviously better.
| | Ours | DIS (hardest version) |
|---|---|---|
| Dimension | 32 | 50 |
| No. of double wells | 16 | 5 |
| No. of modes | $2^{16}$ | $2^5$ |
We consider Manywell as a product of 16 double well distributions having unnormalized density given by $R(x,y) = \exp(-x^4 + 6x^2 + 0.5x - 0.5y^2)$, following [Midgley et al., "Flow annealed importance sampling bootstrap", ICLR'23], whereas in the DIS paper, the authors consider the more general case of the distribution being a product of some number of double well distributions. However, despite considering this problem in higher dimensionality, they stack a smaller number of double wells (no more than 5), which makes the number of modes in the distribution much smaller.
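For clarity, the unnormalized log-density of this Manywell variant can be sketched directly from the formula above; the interleaved pairing of coordinates into the 16 double wells is our assumption about the layout:

```python
import numpy as np

def log_reward_manywell(x):
    """Unnormalized log-density of the 32-d Manywell target: a product of
    16 independent double wells with R(x, y) = exp(-x^4 + 6x^2 + 0.5x - 0.5y^2).
    Input x has shape (..., 32); coordinate pairing is assumed interleaved."""
    x = np.asarray(x)
    a, b = x[..., 0::2], x[..., 1::2]  # 16 assumed (x, y) pairs
    return np.sum(-a ** 4 + 6 * a ** 2 + 0.5 * a - 0.5 * b ** 2, axis=-1)
```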
We ran DDS and DIS in our setting (after searching for the best hyperparameters, DDS was still somewhat unstable and often encountered an explosion of the loss and mode collapse, in which case we took results from the preceding checkpoint):
| Algorithm | $\Delta\log Z$ | $\Delta\log Z^{\rm RW}$ | ${\cal W}_2^2$ |
|---|---|---|---|
| DIS (4 runs) | 10.52 ± 1.02 | 3.05 ± 0.46 | 5.98 ± 0.46 |
| DDS (3 runs) | 7.36 ± 2.43 | 0.23 ± 0.05 | 5.71 ± 0.16 |
| VarGrad + LP + LS (ours) | 4.11 ± 0.45 | 0.02 ± 0.21 | 5.30 ± 0.02 |
We see that our best method outperforms both DIS and DDS in both metrics, but both are comparable with the various baselines in Table 1 of the paper.
We hope you find this comparison helpful. Thank you again for your time.
---
Rebuttal 2:
Comment: Thank you for your detailed responses, and I apologize for the delayed reply.
### Evaluation metrics
- I agree that if ground-truth samples are available, then we can use sample-based metrics for evaluation.
- I am still a bit confused. For example, if we look at TB + Expl. + LS (ours), we see that $\Delta \log Z = 0.171$ and $\Delta \log Z^{\mathrm{RW}} = 0.004$, which are the best in the column. We might conclude that it performs the best? It gives the lowest approximation error for $\log Z$, while suffering from more serious mode collapse?
### VP setting
> Good question and observations! As you noticed, we used the PIS model architectures, but changed the noising (backward) process to add increasing levels of noise and made the initial state Gaussian-distributed and not a point mass at 0 (which can be thought of as a fixed first transition from an abstract initial state to a sample from the Gaussian). The exact procedure is presented in Appendix C.2.
- Do you mean that the policy at the first step, from $s_{0} = ((0, 0), 0)$ to $(x_{1}, 1)$, is constrained to be a unit Gaussian, i.e., $P_{F}((x_{1}, 1) \mid ((0, 0), 0)) := P(x_{1})$, while subsequent steps are conditional Gaussians with a known variance, i.e., $P_{F}(\cdot \mid (x_{t}, t))$?
> In VP settings, we gradually add more and more noises to data (data --> Gaussian: $\beta$ increases). However, the sampling process starts with simple Gaussians and ends with samples, so should $\beta$ decrease, when Gaussian --> data?
- In terms of the equation below Eq.(16), we see that $\beta$ increases from $0.01$ to $4$, implying that we are adding more and more noise for the forward policy. This approach works for diffusion generative modeling. However, the sampling process starts with simple Gaussians and ends with samples, so shouldn't $\beta$ decrease (i.e., $\beta$ decreases from $4$ to $0.01$)? This is my previous concern.
[Updated] Thank you for providing new results.
---
Rebuttal Comment 2.1:
Comment: Thank you for following up.
**On the evaluation metrics:** we are not sure we understand your question. In fact, "TB + Expl. + LS (ours)" shows **less** serious mode collapse and correspondingly lower $\log Z$ estimation error.
**On the VP setting:**
> Do you mean that the policy at the first step ... is constrained to be a unit Gaussian?
Yes, precisely as you wrote (assuming integer indexing of time steps).
> However, the sampling process starts with simple Gaussians and ends with samples, so shouldn't $\beta$ decrease?
We have mistakenly written the noise schedule in equation (16) **in reverse** (i.e., the time indexing convention for diffusion models trained from data, not the one for diffusion samplers). In fact, $\beta$ decreases linearly from 4 to 0.01 (as we have just confirmed in the code that was run for the experiments).
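A minimal sketch of the corrected schedule, with $\beta$ decreasing linearly from 4 to 0.01 over the sampling trajectory (the step count is illustrative):

```python
import numpy as np

def beta_schedule(num_steps=100, beta_start=4.0, beta_end=0.01):
    """VP noise schedule in sampler time ordering (Gaussian -> data):
    beta decreases linearly from beta_start to beta_end."""
    return np.linspace(beta_start, beta_end, num_steps)
```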
Thank you again for reading so carefully.
---
Rebuttal 3:
Comment: That task (25GMM) is easy enough that none of the methods in the last group (using either LS or LP) exhibit mode collapse -- they sample points around each of the 25 modes.
Note that:
- a sampler that finds 9 of 25 modes (the inner 3-by-3 square) will have a $\log Z^{\rm RW}$ estimation error of about $-\log\frac{9}{25}\approx0.44$ (as TB and VarGrad without any extensions do);
- a sampler that finds only one mode will have an error of about $-\log\frac{1}{25}\approx1.40$, and two modes $-\log\frac{2}{25}\approx1.10$ (and the methods with error >1 do indeed usually find one or two modes).
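These figures can be checked with a one-line computation; note that a base-10 logarithm reproduces the quoted values, which is our assumption about the convention used:

```python
import math

def logZ_rw_error(modes_found, total_modes=25):
    """Log-partition underestimation for a sampler that evenly covers only
    modes_found of total_modes equally weighted modes (25GMM example).
    Base-10 log assumed, matching the quoted figures."""
    return -math.log10(modes_found / total_modes)
```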
---
Rebuttal Comment 3.1:
Comment: Thank you for taking the time to clarify. I am willing to increase my score to 7. Typos or incorrect writing should be corrected upon acceptance.
---
Reply to Comment 3.1.1:
Comment: Much appreciated, and thank you yet again for your detailed reading and finding all the small mistakes that we missed. | Summary: This paper proposes an off-policy diffusion-based sampler training method to match a target distribution and a corresponding exploration strategy and credit assignment to improve it.
Strengths: 1. The proposed idea of this paper is interesting, which connects the Euler-Maruyama sampler and GFlowNets.
Weaknesses: 1. Although the authors mention that traditional MCMC methods have a high sampling cost, the proposed method based on neural SDEs seems to still have this problem. To the reviewer’s knowledge, solving a neural SDE is time-consuming as well.
2. The experimental target distributions also seem relatively simple. In the reviewer’s opinion, for a GMM, we can first sample a mode according to the weights of the different modes and then obtain a sample within that mode. Hence, it seems unnecessary to use a complex model like diffusion.
3. In many real-world applications like image generation, the pdf (may be unnormalized) of the target distribution is unavailable and we can only achieve data samples from the target distribution. Hence, the application scenarios of the proposed model are limited.
4. Besides, as mentioned in the conditional sampling case, the proposed method seems to need an extra trained VAE to perform sampling. However, the VAE can directly perform image generation. In that case, what is the real contribution of the proposed method?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Biological sequence design seems a more appropriate validation benchmark for the proposed model, as it is also considered in GFlowNets. So the reviewer wonders how the proposed method compares with GFlowNets on such tasks.
2. Could the authors explain why they use the log-partition function estimation error as the metrics rather than the log-partition function itself? Similar to MLE (Maximum Likelihood Estimation), the model can be considered better with higher $\log Z$.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: See Weakness 3.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. Below we've tried to address what we believe to a number of misunderstandings.
> Strength: The proposed idea of this paper is interesting, which connects the Euler-Maruyama sampler and GFlowNets
First, we'd like to point out that this paper is not the first to connect Euler-Maruyama samplers of SDEs with GFlowNets (this was already done in [41] and a few other papers). The goal of this paper is to study methods that, for the first time, achieve results competitive with simulation-based methods on several continuous sampling benchmarks.
Next, we answer the rest of the points:
### Applicability of diffusion samplers is limited
We respectfully disagree:
You are correct that there are many domains where we want to train a generative model from ground truth samples. That is an easier problem than sampling from unnormalized densities, which is perhaps one reason for the greater attention it receives.
However, many important Bayesian inference problems require sampling distributions that are only available through unnormalized density functions, and sampling methods for such distributions are an active area of research. As noted by the other reviewers:
> The studied problem of sampling from a distribution is an important issue with a long history in statistical inference (**6eYp**)
> Enabling diffusion models to be efficiently applied to sampling from unnormalized probability distributions is a problem with high potential for impact (**5SPJ**).
A few such problems **beyond** Bayesian deep learning are simulations in statistical physics (e.g., [Nicoli et al., "Asymptotically unbiased estimation of physical observables with neural samplers", Physical Review E, 2020]), molecular dynamics problems (e.g., [Noé et al., "Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning", Science, 2019]), imaging inverse problems in many scientific fields, etc. In fact, it is these problems that have motivated the benchmarks that are adopted in the literature for **entirely data-free** inference methods such as ours (e.g., Manywell is a simple statistical physics model).
### Neural SDE integration is slow compared to MCMC
This is inaccurate. The solution of a neural SDE requires as many neural net evaluations as the number of discretization steps chosen (for us, this is 100, in line with past work). This is *far* less expensive than the time to run MCMC chains to convergence on any nontrivial density.
### Target distributions are simple
We kindly direct you to the response to all reviewers.
### Why test on a Gaussian mixture?
Of course, you are correct that a Gaussian mixture density does not require a diffusion model to sample. However, 2D Gaussian mixture densities have been used as a benchmark in many past works ([41,85,87], among others) to test the ability to discover separated modes in an easily interpretable setting.
### The proposed method seems to need an extra trained vae to perform sampling
This question appears to rest on a misunderstanding about what that experiment is showing. The goal there is to *learn an encoder for a trained VAE*, which is an established problem in the diffusion samplers literature (e.g., [87]). The goal is **not** to unconditionally sample new images, which could indeed be done by simply decoding noise samples with the pretrained decoder. Here, we need a pretrained decoder to provide the reward for the diffusion encoder that we train (the reward is the joint likelihood -- the product of the decoder likelihood and the prior density).
### Comparing with GFlowNets for biological sequence design
There seems to be a misunderstanding here. Biological sequence design is a **discrete** sampling problem in which GFlowNets have indeed been used successfully. The goal of this paper is precisely to study the adaptation of GFlowNets and related methods to **continuous** sampling problems.
### $\log Z$ estimation error
There may be some misunderstanding about the meaning of this metric. The metrics are **variational lower bounds** on $\log Z$, meaning that they are lower than the true log-partition function in expectation. The estimation error is a more meaningful metric than the raw value because the best possible value (0) is known and invariant to additive changes to the target log-density, which do not change the target distribution.
When the true $\log Z$ is not known, one can report the estimate, not the estimation error, as we indeed do in some of the experiments (LGCP in Table 1 and VAE in Table 2).
**Thank you again for your feedback. Please let us know if we have misunderstood anything in your comments and we will be happy to provide further clarification. If we have sufficiently clarified these points and answered your questions, please consider updating the score.**
---
Rebuttal 2:
Comment: Thanks for the detailed responses. However, the reviewer still considers it necessary to conduct some experiments on high-dimensional real-world applications to show the applicability of the proposed method, or to make a substantial theoretical contribution like DDS. Otherwise, it is hard to judge whether the proposed method is useful and efficient.
In that case, the reviewer decides to reject this paper and wishes the authors could find appropriate complex real-world applications for their method later. Moreover, I guess a potential application for your method is the robot locomotion task, which is considered in CFlowNets [1].
[1] Li Y, Luo S, Wang H, et al. Cflownets: Continuous control with generative flow networks[J]. arXiv preprint arXiv:2303.02430, 2023.
---
Rebuttal Comment 2.1:
Comment: Thank you for following up.
As pointed out in [41, Section 3.1], the referenced paper [1] makes some serious errors in math -- incorrectly doing a change of variables in an integral -- that make the algorithm unsound. Formulating this task as a continuous sampling problem requires access to the Jacobian of the environment’s transition function, which isn’t available.
As noted in the initial response, we have demonstrated the effectiveness of our sampling method on a standard collection of tasks in the diffusion samplers literature. If you see a task of higher difficulty that is standard in the field, please let us know. | Summary: This paper focuses on the problem of sampling with distributions defined by a black-box and unnormalized energy function. This work provides a comprehensive review of existing works, including both variational methods and policy-based methods, and offers a codebase and benchmark to replicate and evaluate the existing works. Additionally, this work proposes a method to improve existing policy-based methods via local search and a replay buffer.
Strengths: 1. The studied problem of sampling from a distribution is an important issue with a long history in statistical inference. The paper provides a good review of recent works on this topic by leveraging diffusion models. The codebase that unifies existing methods is certainly useful to the community for continuing research on this topic.
2. The experiments are comprehensive in baselines, including not just diffusion-based methods but also classical MCMC algorithms. The results clearly show the advantages of diffusion-based methods and the techniques proposed in this work.
Weaknesses: 1. It appears to me that this work only tests the algorithm on relatively simple and manually-constructed scenarios. Are there any real and important applications within the field? I am not very familiar with this field, but I think that only conducting experiments on synthetic datasets makes this topic less practical. I believe the main advantage of the diffusion-based method over classical methods is in modeling complex distributions, making experiments on synthetic examples less meaningful.
2. Additionally, the tested scenarios are all low-dimensional cases. I wonder how this algorithm performs on high-dimensional cases, such as when the energy function is learned through neural networks. For example, is it possible to apply this algorithm to image generation where the energy function is represented by an image classifier? Testing the algorithm on high-dimensional tasks like these would provide a better understanding of its scalability and practicality in more complex and realistic settings.
Technical Quality: 2
Clarity: 3
Questions for Authors: Is there any benchmark in this field involving a real application and complex high-dimensional distribution?
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: See my comments on the weakness above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, in particular, for acknowledging the strength of our comprehensive benchmarking.
Regarding your questions about the benchmarks and their dimensionality, first, we kindly direct you to the response to all reviewers for discussion of the choice of target densities.
Second, the dimensions of the tasks studied are: 2, 10, 20 (with 784-dim conditioning vector), 32, and 1600. As just noted, this is in line with past work on diffusion samplers. But it is also worth noting that sampling from complex densities in high dimension **without any prior knowledge** (such as a pretrained model or known samples from the target) is generally an unsolved problem, which is perhaps why it has not been the subject of much study in this past work. Our methods, which use off-policy training, can be combined with prior knowledge, such as known target samples (such as by placing those samples in the replay buffer), as has already been demonstrated, e.g., in [42]. Simulation-based methods like PIS, DDS, etc. do not have this ability.
The use of diffusion samplers in settings with prior knowledge, while not the subject of this paper, is indeed a very interesting direction of research and likely important in more difficult applications.
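The mechanism mentioned above (seeding the replay buffer with known target samples so that off-policy training can exploit them) can be sketched minimally. The class and method names below are our hypothetical illustration, not the paper's codebase:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal sketch: off-policy samplers can train on trajectories
    drawn from a buffer, so prior knowledge (known target samples) can
    simply be seeded into it alongside freshly generated trajectories."""

    def __init__(self, capacity=10_000):
        self._data = deque(maxlen=capacity)

    def seed(self, known_samples):
        # Prior knowledge: known samples from the target density.
        self._data.extend(known_samples)

    def add(self, trajectory):
        # Trajectories produced by the current (or exploratory) policy.
        self._data.append(trajectory)

    def sample(self, batch_size):
        # Off-policy objectives (e.g., trajectory balance) can be
        # evaluated on any mixture of seeded and generated data.
        return random.sample(list(self._data), min(batch_size, len(self._data)))
```

Simulation-based (on-policy) methods cannot use such a buffer, since their objectives require trajectories generated by the current policy.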
**Thank you again for your review. Please let us know if we can provide any other clarification that would help you in understanding and assessing the paper.**
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 6eYp,
This message is to ask if you have any final questions and if our response has affected your assessment of the paper.
Thank you,
The authors. | Summary: The paper presents a variety of improvements to off-policy strategies for training diffusion models to sample from unnormalized densities (Equation 13). These include maintaining a replay buffer (obtained with Langevin sampling) to enable efficient off-policy exploration and incorporating an inductive bias into the neural network which estimates the SDE drift term. They also present a software library containing unified implementations of these techniques and, e.g., diffusion model training.
Strengths: - Enabling diffusion models to be efficiently applied to sampling from unnormalized probability distributions is a problem with high potential for impact
- Thorough experimental analysis and comparison of different alterations to the training procedure. On most problems considered, the authors' contributions are necessary to achieve good results with trajectory balance.
- The contribution of a software library could be valuable to the community.
Weaknesses: - The results are not overwhelming - although the proposed contributions are helpful compared to a basic version of TB, there is only one modeling task in Tables 1-2 (25GMM) where they provide a statistically significant improvement over the baselines.
- The experiments are on synthetic energy functions and MNIST VAE. Including more real-world data or models would be informative.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses. My main concern is the underwhelming results when compared to DIS, DDS, PIS, PIS+LP. Is there any reason why the proposed method(s) should be preferred to these? Or e.g. expected to scale better to more complex problems?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Below we will answer your questions and concerns.
### Statistically significant improvement over the baselines
Firstly, we want to highlight that our method achieves results comparable to or better than current SOTA models. In Tables 1 and 2, we highlighted the best results (red) and those that are not statistically significantly worse according to the Welch test (blue). Whereas the baselines usually achieve good results in only one or two tasks, our method and its variations are the best or among the best in **nearly all tasks simultaneously**.
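For reference, the "not statistically significantly worse" criterion described above corresponds to Welch's unequal-variance t-test; a minimal sketch follows, where the per-seed metric values are hypothetical placeholders, not numbers from the paper:

```python
from scipy import stats

# Hypothetical per-seed metric values for two methods on one task.
ours = [0.92, 0.94, 0.93, 0.95, 0.92]
best = [0.95, 0.96, 0.94, 0.96, 0.95]

# Welch's t-test: two-sample t-test without assuming equal variances.
t_stat, p_value = stats.ttest_ind(ours, best, equal_var=False)

# A method would be highlighted as "not statistically significantly
# worse" than the best if the test fails to reject equality at the
# chosen significance level.
not_significantly_worse = p_value > 0.05
```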
### Experiments on synthetic energy functions
We thank the reviewer for their suggestion to include a real-world sampling example. Note that the LGCP task is already a real-world example (it is derived from observations of pines in Finland). However, we recognize the importance of applications and will add more experiments (e.g., molecular conformations) in the revision.
We kindly direct you to the response to all reviewers for further discussion of the choice of densities.
### Underwhelming results when compared to SOTA?
Simulation-based methods, such as PIS, are predominantly on-policy, making it challenging to explore low-energy regions in high-dimensional spaces and prone to mode collapse on complex densities (see the Manywell task in Figure 1). In contrast, training a sampler with off-policy techniques allows the use of powerful methods like local search, which enhances exploration and performance. This off-policy approach has been verified on various problems, including a simple energy (25GMM), complex multimodal energies (Manywell), high-dimensional tasks (LGCP), and conditional posterior sampling tasks (VAE).
**Thanks again for your feedback. Please let us know if you have any more questions and if any further clarifications would help you assess the paper.**
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 5SPJ,
This message is to ask if you have any final questions and if our response has affected your assessment of the paper.
Thank you,
The authors.
---
Rebuttal Comment 1.2:
Comment: Thank you for the rebuttal. After considering it, I believe that this paper does present a valuable contribution through its thorough comparisons and the generally good performance of the proposed method. I still believe that it would be strengthened by experiments on more complex/high-dimensional problems, but given that, as the authors point out in the rebuttal, the evaluation tasks are standard in the continuous sampling community, I believe they are sufficient for acceptance. I have raised my score to 6.
Rebuttal: We thank all the reviewers for the effort they put into reviewing our paper and are grateful for the constructive feedback. We appreciate the reviewers remarking that the paper is well-written and well-organized (FAJh), studies an important problem (5SPJ, 6eYp), and does comprehensive benchmarking to demonstrate the effectiveness of the algorithms (6eYp).
Several reviewers (5SPJ, 6eYp, fepi) raised concerns about the difficulty/dimensionality of the densities studied. We respond to those points now.
First, we note that **our evaluation followed the common procedure in the continuous sampling community** [8,41,77,85,87,etc.], which typically evaluates on the same tasks that we chose, with some variations, or even on a subset of them. For instance, the DDS and DIS papers also evaluate their methods only on synthetic data plus VAE/NICE (DDS) or sampling a single image (DIS).
Furthermore, **as a secondary contribution, we identified discrepancies and reproducibility issues in past work** on these benchmarks (see Appendix B.1). We believe that this is important for sound progress in this field in the future.
The dimensions of the tasks studied are: 2, 10, 20 (with 784-dim conditioning vector), 32, and 1600, which is indeed less than the dimension on which diffusion models are typically trained from data, e.g., images. However, sampling from densities in high dimension **without any prior knowledge** (such as a pretrained model or known samples from the target) is a harder problem than maximizing [a variational bound on] the likelihood of samples, which is perhaps why it has not been the subject of much study in this past work.
Our methods, which use off-policy training, can be combined with prior knowledge, such as known target samples (such as by placing those samples in the replay buffer), as has already been demonstrated, e.g., in [42]. Simulation-based methods like PIS, DDS, etc. do not have this ability. The use of diffusion samplers in settings with prior knowledge, while not the subject of this paper, is indeed a very interesting direction of research and likely important in more difficult applications. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can We Leave Deepfake Data Behind in Training Deepfake Detector? | Accept (poster) | Summary: This study introduces a novel training strategy for Deepfake detection using real, blendfake, and deepfake datasets. By designing an oriented progressive regularizer and a feature bridging module, the proposed approach effectively extracts forgery information from the training data, resulting in enhanced generalizability.
Strengths: The proposed method categorizes forgery faces into several types: SBI, CBI, and Deepfake faces, each containing distinct forgery artifacts, such as blending clues, identity inconsistencies, and generative fingerprints. The fine-grained learning scheme encourages the model to learn representative features from the training data, thus achieving robust and general face forgery detection.
Weaknesses: 1. The method employs a progressive transition from real to blendfake to deepfake samples. However, the necessity of continuity in these features remains unclear. The transition from real to fake faces, as depicted in Fig. 2, appears conceptually weird. The rationale behind the feature bridging and transition design is not well-explained. The progressive transition between adjacent anchors seems unusual, and the reasoning for a continuous rather than discrete transition is not justified.
2. Despite the generative artifacts present in deepfake data, it remains ambiguous why directly incorporating blendfake and deepfake data during training degrades performance. The authors suggest that direct VHT may fail to disentangle the learned representation in the latent space, but no experiments support this claim.
3. Fig. 1(b) does not appear to be an experimental result, which is crucial for validating the work's motivation.
4. In Line 44-45, the authors raised a question “Can the blendfake face entirely substitute the actual AI-generated deepfake face in training deepfake detectors?” However, this question has already been addressed by Face X-ray and SBI, which successfully use blendfake data to train general models.
5. The term A^T in Eq. (6) is not explained.
6. It is unclear why features augmented with noise should be mapped to a more fake distribution.
7. More Deepfake datasets, such as WildDeepfake and DeepForensics-1.0, should be included in cross-dataset evaluations.
8. For robustness evaluations in Table 6, the method should be compared with recent state-of-the-art deepfake detection methods, and more severity levels for each perturbation type should be included to mimic complex real-world scenarios.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Table 4, is R2B2D the same as D2B2R?
2. What is the real/fake label of the mix of F_r and F_s?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful comments. Here, we carefully clarify each issue mentioned by the respected reviewer.
**Q1. What is the motivation for conceptualizing real to fake as a progressive transition? Why it should be continuous rather than discrete?**
**R1.** Thanks for this concern. We endeavor to address this with a systematic and comprehensive response:
- **Motivation of Progressive Transition**: Our paper seeks to effectively utilize both DF and BF data by taking SBI and CBI as intermediate anchors, where we **conceptualize "Real->SBI->CBI->DF" as a process of getting more and more fake**. This is intuitively apparent (Fig. 2 and Fig. 5 in the manuscript) and can be characterized as accumulating forgery attributes (*i.e.*, blending clues, ID-inconsistency, and generative artifacts). Hence, we leverage this 'inductive bias' to organize the latent-space distribution and thus utilize both DF and BF data effectively.
- **Continuous or Discrete**: Building continuous bridges between adjacent anchors is based on the **inherent natures** and realizations of SBI, CBI, and DF. **Here, we would like to illustrate these natures in detail.**
- Taking Real->SBI as an example: SBI is generated by blending two versions of one real image transformed by augmentations (*e.g.*, blur, compression, and landmark shifting) with different severity levels. The misalignment level of the two transformed versions is controlled by an abstract parameter $\epsilon$, which represents the severity of the blending clue. *Blending with a small $\epsilon$ produces an image that can be treated as real with slight perturbations, while a large $\epsilon$ produces an SBI with a severe blending clue*. Notably, we cannot assert a certain value of $\epsilon$ below which the transformed image is real and above which it is fake. That is, **we cannot define a clear decision boundary between Real and SBI**. Therefore, we can conclude that **the transition between Real and SBI is continuous** according to the increase of blending clues.
- Analogously, the transition from SBI to CBI might be controlled by the increase of ID-inconsistency, and from CBI to Deepfake by the generative artifacts. Both can likewise be treated as continuous transitions.
Therefore, we conceptualize the process of 'Real->SBI->CBI->DF' as a **progressive and continuous transition**. The necessity of continuity and progression, as analyzed above, explains the rationale behind deploying bridging and transition: we apply the bridging and transition losses to simulate this conceptualized continuous and progressive transition.
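As a hedged illustration of the continuous Real->SBI transition described above, the following sketch blends two copies of one real image with a misalignment scaled by $\epsilon$; the shift transform, box mask, and scaling factor are our illustrative stand-ins, not the actual SBI pipeline:

```python
import numpy as np

def self_blend(image, eps):
    """Minimal sketch of the self-blending idea: two copies of the same
    real image, misaligned by an amount that grows with the abstract
    severity parameter `eps`, are blended through a region mask.
    Transforms and mask are illustrative stand-ins.
    """
    h, w = image.shape[:2]

    # Source copy: misalignment (landmark-shift stand-in) scaled by eps;
    # eps = 0 reduces to the untouched real image.
    src = np.roll(image.astype(np.float64), shift=int(round(eps * 5)), axis=1)
    tgt = image.astype(np.float64)

    # Hard box mask standing in for the facial region.
    mask = np.zeros((h, w) + (1,) * (image.ndim - 2))
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0

    # Blend: small eps -> near-real; large eps -> visible blending clue.
    return (mask * src + (1.0 - mask) * tgt).astype(image.dtype)
```

With `eps = 0` the output is the real image itself, and the blending clue grows continuously with `eps`, which is why no clear real/fake decision boundary exists along this axis.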
**Q2&3. Lacks experimental validation to the claimed motivation, that is, the latent space of VHT is not organized.**
**R2&3.** Please refer to **Common Responses R2**, where we carefully address this concern.
**Q4. The successful results in Face X-ray and SBI have already addressed the issue that we raised about substituting DF with BF.**
**R4.** Thanks for your thoughtful consideration. We would like to clarify it as follows.
- As mentioned by the reviewer, the results of Face X-ray and SBI empirically show that **BF *may* substitute DF**, considering 'DF-only < VHT < BF-only'. However, **it is precisely this 'stereotype' that our paper seeks to break down**.
- The contribution and conclusion of our work is: using both of them to organize the latent space progressively can be more effective than using BF only, and thus **BF cannot substitute DF**. We show detailed analysis and validation for this point in **R2 of the Common Responses**.
- This new answer is validated by the results of 'DF-only < VHT < BF-only < Ours' (Tab. 2 in manuscript).
**Q5. $A^T$ is not defined**
**R5.** Thanks for the kind mention. The superscript $T$ denotes the **matrix transpose**. We will explicitly define it in the updated manuscript.
**Q6. The rationale behind augmented with noise should be mapped to a more fake distribution**
**R6.** We are grateful for your insightful concern. The transition loss is applied to **simulate the progressive transition between two adjacent anchors**. Specifically, we leverage '**noise + shallow-network mapping**' to build a less complex mapping path between adjacent anchors, simulating the path of the progressive transition. Therefore, adding noise to a feature actually represents the transition relationship between two adjacent anchors; in the case of Real->BF->DF, it represents getting more fake.
**Q7. More datasets (WDF and DF-1.0) should be included.**
**R7.** We have provided more experimental results including the datasets you've mentioned. Please refer to **Common Responses R1**.
**Q8. Robustness comparisons with SoTA and multi-intensity.**
**R8.** Thanks. In Author-Rebuttal-PDF-Fig. 1, we hereby provide robustness evaluation with multi-level unseen perturbations and comparing with SoTAs. The robust performance of our method indicates its **effectiveness in complex real-world scenarios**. We will **update our manuscript** with this more comprehensive robustness evaluation.
**Q9. In Table 4, is R2B2D the same as D2B2R?**
**R9.** Thanks for this insightful concern. From the perspective of implementation, the code of R2B2D should be the same as D2B2R except for inverting the attribute labels. Therefore, **their experimental results are also expected to be consistent**. Theoretically, D2B2R should be considered as the opposite of R2B2D, that is, the conceptualized progressive transition is **getting more and more REAL**.
**Q10. What is the real/fake label of the mix of F_r and F_s?**
**R10.** Thanks for your concern. As illustrated **in Eq. 4 and line 172 of our manuscript**, the label after bridging is assigned based on the mixing ratio $\alpha$. For example, if the label for $F_r$ is 0 and for $F_s$ is 1, and they are mixed with $\alpha=0.3$, the label for their mixed result should be $0.3×0+0.7×1=0.7$.
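The label assignment in R10 can be sketched as a mixup-style interpolation at the feature level. This is a minimal illustration; the function and variable names are ours, not the implementation in the paper:

```python
def bridge(feat_a, label_a, feat_b, label_b, alpha):
    """Mixup-style feature bridging between two adjacent anchors.

    `alpha` is the mixing ratio for the first anchor; the label of the
    bridged feature is interpolated with the same ratio.
    """
    feat_mix = [alpha * a + (1.0 - alpha) * b for a, b in zip(feat_a, feat_b)]
    label_mix = alpha * label_a + (1.0 - alpha) * label_b
    return feat_mix, label_mix

# Matching the example in R10: a real feature (label 0) bridged with an
# SBI feature (label 1) at alpha = 0.3 gets label 0.3*0 + 0.7*1 = 0.7.
f_r, f_s = [0.0] * 8, [1.0] * 8
f_mix, y_mix = bridge(f_r, 0.0, f_s, 1.0, alpha=0.3)
```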
---
Rebuttal 2:
Comment: Dear Reviewer 6DHe,
Thank you for your thorough evaluation of our work. We are committed to incorporating your feedback comprehensively into our revised paper to improve both its content and overall quality.
As the discussion period draws to a close, we hope our response has sufficiently addressed your concerns. If there are any additional issues or points that require further clarification, we are more than willing to address them promptly.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the authors' response. Some of my minor concerns have been addressed. However, the justifications in responses R1-R4 are still not convincing. The proposed method shows limited robustness and has not been compared with the SOTA methods. Therefore, I will keep my rating unchanged. | Summary: The authors introduced a method aimed at detecting deepfakes. Their approach, known as Oriented Progressive Regularizor (OPR), employs a progressive transition strategy. This strategy is designed to enable the model to effectively train on a combination of blendfake and deepfake data, ultimately leading to improved performance. The experimental results indicated that this method surpasses current state-of-the-art (SOTA) approaches when tested on deepfake datasets.
Strengths: The paper provides a fresh perspective on the well-known problem of deepfake detection, which should be appreciated.
The paper is mostly well-written. The arguments and results presented are easy to understand.
The authors performed an extensive evaluation.
Weaknesses: Argument on Blendfake: The argument that blendfake data alone is sufficient for training deepfake detectors is based on empirical observations on certain datasets or benchmarks with some particular deepfake detection models and may not hold universally. I would suggest toning down that claim or providing the exact conditions when this argument holds.
CDFv1 vs CDFv2: I believe that using CDFv1 for evaluation may not be necessary. It would have been more beneficial to utilize a different deepfake benchmark dataset such as FakeAVCele, DFD from Google/Jigsaw, or RWDF-23 (please refer to this repository for additional information https://github.com/Daisy-Zhang/Awesome-Deepfakes-Detection). The same applies to DFDC and DFDCP. I advocate for incorporating more diversity in the selection of benchmark datasets. In essence, the authors compared against three datasets instead of five, which is still an acceptable number.
Datasets: The authors utilized widely known deepfake datasets from 2019 in their research. However, considering the rapid advancements in deepfake technology since then, I believe these datasets may no longer accurately represent the current landscape. It would be valuable for the authors to include an assessment of their method using real-world deepfake videos sourced from social media and other online platforms. By doing so, they can demonstrate the effectiveness of their proposed solution in addressing contemporary and future iterations of deepfakes.
Progressive transition: Currently, the progressive transition goes like this: "Real --> Blendfake (SBI) --> Blendfake (CBI) --> Deepfake". I could imagine it being further extended with compression or adversarial artefacts (i.e., "Real --> Blendfake (SBI) --> Blendfake (CBI) --> Deepfake --> compression and other artefacts"). That way one could really see a generalisable pipeline that incorporates the variance in the types of deepfakes available on social media, which would greatly increase the quality of the work.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the above comments.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the above comments.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects', 'Ethics review needed: Human rights (including surveillance)']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer Ssp1**
We are thankful for the reviewer's positive comments and interest in our research. We hope that the following point-by-point responses will enable the respected reviewer to further recognize our work.
**Q1. The argument that 'blendfake data is sufficient' is based on empirical observations on certain datasets, which may not hold universally.**
**R1.** We appreciate the reviewer's insightful observation. Our main point is that *while BF-only might not always outperform DF-only across all datasets, our method still demonstrates non-trivial advantages.*
- To comprehensively investigate the effectiveness of DF and BF (using more datasets, not limited to the "DF-series"), we enlarged our original evaluations to **9** distinct deepfake datasets (see **Common Responses R1**). The results show that the situation 'BF-only < DF-only' does indeed sometimes occur, which validates the respected reviewer's concern about scope.
- However, **this does not undermine the significance of our method**. Namely, it can be observed that VHT still fails to consistently achieve the effect of "1+1>2" and *shows degradation compared to BF-only*. In contrast, our method *effectively leverages both DF and BF data* and achieves superior performance compared with VHT, DF-only, and BF-only.
**Q2. More advanced and in-the-wild datasets (FakeAVCele, DFD, and RWDF-23) are recommended to be included.**
**R2.** We have provided more experimental results including the datasets you've mentioned. Please refer to **Common Responses R1**.
**Q3. The progressive transition pipeline could be promoted to be more generalizable, for instance, by incorporating compression.**
**R3.** We appreciate the reviewer's interest in gaining deeper insights into the proposed progressive transition pipeline. We value this suggestion very much and carefully conducted an experiment based on the pipeline **'Real->Real (Compression)->SBI->CBI->DF'**. Notably, we do not place Real (Compression) after DF as mentioned by the respected reviewer, since we believe Compression is *not a more fake version of DF*, considering it has no ID-inconsistency or DF artifacts. Instead, Compression can be closer to SBI, which actually involves compression operations in its creation process. The *frame-level* AUC results are shown below:
|Methods|CDFv2|DFDCP|DFD|Avg.|
|-|-|-|-|-|
|Ours|0.8448|0.8116|0.8581|0.8382|
|Ours+Compression|0.8351|0.8019|0.8604|0.8325|
It can be observed that adding compression **exerts little effect** on the experimental results, which may be because random compression is already deployed as a data augmentation strategy for all training data. Still, we believe this insight from the respected reviewer is **of great value** and very inspiring to us. For example, we may incorporate "**Facial Beautification Algorithms**" or "**Photoshopped Images**" into the progressive pipeline to enhance the generalization and application scope. We are passionate about validating these ideas in our future work.
---
Rebuttal 2:
Comment: Dear Reviewer Ssp1,
We are encouraged by your thoughtful feedback and grateful for your interest in our research.
As the discussion period comes to a close, we would like to inquire whether our response has adequately addressed your concerns. If there are any additional issues or points that require further clarification, we are more than willing to address them promptly.
Best regards,
The Authors | Summary: This paper investigates the generalization ability of deepfake detectors and proposes a novel training approach using "blendfake" data to enhance the model's learning of generic forgery artifacts. The authors point out that existing state-of-the-art methods do not incorporate deepfake data in their training process, which contradicts previous empirical observations. The paper introduces an "Oriented Progressive Regularizor" (OPR) to establish constraints on anchor distribution and proposes feature bridging to facilitate smooth transitions. Experimental results indicate that the proposed method effectively utilizes forgery information from both blendfake and deepfake.
Strengths: - Proposes a new training method that may enhance the generalization capability of deepfake detectors.
- Introduces OPR and feature bridging techniques to improve the model's recognition of forgery features.
Weaknesses: - The attribution of the unorganized latent-space distribution lacks comprehensive experiments.
- There are some minor writing issues, such as the consistency of using SOTA and SoTA.
Technical Quality: 2
Clarity: 3
Questions for Authors: - In ablation Table 2, the AUC of VHT is lower than that of BF-only in cross-dataset comparison, which is opposite to the statement in line 230.
- Are all the training sets for the SOTA model comparisons the same? How do you control the blendfake and deepfake training datasets for different models?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors believe that the reason VHT performs worse than blendfake-only is due to the unorganized latent-space distribution. Although the results indicate that the proposed method is effective, there is a lack of detailed experimental validation for attribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer t2ao**
Thanks for your comments. Below we provide a point-by-point response to address the concern from the respected reviewer.
**Q1. The attribution of the unorganized latent-space distribution lacks comprehensive experiments.**
**R1.** Thanks for the concern. Please refer to **Common Responses R2**, where we comprehensively clarified this concern.
**Q2. Writing issues**
**R2.** Thanks. Please refer to **Common Responses R3**.
**Q3. Results in Tab. 3 may be opposite to our statement in line 230.**
**R3.** Thanks for the concern. Actually, in line 230 we stated that 'VHT is **outperformed by** BF-only', which is **consistent** with the experimental results in Tab. 3; that is, as the reviewer mentioned, 'the AUC of VHT is **lower than** that of BF-only'.
**Q4. Concerns about the training dataset for different methods**
**R4.** Thanks for your thoughtful consideration. Here, we redesign the structure of the table to clarify our training data settings. All results are extracted and summarized from the corresponding parts in Tab. 1, 2, and 3 in our manuscript. For fair comparisons, all methods are deployed with the same backbone, *i.e.*, EfficientB4.
| Methods| Training set | Celeb-DF-v1 |Celeb-DF-v2 | DFDCP |C-Avg.|
|----------|----------|----------|----------|----------|----------|
| EfficientB4 | DF | 0.7909 | 0.7487 | 0.7283 | 0.7560 |
| SBI | SBI | 0.8311 | 0.8015 | 0.7794 | 0.8040 |
| Face X-ray | DF,CBI | 0.7093 | 0.6786 | 0.6942 | 0.6940 |
| BF-only | SBI, CBI | 0.8413 | 0.8006 | 0.7791 | 0.8070 |
| VHT | SBI, CBI, DF | 0.8145 | 0.7710 | 0.7577 | 0.7810 |
| Ours | SBI, CBI, DF | 0.9094 | 0.8448 | 0.8116 | 0.8553|
Our paper holds the view that naively hybridizing more training data may undermine cross-dataset performance (validated by BF-only > VHT). Therefore, the difference in training sets **is not a biased setting**; instead, it demonstrates the superiority of our method in leveraging multiple data sources effectively.
---
Rebuttal Comment 1.1:
Comment: I carefully read through your response and appreciate your extra experiments. But the concern about the latent distribution remains, as evidence or experimental cues are missing, even with your response to R2. So I decided to hold my original decision.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer t2ao,
We are grateful for your comments and feedback. We understand that **your biggest concern is about the evidence of latent distribution**.
Actually, we have **already provided evidence about the latent distribution** via visualization approaches such as **t-SNE**. Please see **Figure 4** and **Figure 6** in the manuscript. In these figures, it can be seen that the latent spaces of both VHT and Ours match our expectations.
If you have **any further questions** about our visualization and validation experiments, we sincerely anticipate further discussing and addressing any concerns you may have.
Best, Authors
---
Rebuttal 2:
Comment: Dear Reviewer t2ao,
We deeply appreciate your dedicated efforts and insightful concern regarding our manuscript.
With the discussion period coming to a close, we would like to inquire if our response has adequately addressed your concerns. If there are any additional issues or points that require further clarification, we are more than willing to address them promptly.
Best regards,
The Authors | Summary: The paper explores the utilization of blendfake and pseudo-fake data in training deepfake detectors. It argues that the significance of deepfake samples has been underestimated due to insufficient exploration. To better exploit both pseudo-fake and deepfake data, the paper introduces a progressive transition from "real to blendfake to deepfake" and proposes a hybrid training scheme. This scheme includes an oriented progressive regularizer (OPR) to model the transition and a feature bridging strategy to simulate a continuous transition.
Strengths: 1.The paper is well-motivated, and the proposed solution is both intuitive and effective.
2.The experiments robustly demonstrate the rationality and effectiveness of the proposed design.
Weaknesses: 1. Choice of Blend Algorithms: The paper does not provide sufficient explanation or discussion on the choice of blendfake image algorithms (SBI and CBI). As mentioned in Section 2.2, there are many other methods for crafting blendfake images. Would these methods be effective as well?
2. Interpolation Strategy: In Section 3.2, the paper introduces an interpolation strategy to achieve a smoother transition from real to deepfake. Why was interpolation performed at the feature level, and would setting multiple mixing parameters (alpha) for more interpolations further improve performance?
3. Possible Typos: There might be a typo on line 149, $M_a$.
Technical Quality: 3
Clarity: 4
Questions for Authors: See in weakness
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See in weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # **Response to Reviewer LWpa**
We sincerely appreciate the reviewer's positive comments and rating on our paper, and the following are our point-by-point responses.
**Q1. Choice of Blend Algorithms: Why apply SBI and CBI instead of other Blendfake methods?**
**R1.** Thanks for your thoughtful suggestion. Actually, the terms SBI and CBI in our paper **do not refer to two specific methods** (*i.e.*, SBI'22 [1] and X-ray), but a full set of methods with the **techniques of Self-Blended and Cross-Blended**. Therefore, we can surely take other Self-Blended methods or Cross-Blended methods in our current framework. To validate experimentally, we take FWA (another Self-Blended method) and I2G (another Cross-Blended method) to conduct a concise ablation study:
| Methods | Blendfake Types | CDFv1 | CDFv2 | DFDCP | Avg. |
|----------|----------|----------|----------|----------|----------|
| VHT | FWA, I2G | 0.7894 | 0.6953 | 0.6575 | 0.7424 |
| VHT | SBI'22, X-ray | 0.8145 | 0.7710 | 0.7577 | 0.7811 |
| Ours | FWA, I2G | 0.8235 | 0.7746 | 0.7539 | 0.7990 |
| Ours | SBI'22, X-ray | 0.9094 | 0.8448 | 0.8116 | 0.8771 |
It is evident that our method also **enhances the performance of the FWA & I2G-trained VHT model**, while using SBI'22 & X-ray performs notably better.
**Q2. The reason for applying interpolation at the feature level, and whether setting multiple mixing parameters $\alpha$ could enhance performance.**
**R2.** We genuinely appreciate the consideration.
- For the first question, the latent-space interpolation is performed to achieve **feature bridging** between two adjacent anchors (e.g., from SBI to CBI). This simulates the progressive and oriented transition of data becoming "more and more" fake starting from real. This process is illustrated intuitively in Figure 2 of the manuscript.
- For the second question, $\alpha$ is randomly sampled from $U(0,1)$, where $U(a,b)$ denotes the Uniform distribution with the bounds of $a$ and $b$. Therefore, for diversity and enhanced performance, $\alpha$ is indeed a parameter with multiple values in our method.
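To make the feature bridging concrete, here is a minimal sketch of the interpolation described above, with a mixing coefficient $\alpha$ drawn from $U(0,1)$ per pair; the toy feature vectors and the function name `feature_bridge` are hypothetical stand-ins for the encoder features of two adjacent anchors (e.g., SBI and CBI):

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_bridge(feat_a, feat_b):
    """Linearly interpolate between two anchor features with a random
    mixing coefficient alpha ~ U(0, 1), simulating a continuous
    transition between adjacent anchors (e.g., SBI -> CBI)."""
    alpha = rng.uniform(0.0, 1.0)  # alpha is re-sampled per pair
    return (1.0 - alpha) * feat_a + alpha * feat_b

# toy 4-dimensional features standing in for encoder outputs
f_sbi = np.zeros(4)
f_cbi = np.ones(4)
bridged = feature_bridge(f_sbi, f_cbi)
# the bridged feature lies on the segment between the two anchors
assert np.all(bridged >= 0.0) and np.all(bridged <= 1.0)
```

Because $\alpha$ is re-sampled each time, the bridged features densely cover the segment between the two anchors rather than a fixed set of interpolation points.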
**Q3. Possible Typos**
**R3.** Thanks. Please refer to **Common Responses R3**.
[1] Kaede Shiohara and Toshihiko Yamasaki. Detecting deepfakes with self-blended images. In CVPR2022
---
Rebuttal 2:
Comment: Dear Reviewer LWpa,
We greatly appreciate your careful review and the insightful comments you shared regarding our manuscript.
With the discussion period coming to a close, we would like to inquire if our response has adequately addressed your concerns. If there are any additional issues or points that require further clarification, we are more than willing to address them promptly.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable time and constructive comments, and we are strongly encouraged by their recognition of several strengths of our submission, including:
- **Fresh perspective/Well-motivated** (Reviewers LWpa, Ssp1)
- **Extensive/Robust Evaluations** (Reviewers LWpa, Ssp1)
- **Clear Presentation and Writing** (Reviewers LWpa, t2ao, Ssp1, 6DHe)
Meanwhile, reviewers also indicate several important concerns and suggestions, which we address with **detailed common and individual responses**. The following are the common responses.
# **Common Responses**
**Q1.** (**From Reviewers 6DHe, Ssp1**) **Cross-dataset evaluation is encouraged to include more datasets, such as {FakeAVCeleb, DFD, RWDF-23} (6DHe) and {WildDeepfake, DeeperForensics-1.0} (Ssp1).**
**R1.** Thanks for the valuable suggestion. Following reviewers' suggestions, we further enlarge the evaluation scope from the following aspects:
- **9 different deepfake datasets are used**: Ensuring the diversity and comprehensiveness of the testing data in our evaluation;
- **Latest synthesis methods are considered**: Using the testing data from those **just-released deepfake datasets** (in 2024) with advanced deepfake techniques, that is, {UniFace, E4S, BlendFace, MobileSwap} from DF40 [1] and {DiffSwap} from DiffusionFace [2];
- **Evaluation Design**: Involving different variants (i.e., Deepfake-only, Blendfake-only, VHT, and ours) for the ablation studies using both ***frame-level/video-level*** AUC.
Our results can be seen in the Table below.
| Methods| DFD | DeeperForensics-1.0 | FakeAVCeleb | WildDeepfake| RWDF*| DiffSwap|UniFace| E4S|BlendFace | MobileSwap |
|----------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|-----------|
| DF-only| 0.8144/0.8621| 0.7462/0.7474| 0.8404/0.9150|0.7275/0.6883| - | 0.7959/- |0.7775/0.8212| 0.6514/0.6955 | 0.7813/0.8296 | 0.8475/0.9053|
| BF-only| 0.8378/0.8901 | 0.7345/0.7811 | 0.8627/0.9237| 0.7563/0.7965 |-| 0.8265/-| 0.6745/0.6998|0.6797/0.7113 | 0.8041/0.8529 | 0.8883/0.9399 |
| VHT | 0.8215/0.8505 | 0.7702/0.8312| 0.8402/0.9125| 0.7263/0.7811 | -| 0.7961/-| 0.8445/0.8979 | 0.6704/0.7101 |0.8311/0.8930 | 0.8729/0.9295 |
| Ours | 0.8581/0.9073| 0.7902/0.8536 | 0.9077/0.9766| 0.7718/0.8287| - | 0.8459/ -| 0.8441/0.9077| 0.7103/0.7711|0.8619/0.9287| 0.9285/0.9748 |
**\*** *We are requesting the RWDF dataset (as suggested by Ssp1) from its authors. Once we have obtained the data, we will update the results immediately.*
From the Table, we can observe that our method consistently exhibits superiority on almost every testing set, which empirically suggests improved generalization.
**Q2.** (**From Reviewers 6DHe, t2ao**) **Why does VHT (*naively combining deepfakes and blendfakes for training*) with an *unorganized* latent space achieve *inferior* results compared to ours (with an *organized* latent space)?**
**R2.** We sincerely appreciate the insightful concern. We would like to address these concerns as follows:
- **Ambiguous Feature Space in VHT:** In VHT, deepfake (DF) and blendfake (BF) data are naively combined for training deepfake detectors. This approach results in an unorganized latent space where DF and BF data are intermixed, creating an ambiguous feature space.
- **Loss of Discriminative Features:** Due to the unorganized (mixed) latent space in VHT, the classifier tends to "regard" DF and BF as the same forgery type. This causes the classifier to "forget" the *discriminative characteristics of DF and BF*. However, DF and BF are inherently different; DF contains more forgery information, such as generative artifacts produced by DNNs. Naively combining DF and BF for training (VHT) may result in the loss of these informative features, which can be crucial in detecting deepfakes.
- **Our Method's Advantages:** Organizing latent space is a crucial topic that has been verified effective in many research domains [3,4]. In our case, the proposed method, which progressively organizes the latent space (real -> BF -> DF), can (1) help the model better 'understand' how real data gradually becomes fakes; (2) maintain more informative features to distinguish real from fakes. For these reasons, Ours should result in better generalization performance than VHT.
- **Experimental Validation:** For the claimed feature organization issue, we have verified this claim by visualizing the actual latent space (Fig. 4) and feature distribution (Fig. 6) in the manuscript. **Moreover**, in *Author-Rebuttal-PDF-Fig. 2*, we provide a further investigation of the learned information of VHT and our method. Specifically, we summarize and analyze the distribution of logits output and confidence from VHT and Ours. We notice that VHT is less confident in both fake and real data. As we discussed, this may be because naively combining DF and BF for training confuses the network, thus limiting its confidence in 'understanding' the forgery representation of distinct BF and DF. In contrast, our model can predict both fake and real with high confidence since the model "understands" how real gradually becomes more and more fakes.
**Q3.** (**From Reviewers LWpa and t2ao**) **Typos ($M_a$) and inconsistent spellings (SOTA and SoTA).**
**R3.** We are very grateful to the reviewers for their careful reading. We have carefully re-proofed our manuscript. We will continue to further optimize our writing and update our manuscript.
[1] Yan Z, Yao T, Chen S, et al. DF40: Toward Next-Generation Deepfake Detection. arXiv:2406.13495, 2024.
[2] Chen Z, Sun K, Zhou Z, et al. DiffusionFace: Towards a Comprehensive Dataset for Diffusion-Based Face Forgery Analysis. arXiv:2403.18471, 2024.
[3] Ali S and Kaick O. Evaluation of latent space learning with procedurally-generated datasets of shapes. ICCV 2021.
[4] Yang F and Ma C. Sparse and complete latent organization for geospatial semantic segmentation. CVPR 2022.
Pdf: /pdf/d82f576becc1d441ea3c3fcb854c3f24ab7cc4ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Homology Consistency Constrained Efficient Tuning for Vision-Language Models | Accept (poster) | Summary: A Homology Consistency (HC) constraint for efficient transfer on VLMs is proposed in this paper, which explicitly constrains the correspondence of image and text latent manifolds through structural equivalence based on persistent homology in downstream tuning.
The proposed method tracks the persistence of the homology classes of topological features across multiple scales and guide the directions of persistence tracks in image and text manifolds to coincide each other. Additionally, a deviating perturbation is applied to generalize the persistence coincidence to unseen data. Experiments on recognition and generalization tasks show the superior performance.
Strengths: 1. The paper is well-written with a straightforward motivation.
2. A theoretically well-founded homology consistency (HC) constraint based on persistent homology is proposed for efficient transfer on VLMs.
3. Experiments on recognition tasks show the superior performance.
Weaknesses: The hyper-parameters η, λ, ω should be determined at 16 shots and then migrated to other few-shot settings. If the number of samples is less than 16, how should the aforementioned hyper-parameters be set, and will there be a significant difference in performance?
Technical Quality: 2
Clarity: 2
Questions for Authors: The hyper-parameters η, λ, ω should be determined at 16 shots and then migrated to other few-shot settings. If the number of samples is less than 16, how should the aforementioned hyper-parameters be set, and will there be a significant difference in performance?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The experiments only conducted on recognition tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable concerns.
**W1&Q1**: Referring to the widely-acknowledged evaluation standard in the same field, we conduct experiments on few-shot learning in the 1-/2-/4-/8-/16-shot setting. We find that perturbations near the optimal scaling hyper-parameters $\eta$, $\lambda$, $\omega$ have little effect on model performance, as shown in the ablation study of hyper-parameters $\eta$, $\lambda$, $\omega$ in our paper. That is, our performance is robust to these hyper-parameters. Therefore, we determine the values of these hyper-parameters in the setting with the largest number of shots (i.e., 16) and transfer them to other shot settings on the same dataset.
**L1**: We would like to clarify why we conduct our experiments on few-shot recognition tasks.
The efficient transfer learning on VLMs that we focus on tunes large-scale VLMs toward downstream tasks under a low-data regime. It aims to achieve considerable improvements in tuning VLMs on target tasks with limited data resources.
Few-shot recognition is a *widely-acknowledged evaluation standard* for the efficient transfer learning. Specifically, based on limited samples per class, few-shot recognition on VLMs is to tune the pre-trained semantic alignment to handle the correspondence between images and category texts in target domain. Therefore, the performance on few-shot recognition can effectively reflect the ability of efficient transfer.
In our paper, we conduct experiments on few-shot recognition over 11 datasets (covering a wide range of visual recognition on generic objects, fine-grained categories, scenes, actions, etc.), domain generalization over 4 visual backbones from ImageNet source to 4 target domains ImageNet-V2/-Sketch/-A/-R, and a series of necessary ablation studies, to demonstrate the effectiveness and robustness of our method. | Summary: The paper identifies a key issue with existing methods for tuning pre-trained vision-language models to downstream tasks with limited data: they adjust the alignment between image and text based solely on observed samples, which may not generalize well beyond the training data. To address this issue, the paper proposes a novel constraint from the perspective of topological data analysis.
This constraint employs persistent homology to ensure the structural equivalence of image and text latent manifolds during tuning.
Strengths: 1. The paper offers a new way of looking at model tuning through the lens of topological analysis, with a focus on understanding the structure of data spaces for better semantic alignment in vision-language tasks. I appreciate this perspective on the issue.
2. The proposed method exhibits a thoughtful theoretical underpinning, using persistent homology to enhance the generalizability of image-text alignment adjusting. 
3. The paper is well-written and the reason for leveraging topological data analysis to enhance semantic alignment during the tuning process is reasonable and easy to follow up.
Weaknesses: The paper does not adequately discuss how it relates to existing image and text alignment techniques, including those based on distance metrics, mutual information, adversarial training, and attention mechanisms. This lack of comparative analysis creates a gap in fully appreciating the distinctive contributions and potential advantages.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you please provide insights into the fundamental differences and advantages of topological data analysis in your method over other alignment methods?
2. Where do you foresee potential challenges in extending your proposed method to tasks outside the few-shot learning domain, particularly in scenarios such as zero-shot learning, or applications involving detection and segmentation?
3. Could you elaborate on how incorporating higher-dimensional homology classes into the tuning process might impact the model's performance and behavior, beyond the computational cost?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and insightful questions.
**W1&Q1**: Our main differences and advantages over existing image-text alignment techniques are that we explicitly constrain the structural equivalence between image and text latent manifolds, and achieve a topological consistency of the latent manifolds beyond the localized closeness of discrete samples, so as to enhance the generalization of the learned image-text alignment.
In the widely-used alignment techniques, contrastive learning (e.g., InfoNCE loss, triplet loss) forces semantically related image-text pairs closer and pull unrelated ones away, to enhance the discrimination in the common embedding space. Mutual information maximization is to increase the correspondence probability between semantically related samples to overlap image and text latent manifolds for aligning. Cross-modal attention mechanism learns to focus on similar content in images and texts, thereby capturing the semantic associations across modalities accurately.
Despite various approaches, the focus of existing alignment techniques tends to be limited to the localized closeness on the observed data samples, due to the lack of perspective on the underlying structure of image and text latent manifolds in training. The learned alignment cannot guarantee its generalization beyond training samples, especially in low-data regime.
Our theoretically well-founded HC constructs the discrete analogues of image and text latent manifolds based on data samples, explicitly constrain the structural equivalence between image and text latent manifolds, and achieve a topological consistency on the underlying manifold structures beyond the localized closeness of discrete samples. As a constraint term in the efficient transfer learning on VLMs with limited data resources in our paper, the effectiveness and robustness of HC are demonstrated by extensive experiments.
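As a concrete reference for the contrastive objective mentioned above, the following is a minimal NumPy sketch of a symmetric InfoNCE-style loss on hypothetical L2-normalized image/text embeddings; it illustrates the "localized closeness on observed samples" that such losses enforce, and is not the paper's HC method:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched image-text pairs (same row index)
    are positives; all other pairs in the batch are negatives."""
    # cosine similarities (embeddings assumed L2-normalized)
    logits = img_emb @ txt_emb.T / temperature
    n = logits.shape[0]

    def xent(l):
        # cross-entropy with the diagonal (matched pair) as the target
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
x /= np.linalg.norm(x, axis=1, keepdims=True)
# perfectly aligned pairs give a low loss; mismatched pairs a higher one
aligned = info_nce(x, x)
shuffled = info_nce(x, x[::-1].copy())
assert aligned < shuffled
```

The loss only constrains pairwise similarities of the observed samples; it imposes no condition on the global structure of the two latent manifolds, which is the gap the HC constraint targets.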
**Q2**: Thank you for your constructive comments.
For zero-shot learning: when capturing the structural association between a given test sample of an unseen category and the training samples in their latent manifold for zero-shot inference, the embeddings of test samples from unseen categories may introduce noise into the construction of the simplicial complex and the topological feature extraction. In addition, if the volume of the pre-training set is too large, obtaining the topological structure around a given test sample in the manifold is costly.
For image understanding tasks such as detection and segmentation: Unlike the semantic-space manifolds considered by our HC, in the pixel space of interest for object detection and segmentation, the topological information of objects, such as shape, connectivity, and boundaries, is more intricate, thus requiring more sensitive methods for representing topological structure. Meanwhile, topology-annotated data in these fields is scarce, and annotation requires specialized knowledge.
**Q3**: Beyond the computational cost, we find that adding 1-st homology consistency in addition to 0-th homology consistency brings almost no additional performance improvement in tuning VLMs. We conjecture it is because the 1-st homology classes (i.e., loops) are too intricate for constraining the structural equivalence of image and text latent manifolds, and the existing VLMs' ability to represent data samples is insufficient to support the alignment of such precise structures. In addition, the limited data samples may not be sufficient for capturing the persistence of higher-dimensional homology features in latent manifolds. | Summary: The paper introduces a Homology Consistency (HC) constraint for efficient transfer learning on vision-language models (VLMs), ensuring task-specific image-text alignment while preserving general knowledge by using structural equivalence based on persistent homology. This approach mimics the topology of latent manifolds and tracks the persistence of topological features.
Strengths: 1. This paper is well motivated, and the motivation of using homology consistency is interesting.
2. This paper has a good theoretic support.
Weaknesses: 1. The performance of the proposed method is worse than the baseline method in low-shot (1-shot and 2-shot) tasks.
2. The improvement in Table 2 is marginal. Is the comparison fair with the same random seed? How many runs did you conduct? Could the authors also report the standard deviation of the score?
3. Moreover, is 16-shot common in this benchmark? 16 shot seems a lot in few-shot learning.
4. Can you also elaborate more why with only DP, the performance drops in Table 3?
5. In addition, could you elaborate more why choosing 0-th homology classes? What are the potential effects of using other homology classes?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you visualise the topology of the data before and after adaptation with the HC constraint? It will be very interesting to see how actually the HC constraint can preserve the topology of the manifold during transfer learning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: My concern is the performance improvement is marginal and limited to more shots setting (16-shot).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments.
**W1**: Our main results, the performance comparisons between baselines and our proposed HC/HC* in the 1-/2-/4-/8-/16-shot settings over 11 benchmark datasets, are shown in Fig. 3, and the corresponding detailed numerical results are in Appendix C. It can be found that HC/HC*+TaskRes and HC/HC*+Tip-Adapter-F outperform their baselines in the 1-shot and 2-shot settings in most cases. We wonder whether we have misinterpreted your question, and we sincerely look forward to your reply.
**W2**: Thank you for your concern. Experiments on baseline and with HC / HC* constraint are conducted in the same setting (as stated in Sec. 4.1 Experimental settings). For all experiments reported in paper, we fixed the random seed as 1. Our performance comparison is fair.
Here, to verify the robustness of our improvement, we take 16-shot ImageNet as an example, and set the random seed from 1 to 4. The test accuracies of HC*+Tip-Adapter-F, HC*+TaskRes and their baselines are listed below.
| Method | random seed 1 | random seed 2 | random seed 3 | random seed 4 |
|:----------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| TaskRes | 65.41% | 65.47% | 65.49% | 65.47% |
| HC*+TaskRes | 66.25% | 66.13% | 66.19% | 66.24% |
| Tip-Adapter-F | 65.43% | 65.05% | 65.15% | 64.99% |
| HC*+Tip-Adapter-F | 66.40% | 66.27% | 66.14% | 66.05% |
As shown in table, the average gain of HC*+Tip-Adapter-F compared to Tip-Adapter-F is 1.06% and the standard deviation is 0.098%. The average gain of HC*+TaskRes compared to TaskRes is 0.74% and the standard deviation is only 0.069%.
In fact, our performance improvements are very competitive in the field of efficient transfer learning of VLMs. On the same evaluation on 16-shot ImageNet, GraphAdapter [1] (NeurIPS 2023) improves by 0.95% over the SOTA it compares to (refer to Tab. 1 in GraphAdapter [1]), APE-T [2] (ICCV 2023) outperforms its baseline Tip-Adapter-F by 0.56% (refer to the numerical results APE-T [2] released), which is significantly lower than our average gain of 1.06% on the same baseline.
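The reported mean gains and standard deviations can be reproduced directly from the table above (a quick arithmetic check using the population standard deviation over the four seeds, not part of the method):

```python
import numpy as np

# test accuracies (%) per random seed, copied from the table
tip_adapter_f  = np.array([65.43, 65.05, 65.15, 64.99])
hc_tip_adapter = np.array([66.40, 66.27, 66.14, 66.05])
taskres        = np.array([65.41, 65.47, 65.49, 65.47])
hc_taskres     = np.array([66.25, 66.13, 66.19, 66.24])

gain_tip = hc_tip_adapter - tip_adapter_f
gain_tr  = hc_taskres - taskres

print(round(gain_tip.mean(), 2), round(gain_tip.std(), 3))  # 1.06 0.098
print(round(gain_tr.mean(), 2), round(gain_tr.std(), 3))    # 0.74 0.069
```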
[1] Graphadapter: Tuning vision-language models with dual knowledge graph.
[2] Not all features matter: Enhancing few-shot clip with adaptive prior refinement.
**W3**: Following the commonly-used evaluation standard in previous work, we perform experiments on few-shot learning in 1-/2-/4-/8-/16-shot settings over 11 datasets (Fig. 3 in paper), and domain generalization on 16-shot ImageNet across 4 visual backbones (Tab. 2 in paper). Meanwhile, in transferring pre-trained VLMs, compared to the web-scale volume of pre-training data, the tuning data of 16 samples per class is significantly less, so it is reasonable to evaluate efficient transfer learning on VLMs in 16-shot setting.
**W4**: Our proposed homology consistency (HC) constraint consists of track coincidence (TC) term and deviating perturbation (DP) term. TC term is responsible for guiding the persistence tracks in image and text manifolds to coincide each other to achieve a homology-level structural equivalence, and DP term drives end-point samples of tracks to deviate from their semantically related hetero-modal seen samples toward unseen ones in embedding (to be distributed uniformly), to enhance the generalization of TC in constraining structural equivalence of latent manifolds.
In other words, TC plays the leading role in achieving homology consistency, and DP can be regarded as its regularization term. Without coinciding persistence tracks by TC, not only is the direct constraint on the structural equivalence of latent manifolds lost, but DP alone will also cause the track end-point samples to randomly deviate from the hetero-modal training samples, which interferes with the downstream tuning and thus causes the performance drop.
**W5**: We mainly consider two aspects: computational cost and efficacy. *First*, in practice, the training time for constraining both 0-th and 1-st persistence tracks to coincide is about 20 times that for only the 0-th track coincidence. *Secondly*, we find that in tuning VLMs, adding 1-st homology consistency in addition to 0-th homology consistency brings almost no additional gain. We conjecture it is because the 1-st homology classes (i.e., loops) are too intricate for constraining the structural equivalence of image and text latent manifolds, and the existing VLMs' ability to represent data samples is insufficient to support the alignment of such precise structures.
**Q1**: Following your suggestion, we visualize the persistence intervals of homology classes from their birth to death time through persistence diagram (PD), to show the change in the topology of data between the tuning with and without our HC constraint. Persistence diagram draws the paired birth at $b$ and death at $d$ that bound the survival interval of homology classes as a point $(b, d)$, which is a widely-used visual representation of persistent homology in topological data analysis.
Specifically, we visualize the persistence of homology classes of Rips complexes built on the top of test images, text embeddings tuned by TaskRes without HC constraint and text embeddings tuned by HC+TaskRes after applying HC, respectively, on 4 datasets including 2 generic object datasets, ImageNet and Caltech101, and 2 fine-grained object datasets, FGVCAircraft and StanfordCars.
As shown in Fig. 1 in the PDF we submitted, where the coordinates of the red dots represent the paired birth time and death time of the corresponding 0-th homology classes (Note that 0-th homology classes all emerge at 0). It can be found that, compared with the text manifold tuned without HC, the 0-th homology persistence of text manifold tuned by HC+TaskRes is evidently more akin to the image latent manifold. | Summary: This paper proposes Homology Consistency (HC) constraint for transfer learning on VLMs, and it explicitly constrains the correspondence of image and text latent manifolds by structural equivalence based on persistent homology in downstream tuning.
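For readers unfamiliar with the persistence diagram used here, the 0-th homology persistence of a Rips filtration can be computed with a simple Kruskal-style union-find over pairwise distances: every connected component is born at scale 0 and dies at the length of the edge that merges it. The following is a minimal sketch on toy points (in practice, libraries such as GUDHI or Ripser are used):

```python
import numpy as np
from itertools import combinations

def zeroth_persistence(points):
    """Death times of the 0-th homology classes (connected components) of
    a Rips filtration: each class is born at 0 and dies when the edge
    merging its component enters the filtration (single linkage)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # process edges in order of increasing length (the filtration order)
    edges = sorted(
        (np.linalg.norm(points[i] - points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # one 0-th class dies at this scale
    return deaths             # n-1 finite deaths; one class lives forever

# two well-separated toy clusters: one large death time stands out
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print([round(d, 2) for d in zeroth_persistence(pts)])  # [0.1, 0.1, 4.9]
```

Each finite death time corresponds to one red point $(0, d)$ in the persistence diagram; comparing such diagrams across the image and text manifolds is what the visualization above depicts.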
Strengths: 1. The proposed method is well-founded and clearly explains the proposed homology consistency (HC) constraint.
2. Extensive experiments are performed on 11 benchmark datasets.
Weaknesses: 1. The paper lacks discussions on the computational cost of the proposed techniques.
2. The proposed method for constraining the structural equivalence of image and text latent manifolds seems generalizable to other learning tasks for vision-language models. However, the proposed method is only evaluated for few-shot learning of vision language models.
3. Although the model outperforms other methods in most cases, the improvements are relatively marginal.
4. The paper only applies the method to a limited number of adapter models (TaskRes and Tip-Adapter-F).
Technical Quality: 2
Clarity: 2
Questions for Authors: see weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations are not discussed sufficiently (see weaknesses).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable concerns, and we would like to clarify as follows.
**W1**: Since our proposed HC constraint incurs no additional cost in inference and the cost increase in offline training is marginal (less than 1.0%), we do not discuss computational cost in our paper. Following your suggestion, we take the transfer of VLMs on 16-shot ImageNet as an example to analyze the training cost of our HC. The training time of baseline Tip-Adapter-F is 15 s/epoch, and that of HC+Tip-Adapter-F constrained by our HC is 21 s/epoch. The peak total GPU memory usage of baseline Tip-Adapter-F is 16707MB, and 16751MB for HC+Tip-Adapter-F, only a 0.26% increase. We will add a discussion on computational cost in revision.
**W2**: The efficient transfer learning on VLMs tunes large-scale VLMs (e.g., CLIP) toward downstream tasks under a low-data regime. It aims to achieve considerable improvements in tuning VLMs on target tasks with limited data resources.
Few-shot learning is a *widely-acknowledged evaluation standard for the efficient transfer learning*. Specifically, based on limited samples per class, few-shot learning on VLMs is to tune the pre-trained semantic alignment to handle the correspondence between images and category texts in target domain. Therefore, the performance on few-shot learning can effectively reflect the ability of efficient transfer.
In our paper, we conduct experiments on few-shot learning over 11 datasets (covering a wide range of visual recognition on generic objects, fine-grained categories, scenes, actions, etc.), domain generalization over 4 visual backbones from ImageNet source to 4 target domains ImageNet-V2/-Sketch/-A/-R, and a series of necessary ablation studies, to demonstrate the effectiveness and robustness of our method.
**W3**: The performance improvements brought by our method are robust and reliable. For instance, we fix the random seed in training on 16-shot ImageNet from 1 to 4, and the performance of the baselines and constrained by our method are shown below.
| Method | random seed 1 | random seed 2 | random seed 3 | random seed 4 |
|:----------------:|:-------------:|:-------------:|:-------------:|:-------------:|
| TaskRes | 65.41% | 65.47% | 65.49% | 65.47% |
| HC*+TaskRes | 66.25% | 66.13% | 66.19% | 66.24% |
| Tip-Adapter-F | 65.43% | 65.05% | 65.15% | 64.99% |
| HC*+Tip-Adapter-F | 66.40% | 66.27% | 66.14% | 66.05% |
As shown in table, the average performance gain of HC*+Tip-Adapter-F over baseline Tip-Adapter-F is 1.06%, and the standard deviation is 0.098%. The average gain of HC*+TaskRes over TaskRes is 0.74%, and the standard deviation is only 0.069%.
In fact, our performance improvements are very competitive in the field of efficient transfer learning of VLMs. On the same evaluation on 16-shot ImageNet, GraphAdapter [1] (NeurIPS 2023) improves by 0.95% over the SOTA it compares to (refer to Tab. 1 in GraphAdapter [1]), APE-T [2] (ICCV 2023) outperforms its baseline Tip-Adapter-F by 0.56% (refer to the numerical results APE-T [2] released), which is significantly lower than our average gain of 1.06% on the same baseline.
[1] Li X, Lian D, Lu Z, et al. Graphadapter: Tuning vision-language models with dual knowledge graph[J]. Advances in Neural Information Processing Systems, 2023, 36.
[2] Zhu X, Zhang R, He B, et al. Not all features matter: Enhancing few-shot clip with adaptive prior refinement[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 2605-2615.
**W4**: We would like to clarify why we take TaskRes and Tip-Adapter-F as the baselines for applying our proposed HC constraint. TaskRes and Tip-Adapter-F belong to the two mainstream paradigms of adapter tuning (residual blending based and key-value cache based), respectively. These two paradigms cover almost all existing adapter tuning methods of efficient transfer on VLMs. In our paper (Tab. 1), we compare our performance with a range of SOTA adapter methods, e.g., residual blending based CLIP-Adapter, TaskRes, GraphAdapter, and key-value cache based Tip-Adapter-F, APE-T, etc, and outperform them on the most evaluation.
The models TaskRes and Tip-Adapter-F are representative of their respective paradigms. Within the residual blending paradigm, the residual design of TaskRes is relatively simple yet effective. For the key-value cache based paradigm, Tip-Adapter-F is the pioneering work that established the cache framework, and subsequent methods such as APE-T are also improved from it.
For efficient transfer learning on VLMs, our HC constraint proposes to preserve the vision-language alignment of pre-trained general concepts by constraining the topological equivalence between the image and text latent manifolds during downstream tuning, thereby enhancing the generalization of the tuned alignment. This perspective has not been explored by existing VLM transfer methods, and it is verified to be effective in extensive experiments on two representative baselines. We believe that our HC constraint can be extended to other efficient tuning methods. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for reviewing this paper, and have provided detailed responses to all the concerns raised by the reviewers.
Pdf: /pdf/b4c1b66a0b7a9a1d93a54da846fb17a1195c5050.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Template-free Articulated Gaussian Splatting for Real-time Reposable Dynamic View Synthesis | Accept (poster) | Summary: The authors propose to combine a reposable 4D reconstruction from multi-view video based on a skeletal LBS model with 3D Gaussian splatting. To this goal they introduce a novel strategy for estimation of the skeletal model from a superpoint clustering. The results demonstrate a superior image quality and, thanks to the representation, also fast rendering.
Strengths: S1) Implementation details provided for reproducibility.
S2) The claims are validated on common datasets.
S3) The result quality is visibly better than the prior work.
S4) The skeleton construction is novel, technically sound and produces reposable models in variety of scenes.
S5) The quality of exposition is good.
Weaknesses: W1) The skeletons are over-segmented and unnatural and likely would not be very friendly for a human animator. They may still be suitable for data fitting but do not provide nearly as much regularization as an "optimal" (ground truth) skeleton would.
W2) The limitations and broader impacts are only discussed in the Appendix which I do not see as a responsible practice. It suggests that the authors do not give downsides of the method the same importance as to the upsides.
W3) The authors claim to report Statistical Significance without further comments (checklist item #7), but I cannot see any such features in the paper.
W4) It may be a good idea to consider a higher quality captured dataset than ZJU-Mocap. It does not seem to allow for a useful comparison between the methods.
Other minor issues and suggestions:
- Figure 1: Each superpoints -> superpoint
- L145: related rotation matrix -> relative?
- $\mathbf{W}$ is overloaded in Eq. 1 and Eq. 6 for two distinct things which is not ideal.
- Eq. 13: $\mathcal{L}$ without suffix is not defined.
------------------------
**Justification of recommendation**
A solid paper with its incremental but non-trivial contribution stemming mainly from the novel skeleton construction. The experimental results are convincing and the main downside is the clutter and complexity of the recovered skeleton. Despite this, I am currently comfortable recommending acceptance under the assumption that the exposition issues are addressed (especially limitations). My final decision might change based on the rebuttal.
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1) The Eq. 13 could use more discussion. Does the formulation avoid over-segmentation of large but correctly rigid parts to multiple sub-parts? It would be useful to see the LBS visualized for the final shapes.
Q2) Why are the skeletons so complex and noisy? Is that perhaps related to my other question about over-segmentation in Eq. 13? Are such skeletons practical for re-animation? How was re-animation done in the video? How many parameters / joint motions had to be defined?
Q3) Why are there different resolutions reported for each method (Table 2 & 3). Does it mean each method was validated against a different reference image resolution? Or does it mean the methods were all tested against a full resolution reference but some were only trained using low-resolution data? That could potentially introduce considerable bias.
Q4) The authors also do not provide any failure case examples/images. Does the method work perfectly for all tested scenes? What about the robots from the limitation Figure 6 of AP-NeRF? Does the new method handle these cases better?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations and broader impacts are only discussed in the Appendix which I do not see as a responsible practice. It suggests that the authors do not give downsides of the method the same importance as to the upsides.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed comments.
- **Limitations/W2**: Limitations and broader impacts
**Answer**: We will move the limitations and broader impacts to the main paper in the final version.
- **Q1**: More discussion about Eq. 13.
**Answer**: Based on $g_i$ calculated by Eq. 13, we can densify more superpoints to accommodate complex motions. Therefore, in theory, this formula will not split a large rigid part into multiple sub-parts. Even if more sub-parts are generated, since the motion of these sub-parts is the same, the *merging process* will merge the sub-parts into one again.
- **Q2**: Why are the skeletons so complex and noisy?
**Answer**: To re-animate (/re-pose) the $k$-th joint of the object, we simply apply an additional rotation matrix $\Delta \mathbf{R}_k \in \mathrm{SO}(3)$ to the rotation $\hat{\mathbf{R}}_k$ predicted by the deformable field $\Psi$ (Eq. 11), i.e., $\hat{\mathbf{R}}_k' = \Delta \mathbf{R}_k \hat{\mathbf{R}}_k$. Compared with the 6 DoF of superpoints in the Dynamic stage, each joint in the Kinematic stage has only 3 DoF, i.e., it can only rotate but not translate. Therefore, the extra DoF of superpoints are the main reason why the discovered skeletons are so complex and noisy. As shown in Fig. 1 in the attached `PDF`, there are some inconsistencies in the 3D model in the canonical space of the Dynamic stage. However, the rendered image at the given timestamp matches the ground truth. The price of achieving all this is that the skeleton is complex and noisy.
Another reason is that the reconstructed motion of the superpoints is imperfect. The number of superpoints is therefore greater than the number of physically rigid parts. As a result, the skeleton discovered from the motion of the superpoints becomes complex and noisy.
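The re-posing operation described in the answer above can be sketched in a few lines; this is an illustrative reconstruction (the `rot_z` helper and all values are hypothetical, not from the paper's code):

```python
# Minimal sketch of re-posing a joint: an extra rotation delta_R in SO(3)
# is composed with the rotation R_hat predicted by the deformable field,
# i.e. R_hat' = delta_R @ R_hat.
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis by angle theta (illustrative helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

R_hat = rot_z(0.3)            # rotation predicted for joint k at some time t
delta_R = rot_z(np.pi / 6)    # user-specified re-posing rotation
R_new = delta_R @ R_hat       # re-posed joint rotation, still in SO(3)
```

Composing two rotations about the same axis simply adds their angles, and the product of two rotation matrices remains a valid rotation, so the re-posed joint stays rigid.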
- **Q3**: Why are there different resolutions reported for each method?
**Answer**: To address this concern, we present the results of our method at the same resolutions in Table 1 in the attached `PDF`. The training and testing resolutions are the same. For example, when AP-NeRF is trained on $400\times 400$ images, it is also tested on $400 \times 400$ images.
- **Q4**: Any failure case examples/images
**Answer**: For scenes with complex motions (e.g., combining long chains of rotations with texture cue ambiguity), our method can achieve better performance than AP-NeRF. Figure 2 in the attached `PDF` compares our method with AP-NeRF for the failure case of AP-NeRF. As shown in the figure, our method may not be able to reconstruct the endpoints of objects well.
- **W3**: Statistical Significance
**Answer**: Our answer to the Statistical Significance checklist item is "No", with the justification: we do not report error bars or other information about statistical significance, as this is not a common procedure in the field and does not contribute to understanding our evaluation.
- **W4**: Higher quality captured dataset.
**Answer**: Thanks for your suggestion. We follow the experiment setting of AP-NeRF. We will evaluate our method on higher-quality captured datasets in the future.
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: Thank you for the rebuttal, I have no further questions at the moment.
The visualization of the LBS weights in the PDF is useful and while it rather shows limited physiological plausibility of the animation tree, it would be good to include it in the paper to visualize any such limitations. | Summary: This paper presents a novel approach for learning articulated objects, alongside the skeletons/kinematic trees directly from input videos, eliminating the need for pre-defined meshes or hand-crafted skeleton priors.
Specifically, the paper introduces a hierarchical 3D Gaussian representation, where a set of superpoints are used as guidance for deforming the sub-points using linear blend skinning (LBS). The skeletal structure is further derived from the superpoints, providing articulated structures that enable explicit pose control. Rendering is done by Gaussian Splatting, enabling real-time performance.
Overall, the paper tackles a very challenging and useful problem for 3D modeling, the manuscript is easy to follow, and the approach is interesting.
Strengths: The strengths of this paper lie in how it brilliantly leverages 3D Gaussians for capturing the underlying articulated structures in a video, without the need for 3D annotations or pre-defined structure priors.
The design of superpoints naturally models prominent candidates that can serve as control points. Furthermore, as the control points are “learned” automatically, it can potentially arrive at a representation that better suits the possible motion of the articulated subject.
Overall, the approach presented in the paper is pretty neat, and the experiments show pretty promising qualitative and quantitative results.
Weaknesses: The approach does have some room for improvement. Specifically,
- Limited reposability. As mentioned in L372-373, the approach is limited to the motion space in the input video. It would be great if the papers could include visual results for these failure cases. It will be interesting to see how good the learned LBS weights are.
- Evaluated on datasets with limited motions: the videos used in the paper mostly contain repetitive motion sequences, and/or with small motions. It will be interesting to see how the proposed method performs on videos with complex/diverse/large motions (e.g., AIST datasets). Also, it is similarly unclear how the method can perform on in-the-wild videos with uncontrolled lighting, or with only a single view.
Overall, these weaknesses are very common among template-free approaches, not specifically to the proposed method itself. Nevertheless, it would be great if the paper could include more figures, visual results, and analysis regarding these cases.
There are also some issues regarding the experiments, which I detailed in the Questions section below.
Technical Quality: 3
Clarity: 3
Questions for Authors: Some comments regarding the evaluation sections:
- Tab 1, 2: does the performance gain come mainly from using a higher resolution, or does it come from "capturing better articulated structures"? While Gaussian splatting enables us to use higher resolution due to its rendering speed, it would be great if the paper could also include results with resolutions comparable to the other settings (400x400 in Tab 1, and 800x800 in Tab 2).
- Is there a way to properly evaluate how good the learned skeleton structure is? E.g., training a skeleton-based 3D articulated model using the skeleton from AP-NeRF v.s. WIM v.s. the proposed method.
Also, one small issue:
- L147: should be (R^t_b)^-1 instead of R^t-1_b
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discussed some of their limitations, but it would be great if the paper could include more analysis/visual results for the issues mentioned above in the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful suggestions.
- **Q1**: Higher resolution
**Answer**: We provide the results at the same resolution as WIM and AP-NeRF in the attached `PDF`. We believe the improved performance mainly comes from the powerful representation ability of 3D-GS and the better capture of articulated structures.
- **Q2**: Evaluate skeleton model
**Answer**: The biggest challenge in evaluating skeleton models is that the learned skeleton is uncertain in terms of the number of joints and the connections between joints.
In our view, there are three imperfect methods that might be used to evaluate the quality of the learned skeleton structure.
1. As shown in WIM, we can first learn a mapping between the learned skeleton model and a template (e.g., SMPL for humans), and then evaluate the mapped template. However, this way requires a template and the mapping process will introduce more errors.
2. Given test images with camera poses, all parameters except the rotation of joints are fixed, and the learned model is then optimized to fit the test images. After test-time optimization, the difference between the rendered image and test images shows how good the learned skeleton structure is. However, test-time optimization may fail or be suboptimal.
3. The repose process can be treated as an image generation process (like a GAN or diffusion model). Therefore, we could evaluate the learned skeleton structure using metrics (e.g., FID) that are common for generative models. However, obtaining the rotation range of each joint and the test camera poses is challenging.
Overall, rigorously evaluating skeleton models is a challenging problem, which we will try to address in future work.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed responses
Comment: After going through the author's responses, as well as the comments from fellow reviewers. The experimental results, along with the rebuttal responses, adequately support the claim in the paper, and I appreciate how the work neatly combines existing techniques/concepts to capture the 3D appearance and, most importantly, the motion structure. Therefore, I stand by my original rating -- Weak Accept. | Summary: The paper introduces a method combining 3D Gaussian Splatting and superpoints for dynamic object modeling, achieving real-time rendering and high visual fidelity. Empirical results show that the proposed method achieves state-of-the-art results on several benchmarks.
Strengths: 1. The paper is well-written and easy to follow. The main contribution and methodology are well illustrated.
2. The use of an adaptive control strategy to manage superpoints is innovative and helps in optimizing the model, avoiding redundancy, and maintaining efficiency.
Weaknesses: 1. Although this paper achieves real-time rendering compared to AP-NERF, I find it somewhat incremental and lacking in innovation since most parts of the method are existing concepts.
2. This paper emphasizes the concept of "Reposable," but the related experiments are very limited. A thorough analysis of this aspect could effectively distinguish this paper from AP-NERF.
3. This method compares fewer baselines, and the quantitative results do not show significant improvements in rendering effects and speed compared to the baselines, as shown in Table 3, Table 4, and Table 5.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The value of $M$ is not given in the paper; the authors should do an ablation study on it.
2. The proposed method and results should be discussed with the latest relevant methods as referenced in [1].
[1] SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please refer to the Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your time and efforts.
- **W1**: Lacking in innovation
**Answer**: While our work builds upon previous works, to the best of our knowledge, it is the first work to discover the skeleton of articulated objects represented by 3D Gaussian Splatting.
- **W2**: Distinguish this paper from AP-NERF by a thorough analysis
**Answer**: Both our method and AP-NeRF are proposed for learning articulated representations of dynamic objects. However, there are two main differences between our method and AP-NeRF:
1. While AP-NeRF is based on point-based NeRF representations, our method is based on 3D Gaussian Splatting, which is key to achieving real-time rendering.
2. While AP-NeRF extracts the skeleton using the Medial Axis Transform (MAT), our method discovers the skeleton based on the motion of superpoints.
- **W3**: This method compares fewer baselines, and the quantitative results do not show significant improvements
**Answer**:
- Unlike other methods focused on reconstructing dynamic scenes (e.g., D-NeRF, HyperNeRF, Deformable-3D-GS, 4D-GS), our work, along with WIM and AP-NeRF, uniquely addresses explicit skeleton extraction from videos without any priors.
- As evidenced in Tables 3, 4, and 5, our approach achieves real-time rendering (>100 FPS), surpassing WIM and AP-NeRF (<1 FPS). Furthermore, our method exhibits superior rendering quality on the D-NeRF and Robots datasets.
- **Q1**: Ablation study for the number of $M$
**Answer**: $M$ is the number of superpoints, which is initialized as 512 as shown in L212. We conduct the ablation study for $M$. The results are shown in Table 2 in the attached `PDF`.
- **Q2**: Comparison with SC-GS
**Answer**: A quantitative comparison between our method and SC-GS is provided in Table 1 in the attached `PDF`. In addition:
- **Difference**: Our method extracts the explicit skeleton model and refines it during the *Kinematic* stage, while SC-GS does not.
- **Similarity**: Both our method and SC-GS utilize the sparse points (i.e., superpoints in ours, control points in SC-GS) with LBS. Notably, the Dynamic stage in our approach can be replaced with SC-GS.
- **Difference**: While SC-GS employs a Gaussian-kernel RBF to compute the weights of LBS, our method learns them directly (i.e., $\mathbf{W}$ in Eq. 7).
- **Difference**: SC-GS can edit motion by minimizing APAR energy. Compared with SC-GS, our method directly manipulates the skeleton, which is more efficient and simpler.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I have carefully read your reply but still have some concerns.
Firstly, regarding the novelty of the work, while it may be the first to propose the concept of a skeleton in 3DGS, SC-GS has already introduced the use of control points in 3DGS, although it did not explicitly calculate the connections. This significantly impacts the novelty of the work.
Secondly, the difference in LBS design mentioned by the authors as a distinction from SC-GS seems to be a relatively minor modification.
Lastly, and more critically, in the PDF provided by the authors, there is a significant performance gap between the proposed method and SC-GS, which makes me question the rationale behind these design differences.
In conclusion, I remain negative about the work's overall contribution.
---
Rebuttal 2:
Title: Thank you for your feedback
Comment: First of all, although SC-GS uses control points to implement motion editing, there are no constraints between the control points. Therefore, compared to re-posing with a skeleton, the motion editing in SC-GS:
- 1) can easily generate physically unrealistic motion, and thus render unrealistic images;
- 2) needs an additional optimization process, which is more complex and time-consuming.
We believe that discovering the skeleton of an object will be helpful in certain fields, such as game production, video generation, physical simulation, etc.
Second, it must be pointed out that the design of LBS is **not** the contribution of our paper. Our main contribution is to discover the skeleton of an articulated object by utilizing the motion of superpoints.
Third, the main reasons for the performance gap are as follows:
- To simplify the skeleton, the proposed adaptive control strategy significantly reduces the number of superpoints. ($512\to \sim 100$). The number of superpoints has an impact on performance.
- According to the code of SC-GS, SC-GS uses more tricks than ours, such as longer training time (80k), an extra MLP for input time (called time-net), additional hyper-features to help k-nearest neighbor search, the initialization train for control points and so on.
We are incorporating SC-GS into our work to achieve even higher performance.
If you have any further questions or would like additional clarification, please do not hesitate to contact us. We would be more than happy to provide additional information or discuss any aspect of our work in greater detail. Your feedback is deeply appreciated, and we remain fully committed to addressing any concerns you may have.
---
Rebuttal Comment 2.1:
Comment: Thanks for your response. I understand that the rebuttal time is limited, and I also acknowledge the technical differences between this work and SC-GS.
However, I believe the overall motivation remains the same, which is to incorporate articulated information into 3DGS to control dynamic generation. The potential issues with SC-GS mentioned by the authors, as well as the performance gap, are not supported by any experimental evidence, so they fail to convince me.
Additionally, if the focus of the paper is on obtaining the object's skeleton, there are many methods, such as [1], [2], and [3], that can extract heterogeneous skeletons from videos or meshes.
Finally, I will maintain my score unless there is additional evidence that can prove the points raised by the authors.
[1] CASA: Category-agnostic Skeletal Animal Reconstruction, NIPS 2022.
[2] Object Wake-Up: 3D Object Rigging from a Single Image, ECCV 2022.
[3] RigNet: Neural Rigging for Articulated Characters, TOG 2020.
---
Reply to Comment 2.1.1:
Comment: Thanks for your feedback.
There are significant differences between SC-GS's motivation and ours:
- SC-GS is proposed to solve the novel view synthesis problem for dynamic scenes, introducing sparse control points together with an MLP for modeling scene motion. Its main motivation is that a sparse set of bases can represent the motions within a dynamic scene. Therefore, SC-GS does not involve articulated information.
- Our approach focuses on the task of extracting explicit skeletons from images. As the methods you listed show, this task is important and challenging. Our motivation is that we can extract an explicit skeleton from the learned motions of the superpoints.
Compared with the listed methods, our method **does not require any priors**, such as templates, 3D datasets, etc. Specifically,
1. To recover the skeletal shape of an animal from a monocular video, CASA [1] first retrieves the most relevant articulated shape from a 3D character asset bank.
2. To reconstruct, rig, and animate 3D objects from single images, Object Wake-Up [2] is trained on a dataset of rigged 3D characters. The "ModelsResource-RigNetv1" dataset it uses only covers limited categories, e.g., chair, sofa, desk, and table.
3. While RigNet [3] aims to produce animation rigs from input character models, it also requires the dataset of 3D articulated characters to train the model.
We are working to incorporate SC-GS into our work to provide further evidence. | Summary: The paper proposes a novel approach for reconstructing reposable dynamic 3D objects from RGB videos using Gaussian Splatting, without requiring any template as input.
To achieve this, the paper suggests grouping Gaussians around superpoints, which are intended to represent rigid parts of the scene. By optimizing and analyzing these superpoints, a full skeleton model of an articulated object in the input video can be built, refined, and used for reposing purposes.
**Details**
The approach consists of two main stages:
1. *Dynamic Stage*. After optimizing a canonical 3D Gaussian Splatting (3DGS) representation for a few iterations, a set of superpoints is initialized in the scene. A deformable field mapping each superpoint to a time-variant 6DoF transformation is optimized. These transformations are used to derive the motion of Gaussians by interpolating transformations with neighboring superpoints through linear blend skinning (LBS). The paper also proposes a gradient-based strategy to control (prune, merge, or densify) the number of superpoints in the scene. Toward the end of the dynamic stage, a skeleton structure with joints is enforced and discovered in the scene by analyzing the distance between and configuration of superpoints.
2. *Kinematic Stage*. After discovering the skeleton model of the scene, the number of Gaussians and superpoints is fixed and optimized along with a new MLP mapping skeleton joints to time-variant rotation matrices. These matrices are used to compute the motion of each Gaussian along the kinematic chains using LBS. After full optimization, the skeleton can be used for reposing and editing the reconstructed object.
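The LBS deformation summarized in the two stages above can be sketched as follows: each Gaussian center is warped by a convex combination of its neighboring superpoints' rigid transforms. This is a toy reconstruction with made-up weights and transforms; the paper's actual parameterization (e.g., neighbor selection, weight learning) may differ:

```python
# Toy LBS sketch: blend per-superpoint rigid transforms (R_k, t_k) into a
# per-Gaussian deformation using skinning weights W (rows sum to 1).
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 5                        # superpoints, Gaussians
R = np.stack([np.eye(3)] * K)      # per-superpoint rotations (identity here)
t = rng.normal(size=(K, 3))        # per-superpoint translations
W = rng.random(size=(N, K))
W /= W.sum(axis=1, keepdims=True)  # skinning weights, each row sums to 1

x = rng.normal(size=(N, 3))        # canonical Gaussian centers
# blended deformation per Gaussian: x'_n = sum_k W[n,k] * (R_k x_n + t_k)
x_def = np.einsum('nk,kij,nj->ni', W, R, x) + W @ t
```

With identity rotations, each deformed center reduces to the canonical center plus the weighted average of the superpoint translations, which makes the blending easy to verify.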
The paper presents extensive experiments and demonstrates higher rendering performance and speed compared to concurrent reposable dynamic Radiance Field methods.
Strengths: 1. The paper is well-written and easy to follow.
2. The task addressed by the paper is challenging but crucial for many applications in both graphics and robotics. I appreciate the proposed strategy, which successfully retrieves kinematic chains from RGB videos.
3. The quantitative evaluation presented in the paper is convincing and clearly demonstrates the superiority of the approach over concurrent methods.
Weaknesses: 1. The paper may lack sufficient skeleton examples to effectively demonstrate that the proposed approach can recover meaningful structures from RGB videos. Indeed, the primary goal is to recover skeleton structures and enable reposing capabilities, but only a single skeleton example is provided (Figure 5). Including more qualitative examples would likely make the paper more convincing.
2. The paper does not provide details on the optimization time and required resources (e.g., VRAM) for the proposed approach. It appears that a large number of training iterations is needed; a comparison with previous state-of-the-art models would be valuable.
3. The limitations of the approach are interesting but are only discussed in the supplementary material, which is problematic in my opinion. These limitations are crucial for further research and should be included in the main text.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. What are the optimization time and required resources (e.g., VRAM) for the proposed approach? How does it compare to state-of-the-art methods?
2. Do the authors have insights on how the method would perform in the context of a monocular video with a moving camera, which is a more realistic setting than multi-view videos?
3. In Figure 5, the skeleton appears to be quite accurate, but some bones are located outside the geometry. Would it be possible to enforce the skeleton to be located “inside” the geometry, perhaps by applying a penalty that encourages superpoints to be close to the centroid of their associated Gaussians?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: As I already mentioned it, the limitations are only discussed in the supplementary material, which is problematic in my opinion. I encourage the authors to try to move the limitations to the main paper in the final version.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful and valuable comments. Below are our responses to the specific points raised.
- **Q1/W2**: Optimization time and required resources
**Answer**: Similar to 3D-GS, the optimization time and required resources are dependent on the number of Gaussians. For the D-NeRF dataset, the average optimization time is 6.3 hours, and the average peak GPU memory is 4.08 GB. Please refer to Tables 1 and 3 in the attached `PDF` for details.
- **Q2**: Performance on monocular video with a moving camera
**Answer**: The camera setup in the D-NeRF dataset can be viewed as a single camera capturing 360-degree surround images centered on the target object. Therefore, with such a monocular video from a moving camera as input, our method can achieve good performance. However, using forward-facing monocular videos as input may result in reconstruction failures or inaccurate skeletons.
- **Q3**: Enforce the skeleton to be located "inside" the geometry
**Answer**: We acknowledge the suggestion to add an extra regularizer term to encourage the skeleton to be "inside" the geometry. We will explore this idea in our future work.
- **W1**: More qualitative examples
**Answer**: We have provided additional qualitative examples in the supplementary video material.
- **W3**: Limitations in the supplementary material
**Answer**: Thanks for your suggestion. We will move it into the main text in the final version.
---
Rebuttal Comment 1.1:
Title: Thanks!
Comment: I would like to thank the authors for the efforts they made during the rebuttal as well as the answers to my questions.
I think the paper tackles a very challenging task (recovering skeletons with a template-free optimization pipeline) and provides an interesting solution relying on 3DGS.
Even though the output skeletons are still over-complicated for potential applications in animation, the proposed approach is certainly a step forward for the field.
For these reasons, I recommend acceptance and decide to maintain my initial rating. | Rebuttal 1:
Rebuttal: We thank the reviewers for their positive and constructive feedback.
The attached `PDF` contains 3 tables and 2 figures.
Pdf: /pdf/2c72137e9137f6bedad6dae3ce1b3b67fa96ef28.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models | Reject | Summary: This paper explores compute-optimal inference for large language
models (LLMs), focusing on designing models and strategies that
balance additional inference-time computation with improved
performance. The study evaluates the effectiveness and efficiency of
various inference strategies, including Greedy Search, Majority
Voting, Best-of-N, and Weighted Voting, across different model sizes
(e.g., 7B and 34B) and computational budgets. Experimental results
indicate that smaller models with advanced tree search algorithms can
achieve a Pareto-optimal trade-off, offering significant benefits for
end-device deployment. For example, the Llemma-7B model matches the
accuracy of the Llemma-34B model on the MATH500 dataset while using
half the FLOPs. These findings suggest that smaller models with
sophisticated decoding algorithms can enhance problem-solving accuracy
across various generation tasks.
Strengths: - The paper focuses on an interesting topic and should be of interest
to the audience of NeurIPS.
- It considers a comprehensive experimental investigation to confirm
the claims.
- The proposed tree search algorithm is interesting and seems to
outperform the competition.
Weaknesses: - Although the paper offers quite thorough experimental analysis, it
does not look deep in terms of theoretical ideas (although there are
2 theorems), which may be a problem for a flagship venue like
NeurIPS.
- Overall findings on the possibility to train an equally accurate
model with fewer computational resources do not look surprising.
- The paper would benefit from additional proof-reading as there are a
large number of typos present.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper concentrates on mathematical problem-solving tasks using 7B
and 34B models, with findings potentially not applicable to other
domains. Future research should explore a broader range of model sizes
and different training datasets to better understand compute-optimal
inference in mathematical problem-solving.
I should also say that these limitations have been explicitly
discussed by the authors themselves (so not a criticism).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Concern 1: Although the paper offers quite thorough experimental analysis, it does not look deep in terms of theoretical ideas (although there are 2 theorems), which may be a problem for a flagship venue like NeurIPS.
Our main focus is on formalizing the compute-optimal inference problem, designing a new method, and empirical analysis. To further aid in reasoning about inference strategies in our problem setting, we provided theoretical analysis. We respectfully disagree that these contributions are together insufficient for a venue like NeurIPS. We also follow your suggestion and additionally prove the asymptotic convergence bounds for majority voting and weighted majority voting to understand the performance saturation of the voting methods.
**Notations and assumptions.** Let $\mathcal{V}$ be a _finite_ vocabulary and $\mathcal{V}^*$ its Kleene closure, i.e., the set of all strings. Given a problem $x$, we say a language model answers $y$ to this problem if the model outputs $r\mathrm{e}y$, where $r\in\mathcal{V}^*$ can be any "reasoning path" and $\mathrm{e}\in\mathcal{V}$ denotes a special token that marks the end of reasoning. We further assume that the answer string is no longer than $L$ tokens, i.e., $|y|\leq L$ for some fixed $L\in\mathbb{N}^*$, where $|y|$ denotes the length of $y$.
For a language model $\pi$, denote by $\pi(v|w)$ the probability of generating $v$ given input (prompt) $w$. For a reward model $\rho$, denote by $\rho(v)$ the score it assigns to the string $v$.
We use $\mathbb{I}$ to denote the indicator function.
**Theorem.** Consider a dataset $\mathcal{D}=\\{(x_i, y_i)\\}^m_{i=1}$ where $x_i$ and $y_i$ denote input and true answer, respectively. For a language model $\pi$, denote by $\mathrm{acc}^{\mathrm{MV}}_n (\mathcal{D}; \pi)$ the accuracy on $\mathcal{D}$ using Majority Voting with $n$ samples. Following the notations and assumptions defined above, we have:
$\mathbb{E}\left[\mathrm{acc}_ n^{\mathrm{MV}}(\mathcal{D}; \pi)\right] = \frac{1}{m}\sum_{i=1}^m \mathbb{I}\left[y_i = \arg\max_{|y|\leq L} \sum_{r\in\mathcal{V}^*}\pi(r\mathrm{e}y|x_i)\right] - \mathcal{O}(c^{-n})$ for some constant $c>1$.
For weighted voting, similar convergence results hold, i.e., the accuracy of weighted voting also saturates at the speed of $ \mathcal{O}(c^{-n})$ where $c>1$. We will include these results and proofs in the paper revision.
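To illustrate the saturation behavior the theorem describes, here is a small self-contained sketch (our own illustration, not part of the submission): for a simplified model that emits the correct answer with probability $p > 1/2$ out of two candidate answers, the exact majority-voting accuracy approaches its limit at a geometric rate in the number of samples $n$.

```python
import math

def majority_vote_acc(p: float, n: int) -> float:
    """Exact accuracy of majority voting over n i.i.d. samples when the
    correct answer is drawn with probability p and there are only two
    candidate answers (n odd, so ties cannot occur)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# The gap to the limiting accuracy shrinks geometrically in n,
# matching the O(c^{-n}) term in the theorem above.
gaps = [1 - majority_vote_acc(0.6, n) for n in (1, 11, 41, 101)]
```

This toy setting only mirrors the theorem's qualitative claim (exponential saturation); the theorem itself covers arbitrary answer sets of bounded length.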
> Concern 2: Overall findings on the possibility to train an equally accurate model with fewer computational resources do not look surprising
There might be some misunderstanding, since we study the compute-optimal _inference_ strategy in this paper (rather than _training_). Our main findings are:
1. Generally, more computation at inference time leads to higher performance, but it eventually saturates.
2. Using the same training dataset, smaller models can be more compute-optimal than larger models during inference; they typically achieve comparable performance with less computation.
3. While typical tree search algorithms like MCTS are not compute-optimal (they may improve performance, but by using much more computation), our proposed tree search algorithm REBASE achieves higher performance with less computation than sampling. This yields the compute-optimal inference strategy: using a smaller model with a sophisticated tree search algorithm (REBASE).
> Concern 3: The paper would benefit from additional proof-reading as there are a large number of typos present.
Thanks for pointing it out! We have fixed some typos and will do more proof-reading for the final version.
We sincerely hope that our responses address your concerns and you reevaluate our work based on the responses. Thank you again for your time!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. | Summary: The paper presents an approach to select an optimal inference strategy for LLMs and empirical analysis on Math problem solving tasks. The main idea is to select an inference strategy based on a computational budget (FLOPs). The underlying policy model samples solutions by generating tokens based on the budget and a ranking model consumes these tokens. A new reward model is developed to explore the solution space more effectively. The reward acts as a weighted majority function over the solutions.
Experiments are performed on math problem-solving benchmarks. One of the key insights from the experiments is that a smaller LLM can outperform a larger LLM by using a smaller computational budget while maintaining similar accuracy. They also show that the proposed approach with a smaller budget has comparable accuracy to sampling with a larger budget.
Strengths: - The insight that an inference-time strategy can compensate for using smaller LLMs in generation seems interesting
- The experiments also provide a basis for analyzing scaling properties of inference which can be significant
Weaknesses: - In terms of the method itself, I was not sure if it is very novel. It seems to be a smaller variation on the tree search methods that search for solutions in the generated space
- In terms of comparisons, I was not sure about the significance of the benchmark, i.e., are there some properties that make the proposed reward reranking more optimal in the Llemma model specifically (due to the structure of math problems, etc.)? In general, since the main contribution of the paper is empirical, I think there should be experiments or discussions on different LLMs to make the contribution more significant.
- Overall, the empirical conclusions seem very tied to the specific benchmarks, so I was a little unsure regarding the significance of the conclusions.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Is the comparison based on state of the art inference strategies for compute-optimal inference? Specifically, the other methods are all agnostic of the computational limits, so I was wondering if there are other approaches that do take computational limits into account (the related works do not mention any so it is possible there are not)?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations regarding the datasets are mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Concern 1: In terms of the method itself, I was not sure if it is very novel. It seems to be a smaller variation on the tree search methods that search for solutions in the generated space
Our emphasis in this work is on formulating and studying a new setting of compute-optimal inference. As part of this, we design a new tree search algorithm that performs well in terms of accuracy and compute budget. This results in two kinds of novelty:
- First, we are the first to formulate the compute-optimal inference problem (at least in the context of LLM problem solving). Previous research on inference methods for problem solving mainly targets accuracy improvements and pays little attention to the additional increase in compute budget.
- Second, we propose the REBASE method which combines the merits of MCTS and sampling in a novel way. REBASE combines a highly parallel tree search with a new branch cut and exploration mechanism. This results in performance improvements from the new exploration mechanism, with a cost comparable to sampling. While on the surface it may appear as a variation of existing tree search methods, it is nontrivial to design such an algorithm and achieve a good accuracy-cost tradeoff. We are not aware of other methods that have done so, and/or used the same mechanisms that we propose.
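The rebuttal describes REBASE only at a high level, so the following is purely a hedged sketch of what a "reward-balanced" expansion step could look like: each depth's fixed width is divided among candidate nodes in proportion to softmax-normalized reward scores, and nodes allotted zero expansions are pruned (the branch-cut effect). All names and details here are our assumptions, not the authors' implementation.

```python
import math

def balanced_allocation(rewards, width, temperature=1.0):
    """Divide an expansion budget `width` among candidate nodes in
    proportion to softmax(reward / temperature); nodes that receive
    zero budget are effectively pruned. Illustrative sketch only."""
    m = max(rewards)
    weights = [math.exp((r - m) / temperature) for r in rewards]
    total = sum(weights)
    alloc = [round(width * w / total) for w in weights]
    # Repair rounding drift so the budget is spent exactly, putting
    # the remainder on the highest-reward node.
    i_best = max(range(len(rewards)), key=rewards.__getitem__)
    alloc[i_best] += width - sum(alloc)
    return alloc

# Four partial solutions scored by a process reward model; the weakest
# branch gets no budget and is cut, unlike uniform beam expansion.
alloc = balanced_allocation([2.0, 1.0, 0.0, -3.0], width=8)
```

Unlike MCTS, such an allocation expands a whole depth level in one batched step, which is consistent with the "highly parallel" description above, though the exact mechanism in REBASE may differ.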
> Concern 2: In terms of comparisons, I was not sure about the significance of the benchmark, i.e., are there some properties that make the proposed reward reranking more optimal in Llema model specifically (due to the structure of math problems, etc.). In general, since the main contribution of the paper is empirical, I think there should be experiments or discussions different LLMs to make the contribution more significant. -Overall, the empirical conclusions seem very tied to the specific benchmarks, so I was a little unsure regarding the significance of the conclusions.
MATH and GSM8K are the two most widely used benchmarks for evaluating LLM math abilities. Although it is common in recent papers to use only these two benchmarks, to address your concern we additionally provide results on a code generation task (the MBPP benchmark) with different LLMs. Please check the detailed results in the general response.
In our original paper, we discussed various models, including Llemma models and Mistral, which have entirely different architectures. For the new code generation task, in addition to presenting the results from Llama3-8B, we have included the results from CodeLlama-7B-Instruct to demonstrate the effectiveness of REBASE across different LLMs.
**CodeLlama-7B-Instruct**
| Sample num | Sampling FLOPS ($\times 10^{12}$) | Sampling Pass@n | Rebase FLOPS ($\times 10^{12}$) | Rebase Pass@n |
|------------|------------------------|-----------------|----------------------|---------------|
| 8 | 13 | 45.6% | 10.5 | 57.6% |
| 16 | 26 | 54.2% | 21 | 65% |
| 32 | 52 | 62.8% | 42 | 69% |
| 64 | 104 | 68.6% | 84 | 72.6% |
> Question: Is the comparison based on state of the art inference strategies for compute-optimal inference? Specifically, the other methods are all agnostic of the computational limits, so I was wondering if there are other approaches that do take computational limits into account (the related works do not mention any so it is possible there are not)?
We use state-of-the-art inference strategies (for instance, weighted majority voting and MCTS have been used to achieve state-of-the-art accuracy in problem solving tasks [1-4]), but a key contribution of our paper is pointing out that these methods may not be optimal when cost is taken into account. Since this compute-optimal setting is new when it comes to problem solving, we are not aware of a "state-of-the-art" for compute-optimal inference. Instead, we use strong, commonly used inference strategies, analyze their performance tradeoffs, and show that designing a tree search algorithm with cost in mind can lead to a better tradeoff. The comparison of our REBASE with weighted majority voting and MCTS is shown in Figure 1. We also present an example comparison of the Llemma 7B performance under the different inference strategies here:
|inference strategy| FLOPS $\times 10^{13}$|Accuracy on MATH500|
|----------|--------|-------|
|weighted majority voting|25.1|45.2%|
|MCTS|23|44%|
|Rebase|14.8|46.8%|
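For readers unfamiliar with the baseline, weighted majority voting simply aggregates reward-model scores per distinct final answer; a minimal sketch (our illustration, not the authors' code):

```python
from collections import defaultdict

def weighted_majority_vote(samples):
    """samples: (final_answer, reward_score) pairs, one per generation.
    Returns the answer with the highest total reward; with all scores
    equal to 1, this reduces to plain majority voting."""
    totals = defaultdict(float)
    for answer, score in samples:
        totals[answer] += score
    return max(totals, key=totals.get)

# Three lower-scored samples agree on "x=2"; one high-scored sample
# says "x=3". The summed reward still favors "x=2" here.
winner = weighted_majority_vote(
    [("x=2", 0.4), ("x=2", 0.5), ("x=2", 0.3), ("x=3", 0.9)])
```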
[1] Wang, Xuezhi, et al. "Self-consistency improves chain of thought reasoning in language models." arXiv preprint arXiv:2203.11171 (2022).
[2] Zhang, Shun, et al. "Planning with large language models for code generation." arXiv preprint arXiv:2303.05510 (2023).
[3] Liu, Jiacheng, et al. "Making ppo even better: Value-guided monte-carlo tree search decoding." arXiv preprint arXiv:2309.15028 (2023).
[4] Tian, Ye, et al. "Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing." arXiv preprint arXiv:2404.12253 (2024).
We sincerely hope that our responses address your concerns and you reevaluate our work based on the responses. Thank you again for your time!
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response. Based on the response, I think the proposed approach seems to yield a state of the art inference strategy. I still think since the main contribution focuses on engineering a better inference strategy and empirical analysis of LLMs, the type of datasets could be broader as suggested by other reviewers as well. At the same time, the proposed approach could improve the usability of LLMs in specific domains (e.g. MATH understanding). I will increase my score based on the discussions.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response and for taking the time to reevaluate our work. While the REBASE method does improve performance on benchmarks, we would like to clarify that our paper is not primarily focused on proposing new methods. Instead, our key contributions lie in the development of inference scaling laws and the formulation of the inference compute-optimal problem. To illustrate our novel findings, we use the REBASE method, demonstrating that employing smaller models with advanced inference strategies is, in fact, compute-optimal.
---
Reply to Comment 1.1.2:
Comment: Thank you for increasing our score. Besides the math tasks, we also added the MBPP code generation benchmark; please see the detailed results in our general response. | Summary: This paper investigates the optimal inference configurations of large language models (LLMs). The proposed inference strategy, REward BAlanced SEarch (REBASE), combines the strengths of Monte Carlo Tree Search (MCTS) with reduced inference costs, resulting in improved performance on math-domain tasks.
Strengths: 1. This paper provides a comprehensive overview, i.e., the inference scaling law, of the performance of different sampling strategies under various inference configurations.
2. The novel REBASE inference strategy achieves better downstream task performance under the same computational budget or even less.
Weaknesses: ### Major
1. Did you take into account the inference cost of the reward model (RM) in your analysis? As REBASE uses the RM to judge the quality of intermediate solutions more frequently than other sampling strategies, such as weighted majority voting, it's crucial to consider this aspect to provide a holistic view of the efficiency and practicality of your proposed strategy.
2. The base model with post-training techniques such as SFT and RLHF inherently limits the upper bound of performance. It seems that adding more tricks during inference could improve performance, but the marginal effect may result in diminished returns when using models already tuned by the RLHF process. Could you compare the performance gains of REBASE between the base model, the SFT model, and the Chat model? Is the performance gain only significant in models that have not been tuned?
3. In Section 4.2, the observation in "Scaling law of compute-optimal inference" indicates that the optimal inference strategy is invariant to the amount of compute but depends on the model size, i.e., the model's inherent capacity. This raises a concern: does the inference strategy significantly improve the model's performance, or does it only take effect in certain scenarios, such as with base models that have not been aligned?
4. The paper focuses solely on the math domain. To strengthen your claims, a more comprehensive evaluation across general domains using widely adopted benchmarks, such as MMLU, SuperGLUE, HumanEval, etc, is necessary.
5. There appears to be no significant improvement on the GSM8K dataset compared to the MATH500 dataset.
### Minor
1. Figures 2 and 3 are not referenced in the main manuscript.
2. Figures 2 and 3 appear to be in draft form and are somewhat vague.
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weakness.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Concern 1: Did you take into account the inference cost of the reward model (RM) in your analysis? As REBASE uses the RM to judge the quality of intermediate solutions more frequently than other sampling strategies, such as weighted majority voting, it's crucial to consider this aspect to provide a holistic view of the efficiency and practicality of your proposed strategy.
We didn't take the inference cost of the reward model into account because, when measuring inference computation, the cost of the RM is negligible compared to the policy model. While the policy model generates a sequence of tokens autoregressively, the reward model only runs one forward pass to obtain the logits for calculating the score. The dominant part of the computation is the decoding process, hence we only take that into consideration.
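As a concrete reference point for these cost comparisons, inference FLOPs for dense transformers are commonly estimated with the ≈ 2 × (parameters) × (tokens) rule per forward pass; that the paper uses exactly this accounting is our assumption, and the numbers below are hypothetical.

```python
def decode_flops(n_params: float, n_tokens: int) -> float:
    """Standard rough estimate: about 2 * parameters FLOPs per token
    processed or generated by a dense transformer."""
    return 2.0 * n_params * n_tokens

# e.g. a 7B-parameter policy generating 64 solutions of ~500 tokens
# each (hypothetical numbers for illustration)
total = decode_flops(7e9, 64 * 500)
```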
> Concern 2: The base model with post-training techniques such as SFT and RLHF inherently limits the upper bound of performance. It seems that adding more tricks during inference could improve performance, but the marginal effect may result in diminished returns when using models already tuned by the RLHF process. Could you compare the performance gains of REBASE between the base model, the SFT model, and the Chat model? Is the performance gain only significant in models that have not been tuned?
All of the models in our current paper have been tuned on the GSM8k and MATH training set (i.e., the MetaMath dataset). In order to see how scaling laws of inference vary based on the post-training methodology (or lack thereof), we conducted additional experiments using Llama3-base and Llama3-Instruct on the MBPP benchmark, with the results shown below.
**Llama3-8B-Base**
| Sample num | Sampling FLOPS ($\times 10^{12}$) | Sampling Pass@n | Rebase FLOPS ($\times 10^{12}$) | Rebase Pass@n |
|------------|------------------------|-----------------|----------------------|---------------|
| 8 | 5.6 | 25.8% | 8 | 33.2% |
| 16 | 11.2 | 39.8% | 16 | 47.4% |
| 32 | 22.4 | 51% | 32 | 59% |
| 64 | 44.8 | 62.8% | 64 | 68% |
**Llama3-8B-Instruct**
| Sample num | Sampling FLOPS ($10^{12}$) | Sampling Pass@n | Rebase FLOPS ($10^{12}$) | Rebase Pass@n |
|------------|------------------------|-----------------|----------------------|---------------|
| 8 | 8 | 63% | 8.26 | 69.6% |
| 16 | 16 | 69.4% | 17.47 | 72.4% |
| 32 | 32 | 72.4% | 34.9 | 75.8% |
| 64 | 64 | 79% | 69.15 | 81.4% |
We can see that when using REBASE as the inference algorithm, Llama3-8B-Base gains more improvement (7.4%, 7.6%, 8%, and 5.2% over sampling 8, 16, 32, and 64 solutions, respectively) than Llama3-8B-Instruct (6.6%, 3%, 3.4%, 2.4%). This matches the "weaker models gain more from tree search" observation in our paper (Section 4.3). However, the performance of Llama3-8B-Instruct, which has undergone several post-training stages (e.g., SFT, RL), still improves significantly with more samples, and the tree-search-based REBASE still outperforms vanilla sampling. Thank you for this suggestion, which encouraged us to conduct extra experiments that strengthen our conclusion that weaker models tend to benefit more from tree search.
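For completeness, Pass@n figures like those in the tables above are typically computed with the unbiased estimator from the Codex evaluation methodology (Chen et al., 2021, "Evaluating Large Language Models Trained on Code"); whether the authors use this exact estimator is our assumption.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: the chance that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, solves the problem."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

When n equals the reported sample budget (as appears to be the case here), Pass@n reduces to "at least one of the n generations is correct."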
> Concern 3: Section 4.2, the observation in "Scaling law of compute-optimal inference" indicates that the optimal inference strategy is invariant to the amount of compute but depends on the model size, i.e., the model's inherent capacity. This raises a concern: does the inference strategy significantly improve the model's performance, or does it only take effect in certain scenarios, such as with base models that have not been aligned?
In the paper, all the models (Llemma and Mistral) are SFT-aligned. To further address your concern, we provide new experimental results showing that the inference strategies also take effect in RL-tuned models like Llama3-8B-Instruct.
> Concern 4: The paper focuses solely on the math domain. To strengthen your claims, a more comprehensive evaluation across general domains using widely adopted benchmarks, such as MMLU, SuperGLUE, HumanEval, etc, is necessary.
We conducted additional experiments on MBPP to show that our findings are also applicable to code generation. REBASE consistently outperforms the sampling method: the accuracy rises from 62.8% to 68% and from 79% to 81.4% for Llama3-Base and Llama3-Instruct, respectively, when sampling 64 solutions.
> Concern 5: There appears to be no significant improvement in the GSM8K datasets than MATH500 dataset.
This is because GSM8K is much easier than MATH500, so the error-rate ranges of the two datasets differ. In Table 2, we can see that GSM8K's error rate is typically around 10-15%, so gains of 0.7-1.9% are already relatively significant. For MATH500, the error rate is around 55-60%, so we can achieve an absolute accuracy improvement of 2.6-5.3%.
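This argument can be made quantitative with a relative-error-reduction view; the sketch below is our own illustration, with numbers loosely matching the ranges quoted above.

```python
def relative_error_reduction(err_before: float, err_after: float) -> float:
    """Fraction of the remaining errors that an improvement removes."""
    return (err_before - err_after) / err_before

# A 1.9-point gain on GSM8K's ~13% error rate removes a larger share
# of remaining errors than a 5.3-point gain on MATH500's ~57% rate.
gsm8k = relative_error_reduction(0.13, 0.13 - 0.019)
math500 = relative_error_reduction(0.57, 0.57 - 0.053)
```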
We sincerely hope that our responses address your concerns and you reevaluate our work based on the responses. Thank you again for your time!
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses. However, I'm still uncertain about the practical implications of the inference scaling law. It seems more appropriate to apply the RM during the post-training stage rather than the inference stage, as it doesn't enhance the model's inherent capacity. Additionally, the performance improvements don't seem significantly better than the vanilla inference strategy. Thus, I choose to retain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for the feedback. We respond to your follow-up concerns below.
> uncertain about the practical implications of the inference scaling law
Existing research shows that either scaling up model sizes or applying advanced inference strategies improves LLMs' task performance. However, given an inference compute budget, it is not clear whether using large models with simple inference strategies or small models with sophisticated inference strategies is more favorable. We believe that identifying the right model size and inference strategy is as important as, if not more important than, the research on training scaling laws [1], which study the trade-off between model size and the number of training tokens.
Our empirical analysis reveals that using smaller models with advanced inference strategy (like Rebase) is compute-optimal under budget constraints. We note that this is a novel and useful finding which provides important guidance to model deployment on end devices.
[1] Hoffmann, Jordan, et al. "Training compute-optimal large language models."
> It seems more appropriate to apply the RM during the post-training stage rather than the inference stage
We feel that there may be some misunderstanding regarding the RMs. The RM in RLHF scores the whole LLM output by learning from human feedback. In contrast, REBASE uses a process reward model (PRM) [2-3], which scores each reasoning step and guides the search toward a better reasoning path. Thus, **both the training methods and the working mechanisms of the two "RM"s in REBASE and RLHF are different.** We will clarify this in our paper.
What's more, recent research has shown that an RL-aligned (with RM) model using a vanilla inference strategy underperforms a model using inference strategies with RMs. [4] shows that the RL-aligned Llemma 7B model with the reward model has 34.0% accuracy (greedy decoding), underperforming weighted majority voting of the corresponding non-RL model (42.7%). [5] shows that after alignment against a reward model, the policy model still benefits from reward-model-based tree search, with performance improving from 61.84% to 67.04% on the MATH dataset. Thus, we believe that **using RMs in the post-training and inference stages are two orthogonal choices** to improve performance.
[2] Lightman, Hunter, et al. "Let's verify step by step."
[3] Ma, Qianli, et al. "Let's reward step by step: Step-Level reward model as the Navigators for Reasoning."
[4] Sun, Zhiqing, et al. "Easy-to-hard generalization: Scalable alignment beyond human supervision."
[5] Chen, Guoxin, et al. "AlphaMath Almost Zero: process Supervision without process."
> The Inference strategy doesn’t enhance the model’s inherent capability.
Besides enhancing the model's inherent capability, inference-time improvements are also important since our ultimate goal is to achieve better task performance. Generally, learning and inference are complementary methods [6] for improving LLM task performance. Many works focus on inference-time improvement: for example, [7] introduces search during inference, [8] refines the LLM's output to get better results, and [9] integrates the LLM's output with programs to enhance reasoning capability.
[6] Jones, Andy L. "Scaling scaling laws with board games."
[7] Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." NeurIPS 2024.
[8] Madaan, Aman, et al. "Self-refine: Iterative refinement with self-feedback." NeurIPS 2024.
[9] Gao, Luyu, et al. "Pal: Program-aided language models." International Conference on Machine Learning. PMLR, 2023.
> The performance improvements don’t seem significantly better than the vanilla inference strategy.
In our general response, one can see that REBASE leads to substantial accuracy improvements. Specifically, REBASE improves the baseline performance by 5.2%-8% (Llama3-Base) and 2.4%-6.6% (Llama3-Instruct) under different sampling numbers.
In our paper, we compare REBASE with weighted majority voting in the plots, where both REBASE and weighted majority voting use the reward model. To compare the performance of REBASE against vanilla inference strategies (majority voting or greedy decoding), we present the results here:
**Mistral 7B on MATH**
| |greedy decoding|Maj@256|Rebase@256|
|---|-------|------|------|
|Accuracy|28.6% | 37.8% | 46.8% |
**Llemma 7B on MATH dataset**
| |greedy decoding|Maj@256|Rebase@256|
|---|-------|------|------|
|Accuracy|30% | 41.8% | 50.6% |
**Llemma 34B on MATH dataset**
| |greedy decoding|Maj@64|Rebase@64|
|---|------|-------|-------|
|Accuracy|33% | 45% | 51.4% |
REBASE improves over greedy decoding by ~20 points and over naive majority voting by ~8 points on average. We believe these performance gains are significant.
We hope our responses address your concerns and you reconsider the score. Thank you for your time!
---
Reply to Comment 1.1.2:
Comment: Thank you for your review. We would appreciate it if you could confirm whether our response has addressed your additional concerns. We have clarified the practical implications of inference scaling, the distinction between PRM in inference and RM in RLHF, and the significance of inference-time algorithms in the last thread. Please let us know if you have any further questions, we hope you might consider increasing the score, thanks! | null | null | Rebuttal 1:
Rebuttal: **General Response**
We are grateful to all reviewers for their insightful comments. We appreciate that the reviewers found our method to be novel (PMgX), our analysis of the inference scaling law to be comprehensive (PMgX, pgJ7, drh5), and our topic to be interesting (pgJ7, drh5).
We summarize our contributions and, following your suggestions, present new experiments here:
**Key contributions**:
- We are the first to formulate the compute-optimal inference problem and conduct a comprehensive study of different model sizes and inference strategies.
- Through our comprehensive study on the compute-optimal inference problem, we find that increasing computational budget during inference generally leads to enhanced performance. Additionally, we demonstrate an upper bound for the voting method and show that advanced inference strategies (weighted voting, Rebase) offer a better trade-off between computation and performance.
- While previous tree search methods sacrifice computation cost for performance improvement, our proposed REBASE method is a compute-optimal inference strategy which achieves high performance with less computation.
- We find that using smaller models with sophisticated inference strategies is the compute-optimal approach for inference.
**New Experiments**:
Following the suggestions to study other benchmarks (PMgX, pgJ7) and to compare the performance of the base model with the RL-tuned model (PMgX), we add new experiments on the code-generation benchmark MBPP with the Llama3-Base and Llama3-Instruct models. For REBASE and sampling, we use the same configuration (zero-shot prompt, temperature, top_p, etc.).
**Llama3-8B-Base**
| Sample num | Sampling FLOPS ($\times 10^{12}$) | Sampling Pass@n | Rebase FLOPS ($\times 10^{12}$) | Rebase Pass@n |
|------------|------------------------|-----------------|----------------------|---------------|
| 8 | 5.6 | 25.8% | 8 | 33.2% |
| 16 | 11.2 | 39.8% | 16 | 47.4% |
| 32 | 22.4 | 51% | 32 | 59% |
| 64 | 44.8 | 62.8% | 64 | 68% |
**Llama3-8B-Instruct**
| Sample num | Sampling FLOPS ($\times 10^{12}$) | Sampling Pass@n | Rebase FLOPS ($\times 10^{12}$) | Rebase Pass@n |
|------------|------------------------|-----------------|----------------------|---------------|
| 8 | 8 | 63% | 8.26 | 69.6% |
| 16 | 16 | 69.4% | 17.47 | 72.4% |
| 32 | 32 | 72.4% | 34.9 | 75.8% |
| 64 | 64 | 79% | 69.15 | 81.4% |
These new results strengthen our claim that REBASE is a compute-optimal inference approach. Namely, REBASE consistently outperforms the baseline across the different models (base, SFT, RL). Also, the new results further support the finding that weaker models gain more and stronger models gain less (this is also pointed out in Section 4.3 of our paper).
We also find that when computation is limited and a naive sampling method is used, the gap between the base model and the RL-instructed model is large (25.8% vs. 63%), but with more computation and an advanced inference strategy, the gap narrows (68% vs. 81.4%). This suggests that inference-time strategies can bridge the gap between strong and weak models. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Curriculum Fine-tuning of Vision Foundation Model for Medical Image Classification Under Label Noise | Accept (poster) | Summary: The authors propose Cufit, a curriculum fine-tuning method for improving the performance of medical image classification under the noisy labels setting. The method shows strong performance against other baselines on several medical datasets. The authors have also provided results on a non-medical dataset.
Strengths: **Strengths**
1. The proposed strategy Cufit outperforms other baselines included in the evaluation
2. The strategy is agnostic to models (CNNs and ViTs) and images (medical and natural)
3. Cufit does not require knowledge about certain hyperparameters which are required in other methods.
4. The authors have presented results on non-medical images as well.
Weaknesses: **Weaknesses**
1. The paper is very poorly written with spelling and grammatical errors throughout the text. Certain elements in the text are written in a convoluted way that confuses the reader.
2. The entire paper is about the proposed method “Cufit” but no section in the paper describes the algorithm in detail. Section 6.1 (“How does Cufit work?”) does not talk about the method but only about the results.
3. It is important to cite previous work on PEFT in medical image analysis such as [1] in the text.
4. The paper lacks an explanation of the baseline methods used in the evaluation. Methods like CoDis are briefly mentioned in the Related Work section but have not been defined anywhere else. People unfamiliar would not be able to understand Cufit and how it differs from previously proposed methods.
5. The experiments should include the CheXPert dataset. Apart from being frequently adopted for medical image analysis problems, it would also evaluate the proposed methodology for multi-label classification problems. Furthermore, CheXPert is supposed to contain noisy labels due to automatic labelling from free-text reports. Hence, it would adequately test the proposed method Cufit under a noisy label setting.
6. To make the experiments more extensive, more datasets and PEFT methods (see [1] for reference) should be included. LoRA is one of the most popular PEFT strategies used for transformer-based models (especially in the case of medical image classification [1]) and should be included in the experiments.
7. For natural image classification, the authors have adopted the CIFAR dataset. Firstly, in order to provide conclusive results, several natural imaging datasets should be included. Secondly, there are many datasets much more appropriate than CIFAR that should have been used instead.
References
1. Dutt, Raman, et al. "Parameter-Efficient Fine-Tuning for Medical Image Analysis: The Missed Opportunity." *Medical Imaging with Deep Learning*, 2024, https://openreview.net/forum?id=LVRhXa0q5r.
Technical Quality: 2
Clarity: 2
Questions for Authors: **Questions**
1. After reading the paper several times, I am yet to understand how Cufit actually works. There is no dedicated section in the text that explains this.
2. Why are methods like LoRA that are very frequently used for doing PEFT in transformer-based models and datasets like CheXpert excluded from the analysis?
Please address the **Weaknesses** section for more questions.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please address the **Weaknesses** section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Q2, W6: Experimental results with LoRA**
We greatly appreciate your suggestion. We conducted the experiments primarily with the Rein adapter, as it achieves excellent performance in domain-generalized semantic segmentation, and we believe it has the potential to perform well in the medical domain as well. Given that LoRA is a well-known PEFT method, we agree it should be included. The following table compares LoRA on APTOS-2019 and HAM10000 with a noise rate of 40%, showing that our method is also applicable with LoRA. We will add these results to Section 6.2, 'Performance comparison across various VFMs and adapters.'
| Dataset | Full-training | Linear probing | LoRA | Co-teaching | JoCor | CoDis | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| APTOS-2019 | 69.9 | 79.5 | 63.9 | 79.5 | 80.1 | 80.1 | **81.7** |
| HAM10000 | 56.1 | 71.0 | 57.0 | 78.1 | 77.2 | 75.0 | **78.8** |
### **W3: Previous work on PEFT in medical image analysis.**
Thank you for pointing out the related work that we missed. We will cite the paper [1] in the related work section to address the importance of PEFT in medical image analysis.
### **Q2, W5: Experiment with CheXPert.**
Thank you for your valuable insight and feedback. In our current version of the paper, we focused on methods for multi-class classification problems in the medical domain, which is why we excluded datasets for multi-label classification problems. However, we agree with your suggestion regarding the CheXPert dataset. It is frequently used in medical image analysis and contains noisy labels due to automatic labeling from free-text reports. Incorporating CheXPert to evaluate Cufit in multi-label classification problems will be a part of our future research.
### **W7: Other datasets for natural image classification.**
To validate our method on natural image classification, we further evaluate it on the Animal10N dataset, a real-world noisy dataset. We follow the experimental settings outlined in "SELFIE: Refurbishing Unclean Samples for Robust Deep Learning" (i.e., a batch size of 128 and 100 epochs of training). For the baseline methods, we set the noise rate to 0.08, which is the noise rate estimated in the "SELFIE" paper. Given the small performance gap, we ran the experiment three times and averaged the results. The following table provides the experimental results using DINOv2-small with the Rein adapter. We will add these results to the paper.
| Dataset | Full-training | Linear probing | Rein | Co-teaching | JoCor | CoDis | Ours |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Animal10N | 74.5 | 89.1 | 88.0 | 92.2 | 91.9 | 91.7 | **92.3** |
### **W4: Detailed explanation of the baseline methods.**
We agree that our paper lacks an explanation of the baselines. We will describe the baseline methods in more detail after line 230. For example, the following paragraph will be added to the text.
- Previous methods such as Co-teaching, JoCor, and CoDis train two neural networks concurrently on the dataset. During training, each network identifies and discards potentially mislabeled examples, then teaches the other network with the remaining clean examples. Specifically, Co-teaching trains two homogeneous networks simultaneously and cross-updates parameters using the R(T)% small-loss instances from the given mini-batch (R(T)% is the estimated noise rate). JoCor encourages the two networks to make predictions closer to each other via an explicit regularization loss and selects small-loss samples, under the assumption that the two networks easily reach agreement on their predictions for clean samples. CoDis selects high-discrepancy examples with a joint loss (cross-entropy loss reduced by a Jensen–Shannon divergence loss), leveraging discrepancy for sample selection and thereby improving selection quality.
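To make the shared selection mechanism concrete, here is a minimal sketch (our own illustration, not the code of any of these papers) of the small-loss selection and cross-update idea; the function name and sample values are hypothetical:

```python
import numpy as np

def small_loss_selection(per_sample_loss, keep_ratio):
    """Return indices of the keep_ratio fraction of samples with the
    smallest loss -- the 'likely clean' subset that Co-teaching-style
    methods feed to the peer network."""
    n_keep = max(1, int(keep_ratio * len(per_sample_loss)))
    return np.argsort(per_sample_loss)[:n_keep]

# Cross-update idea: network A is trained on the subset selected using
# network B's per-sample losses, and vice versa.
losses_from_b = np.array([0.2, 2.5, 0.1, 3.0, 0.4])
clean_for_a = small_loss_selection(losses_from_b, keep_ratio=0.6)  # keep 60%
```

JoCor and CoDis differ mainly in how the per-sample score is computed (agreement regularization vs. discrepancy), not in this keep-the-smallest-loss skeleton.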
### **Q1 W2: Detailed explanation of our method**
We appreciate your feedback. To address the lack of a detailed explanation of our method, we will update Sections 4 and 6.1 to provide a description of the method. Our method is designed to robustly train VFMs with an adapter module on noisy medical image datasets. Our key insights are as follows.
- Cufit leverages a curriculum learning approach to robustly fine-tune the VFMs for noisy datasets with three key modules: Linear Probing Module (LPM), Intermediate Adapter Module (IAM), and Last Adapter Module (LAM).
- Linear Probing Module (LPM): The LPM is first trained on all available samples in the given batch. Linear probing is relatively robust against noisy labels because it does not modify the feature-extraction capability of the VFM, thus preventing memorization of noisy samples. However, since the feature extractor is left unchanged, its accuracy is limited.
- Intermediate Adapter Module (IAM): For the given batch, the IAM is trained on samples selected by the LPM via the agreement criterion, which considers a sample clean if its annotation matches the LPM's prediction. Compared to the LPM, the IAM can achieve higher accuracy since it applies a small amount of parameter-efficient fine-tuning to the feature extractor.
- Last Adapter Module (LAM): Similarly, the LAM is trained on samples selected by the IAM. This inter-module curriculum training (LPM → IAM → LAM) increases the number of clean samples available for training the LAM, enhancing its performance. Specifically, the LPM selects only a few samples since its accuracy is lower than the IAM's. The LAM is used at the inference stage.
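As a rough illustration of the agreement criterion described above (a sketch with hypothetical names, not the actual Cufit code):

```python
import numpy as np

def agreement_select(module_predictions, annotations):
    """Agreement criterion: a sample is treated as clean when its
    annotation matches the module's prediction."""
    return np.flatnonzero(module_predictions == annotations)

# Curriculum over one batch: the LPM trains on everything, the IAM trains
# on the samples the LPM agrees with, and the LAM trains on the samples
# the IAM agrees with.
lpm_preds = np.array([0, 1, 2, 1])
labels    = np.array([0, 2, 2, 1])
iam_train_idx = agreement_select(lpm_preds, labels)  # indices passed to the IAM
```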
### **W1: Spelling and grammatical errors throughout the text.**
We apologize for the quality of the current version of our paper. We have conducted a thorough review of the paper to remove spelling and grammatical errors throughout the text. We also plan to use a professional English proofreading service to further refine the language and clarity.
---
Rebuttal Comment 1.1:
Title: Thanks for the Rebuttal
Comment: Thanks for the rebuttal. My questions have been answered to some extent and I have revised my rating.
---
Reply to Comment 1.1.1:
Title: Thank you for comments!
Comment: Dear Reviewer tyjY,
Thanks for raising the score rating. We really appreciate your feedback to improve our manuscript.
Best regards,
The Authors
---
Rebuttal 2:
Title: Check rebuttal
Comment: The authors have provided feedback to review comments. Please take time to read the rebuttal and provide response. Thank you.
AC | Summary: In this paper, the authors propose a curriculum learning strategy for fine-tuning on noisy medical datasets. The key insight is that linear probing with limited training samples can be more robust to label noise. The performance is good compared to previous methods.
Strengths: Generally, I think this is a good paper. Noisy labels are a severe and practical problem in medical scenarios due to diagnosis uncertainty, and the intuition behind the proposed method is clearly stated. The performance looks good and the proposed method is clean and efficient.
Weaknesses: However, I still have to point out some details are not clearly demonstrated. Please check the questions for more details. I will change my score based on the authors' responses.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Figure 2 is confusing. The core content of the method lies in the curriculum order and the agreement criteria, but these are not clearly reflected in the figure, which makes Section 4.2 hard to follow. I suggest the authors reorganize the figure and detail the purple block.
2. It is not very clear to me how the model is trained. Commonly, when referring to curriculum learning, I would expect the model to be trained in multiple steps, but in Equation 6 the authors also modify the loss function, which makes it look like a multi-task training pipeline. This confuses me a lot and the authors have to clarify it more clearly.
3. For the noise simulation, the authors have to state the details more, e.g., are the correct labels changed randomly or according to some rules? And for multi-label versus multi-class medical classification tasks, are there any differences? Similarly, this also confuses me in the method part: if a case has both pneumonia and bone fracture labels but the pneumonia label is wrong, will the whole case be dismissed from later training?
4. From lines 203 to 221, when introducing the datasets, it is important to specifically point out which datasets are multi-label tasks and which are multi-class, since both types widely exist in the medical domain.
5. In Section 6.2, it would also be encouraging to explore performance differences between general VFMs and medical-domain VFMs like PMC-CLIP and BiomedCLIP. Considering the rebuttal time limitation, this is just a bonus suggestion.
6. A minor suggestion: I suggest the authors define label noise more formally in the Introduction. Although the notion of noise is clear in the ML domain, noise types in the medical domain are complex and may be hard to understand for an audience with a clinical background.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Q1, Q2: Clear description of our method.**
We apologize for the unclear terminology. Our method trains the model like a multi-task approach (i.e., all modules are trained simultaneously on the current batch). We use the term "curriculum" to denote the order of the agreement criteria (i.e., the sample-selection order LPM → IAM → LAM). We will either change the term "curriculum" or add the sentence, "Our method trains all modules simultaneously on the current batch, as in multi-task training," to the method section. Moreover, we will update Figure 2 to clearly show the curriculum order of training and the agreement criteria within a training batch. The updated figure is included in the rebuttal PDF.
### **Q3: Description of label noise simulation.**
We utilize symmetric noise generation in the experiments. For example, when the noise rate is 0.4 and the number of classes is 6, a sample annotated as class 1 has its label changed to each of the other classes with equal probability 0.08. We will add a detailed description on lines 239-240 (implementation details) and a description of the multi-class problem in the preliminary section. Also, we conducted experiments with multi-class datasets only and will address the multi-label classification problem in future work.
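For concreteness, symmetric noise generation as described can be sketched as follows (our own illustration; function and variable names are hypothetical):

```python
import numpy as np

def symmetric_noise(labels, noise_rate, num_classes, rng):
    """Flip each label with probability noise_rate, choosing uniformly among
    the other classes, so each wrong class receives probability
    noise_rate / (num_classes - 1), e.g. 0.4 / 5 = 0.08 for 6 classes."""
    noisy = labels.copy()
    for i in np.flatnonzero(rng.random(len(labels)) < noise_rate):
        others = [c for c in range(num_classes) if c != labels[i]]
        noisy[i] = rng.choice(others)
    return noisy
```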
### **Q4: Clear dataset description.**
Thank you for your suggestion. In the current version of the paper, we only evaluate the method on multi-class datasets; thus, all datasets introduced in the paper are multi-class. We will clearly state this in lines 203 to 221. Applying our method to multi-label classification problems remains future work.
### **Q5: Performance using medical-wise VFM.**
Thank you for your insightful suggestion to use medical-domain VFMs such as PMC-CLIP and BiomedCLIP. We ran experiments using the image encoder of BiomedCLIP with the ViT-base architecture, following the settings of Section 6.2 (i.e., 100 epochs of training with the Adam optimizer). We observe a similar trend with a medical-domain VFM: linear probing achieves higher accuracy than adapter training when the dataset contains noisy labels. The following table shows the test accuracy of BiomedCLIP on HAM10000 and APTOS-2019 with a noise rate of 0.4; we will add these results to Section 6.2.
| dataset | Linear probing | Rein | Ours |
|:----------------:|:-------------:|:-----:|:----:|
| HAM10000 | 68.9 | 55.6 | **74.4** |
| APTOS-2019 | 78.6 | 64.2 | **80.0** |
### **Q6: Definition of the label noise in the Introduction part.**
We are thankful for your suggestion. We will add a sentence defining label noise in the Introduction to aid readers from non-ML domains. For example, lines 25-26, "Noisy labels occur when the data annotations—the labels assigned to training images—are incorrect or inconsistent," can be extended as: "Noisy labels occur when the data annotations—the labels assigned to training images—are incorrect or inconsistent. In practice, the sample $(x_i, \hat{y}_i)$ is considered noisily labeled when the human-provided annotation $\hat{y}_i$ does not match the true label $y_i$."
---
Rebuttal 2:
Comment: Thanks a lot for the timely and detailed rebuttals. I appreciate the authors' clarifications; most of my concerns have been resolved. My only remaining concern is the multi-class setting: in healthcare, most cases are multi-label, since patients may have multiple diseases, which somewhat limits the paper's contribution and interest. Thus, I am sorry that I cannot increase my score further, but I clearly support that this paper is worth a weak acceptance.
---
Rebuttal 3:
Title: Thank you for all comments!
Comment: Dear Reviewer T7UR
We would like to thank you again for all your constructive feedback. We will update our final version accordingly. Additionally, we will continue our work on multi-label cases as future work.
Best regards,
The Authors | Summary: This paper presents a curriculum fine-tuning paradigm called Cufit. This method is designed to fine-tune Vision Foundation Models (VFMs) for medical image classification tasks under the presence of noisy labels. The approach leverages the robustness of linear probing and the generalization capabilities of fine-tuning adapters to improve model performance.
Strengths: - Cufit is technically sound and is well-validated through extensive experimental results.
- The paper is well-structured and effectively conveys the motivation, approach, and outcomes.
- Demonstrated significant improvements in medical image classification performance under label noise.
- Applicability to both medical and natural image classification enhances the relevance of the framework.
Weaknesses: - The training process seems to be complex and computationally intensive.
- I have concerns about the scalability of the proposed method. It may not scale well for very large datasets or in resource-constrained environments.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the computational cost of Cufit compare to other noise-robust training methods?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Q1. Computational cost of Cufit compared to other noise-robust training methods.**
We appreciate your feedback. Since readers may be curious about the computational cost of our method relative to other training methods, we will provide the resource usage of these methods in the supplementary material. While our method may appear complex and computationally intensive, its resource usage is reasonable compared to other methods. For example, we ran experiments on the HAM10000 dataset (10,015 training images) with a batch size of 32 on an RTX 4090 GPU and found that our method performs efficiently. The following table reports the memory usage and time cost of various methods with the DINOv2-small architecture and the Rein adapter.
| | Full-training | Linear probing | Rein | Co-teaching | JoCor | CoDis | Ours |
|:----------------:|:-------------:|:--------------:|:-----:|:-----------:|:-----:|:-----:|:----:|
| Memory (MB) | 4,350 | 824 | 3,144 | 10,086 | 5,542 | 10,836 | 5,604 |
| Time (sec/epoch) | 18.6 | 6.6 | 16.5 | 32.9 | 32.7 | 51.4 | 37.7 |
---
Rebuttal 2:
Title: Check rebuttal
Comment: The authors have provided feedback to review comments. Please take time to read the rebuttal and provide response. Thank you.
AC
---
Rebuttal Comment 2.1:
Comment: I have read the rebuttal and other reviews. I have no more questions. I would like to maintain my initial assessment.
---
Rebuttal 3:
Title: Thank you for comments!
Comment: Dear Reviewer 1c7f,
We appreciate your efforts in the reviewing process and your positive feedback.
Best regards,
The Authors | Summary: The paper presents Cufit, a curriculum fine-tuning paradigm for Vision Foundation Models (VFM) aimed at improving medical image classification under label noise. This method leverages the robust feature extraction capabilities of pre-trained VFMs and employs a linear probing strategy to mitigate the impact of noisy labels. The curriculum fine-tuning process then utilizes clean sample selection to enhance the classification performance. The experimental results demonstrate that Cufit outperforms existing methods on various medical image benchmarks, showing significant improvements in classification accuracy.
Strengths: 1. The presentation is good.
Weaknesses: 1. The paper includes experimental comparisons with methods like JoCor and CoDis, but the discussion about these methods' performance is insufficient. The authors should provide a more detailed analysis of why JoCor and CoDis do not perform as well as Cufit. Understanding the strengths and weaknesses of these methods in comparison to Cufit would offer valuable insights.
2. The paper should further discuss the impact of noisy labels on different types of biomedical images. For some image types, noise may be less detrimental, while for others, it could significantly affect diagnostic accuracy. A more detailed exploration of how noise impacts various biomedical image datasets would enhance the comprehensiveness of the study.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The paper includes experimental comparisons with methods like JoCor and CoDis, but the discussion about these methods' performance is insufficient. The authors should provide a more detailed analysis of why JoCor and CoDis do not perform as well as Cufit. Understanding the strengths and weaknesses of these methods in comparison to Cufit would offer valuable insights.
2. The paper should further discuss the impact of noisy labels on different types of biomedical images. For some image types, noise may be less detrimental, while for others, it could significantly affect diagnostic accuracy. A more detailed exploration of how noise impacts various biomedical image datasets would enhance the comprehensiveness of the study.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **Q1: Discussion of previous methods**
We greatly appreciate your valuable feedback on including the strengths and weaknesses of previous methods in the paper. Our method outperforms previous methods on medical image classification with VFMs. As shown in Figure 3 of the paper, our modules provide higher label precision for selecting clean samples, thanks to linear probing's robustness to noisy labels. Previous methods do not leverage this robustness, which leads to lower label precision. We summarize the strengths and weaknesses of previous methods below.
- (Strength) Although previous methods do not leverage linear probing, they are applicable to any network. In contrast, our method requires a well-pre-trained network such as DINOv2 or CLIP as a starting point, which is our main weakness and their strength relative to ours.
- (Weakness) Previous methods do not leverage linear probing's robustness, which leads to lower label precision. Since they are designed under the assumption that the network is trained from scratch, they train on all training data without sample selection during the early training stage (e.g., epochs 1 to 10 in their default settings). However, training on all data may harm the feature-extraction ability of VFMs and degrade performance. For instance, Co-teaching can achieve better performance with a different sample-selection start epoch on APTOS-2019 with a 40% noise rate, as shown in the following table. This demonstrates that the performance of previous methods varies with this hyper-parameter and peaks when sample selection starts at the right time (i.e., when the VFM with an adapter has memorized clean samples but not yet noisy ones).
| Sample selection start epoch | 1 | 3 | 5 | 10 | 25 |
|:----------------:|:-------------:|:--------------:|:-----:|:-----------:|:-----:|
| Test accuracy | 78.9 | **80.8** | 80.1 | 79.5 | 76.2 |
We will include this analysis of previous methods in the Discussion section and add the limitations of our method in the Conclusion.
### **Q2: Discussion of the impact of noisy labels on different types of dataset**
Thank you for your insightful feedback to improve our paper. Our main observation from experiments on the HAM10000 dataset (dermatoscopic images for skin lesion diagnosis) and the APTOS-2019 dataset (fundus photographs for grading diabetic retinopathy from "0-level no DR" to "4-level proliferative DR") is that noisy labels harm APTOS-2019 less than HAM10000. Specifically, the relative performance drops for HAM10000 and APTOS-2019 are 34.3% and 7.1%, respectively (i.e., a 40% noise rate causes a 34.3% relative drop on HAM10000). For example, DINOv2-small with Rein achieves 83.6% test accuracy with no noise and 54.9% with 40% noise. We believe the relatively mild effect of noisy labels on APTOS-2019 arises from its continuous disease-severity annotations. We will add this discussion to the Experiments section.
---
Rebuttal 2:
Title: Check rebuttal
Comment: The authors have provided feedback to review comments. Please take time to read the rebuttal and provide response. Thank you.
AC | Rebuttal 1:
Rebuttal: We appreciate the reviewers for their valuable comments and constructive feedback on our paper. As summarized by all reviewers, we propose a novel parameter-efficient fine-tuning (PEFT) framework for medical image classification under noisy labels. We believe our framework can outperform previous noise-robust training methods, which focused on CNN networks trained from scratch.
This rebuttal addresses all reviewers’ concerns. Additionally, we provide a PDF file showing the modified version of Figure 2. Furthermore, we are open to discussions and committed to addressing any additional concerns from the reviewers to ensure the refinement of our paper.
Pdf: /pdf/d580595b1fe1cf614f006cc1930a50f1cbf13ad2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unified Speech Recognition: A Single Model for Auditory, Visual, and Audiovisual Inputs | Accept (poster) | Summary: This paper proposes a unified architecture and training method for auditory/visual speech recognition. Building upon this model, the authors introduce a semi-supervised pseudo-labeling method to leverage unlabeled audio-visual data, as well as self-supervised pre-training to enhance model performance. Experiments indicate that the model achieves state-of-the-art performance on A/V/AVSR.
Strengths: 1. This work for the first time proposes an effective model and training procedure for unifying auditory and visual speech content recognition, which is of high novelty and practical significance.
2. The author conducted comprehensive and extensive ablation studies, verifying the characteristics of the model and the effectiveness of each step in the training paradigm. The experimental results are robust and credible, offering significant guidance for related research.
Weaknesses: The article has no obvious flaws, but there are some questions that I hope the authors can clarify (see questions).
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How is the weight of the teacher model in self-supervised pretraining initialized? Is it initialized randomly or with pretrained weights from another task?
2. Did the author make a comparison between the teacher-student self-supervised pretraining in the paper with masked-autoencoding training of audio and/or visual features? Is the proposed pretraining method superior?
3. Did the author investigate the effect of different masking ratios?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitation of their work in appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised.
> How is the weight of the teacher model in self-supervised pretraining initialized? Is it initialized randomly or with pretrained weight on another task?
The teacher model is randomly initialised in pre-training and improves throughout training via an exponential moving average operation through bootstrapping [44], as in related works [15, 17, 18]. We will emphasise this more in the paper.
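For readers unfamiliar with this scheme, the exponential-moving-average teacher update can be sketched generically as (our illustration, not the paper's code):

```python
def ema_update(student_params, teacher_params, decay=0.999):
    """BYOL-style EMA teacher: the teacher slowly tracks the student,
    t_new = decay * t + (1 - decay) * s for each parameter."""
    return [decay * t + (1.0 - decay) * s
            for s, t in zip(student_params, teacher_params)]
```

Starting from a random teacher, repeated updates of this form let the teacher bootstrap improving targets from the student over training.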
> Did the author make a comparison between the teacher-student self-supervised pretraining in the paper with masked-autoencoding training of audio and/or visual features? Is the proposed pretraining method superior?
We are not aware of works in audio-visual self-supervised learning for speech recognition that employ masked autoencoding (MAE) for pre-training, i.e., predicting raw audio and/or visual features as in [45]. Although the investigation of MAE in the context of audio-visual speech recognition is outside the scope of our work, we agree that it is an interesting direction for future research. Our pre-training task, which combines masked prediction (similarly to MAE) with teacher-student training, aligns more closely with recent successful models in this area, such as RAVEn [17], AV-data2vec [15], and BRAVEn [18].
> Did the author investigate the effect of different masking ratios?
Initially, we conducted a set of preliminary experiments and chose to use the masking ratio from [18]. However, we agree that exploring different masking ratios is a valuable ablation study, and thank you for prompting us to investigate this further. We have now conducted experiments with different masking ratios for pre-training (see Table 2a in the rebuttal PDF). We find that the model performs well with a masking probability of 0.4-0.6 and is, interestingly, quite robust to the exact choice of masking ratio. We added this ablation in the Appendix.
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks to the authors for actively addressing my concerns. I think the current score already reflects my positive evaluation of the paper, so I choose to maintain the existing score.
Strengths: I think the proposed method is interesting for researchers in the audio-visual ASR domain and will spur future work. The paper is well-written with clear English, barring some questions I have stated below. The authors do a good job presenting their results, referring to details in the appendix where required. The ablation experiments clearly show readers how their proposed methodology behaves and why certain design decisions were made. The authors also shared their code and model checkpoints, which significantly increases the reproducibility and impact of this paper.
Weaknesses: The model architecture seems a bit unclear to me. Specifically, line 88 states the use of a transformer encoder-decoder model. However, line 104 states a single FC layer on top of the encoder for vocabulary predictions, while line 107 states to use the decoder output sequence, which is subsequently not used as $1 - \lambda_{ctc}=0$. So the decoder is not actually used during fine-tuning? How is inference actually done?
I see no mention of a fixed random seed for running experiments, are all model initialized equal? This seems important as the paper does not have error bars/does not run experiments multiple times
Minor editing comments:
* Table titles must appear above the table as per the formatting instructions.
* The table/figure combinations on Page 6 are confusing. Could you separate the figures as not part of a (sub)table?
* A small description of LRS3 would be desirable for those not familiar with the dataset (e.g., how many hours does the unlabeled portion have (line 190), what is the data source, how was it collected, how large is the test set?)
* line 97: 0.4 and 0.6 seconds for each second of ...
Technical Quality: 3
Clarity: 3
Questions for Authors: In which settings/experiments is the transformer decoder used?
In table 3 (A), is there a reason for not trying targets A + V + AV, as during fine-tuning?
You state in line 103 that features from the 3 modalities are concatenated along the batch dimension for efficient processing. However, Table 1 (B) shows that random sampling of modalities performs much worse, requiring 3x more epochs for similar performance. So it seems to me it's not only done for efficient processing, but also for effective optimization?
Also, do [13, 15] in line 179 share parameters for each task or not? According to Table 4 they do not, but if you use random sampling of modalities, how does this explain their relevance to Table 1 (B)?
What is the CTC attention in Table 2 (C)? Is this simply equation 3 with $\lambda_{ctc} < 1$? I might have missed it, but it seems to me the method section does not explain these 2 different loss types?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The methods requires all data to be audio-video paired. An interesting future direction could be the inclusion of audio-only data in the framework.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised.
> Line 104 states a single FC layer on top of the encoder for vocabulary predictions, while line 107 states to use the decoder output sequence, which is subsequently not used as 1−λctc=0. So the decoder is not actually used during fine-tuning? How is inference actually done?
We apologise for the confusion; this is a typo. The intended value for the CTC loss weight was 0.1, not 1, and the decoder is indeed used during fine-tuning. Inference is then performed with both CTC and attention scores (see Appendix C.6). We have now fixed this typo in the paper.
> I see no mention of a fixed random seed for running experiments, are all model initialized equal?
Thank you for raising this point. We indeed use a fixed random seed (42) for our experiments, and have now clarified this in the paper. Due to high computational demands and in line with previous studies [13-18, 20], we do not include error bars for our main results. However, we do show error bars for a subset of our main experiments in Table 13, where we observe that the results are consistently stable around the mean.
> Minor editing comments
Thank you for the editing suggestions, which have now been incorporated into the paper. Specifically, we made the following changes: we placed the captions above all tables; we separated Table 2a and the rest of Table 2 into distinct figures and tables; we added a description of the datasets in the Appendix; and we fixed the typo you pointed out.
> In which settings/experiments is the transformer decoder used? What is the CTC attention in Table 2 (C)?
We apologise again for the confusion caused by the typo regarding the CTC loss weight, which should be 0.1. In Table 2d, CTC-attention (i.e., including the Transformer decoder) is our default loss. The CTC-only configuration, which corresponds to a CTC loss weight of 1, is used solely for the purposes of this ablation study.
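For readers following this thread, the weighting being discussed can be sketched as a simple convex combination of the two branch losses (an illustrative sketch only; the function name and the stand-in loss values are assumptions, not taken from the paper):

```python
def joint_ctc_attention_loss(ctc_loss, att_loss, lambda_ctc=0.1):
    """Hybrid CTC/attention objective: lambda_ctc weights the CTC branch,
    (1 - lambda_ctc) weights the attention (decoder) branch.
    lambda_ctc = 1 recovers the CTC-only ablation; lambda_ctc = 0.1 is the
    corrected default discussed in the rebuttal."""
    return lambda_ctc * ctc_loss + (1.0 - lambda_ctc) * att_loss

# With lambda_ctc = 1 the attention term vanishes, so the decoder
# contributes nothing to the loss, which is what prompted the question.
ctc_only = joint_ctc_attention_loss(2.0, 1.0, lambda_ctc=1.0)
default = joint_ctc_attention_loss(2.0, 1.0, lambda_ctc=0.1)
```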
> In table 3 (A), is there a reason for not trying targets A + V + AV, as during fine-tuning?
Please note that during fine-tuning, we use only the AV targets (see Figure 1, top-left) because they provide the richest information and allow us to amortise the pseudo-labelling cost across the three modalities. Incorporating all three targets during fine-tuning is challenging, as it is unclear how to efficiently combine / predict the per-modality pseudo-label sequences, which are generated auto-regressively by the decoder. However, during pre-training, which involves only the encoder, we can more easily combine the targets from the three modalities. In Table 2b of the rebuttal PDF, we compare audio-visual targets (our default) with the sum of the per-modality targets. We observe that combining the per-modality targets does not outperform predicting only the AV targets. Additionally, predicting all targets is more computationally expensive because the teacher encoder must process auditory, visual, and audio-visual features rather than just the audio-visual ones. We added this experiment in the Appendix.
> You state in line 103 that features from the 3 modalities are concatenated along the batch dimension for efficient processing. However, Table 1 (B) shows that random sampling of modalities performs much worse, requiring 3x more epochs for similar performance. So it seems to me it's not only done for efficient processing, but also for effective optimization?
Randomly sampling the modalities means that, on average, the model is exposed to only one-third of the modalities at each iteration compared to concatenating all three modalities along the batch dimension. As a result, it would require approximately three times more epochs to achieve similar performance. However, an advantage of our approach is that it allows us to use the same pseudo-labels for all three modalities at each iteration, thus amortising the pseudo-label generation cost across modalities, which would not be possible with random modality sampling. We have updated the caption of Table 1b to explain why random sampling is trained for longer.
> Also, do [13, 15] in line 179 share parameters for each task or not? According to Table 4 they do not, but if you use random sampling of modalities, how does this explain their relevance to Table 1 (B)?
[13, 15] do share parameters during pre-training, hence their relevance to Table 1b, but they separately fine-tune the resulting model on ASR, VSR, and AVSR, resulting in a separate model for each task during inference. In contrast, USR yields a single unified model that is capable of performing all three tasks during inference (section 2, “Single model for multiple modalities” includes a relevant discussion).
> The methods requires all data to be audio-video paired. An interesting future direction could be the inclusion of audio-only data in the framework.
Indeed, thank you for this interesting suggestion, which we have now added to our Conclusion.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the rebuttal. I see no reason to change my (favorable) score. | Summary: This paper proposes USR, a unified speech recognition model that leverages pseudo labels during fine-tuning. It introduces a single model capable of handling three tasks—ASR, VSR, and AVSR—simultaneously, delivering state-of-the-art performance.
Strengths: 1. The paper is well-organized. Although the USR system is relatively complex, the paper presents each module with detailed descriptions and clear illustrations, making it easy for readers to follow.
2. The experiments, including ablations, are extensive. All experimental details are included, making it easy to reproduce the results.
3. The USR system leverages pseudo labels during the fine-tuning stage. While pseudo labeling is not a novel technique in ASR or AVSR, USR enhances the performance of ASR, VSR, and AVSR through carefully designed training procedures. The illustration of the pseudo labeling process is also clear.
4. The system achieves nearly state-of-the-art performance across all tasks.
5. The literature review is thorough.
Weaknesses: 1. While not a unique weakness to this paper, the complexity of training current SSL-based VSR or AVSR systems remains a challenge. Introducing additional modalities significantly increases complexity compared to speech-only SSL systems. Notably, the reduction in GPU hours is minimal compared to previous works, and the convergence speed is exceedingly slow. Future work should address these issues.
2. Performance is highly sensitive to certain configurations, such as the ratios of pseudo labels and the use of EMA. However, the paper lacks an analysis of why this sensitivity occurs or suggestions on how to mitigate it. These are common weaknesses in related work.
3. The results do not consistently achieve state-of-the-art performance. The authors should experiment with other hyperparameters, such as learning rates, during fine-tuning to improve outcomes.
4. Failure cases are not discussed in much depth.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. During pretraining, have you explored using audio-only targets? If so, what was the performance like compared to AV targets? How does it compare to AV-HuBERT?
2. Why do you incorporate all three features (audio, video, audio-visual) during fine-tuning? Is there a rationale or experimental evidence supporting this approach?
3. There's no need to adhere strictly to architectures like AV-HuBERT or AV-data2vec. Consider experimenting with more advanced video encoders, since visual features are often not well extracted in previous studies.
4. For pseudo label sampling, why opt for a greedy search? Have you considered trying soft sampling instead?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised.
> The complexity of training current SSL-based VSR or AVSR systems remains a challenge.
We recognise that VSR and AVSR systems present unique challenges compared to audio-only systems, and one of our future goals is to improve the computational efficiency of multi-modal speech recognition. Despite these challenges, we believe that audio-visual speech representation learning is an exciting and promising area of research. Leveraging visual modalities like lip movements offers significant advantages in disambiguating difficult utterances, particularly in noisy environments or when audio is unavailable (see Table 12, Appendix). VSR and AVSR can also improve communication for individuals who have difficulty producing voiced speech. Additionally, evidence suggests that incorporating multiple modalities during pre-training can enhance the performance of audio-only systems (e.g., see [13, 17]).
> Performance is highly sensitive to certain configurations, such as the ratios of pseudo labels and the use of EMA.
While self-supervised learning systems can indeed be sensitive to hyperparameters, we believe that USR significantly reduces this sensitivity. Our semi-supervised pseudo-labelling framework is sensitive to extreme threshold values (see Table 2a) and somewhat sensitive in terms of VSR to the weighting of labelled and unlabelled losses (see Table 2b). In Section 4.2, we hypothesise that this sensitivity arises from the inherent trade-off between label quantity and quality, which must be balanced. However, as shown in Table 3, USR’s performance remains stable across a range of pre-training hyperparameter configurations. We attribute this stability to our semi-supervised method, which leverages abundant unlabelled samples during fine-tuning. Appendix E demonstrates that our method is more robust to pre-training target types than supervised fine-tuning (compare Table 11b and Table 3a) and does not require additional training tricks for strong performance (see Table 11a), unlike other works (e.g., [13, 17, 18]). We have updated the paper to better highlight USR's reduced sensitivity to pre-training hyperparameters.
> The results do not consistently achieve state-of-the-art performance.
We have indeed carefully tuned hyperparameters, such as the learning rate, to obtain our final results. In the well-established and highly competitive LRS3 high- and low-resource benchmarks (see Table 4), our model surpasses (sometimes by a large margin) or matches the previous state-of-the-art modality-specific models in 16 out of 18 cases. In the remaining two cases, we are marginally behind (1.6% vs. 1.4% and 2.4% vs. 2.3% WER). *Crucially, we achieve these results using a single model for ASR, VSR, and AVSR, while other methods require separate models for each modality, resulting in ~3x the number of weights during inference.* We also achieve SotA results on WildVSR (Table 9) and LRS2 (Table 10).
> Failure cases were not discussed too much.
Thank you for the suggestion. We added a discussion on failure cases in the Appendix. See Table 3 in the rebuttal PDF. We observe that, while VSR tends to produce more errors than ASR and AVSR, these errors are often related to phonetically similar sounds, such as "this" vs. "these" or "disguised" vs. "denies." Additionally, using both visual and auditory modalities (AVSR) can improve the model's ability to distinguish challenging samples, such as "Mali Wear" vs. "malware".
> During pretraining, have you explored using audio-only targets? If so, what was the performance like compared to AV targets? How does it compare to AV-HuBERT?
We have explored using audio-only targets for pre-training with our semi-supervised fine-tuning in Table 3a, where we observe that AV targets work best. Still, with audio-only targets, we achieve WERs of 37.3% for VSR, 3.2% for ASR, and 3.1% for AVSR, significantly outperforming AV-HuBERT's 51.8%, 4.9%, and 4.7%, respectively. Additionally, we experimented with supervised fine-tuning (see Appendix E), observing WERs of 43.9%, 4.8%, and 4.6% for the three tasks (see Table 11b). Notably, AV-HuBERT uses a separate model for each task, whereas we use a single unified model.
> Why do you incorporate all three features (audio, video, audio-visual) during fine-tuning?
Our goal was to develop a single, unified model capable of performing well on audio-only (ASR), video-only (VSR), and audio-visual (AVSR) data during inference, thereby reducing the computational and memory redundancies associated with separate models per task. To achieve this, we fine-tune the model using all three types of data, enabling it to effectively perform each task. We have now emphasised this point in Section 3.1.
> Consider experimenting with more advanced video encoders.
We used Transformer-based architectures with convolutional frontends to align with closely related works [13-18, 20] for fair comparisons (e.g., see Table 4). However, given that USR is agnostic to the choice of encoder architecture, we agree it would be interesting to explore other architectural variants, which could improve results even further. Still, we believe this direction lies outside the scope of our present work and therefore defer it to future research. This idea for future work has now been included in the Conclusion.
> For pseudo label sampling, why opt for a greedy search? Have you considered trying soft sampling instead?
We initially chose a greedy search for its efficiency and effectiveness. Based on your suggestion, we now also experimented with a soft sampling approach, where we used weighted sampling at each generation step. The results are in Table 1b of the rebuttal PDF. We see that hard sampling outperforms this variant of soft sampling but believe that exploring more sophisticated methods to effectively increase pseudo-label variety is a promising direction for future research.
---
Rebuttal Comment 1.1:
Comment: Thanks for including the new experiments/results such as failure cases and soft sampling. Although those results were made after the paper submission, I believe that by including them, it will be a more solid paper in the future. I still have some questions:
"In the remaining two cases, we are marginally behind (1.6% vs. 1.4% and 2.4% vs. 2.3% WER)." Have you just tried tuning the learning rate or the mask ratio, or what else? Sometimes you might be able to achieve SOTA by doing this. It is a pity that it is just slightly behind.
Another question is: "we hypothesise that this sensitivity arises from the inherent trade-off between label quantity and quality." Can you detail this? Would high-quality or low-quality labels lead to higher sensitivity? Likewise, what about the quantity?
---
Reply to Comment 1.1.1:
Comment: Thank you for your prompt follow-up.
> Thanks for including the new experiments/results such as failure cases and soft sampling. Although those results were made after the paper submission, I believe that by including them, it will be a more solid paper in the future.
Indeed, thank you for the suggestions, which we believe have strengthened our paper. The conference allows us to revise the paper for the camera-ready deadline, and so we have added these new experiments in the Appendix.
> "In the remaining two cases, we are marginally behind (1.6% vs. 1.4% and 2.4% vs. 2.3% WER)." Have you just tried tuning the learning rate or the mask ratio, or what else? Sometimes you might be able to achieve SOTA by doing this. It is a pity that it is just slightly behind.
We extensively tuned hyperparameters (including learning rate, weight decay, and all hyperparameters in our ablations) using the Base model in the low-resource setting and then applied most of the same hyperparameters to train the larger models with more data. The only exceptions were the learning rate and drop path rate, which we separately adjusted for the larger models. While tuning all hyperparameters for each of the six settings in Table 4 could potentially improve results, the high computational demands of training the larger models on the larger datasets made this impractical. Additionally, one of our objectives was to demonstrate the scalability of our method across different model and dataset sizes with minimal extra hyperparameter tuning. We will make these points clearer in the revised paper.
We also emphasise that, in these two cases, our method is only marginally behind BRAVEn in ASR performance, despite using a single model for ASR, VSR, and AVSR, while BRAVEn employs separate models - each of the same size and architecture as our single model - for each task. Given this, we believe that being just 0.1-0.2% behind the best modality-specific model in these two cases is still a strong outcome. Moreover, our paper goes beyond this, matching or surpassing the state-of-the-art modality-specific models in all other tasks and settings (Tables 4, 5, 9, and 10).
> "we hypothesise that this sensitivity arises from the inherent trade-off between label quantity and quality." Can you detail this? Would high-quality or low-quality labels lead to higher sensitivity? Likewise, what about the quantity?
This point refers to the 'Quantity/quality trade-off' paragraph in Section 4.2, where we highlight that pseudo-labels, while more abundant due to the availability of unlabelled data, are generally noisier and of lower quality. In contrast, groundtruth labels are of higher quality but less abundant. Our hyperparameters, $\gamma_a$ and $\gamma_v$, adjust the balance between quantity and quality by controlling the weighting of the labelled versus unlabelled losses for audio/audiovisual and visual inputs, respectively. We hypothesise that moderate sensitivity to these hyperparameters arises because finding the proper balance between quantity and quality is important. | Summary: This paper unifies the ASR, VSR, and AVSR tasks in a single model and shows the performance benefits of a single model in LRS3 data. There are several attempts at unifying these three models, but I think this is the first successful trial of realizing it. The paper proposes an effective training strategy to avoid losing performance on each task. Together with their self-supervised training, the model archives SOTA performance in a similar range of the training data.
Strengths: - the first successful method of realizing the ASR, VSR, and AVSR tasks in a single model while maintaining/improving the performance for each task
- Good reproducibility based on the code release, use of the public data, and detailed experimental configurations/analyses.
- Easy to read. Although the technique is a little bit complicated, with a lot of terms depending on the architecture (CTC, attention, modality, training modes (self-supervised/supervised)), the paper always provides some rationales (e.g., from the reference or experiments) to justify their methods
- detailed ablation experiments support their design choices and strategies.
- The paper also shows the effectiveness with multiple databases (LRS3, LRS2, and WildVSR)
Weaknesses: - the technical novelty is not very strong. Most techniques are well-known or straightforward (e.g., the use of CTC, pseudo-label filtering, etc.).
Technical Quality: 4
Clarity: 3
Questions for Authors: - Page 4, line 110: I'm a bit confused about "We set $\lambda_{\text{ctc}}$ to 1." Do you mean that you always set $\lambda_{\text{ctc}}$ to 1? No attention weights? Is it related to Table 2-d? Please clarify it.
- Equation (4): Why didn't you prepare a different weight for a and av?
- Section 3.2, Filtering: Did you use the same threshold for CTC and ATT? The dynamic range of c and a could be different, and I'm not sure that using the same threshold is optimal.
- Section 4: Did you only use a Transformer architecture? How about using a Conformer architecture?
- It is not a question but a suggestion. I recommend you emphasize the results of the multiple databases in the abstract to claim the generalization of this work across the database.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The paper has independent sections about limitations and Societal Impact, which describe the current issue due to the computational cost, the importance of the VSR, and the risk of general speech recognition technology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and time. Below we address the key concerns raised.
> the technical novelty is not very strong. Most techniques are well-known or straightforward (e.g., the use of CTC, pseudo-label filtering, etc.).
While individual components of our work have been previously used in other studies (as discussed in the Related Work), we believe that the USR framework as a whole represents significant novelty. As noted by the reviewer, earlier efforts to unify ASR, VSR, and AVSR have often lagged behind modality-specific approaches. Our research demonstrates, for the first time in the literature, that a combination of self-supervised and semi-supervised learning can produce a unified model that achieves state-of-the-art performance across all tasks. This success is attributed to key technical design choices, including (but not limited to) the use of a greedy, computationally efficient attention-based pseudo-labelling approach; a multi-modal feature extraction step that enables amortisation of the pseudo-label generation costs across the three modalities; and multi-modal (audio-visual) target prediction in self-supervised pre-training, which previously proved unsuccessful with supervised fine-tuning. Furthermore, we believe that the straightforward and intuitive nature of USR enhances its utility and potential impact in the community.
> Page 4, line 110: I'm a bit confused about "We set $\lambda_{\text{ctc}}$ to 1." Do you mean that you always set $\lambda_{\text{ctc}}$ to 1? No attention weights? Is it related to Table 2-d? Please clarify it.
We apologise for the confusion; this is a typo. The intended value for the CTC loss weight was 0.1, not 1. We always use CTC-attention training, except for the ablation in Table 2d where we study the performance of a CTC-only loss. We have fixed this typo in the paper.
> Equation (4): Why didn't you prepare a different weight for a and av?
In preliminary experiments, we observed that the training dynamics for ASR and AVSR were very similar, and hence decided to use a combined weight for the two modalities in Eq. 4 and Eq. 7 in order to reduce the number of hyperparameters. We have now added this point to the paper (Section 3.1).
> Section 3.2, Filtering: Did you use the same threshold for CTC and ATT? The dynamic range of c and a could be different, and I'm not sure that using the same threshold is optimal.
Similarly, we use the same threshold for CTC and attention for simplicity. However, we agree that the dynamic ranges between the two could be different and have run experiments with separate thresholds to investigate this point. The results are in Table 1a in the attached rebuttal PDF. We observe that USR's performance remains consistent across a range of different thresholds, with no clear improvement when using separate thresholds. We added this experiment in the Appendix.
> Section 4: Did you only use a Transformer architecture? How about using a Conformer architecture?
We used Transformer-based architectures with convolutional frontends to align with closely related works [13-18, 20] for fair comparisons (e.g., see Table 4). However, given that USR is agnostic to the choice of encoder architecture, we agree it would be interesting to explore other architectural variants, including Conformers, which could improve results even further. Still, we believe this direction lies outside the scope of our present work and therefore defer it to future research. This idea for future work has now been included in the Conclusion.
> It is not a question but a suggestion. I recommend you emphasize the results of the multiple databases in the abstract to claim the generalization of this work across the database.
Thank you for this suggestion. We have updated the abstract accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed answers.
As I mentioned in my previous review, although this paper does not have strong technical novelty, it has a lot of insightful findings and values for the first successful method of realizing the ASR, VSR, and AVSR tasks. I appreciate these benefits and make this paper "6: Weak Accep" despite its weakness. Thanks for your explanations for the weakness point, but these explanations are basically the same as my understanding in the first review, and it is not sufficient to change this point. Also, thanks for your additional experiments, which solve some of my questions but they do not change my overall impressions. Thus, I want to keep my score as it is. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their thoughtful comments, which have greatly contributed to improving our paper. We are pleased that the reviewers recognise the effectiveness of our method (Reviewers d2RY, d9WG, WdRe), the quality of our experiments (Reviewers d2RY, d9WG, Fi7g, WdRe), and the reproducibility of our results (Reviewers d2RY, d9WG, Fi7g). We also appreciate their acknowledgment of the potential impact and practical significance of our work (Reviewers Fi7g, WdRe), as well as the quality of our writing and presentation (Reviewers d2RY, d9WG, Fi7g, WdRe).
We have addressed the reviewers' concerns with individual responses to each review. Please see the attached rebuttal PDF, which includes new experimental results. Key changes to the paper are summarised as follows:
* We included in the Appendix more ablations for self-supervised pre-training (different mask probabilities and target types) as well as for semi-supervised fine-tuning (additional filtering thresholds and comparisons between hard and soft sampling).
* We provided in the Appendix detailed descriptions of the datasets used in the paper.
* We added failure cases and a corresponding discussion in the Appendix.
* We moved captions above the tables and separated Table 2a and the rest of Table 2 into distinct figures / tables.
* We highlighted our state-of-the-art results on LRS2 and WildVSR in the Abstract.
* We improved the clarity of the text, for example, by emphasising our method’s reduced sensitivity to pre-training hyperparameters and clarifying that the teacher is randomly initialised in the pre-training stage.
* We added further ideas for future work in the Conclusion, including exploring alternative encoder architectures and the use of extra audio-only data.
* We fixed typos identified by the reviewers, including an error in the CTC loss weight, which was mistakenly listed as 1 instead of 0.1 in the original paper.
Pdf: /pdf/4856602201c1c51ed9057f05b964b216182c63bd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | Accept (poster) | Summary: This paper presents a way of pre-training vision encoder for robot control. Specifically, instead of using vanilla contrastive or masked autoencoder approaches, this method creates two models: 1) an inverse dynamics model that estimates the transition latent (actions) and 2) a forward dynamics model that takes in the current encoded visual latent and the transition latent and predicts the next latent observation. The results suggest that the method improves upon existing visual pre-training methods for robotics.
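The two-model setup described in this summary can be sketched in toy form as follows (an illustrative sketch only, not the authors' implementation: the paper uses jointly trained neural networks, whereas here the encoder, inverse dynamics model, and forward dynamics model are random linear maps, and all names and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the three components: an encoder, an inverse
# dynamics model g (infers a latent "action" from two consecutive latents),
# and a forward dynamics model h (predicts the next latent from the current
# latent plus the inferred action).
D_OBS, D_LAT, D_ACT = 8, 4, 2
W_enc = rng.normal(size=(D_OBS, D_LAT))
W_inv = rng.normal(size=(2 * D_LAT, D_ACT))
W_fwd = rng.normal(size=(D_LAT + D_ACT, D_LAT))

def dynamics_pretraining_loss(o_t, o_next):
    """Encode both observations, infer a latent action with the inverse
    model, predict the next latent with the forward model, and score the
    prediction error; minimizing this trains the encoder without actions."""
    z_t, z_next = o_t @ W_enc, o_next @ W_enc
    a_latent = np.concatenate([z_t, z_next]) @ W_inv   # inverse dynamics
    z_pred = np.concatenate([z_t, a_latent]) @ W_fwd   # forward dynamics
    return float(np.mean((z_pred - z_next) ** 2))

loss = dynamics_pretraining_loss(rng.normal(size=D_OBS), rng.normal(size=D_OBS))
```

Note that the action is purely a latent inferred from observation pairs, which is why the pretraining objective needs only videos, not ground-truth actions.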
Strengths: 1. Few of the past works on visual pretraining for robotics consider the time / action but only focus on the visual observation aspect. This work presents a method that attempts to improve visual pretraining by modeling the dynamics present in the dataset.
2. The results suggest that the method improves upon existing visual pre-training baselines
Weaknesses: 1. Prior visual pre-training for robotics operates under the premise that we have an image or video dataset, where we pre-train on these datasets and then finetune for a particular task. However, this method performs pre-training on the task-specific dataset, which is better aligned with the downstream tasks. Instead of pre-training and fine-tuning on the same dataset to solve the same task, the objective of visual pretraining (exemplified by MAE, MoCo, etc.) is that training on a massive amount of data allows finetuning to a specific task (i.e., ImageNet pre-training then COCO segmentation finetuning).
2. A few prior works [1,2] have tried to model forward and inverse dynamics concurrently. [1] also uses forward and inverse dynamics to train a visual encoder. The key difference between these works is that here action is modeled as a latent variable. Why ground truth action values are not used in pre-training (especially when pre-training and fine-tuning happen on the same task) is not justified in the manuscript. It would be quite convincing if pre-training is done on natural videos, or large-scale robot datasets where action spaces cannot be standardized, and then shows improved finetuning performance.
[1] Agrawal, Pulkit, Ashvin V. Nair, Pieter Abbeel, Jitendra Malik, and Sergey Levine. "Learning to poke by poking: Experiential learning of intuitive physics." Advances in neural information processing systems 29 (2016).
[2] Fragkiadaki, Katerina, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. "Learning visual predictive models of physics for playing billiards." arXiv preprint arXiv:1511.07404 (2015).
Technical Quality: 3
Clarity: 3
Questions for Authors: The reviewer wants to ask for two sets of experiments to address weakness 1:
1. How do masked pre-training methods compare to DynaMo when they are trained on the same data? I.e. train two networks analogously with the method provided in VC-1 and MVP on the task-specific dataset, and evaluate their task performance. This experiment would demonstrate that even for specific tasks, pre-training with DynaMo outperforms existing visual pre-training methods on task-specific datasets.
2. How does DynaMo generalize to unseen tasks (in the sense that it can generalize to tasks outside of the dynamics seen in training)? I.e. pre-train DynaMo on (1) Put Yogurt (2) Get yogurt (3) Get Tea and evaluate on (1) Put ketchup (2) Get Water.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitation section is present.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and constructive feedback. We are glad that you found our approach to visual pretraining novel. We will address each of your concerns below.
**"Instead of having pre-training and fine-tuning using the same dataset… train on a mass amount of data… finetune to a specific task"**: We discuss the motivation for in-domain vs. large-scale SSL pretraining in detail in the global comment. In particular, we show that DynaMo is compatible as an in-domain SSL fine-tuning step for Internet-scale pretrained weights like ImageNet (paper Table 5). To clarify, our work specifically tackles the problem of learning efficiently from small-scale decision-making data, often with only hundreds of demonstration trajectories. In this low-data regime, DynaMo improves downstream performance and outperforms pretrained weights and other SSL methods, as shown in Table 1 in the paper. And in principle, DynaMo can be trained on much larger datasets, but unfortunately we have academic compute constraints.
**"Why ground truth action values are not used in pretraining… if pre-training is done on natural videos, or large-scale robot datasets"**: This is indeed a very exciting direction. In fact, the motivation for modeling the action as a latent is to make DynaMo applicable to a wide range of datasets including natural videos, and show that simply modeling dynamics on videos, without any augmentations or actions, is a feasible visual pretraining objective for visuomotor control. We do not have the compute to run on massive datasets in an academic setting, but we hope that this opens up a direction for industry labs to explore how dynamics pretraining can scale to Internet-scale datasets. We will make this clearer in our next revision.
**"How do masked pre-training methods compare to DynaMo… on the same data"**: Thank you for pointing out the missing baseline. We have added MAE as an in-domain SSL baseline, discussed in global comment (1). In summary, MAE underperforms DynaMo by an average of 33% across sim environments, and completely fails to solve Block Pushing.
**"How does DynaMo generalize to unseen tasks"**: we have added a new kitchen task (picking up a bread) to test encoder generalization, detailed in global comment (4). In summary, we use the old DynaMo encoder pretrained on existing kitchen tasks to train a new policy on the unseen task, and find that the policy still manages to complete the task, although pretraining a fresh encoder can improve performance in this low-data regime. We hypothesize that encoder generalization could improve if pretrained on significantly larger datasets, which is an exciting direction that we hope industry labs could explore.
We hope this addresses your concerns and questions. We would be keen to discuss any remaining points that stand between us and a higher score.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. With the added experiments, I decided to raise the score. | Summary: This paper presents a self-supervised model, DynaMo, for pretraining visual encoders adopted for visuo-motor control. The targeted downstream task is imitation learning for robotic manipulation. Instead of using an out-of-domain dataset for pretraining and then transferring to a new domain using alternative techniques, the authors propose exploiting sequences of observations from in-domain demonstrations to pretrain three different models: a visual encoder, a forward model, and an inverse model. Once this is done, a policy can be learned with observations encoded using the pre-trained visual encoder.
Strengths: The most important benefit of DynaMo is that a visual encoder can be trained with limited risk of suppressing data dimensions necessary for visuomotor control, an otherwise frequently occurring problem.
Even if similar models that combine training of forward and inverse models have existed in literature before, the action representation is assumed unobserved in the proposed model, which has rarely been the case before. The literature on imitation learning from observed state sequences is vast, with little cited in the paper. However, the way this is done for pretraining in the proposed model is innovative and easily applicable to a practical scenario.
The experiments are rather exhaustive with five different settings and embodiments tested, two of which are real-world scenarios. In experiments that compare to alternative self-supervised methods and pretrained representations, the proposed visual embeddings are shown to be very competitive. It is also shown that DynaMo can be used to finetune an encoder pre-trained on ImageNet for even better results while being relatively insensitive to the choice of policy class.
Weaknesses: The paper is written as if there were no research in the area before the deep learning boom. Only one of the paper's 70 citations is older than 10 years. The paper suggests that training exclusively on in-domain data is new, even if this used to be the way it was typically done before the arrival of data-hungry deep-learning-based models, models that forced people to a greater extent to rely on offline training on out-of-domain data with data augmentation, contrastive learning, etc.
The idea to train pairs of inverse and forward models online has existed in psychology and robotics for at least 25 years, such as in the works of Wolpert et al [1]. Using similar models, imitation learning has been a common theme over the years, with [2] being just an example. Without this connection back to earlier research, this paper gives the impression of trying to reinvent the wheel, and it becomes unclear what the contributions really are.
Even if the experiments suggest that DynaMo can be beneficial also in real-world settings, the presented experiments are too few to be conclusive. The real world is way more diverse with more than just a small selected set of objects that can be manipulated. However, this weakness is pointed out in the conclusions, which makes it less problematic.
[1] Wolpert and Kawato, “Multiple paired forward and inverse models for motor control”, Neural Networks, 11, 1998.
[2] Demiris and Hayes, “Imitation as a dual-route process featuring predictive and learning components: a biologically plausible computational model”, in Imitation in Animals and Artifacts, MIT Press, 2002.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Are the visual encoder, inverse and forward models only trained on the demonstrations from the respective datasets? Even if this is only for pretraining, the demonstrations, at least the real-world ones, are very few compared to the complexity of the tasks learned. Why not exploit all possible sequences available on the same embodiment, even for tasks that will eventually not be of interest?
* How restrictive is the assumption that forward models are unimodal? Has this become a weakness during the experiments?
* Since both the inverse and forward models seem to be ignored after pretraining, what is the motivation for a separation between the two? Why not train a network to predict the next encoded observation from earlier ones, essentially with the inverse and forward models merged into one? Why is the latent action representation needed at all?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, some limitations related to the real-world experiments and the unimodality of models are brought up in the conclusions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and for suggesting these papers. We are glad that you found our action-free assumption innovative. After reading these papers, it is clear that our work and indeed many others in the field of representation learning and imitation learning have been inspired by these seminal works before the deep learning boom. We apologize for the omission and will include them and further related works in our manuscript. The idea of dynamics and predictive coding is well-established in earlier literature. For imitation learning in robotics, early works such as [1][2][3][4], among many others, pioneered the currently prevalent setup of learning from human demonstration data for robotic manipulation. We will add an additional subsection in the Introduction and Related Works sections to highlight this inspiration.
**"It becomes unclear what the contributions really are"**: Our paper focuses on training a good visual encoder for visual imitation learning. A major problem is that in-domain demonstration data is expensive to collect, and underutilized by common training approaches: the visual encoder is usually either trained end-to-end jointly with the policy from scratch, or pretrained on massive out-of-domain datasets that may not generalize to the task at hand. **Our contribution is twofold.** One, we show that simply modeling dynamics on videos, without any augmentations or actions, is a feasible visual pretraining objective. We empirically show that it improves downstream imitation learning performance on simulated and real robot environments, outperforming prior methods. Two, we show that our method is compatible as an in-domain fine-tuning step. Starting with a strong visual encoder trained on large out-of-domain datasets, our method can further improve its performance by fine-tuning on a small task-specific dataset.
**"Experiments are too few to be conclusive… the real world is way more diverse"**: Thank you for raising this important point. We hope that our additional experiments showcase the applicability of our method in various settings. We fully acknowledge that lab settings are rather limited compared to real-world environments. As a community, we need to conduct more experiments outside of labs. Benchmarking the robustness of these encoders in more diverse and realistic environments would be an important direction for future work.
**"Why not exploit all possible sequences available on the same embodiment"**: Please see global comment (4) for a detailed discussion, as well as an additional kitchen experiment exploring encoder generalization to an unseen task. To summarize, DynaMo can be trained on much larger datasets, but unfortunately we have limited compute. Nevertheless, we show that an encoder trained with DynaMo on existing tasks can be used to train a policy on a completely new task, although at this small scale, directly pretraining on the new task improves performance. We hypothesize that generalization would improve if we trained on much larger datasets. We also note that DynaMo can be used to fine-tune other models pretrained on Internet-scale datasets like ImageNet for improved performance.
**"How restrictive is the assumption that forward models are unimodal? Has this become a weakness during the experiments?"**: This is equivalent to assuming deterministic environment transitions. Given the previous state, and an action, we assume that there is a well-determined next state. We believe this is a reasonable assumption for most real-world manipulation tasks like opening a door, putting away toys, etc. This assumption can fail when the environment is stochastic or otherwise hard to predict (e.g. making a sand dune, playing adversarial sports). In that case, we can easily relax this assumption by predicting a distribution of next states instead. To our best knowledge, the simulated environments are deterministic, and the real environments are essentially deterministic (approximately rigid-body Newtonian dynamics).
**"Why is the latent action representation needed at all?"**: Given the same previous state, an agent can act in multiple ways, leading to a distribution of next states, even in a deterministic environment (e.g. picking up a mug by the handle, or by the body). So we first observe both the previous and next states to “fix” the latent action, then predict the deterministic next state given the previous state, and the latent action. We show that this is crucial in the ablation section (§4.6). We will make this clearer in our next revision.
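The multimodality argument above can be made concrete with a deliberately tiny toy example (scalar states and hand-coded models — purely illustrative, not the DynaMo architecture): without a latent, a single deterministic forward model cannot fit two different successors of the same state, but a model conditioned on a latent inferred from the observed transition pair can.

```python
# Toy illustration (not the DynaMo architecture): the same state s=0 is
# followed by s'=+1 in half the demos and s'=-1 in the other half.
transitions = [(0.0, 1.0), (0.0, -1.0)] * 50

# Deterministic forward model without a latent action: the best single
# prediction for s' given s=0 is the mean (0.0), leaving squared error 1.0.
mean_next = sum(s_next for _, s_next in transitions) / len(transitions)
err_no_latent = sum((s_next - mean_next) ** 2
                    for _, s_next in transitions) / len(transitions)

# "Inverse model": infer a latent action from the observed pair (s, s'),
# here simply the sign of the transition. Conditioned on this latent,
# the forward model becomes deterministic and exact.
def latent_action(s, s_next):
    return 1.0 if s_next >= s else -1.0

def forward_model(s, z):
    return s + z  # deterministic given the latent

err_latent = sum((forward_model(s, latent_action(s, s_next)) - s_next) ** 2
                 for s, s_next in transitions) / len(transitions)

print(err_no_latent, err_latent)  # 1.0 0.0
```

This mirrors the rebuttal's logic: observing both states "fixes" the latent, after which next-state prediction is well-determined even though the agent could have acted in multiple ways.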
We hope this addresses your concerns and questions. We would be keen to discuss any remaining points that stand between us and a higher score.
[1] N. Delson and H. West. Robot programming by human demonstration: Adaptation and inconsistency in constrained motion. In Proceedings of IEEE International conference on Robotics and Automation, volume 1, pages 30–36. IEEE, 1996.
[2] M. Kaiser and R. Dillmann. Building elementary robot skills from human demonstration. In Proceedings of IEEE International Conference on Robotics and Automation, volume 3, pages 2700–2705. IEEE, 1996.
[3] S. Liu and H. Asada. Teaching and learning of deburring robots using neural networks. In Proceedings of IEEE International Conference on Robotics and Automation, volume 3, pages 339–345. IEEE, 1993.
[4] H. Asada and B.-H. Yang. Skill acquisition from human experts through pattern processing of teaching data. In Proceedings of IEEE International Conference on Robotics and Automation, volume 3, pages 1302–1307. IEEE, 1989.
---
Rebuttal Comment 1.1:
Comment: This reviewer wants to thank the authors for an informative rebuttal and is looking forward to reading the paper with the promised changes included. Despite an unfortunate lack of references back to earlier research, there clearly is a connection that ought to be highlighted, since it partially explains why SSL makes sense at all for an embodied agent constantly adapting to an ever-changing environment. | Summary: This paper presents a self-supervised learning method for robot learning that learns representations by using data from demonstrations. The objective is based on learning latent actions from inverse dynamics, and learning forward dynamics model that uses such latent actions as inputs. Several techniques are utilized to prevent the model from finding trivial solutions and thus collapsing. Experiments are conducted in both real-world and simulation environments.
Strengths: - Clear writing with good figures
- Real-robot experiments!
- Focuses on the important problem of pre-training representations from demonstrations, as utilizing such limited but in-domain data can be crucial in the context of robot learning, where in-domain data is scarce yet especially important for fine-grained control tasks.
Weaknesses: - As other self-supervised learning models are trained on non-standard robotic datasets, it is not clear whether they are trained well with good hyperparameters -- for instance, without collapse. Is there a way to ensure that baseline methods are well-tuned?
- I understand that the main focus of this paper is to introduce a self-supervised learning method and compare its performance to other baselines. But what would the performance look like if you consider the full fine-tuning setup that uses gradients from behavior cloning for updating the encoder? Can we squeeze more information and maybe performance boost from fully fine-tuning the encoder? How would all the methods perform in this setup? This could further strengthen the claims of this paper that we should focus on extracting more information from demonstrations.
- One important missing baseline is [1], which pre-trains an (optionally causal) transformer with a masked modelling objective. Even though it uses a pre-trained visual encoder, using features from the causal transformer can still be a baseline. Moreover, it's a bit awkward that MAE trained on demonstrations is missing from the baselines even though MVP is selected as a pre-trained representation baseline. Including MAE, and optionally its multi-view variant [2], would make the results more convincing.
[1] Radosavovic, Ilija, Baifeng Shi, Letian Fu, Ken Goldberg, Trevor Darrell, and Jitendra Malik. "Robot learning with sensorimotor pre-training." In Conference on Robot Learning, pp. 683-693. PMLR, 2023.
[2] Seo, Younggyo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, and Pieter Abbeel. "Multi-view masked world models for visual robotic manipulation." In International Conference on Machine Learning, pp. 30613-30632. PMLR, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review, and pointers to missing baselines. We are glad that you find our in-domain visual pretraining setting important. We will address each of your concerns below.
**"Is there a way to ensure that baseline methods are well-tuned?"**: We monitor the observation embeddings for representation collapse during training. For all SSL baselines, we start with the recommended hyperparameters in the paper or official repo, and tune hyperparameters when there seems to be representation collapse. We will release the hyperparameter details for reproduction with the public release of this work.
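One simple version of such a collapse monitor (an illustrative sketch — the function name and tolerance are assumptions, not details from the rebuttal) is to track the per-dimension spread of a batch of embeddings: if every dimension has near-zero variance, all inputs are mapping to (nearly) the same vector.

```python
import statistics

def is_collapsed(embeddings, tol=1e-6):
    """Flag collapse: every embedding dimension has (near-)zero spread
    across the batch, i.e. all inputs map to the same vector."""
    dims = zip(*embeddings)  # iterate over embedding dimensions
    return all(statistics.pstdev(dim) < tol for dim in dims)

healthy = [[0.1, 0.9], [0.4, 0.2], [0.8, 0.5]]   # varied embeddings
collapsed = [[0.3, 0.3]] * 3                      # identical embeddings

print(is_collapsed(healthy), is_collapsed(collapsed))  # False True
```

In practice one would log this statistic throughout training and re-tune hyperparameters for any baseline whose embedding spread decays toward zero.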
**"What would the performance look like if you consider the full fine-tuning setup"**: We have added end-to-end fine-tuning results on the Push-T environment. Please see global comment (6) and PDF Table 3 for detailed experiment results. We find that fine-tuning further improves the performance of pretrained encoders, and that pretrained representations significantly outperform training the encoder and policy jointly from scratch. We would like to note though fine-tuning the full model takes 4x longer to train.
**"Missing baseline… transformer with masked modelling objective"**: Thank you for suggesting RPT as a baseline. We added an observation-only variant of RPT with a ResNet18 backbone. We found RPT to be a strong baseline, outperforming most other SSL baselines, but falling short of DynaMo by 7% across the sim environments. Please see global comment (2) and Table 1 in the PDF for detailed results.
**"MAE trained on demonstrations is missing from the baseline"**: Thank you for pointing this out. We have added MAE as an in-domain SSL baseline, discussed in global comment (1). In summary, MAE underperforms DynaMo by an average of 33% across sim environments, and completely fails to solve Block Pushing.
We hope this addresses your concerns and questions. We would be keen to discuss any remaining points that stand between us and a higher score.
---
Rebuttal 2:
Comment: Thank you for the response. I have read other reviews and the response, and currently decided to maintain the score.
The reason for not increasing the score is that I agree with the other reviewers that the submission can be improved by (i) improving the experimental design, especially with respect to architecture, and (ii) making the reasoning for not using actions when they are available clearer.
But I'd also like to note that I disagree with the other reviewers' view that SSL should only be designed for large-scale datasets, as representation learning on scarce demonstration data can be very useful for robotics, which is why I'm not decreasing the score. | Summary: This paper presents DynaMo, which uses in-domain data for self-supervision. It jointly learns a latent inverse dynamics model and a forward dynamics model over a sequence of image embeddings. The
Strengths: This paper is easy to follow.
Weaknesses: 1) Simplified Real-World Setup:
The real-robot experiments appear overly simplistic. Objects seem to be fixed in place, indicated by the red marker on the table, suggesting a lack of randomization in object placement. This setup makes the task easier for conventional imitation learning methods like Diffusion Policy and ACT, potentially allowing them to achieve a 100% success rate.
Suggestion: Introduce spatial randomization to the scene. Conduct additional experiments under these conditions to demonstrate Dynamo's superiority in more complex and varied scenarios.
2) Unfair Comparisons in Simulation:
In Table 1, Dynamo is compared with several baselines that use different backbones, which makes the comparison potentially unfair.
Impact: The difference in backbones could skew the performance results, making it difficult to accurately assess Dynamo's relative performance.
Suggestion: Include more experiments of Dynamo with various backbones such as ViT and ResNet-50. Compare these results against the baselines to provide a fairer and more comprehensive evaluation.
3) The motivation for using SSL in this context is unclear. Typically, SSL is advantageous due to its ability to learn from massive datasets without human labels. However, in the field of robotics, in-domain data are often scarce. This could make the application of SSL less persuasive and potentially less effective.
Technical Quality: 2
Clarity: 2
Questions for Authors: Currently, I believe this paper has significant flaws in its experimental design, both in simulation and real-robot settings. As such, my initial score is 4, with the real-robot experiments being a notable strength. However, the existing experiments do not sufficiently support the claims made in the paper. If the authors can provide additional experiments based on my suggestions above, and if the results substantiate their claims, I would be willing to raise my rating.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: No.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and constructive comments. We are glad that you consider our robot experiments a notable strength. We will address each of your concerns below.
**Simplified real-world setup**: We would like to clarify that the red marker on the table is for setup reference only. At test time, the object is randomly placed within the convex hull of the demonstration starting positions. We have updated the paper website with all 10 rollout videos for each Allegro task, as well as visualizations of all rollout starting positions in Figure 1 in the PDF. We have also added a kitchen task with more variations in the object starting position in global comment (5).
**Unfair comparisons in simulation**: We would like to kindly note that only MAE-based baselines (VC-1, MVP, MAE) use ViTs as they are incompatible with ResNets. All other baselines use a ResNet18 backbone.
- We have added MAE as an in-domain SSL baseline with the ViT-B backbone. Please see global comment (1), and Table 1 in the PDF for more details.
- We have also run DynaMo (ViT-B) on Push-T. It outperforms MAE (ViT-B) by 28%, but underperforms DynaMo (ResNet18) by 22% while taking 4 days (vs. 1 hour) to train. Overall, we find that ViTs trained on small in-domain datasets perform significantly worse than small ResNets, as vision transformers require much more data to perform well.
- For consistency, we have updated MVP to use the ViT-B backbone, such that all MAE-based baselines use the same backbone. We find that MVP (ViT-S) and MVP (ViT-B) have similar performance across environments.
**Motivation for SSL in the low-data regime**: As you have mentioned, scarce in-domain data is a real problem in robotics. We elaborate in detail our motivation for SSL in the low-data regime in the global comment. To clarify, our work is designed to tackle the problem of learning efficiently from small-scale decision-making data, often with only a few hundred or less trajectories. DynaMo improves downstream performance, is compatible as an in-domain fine-tuning step for other pretrained encoders, and can be trained on much larger datasets, but we have academic compute constraints.
We hope this addresses your concerns and questions. We would be keen to discuss any remaining points that stand between us and a higher score.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for your reply.
1. I appreciate the explanation and checked the new video uploaded by the author. However, I am unsure if updating the webpage violates the rebuttal rules. Therefore, even though this video addresses my concern, my evaluation will not take it into account.
2. Whether ViT performs worse than small ResNets depends on the experimental conditions (e.g., freeze, LoRA, or full parameter fine-tuning) and hyper-parameter settings (e.g., same or lower initial learning rate compared to the pretrain stage). The experimental setup should be clearly elaborated and compared. Current experiments are insufficient and not convincing.
Therefore, I have decided to keep my initial rating after the rebuttal.
---
Reply to Comment 1.1.1:
Title: Author response
Comment: Thank you for your reply. We are confident that our rebuttals are by the rules (“should not contain any links to external pages”). To facilitate the discussion, we have updated the existing paper website for transparency and in good faith. If you would like to disregard that anyway, we invite you to look at **Figure 1 in the uploaded PDF** as an alternative visualization for your point above, as well as **the additional kitchen experiment in global comment (5)** with starting position variations explicitly addressing this concern.
As for your second point, we believe there is a misunderstanding. Allow us to elaborate below.
**Whether ViT performs worse than small ResNets depends on the experimental conditions**: We would like to kindly note that in the low-data regime, larger models ≠ better performance: the original ViT paper [1] has observed that ViTs require larger datasets (“14M-300M images”) to reach performance parity, whereas for robotics and decision-making, most datasets have 10k-100k frames. It is not surprising that ViTs underperform ResNets at this scale. The main focus of this paper is an SSL method that works in the low-data regime for control tasks, rather than a pretrain-then-fine-tune setup for visual encoders in general.
**Current experiments are insufficient and not convincing**: Could you give us specific setups in existing published work that you think we should follow? We are happy to take a look.
**The experimental setup should be clearly elaborated and compared**: To enable us to address your concerns effectively, would you clarify which parts of the experimental setup are unclear? For all our main results, we follow the same evaluation procedure: pretrain an encoder from random initialization, then train a downstream policy on the frozen embeddings and use environment rollout metrics to evaluate the encoder performance. The encoder is not fine-tuned during policy training. This setup is used for the ViT training as well.
[1] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020. | Rebuttal 1:
Rebuttal: We thank the reviewers for your insightful and constructive comments, and for finding our robot results strong (96in, kzSJ) and the approach novel (B6gm, MZTu). To address your concerns, we have run all requested experiments within our compute budget. You can find our detailed response in individual replies to your review, as well as a summary of shared concerns, additional results and revisions here:
**SSL on in-domain data vs. massive out-of-domain data (reviewers 96in, MZTu, B6gm)**: We would like to clarify that this work is tackling efficient learning with small-scale decision-making data. Several prominent works [1][2][3] deal with demonstration datasets of a few hundred or fewer trajectories. Our SSL method, DynaMo, is created and designed to accelerate policy learning in the low-data regime. DynaMo improves downstream policy performance with in-domain SSL pretraining, and significantly outperforms learning the encoder and policy from scratch (see (6) below). We also show that DynaMo can be used as an in-domain fine-tuning step for encoders pretrained on large out-of-domain datasets like ImageNet. Finally, DynaMo is in principle compatible with Internet-scale datasets, but unfortunately we do not have the requisite compute in an academic lab. Running DynaMo on large datasets is an exciting direction that we hope industry labs can follow up on and explore.
**Comparison with more baselines, and additional experiments**:
1. **Comparisons with MAE (reviewers kzSJ, MZTu)**: We train MAE with a ViT-B backbone on all environments with the official implementation. We find that MAE underperforms DynaMo by 24% on Franka Kitchen, 51% on Push-T, and completely fails to solve the task on Block Pushing. (PDF Table 1)
2. **Comparison with RPT (reviewer kzSJ)**: We train an observation-only variant of RPT on all environments with a ResNet18 backbone. We find RPT to be a strong baseline, outperforming most other SSL baselines, but falling short of DynaMo by 7% across the sim environments. (PDF Table 1)
3. **Baseline backbones (reviewer 96in)**: We would like to clarify that only MAE-based baselines (VC-1, MVP, MAE) use ViTs, as they are incompatible with ResNets. All other baselines use a ResNet18 backbone, which we will make clearer in our next revision. We have unified all ViTs to use the ViT-B backbone. We have also run DynaMo (ViT-B) on Push-T. It outperforms MAE (ViT-B) by 28%, but underperforms DynaMo (ResNet18) by 22% while taking 4 days (vs. 1 hour) to train. Overall, we find that ViTs trained on small in-domain datasets perform significantly worse than small ResNets, as larger models require much more data to perform well. (PDF Table 1)
4. **Generalization (reviewers B6gm, MZTu)**: We have added a new real kitchen task (picking up a bread) to test whether the old encoder trained on existing kitchen tasks can be used to train a policy on the new task. We train a policy head with the old DynaMo encoder, and also with the best baseline (old MoCo encoder). We find that policies trained with the existing encoders still manage to complete the task (DynaMo 4/10, MoCo 3/10; successes/total rollouts). We have also trained DynaMo and MoCo encoders from scratch on the new task only, and evaluated likewise. We find that policies trained with the new in-domain encoders exhibit improved performance (DynaMo 7/10, MoCo 5/10). We hypothesize that encoder generalization will improve if trained on much larger datasets, but we have limited academic compute. (PDF Table 2)
5. **Variations in starting positions (reviewer 96in)**: In the task above in (4), the starting positions of the task object are varied across the workspace (~20x15cm in size). We find that our encoder completes the task 7/10 times, outperforming MoCo (5/10). We would also like to clarify that for the Allegro hand environment, the task objects do have significant variations in their starting positions (~25x15cm for the sponge, and ~20x10cm for the teabag). We have included a visualization (PDF Figure 1) of the hand task starting positions, and updated the rollout videos on the website. (PDF Table 2)
6. **End-to-end fine-tuning after encoder pretraining (reviewer kzSJ)**: We fine-tune DynaMo-pretrained, MoCo-pretrained, ImageNet-pretrained, and randomly initialized ResNet18. We find that fine-tuning can further improve the performance of pretrained encoders by up to 12%, and that pretrained representations significantly outperform training the encoder and policy jointly from scratch. (PDF Table 3)
We hope that these updates inspire further confidence in our work. At the same time, we invite any further questions or feedback that you may have on our work.
[1] S. Lee, Y. Wang, H. Etukuru, H. J. Kim, N. M. M. Shafiullah, and L. Pinto. Behavior generation with latent actions. arXiv preprint arXiv:2403.03181, 2024.
[2] C. Chi, S. Feng, Y. Du, Z. Xu, E. Cousineau, B. Burchfiel, and S. Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023.
[3] T. Z. Zhao, V. Kumar, S. Levine, and C. Finn. Learning fine-grained bimanual manipulation with low-cost hardware. arXiv preprint arXiv:2304.13705, 2023.
Pdf: /pdf/e82f72afc7ea42a0c1859e94a333f894e599a113.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unveiling LoRA Intrinsic Ranks via Salience Analysis | Accept (poster) | Summary: The work presents an algorithm for adapting the rank of the LoRA matrices according to a novel “saliency metric” assigned to each singular value of the LoRA matrices.
The saliency measure is computed taking into account a sequence of steps (time window) during training and computing two quantities at the end of each time window: the orthogonality-aware singular values and the domain influence of each singular value. The orthogonality-aware singular value is a weighted average of the singular value where the weight takes into account the orthogonality of the SVD decomposition at that step. The domain influence takes into account the correlation between singular values within each time window.
At the end of each step sequence, the ranks are adjusted based on this salience measurement.
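A schematic sketch of the summarized pipeline follows — window-weighted singular-value salience, then rank adjustment. The inverse-error weighting, the normalization, and the top-k budget rule are illustrative assumptions for this sketch, not the paper's exact formulas.

```python
def salience(sv_history, orth_errors):
    """Schematic salience for one singular value over a time window:
    average its magnitude across steps, down-weighting steps where the
    SVD factors were far from orthogonal. The inverse-error weighting
    here is illustrative, not the paper's formula."""
    weights = [1.0 / (1.0 + e) for e in orth_errors]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize weights to sum to 1
    return sum(w * s for w, s in zip(weights, sv_history))

# Three singular values tracked over a 4-step window, with a per-step
# orthogonality error for the decomposition (smaller = more trustworthy).
window = {
    "sigma_1": [3.0, 3.1, 2.9, 3.0],
    "sigma_2": [1.0, 0.9, 1.1, 1.0],
    "sigma_3": [0.1, 0.05, 0.08, 0.07],
}
orth_err = [0.0, 0.5, 0.1, 0.2]

scores = {name: salience(history, orth_err) for name, history in window.items()}

# Rank adjustment at the end of the window: keep the most salient
# directions within a fixed rank budget, pruning the rest.
budget = 2
kept = sorted(scores, key=scores.get, reverse=True)[:budget]
print(kept)  # ['sigma_1', 'sigma_2']
```

The point of the sketch is only the control flow — per-window aggregation followed by budgeted pruning — not the specific weighting, which in the paper also incorporates the cross-singular-value domain influence.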
Strengths: The authors propose a novel and interesting algorithm. The chosen setup speeds up the LoRA fine-tuning while maintaining accuracy or slightly outperforming other methods on the reported benchmarks. The experimental evaluation is convincing since the authors compare the proposed algorithm with other LoRA improvements on a reasonable number of tasks.
Weaknesses: The main weakness of the work is the clarity of the exposition, which is obscure in some parts.
For example, one of the method's core building blocks is the decycling operation of the dependency graph mentioned between lines 164-165: the authors must reference the algorithm they use for “de-cycling” the graph, describing its steps at least in the appendix.
I leave other statements that require clarification in the question section below.
In addition, the paper does not discuss the limitations of the methods.
Technical Quality: 2
Clarity: 2
Questions for Authors: > 1. *line 143 to 146: The authors claim that the weight assigned to singular values of high loss should be small. However, from equation (2), it seems that the steps with higher loss receive the larger weight in the time window.*
> 2. *line 148. What do the authors mean by “we normalize the weights from 0 to 1”? An equation would be helpful to clarify the operation here.*
A key performance measure that is not discussed is memory consumption.
> 3. *Given a fixed parameter budget, what is the amount of VRAM consumed by SalientLoRA compared to, for instance, AdaLoRA?*
Other minor remarks:
When the authors state that AdaLoRA has been adopted *in numerous research studies* (line 50), they should cite at least the most relevant ones to support the claim.
Line 56: The sentence is somewhat obscure.
> 4. *What do the authors mean by “dominant role of singular values in the SVD matrix”? What is the precise meaning of “dominant role” in this context?*
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: No.
The authors do NOT seem to discuss the limitations of their method in the current version of the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive suggestions provided by the reviewer. We provide a detailed explanation and experimental analysis as follows.
***W1: the authors must reference the algorithm they use for “de-cycling” the graph, describing its steps at least in the appendix.***
We provide a detailed description of the de-cycling process in the Global Response and present the algorithm's pseudocode in the PDF. This detailed description will be included in the final version of the paper.
***Q1: line 143 to 146: The authors claim that the weight assigned to singular values of high loss should be small. However, from equation (2), it seems that the steps with higher loss receive the larger weight in the time window.***
We sincerely apologize for the error in the weight calculation formula presented in our paper. The correct formula should be $w^{(i)}=\frac{\sum^n_{j=0}R(P^{(j)}_a,Q^{(j)}_a)}{R(P^{(i)}_a,Q^{(i)}_a)}$. Calculated this way, singular values with higher losses are assigned lower weights, thereby enhancing the effectiveness of the magnitude-based importance assessment over a time series. We thank the reviewer for identifying this error, and we will amend it in the final camera-ready version of the manuscript.
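A minimal sketch of the corrected weighting (the helper name is hypothetical; the per-step orthogonality losses $R(P^{(i)}_a,Q^{(i)}_a)$ are assumed precomputed for the window):

```python
import numpy as np

def ortho_weights(losses):
    """Weights inversely proportional to the per-step orthogonality loss:
    w_i = (sum_j R_j) / R_i, so steps with higher loss get lower weight."""
    losses = np.asarray(losses, dtype=float)
    return losses.sum() / losses
```

For example, per-step losses `[1.0, 2.0, 4.0]` yield weights `[7.0, 3.5, 1.75]`, down-weighting the noisiest (highest-loss) steps.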
***Q2: Line 148. What do the authors mean by “we normalize the weights from 0 to 1”? An equation would be helpful to clarify the operation here.***
We appreciate the reviewer’s suggestion. Here, we normalize the weights of the singular values to prevent large discrepancies in weight values from affecting the evaluation. We use Min-Max normalization, and the calculation formula is as follows,
$w^{(i)}_{norm}=\frac{w^{(i)}-w\_{min}}{w\_{max}-w\_{min}}$
where $w_{min}$ and $w_{max}$ represent the minimum and maximum values among $w^{(0)}$ to $w^{(n)}$, respectively. This normalization method ensures a uniform scale for all weights, thereby enhancing the robustness of our results. A comprehensive description of this normalization technique will be included in the camera-ready version of the manuscript.
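A minimal sketch of this Min-Max normalization (hypothetical helper name):

```python
import numpy as np

def min_max_normalize(w):
    """Rescale a vector of weights to [0, 1] via Min-Max normalization."""
    w = np.asarray(w, dtype=float)
    return (w - w.min()) / (w.max() - w.min())
```

For instance, `min_max_normalize([2.0, 4.0, 6.0])` maps the weights to `[0.0, 0.5, 1.0]`.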
***Q3: Given a fixed parameter budget what is the amount of VRAM consumed by SalientLORA compared to for instance AdaLora?***
We appreciate the constructive suggestions provided by the reviewer. To compare the memory usage of AdaLoRA and SalientLoRA, we fine-tune the DeBERTaV3-base and BART-large models on the CoLA and XSum tasks respectively. We set the target rank $r_t$ to 144 and the initial total rank $r_i$ to 7.5 times $r_t$. The results in the table below indicate that the training overheads for both fine-tuning methods are quite close, with SalientLoRA exhibiting a marginal increase in memory occupancy of only 0.11%-0.22%. This slight difference arises because SalientLoRA needs to store all the singular values within the time window for salience assessment. However, as each matrix retains only 2-15 singular values, the impact on memory is minimal.
| | CoLA | XSum |
| --------------------- | ------- | -------- |
| AdaLoRA $r_t=144$ | 5.084GB | 22.411GB |
| SalientLoRA $r_t=144$ | 5.095GB | 22.436GB |
| AdaLoRA $r_t=276$ | 5.399GB | 23.362GB |
| SalientLoRA $r_t=276$ | 5.410GB | 23.387GB |
***Q4: When the authors state that AdaLORA has been adopted in numerous research studies (line 50) they should cite at least the most relevant ones to support the claim.***
We are grateful for the reviewer's reminder. Numerous studies have shown that applying AdaLoRA achieves excellent fine-tuning performance in various scenarios, including speech models [3] and several other language models [1, 2]. We will include these references in the final version of the paper.
[1] Parameter-efficient fine-tuning methods for pretrained language models: A critical review and assessment. Xu L et al., 2023. In arXiv preprint.
[2] AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning. Zhou et al., 2024. In Transactions of the Association for Computational Linguistics.
[3] Whisper-med_15k. 2024. https://huggingface.co/sin2piusc/whisper-med_15k.
***Q5: What do the authors mean by “dominant role of singular values in the SVD matrix”? What is the precise meaning of “dominant role” in this context?***
After performing singular value decomposition (SVD), the singular values represent the primary characteristics of a matrix. Larger singular values contain more information from the matrix and capture more of its essential features, thereby possessing greater importance. Consequently, singular values play a dominant role in the SVD matrix. In light of this, our method utilizes the magnitude of these singular values to assess their significance.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response. I am inclined to keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer SXAJ,
Thank you for your feedback.
In your review comments, you indicated that the main drawback of the paper is the clarity of the presentation, with some details not being sufficiently explained. We have provided a detailed clarification for each of these aspects in our rebuttal, including the decycling process, normalization, and so on. These revisions will be incorporated into the final version of the paper to enhance its clarity. With these improvements, we believe the paper demonstrates better soundness and we hope you will consider raising your score. Your evaluation is very important to us, and we sincerely appreciate your consideration.
---
Rebuttal 2:
Title: We will include the following limitations in the camera-ready version.
Comment: In the final version of the paper, we will include the following limitations.
The proposed mechanism of adaptive time-series window effectively accelerates and stabilizes the fine-tuning process. However, this mechanism employs a relatively rigid adaptive approach, considering only the time step information and focusing on progressively cautious pruning as the process advances. Future work will integrate the consideration of singular values' detailed dynamics, including momentum and interdependencies, to enable a more flexible and adaptive adjustment process.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer SXAJ,
Thank you for your thorough and valuable feedback. We have carefully addressed each of your concerns. We hope that our clarifications provide satisfactory answers to your questions. We are committed to improving our work based on your insights, and we look forward to your response.
Best regards,
The Authors
---
Rebuttal 3:
Comment: Thank you very much for your feedback. We appreciate your thorough and responsible review.
Best regards. | Summary: The paper introduces SalientLoRA, an approach designed to optimize the intrinsic ranks of LoRA components in LLMs through salience measurement. The method first utilizes salience measurement to analyze the variations and inter-dependencies of singular value magnitudes over time, which helps assess matrix importance while mitigating instability and randomness. This analysis informs the adaptive adjustment of the time-series window used for significance measurement and rank reduction during training. This adaptive mechanism allows for rapid and stable rank allocation, permitting an initially higher rank setting to expand the allocation space for ranks.
Strengths: 1. SalientLoRA's use of salience measurement to analyze and utilize the variations of singular values effectively addresses the challenges of instability and randomness in rank optimization. The adaptive adjustment of the time-series window for significance measurement during training enhances the efficiency and stability of rank allocation.
2. Demonstrating substantial performance gains over state-of-the-art methods on diverse NLU and NLG tasks highlights the effectiveness of SalientLoRA in practical applications.
Weaknesses: The proposed method incorporates a sophisticated multi-stage process that involves several critical hyperparameters, such as $\beta$, $\gamma$, $T_i$, and $T_f$. However, the paper currently lacks a detailed analysis of these hyperparameters, which is crucial for understanding their roles and optimal settings within the methodology. Systematically exploring how each hyperparameter impacts the model's performance, including sensitivity analyses or hyperparameter tuning results, would greatly enhance the paper's scientific rigor.
Technical Quality: 3
Clarity: 2
Questions for Authors: To fully evaluate the robustness of the proposed method, could you provide detailed ablation studies and analyses for the hyperparameters, including $\beta$, $\gamma$, $T_i$, and $T_f$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: There are no potential negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ***Question: To fully evaluate the robustness of the proposed method, could you provide detailed ablation studies and analyses for the hyperparameters, including β, γ, Ti, and Tf?***
We appreciate the insightful suggestions provided by the reviewer. In response, we conduct additional experimental analyses on two sets of hyperparameters to explore their impact. These experiments are conducted on the CoLA and MRPC datasets to fine-tune the DeBERTaV3-base model. The total target rank is set to 144, and all the other parameters are consistent with the main experiments.
(1) $\beta$ and $\gamma$
$\beta$ and $\gamma$ are thresholds that control the relevance and dependency relationships of singular values, respectively. To explore their effects, we keep other hyperparameters constant and change the values of $\beta$ and $\gamma$ separately to observe the fine-tuning time and performance of SalientLoRA. The results are presented in the table below, where MCC represents the Matthews Correlation Coefficient and Acc denotes accuracy.
| $\beta$ | $\gamma$ | MCC (CoLA) | Acc (MRPC) | Time (CoLA) | Time (MRPC) |
| ------- | -------- | ----------- | ---------- | ------------ | ------------ |
| 0.5 | 2 | 71.87 | 91.65 | 29min | 42min |
| 0.7 | 2 | 71.82 | 91.57 | 24min | 34min |
| 0.9 | 2 | 71.87 | 91.68 | 21min | 33min |
| 0.9 | 1.5 | 71.78 | 91.63 | 27min | 39min |
| 0.9 | 1 | 71.83 | 91.67 | 34min | 47min |
The performance exhibits minimal sensitivity to variations in the values of $\beta$ and $\gamma$. This insensitivity stems from the decycling operation for the dependency graph, which effectively eliminates the majority of redundant dependency relationships. However, setting excessively low values for $\beta$ and $\gamma$ can lead to a large number of redundant dependencies in the graph, which increases the time cost of the decycling process and thus impacts the efficiency of fine-tuning. Therefore, we select hyperparameter values that ensure high fine-tuning efficiency, with $\beta$=0.9 and $\gamma$=2.
(2) $T_i$ and $T_f$
$T_i$ and $T_f$ control the sizes of the initial and final time windows in the adaptive time-series window mechanism, respectively. The results indicate that the impact of $T_i$ on performance is minimal, with differences only ranging from 0.02% to 0.09%. However, as $T_i$ increases, the fine-tuning time significantly lengthens. This is due to a slower reduction in rank during the early stages of fine-tuning, which impacts the efficiency of rank allocation.
Moreover, when $T_i$ remains constant and $T_f$ increases from 100 to 250, there is a slight improvement in performance, while the fine-tuning time remains relatively unchanged. This improvement can be attributed to the significance analysis in the later stages of fine-tuning, which incorporates singular values under more time steps, yielding more reliable allocation outcomes. Therefore, we set $T_i=10$ and $T_f=200$ to achieve a balance between performance and efficiency.
| $T_i$ | $T_f$ | MCC (CoLA) | Acc (MRPC) | Time (CoLA) | Time (MRPC) |
| ----- | ----- | ----------- | ---------- | ------------ | ------------ |
| 10 | 200 | 71.87 | 91.68 | 21min | 33min |
| 30 | 200 | 71.81 | 91.59 | 23min | 37min |
| 50 | 200 | 71.85 | 91.64 | 27min | 41min |
| 10 | 100 | 71.52 | 91.32 | 22min | 34min |
| 10 | 150 | 71.71 | 91.56 | 21min | 34min |
| 10 | 250 | 71.84 | 91.64 | 23min | 35min |
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses! The responses address most of my concerns related to the hyperparameters. Hence, I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: We are glad that our responses have addressed most of your concerns. We will incorporate this section into the final version of the paper to further enhance its rigor. If you find that the detailed hyperparameter analysis has contributed to improving the paper's soundness, we would be deeply grateful if you could consider raising the score of Soundness or the Overall Rating. Your evaluation is very important to us, and we sincerely appreciate your consideration.
---
Rebuttal 2:
Comment: Dear Reviewer i7vj,
Thank you for your thorough and valuable feedback. We have carefully addressed each of your concerns. We hope that our clarifications provide satisfactory answers to your questions. We are committed to improving our work based on your insights, and we look forward to your response.
Best regards,
The Authors | Summary: This paper proposes SalientLoRA, a new method for adaptively optimizing the intrinsic ranks of low-rank adaptation (LoRA) matrices. The key ideas are:
Using singular value decomposition (SVD) to decompose the LoRA matrices and measure the salience/importance of each singular value based on its magnitude, orthogonality constraints, and influence on other singular values within a time window during training.
Strengths: - Novel salience measurement technique that considers singular inter-dependencies and temporal variations.
- Comprehensive evaluation across many datasets and model types (encoder, decoder, encoder-decoder).
- Achieves new state-of-the-art results on multiple benchmarks while being more efficient than prior LoRA methods.
Weaknesses: - The article contains some details that are not clearly explained, such as how the R function on line 145 is calculated, and what specifically is done in the de-cycling process introduced on line 165.
- More analysis could be provided to interpret why the salience measurement works well. For example, are the average of influence domains consistent across models fine-tuned on different types of datasets?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Taking the last row of Table 1 as an example, initially, the model uses a total rank that is 7.5 times the target rank, so the gpu memory usage is roughly equivalent to that of LoRA with r=8*7.5=60. Although the memory usage may decrease during the model optimization process, can you consider comparing with methods like LoRA and DoRA with r=60?
2. Based on my understanding, the constructed influence domains form an undirected simple graph. If this graph forms a single cycle, how do you perform de-cycling?
3. Do you calculate influence domains starting from vertices with a degree of 0, similar to topological sorting, and then update the degrees of the vertices connected to it, then repeat the process?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the constructive feedback provided by the reviewer. We provide a detailed explanation and experimental analysis as follows.
***W1: The article contains some details that are not clearly explained, such as how the R function on line 145 is calculated, and what specifically is done in the de-cycling process introduced on line 165.***
(1) Function $R(\cdot)$
In our paper, we have elaborated on the function $R(\cdot)$ for calculating the orthogonalization loss in Equation 1. Specifically, $R(\cdot)$ measures the degree of orthogonality of $\textbf{P}$ and $\textbf{Q}$. The equation is as follows:
$R(\textbf{P},\textbf{Q})=||\textbf{P}^T\textbf{P}-\textbf{I}||^2_F+||\textbf{Q}\textbf{Q}^T-\textbf{I}||^2_F$
where $\textbf{P},\textbf{Q}$ denote the left and right singular matrices, respectively.
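A direct NumPy transcription of this regularizer (a sketch, assuming $\textbf{P} \in \mathbb{R}^{m\times r}$ and $\textbf{Q} \in \mathbb{R}^{r\times n}$ as in the SVD-style parameterization):

```python
import numpy as np

def ortho_loss(P, Q):
    """R(P, Q) = ||P^T P - I||_F^2 + ||Q Q^T - I||_F^2,
    measuring how far P's columns and Q's rows are from orthonormal."""
    r = P.shape[1]
    I = np.eye(r)
    return (np.linalg.norm(P.T @ P - I, 'fro') ** 2
            + np.linalg.norm(Q @ Q.T - I, 'fro') ** 2)
```

The loss is zero exactly when $\textbf{P}$ has orthonormal columns and $\textbf{Q}$ has orthonormal rows.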
(2) The De-cycling Process
Due to space constraints, detailed content and analysis of the de-cycling process are provided in the Global Response.
***W2: More analysis could be provided to interpret why the salience measurement works well. For example, are the average of influence domains consistent across models fine-tuned on different types of datasets?***
Due to space constraints, detailed content and analysis are provided in the Global Response.
***Q1: Taking the last row of Table 1 as an example, initially, the model uses a total rank that is 7.5 times the target rank, so the gpu memory usage is roughly equivalent to that of LoRA with r=8\*7.5=60. Although the memory usage may decrease during the model optimization process, can you consider comparing with methods like LoRA and DoRA with r=60?***
We set the average target rank of each matrix in SalientLoRA to 8, and the initial rank to 60, to compare memory usage and fine-tuning performance with $\text{LoRA}\_{r=8}, \text{DoRA}\_{r=8}, \text{LoRA}\_{r=60}, \text{DoRA}_{r=60}$. Specifically, we fine-tune the DeBERTaV3-base and BART-large models on the CoLA and XSum datasets using the aforementioned settings.
The results, as presented in the table below, demonstrate that small changes in rank settings do not significantly impact memory usage. The memory usage of SalientLoRA peaks at only 5.41GB and 23.12GB, merely 0.33GB and 0.86GB higher than that of $\text{LoRA}\_{r=8}$. Furthermore, the memory consumption of SalientLoRA rapidly decreases due to rank pruning, ultimately aligning closely with that of $\text{LoRA}\_{r=8}$. Notably, under this setting, our method still achieves the best results, surpassing $\text{DoRA}\_{r=60}$ by 0.9% on CoLA and 0.57% on XSum. This also underscores the superior performance of our approach.
| model | GPU Mem (CoLA) | GPU Mem (XSum) | MCC (CoLA) | ROUGE-L (XSum) |
| ----------- | -------------- | --------------- | ---------- | -------------- |
| LoRA r=8 | 5.08GB | 22.26GB | 69.73 | 34.84 |
| DoRA r=8 | 5.37GB | 22.87GB | 71.46 | 35.47 |
| LoRA r=60 | 5.28GB | 22.98GB | 69.78 | 34.92 |
| DoRA r=60 | 5.58GB | 23.61GB | 71.78 | 35.65 |
| SalientLoRA | 5.41GB -> 5.12GB | 23.12GB -> 22.43GB | 72.68 | 36.22 |
***Q2: Based on my understanding, the constructed influence domains form an undirected simple graph. If this graph forms a single cycle, how do you perform de-cycling?***
In practice, the constructed dependency graph of singular values is a directed graph, where nodes represent singular values and edges represent their dependencies. This can be explained from two aspects.
(1) The dependencies are quantified through slopes between two singular values within a time-series, where the choice of independent and dependent variables significantly influences the slope calculations. For instance, consider two singular values, $\lambda_a$ and $\lambda_b$. If $\lambda_a$ is chosen as the independent variable and $\lambda_b$ as the dependent variable, the resulting slope $k_{ab}$ quantifies the influence of $\lambda_a$ on $\lambda_b$. Conversely, selecting $\lambda_b$ as the independent variable results in the slope $k_{ba}$. Since $k_{ab}$ and $k_{ba}$ are numerically distinct, the weights of the edges from $\lambda_a$ to $\lambda_b$ and $\lambda_b$ to $\lambda_a$ also differ, thus forming a directed graph.
(2) Intuitively, the dependency relationship between two singular values is unidirectional: if a singular value is of significant importance, variations in its magnitude can impact changes in the other, while the reverse is not necessarily true.
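To make the asymmetry in (1) concrete, here is a toy sketch (the least-squares slope estimator and the sample values are our assumptions for illustration, not the paper's exact computation):

```python
import numpy as np

def slope(x, y):
    """Least-squares slope of y regressed on x over a time window."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Two singular-value trajectories over a window of 4 steps (toy data).
lam_a = [1.0, 1.2, 1.5, 1.9]
lam_b = [0.5, 0.9, 1.0, 1.1]

k_ab = slope(lam_a, lam_b)  # influence of lambda_a on lambda_b
k_ba = slope(lam_b, lam_a)  # influence of lambda_b on lambda_a
# k_ab != k_ba in general, so the edges a->b and b->a carry
# different weights, yielding a directed graph.
```

Here `k_ab` ≈ 0.59 while `k_ba` ≈ 1.30, so the two directions are indeed numerically distinct.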
***Q3: Do you calculate influence domains starting from vertices with a degree of 0, similar to topological sorting, and then update the degrees of the vertices connected to it, then repeat the process?***
The calculation of influence domains resembles a reverse topological sort. We calculate the influence domains of all vertices by proceeding backwards through the graph. Specifically, the process initiates from vertices with an out-degree of zero. These vertices do not influence any other nodes, so their influence domain is directly assigned a value of 1. Subsequently, the influence domain of a node that has subsequent nodes is calculated as the weighted sum of the influence domains of these downstream nodes. This iterative procedure continues until the influence domains for all nodes in the graph have been determined.
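A minimal sketch of this backward computation on a toy DAG (function name, data structures, and edge weights are our assumptions; the exact weighted aggregation in the paper may differ):

```python
def influence_domains(succ, w):
    """Influence domain in reverse topological order on a DAG:
    sinks (out-degree 0) get 1; other nodes get the weighted sum
    of their downstream nodes' influence domains."""
    memo = {}

    def domain(v):
        if v not in memo:
            outs = succ.get(v, [])
            memo[v] = 1.0 if not outs else sum(w[(v, u)] * domain(u) for u in outs)
        return memo[v]

    return {v: domain(v) for v in succ}

# Toy dependency graph: a influences b and c, b influences c.
succ = {'a': ['b', 'c'], 'b': ['c'], 'c': []}
w = {('a', 'b'): 1.0, ('a', 'c'): 1.0, ('b', 'c'): 1.0}
domains = influence_domains(succ, w)
# 'c' is a sink -> 1.0; 'b' -> 1.0; 'a' influences two nodes -> 2.0
```

The memoized recursion visits each node once, which is equivalent to processing nodes in reverse topological order.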
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer xsy4,
Thank you for your thorough and valuable feedback. We have carefully addressed each of your concerns. We hope that our clarifications provide satisfactory answers to your questions. We are committed to improving our work based on your insights, and we look forward to your response.
Best regards,
The Authors | null | null | Rebuttal 1:
Rebuttal: ***1. A Detailed Explanation of the De-cycling Process.***
Since the dependency graph between singular values is a directed cyclic graph, we use a depth-first search (DFS) algorithm to detect and remove cycles. Specifically, we begin by performing a depth-first traversal of each node in the graph, recording the path in a stack. If a node is encountered that is already present in the path, a cycle is detected. At this stage, we remove the edge with the smallest weight from the cycle. This process is repeated until all nodes have been traversed.
The pseudocode is provided in the following PDF.
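A rough Python sketch of the described procedure on a toy directed graph (names and data structures are our assumptions, not the authors' implementation):

```python
def decycle(adj, weight):
    """Remove cycles from a directed graph via DFS: when a back edge
    closes a cycle, drop the lightest edge on that cycle; repeat
    until no cycle remains."""
    def find_cycle():
        seen, path = set(), []

        def dfs(v):
            seen.add(v)
            path.append(v)
            for u in list(adj.get(v, [])):
                if u in path:                  # back edge -> cycle found
                    return path[path.index(u):] + [u]
                if u not in seen:
                    cyc = dfs(u)
                    if cyc:
                        return cyc
            path.pop()
            return None

        for v in list(adj):
            if v not in seen:
                cyc = dfs(v)
                if cyc:
                    return cyc
        return None

    while True:
        cyc = find_cycle()
        if cyc is None:
            return adj
        edges = list(zip(cyc, cyc[1:]))             # edges along the cycle
        v, u = min(edges, key=lambda e: weight[e])  # lightest edge
        adj[v].remove(u)

# Toy cycle a -> b -> c -> a; the lightest edge (b, c) is removed.
adj = {'a': ['b'], 'b': ['c'], 'c': ['a']}
weight = {('a', 'b'): 3, ('b', 'c'): 1, ('c', 'a'): 2}
decycle(adj, weight)
```

After the call, `adj` becomes `{'a': ['b'], 'b': [], 'c': ['a']}`, which is acyclic.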
***2. A Thorough Analysis of Salience Measurement***
The salience measurement consists of two components: the magnitude of singular values and the influence domain.
(1) Magnitude of Singular Values
After performing singular value decomposition (SVD), the singular values represent the primary characteristics of a matrix. Larger singular values contain more information from the matrix and capture more of its essential features, thereby possessing greater importance. Consequently, singular values play a dominant role in the SVD matrix. In light of this, our method utilizes the magnitude of these singular values to assess their significance, and combines this with the orthogonality loss at each timestep to produce a more robust evaluation.
(2) Influence Domain
Numerous studies [1, 2] demonstrate that interdependencies exist among the different structures of models. Constructing a dependency graph of these structures is instrumental in analyzing the importance of model structures. Inspired by this, we assume that the intrinsic ranks among different matrices also possess certain dependencies and influence relationships. The intrinsic rank of important matrices can exert influence over other matrices. Therefore, beyond the magnitude of singular values, we analyze the intrinsic dependencies among singular values in different matrices to measure their importance. These dependencies reveal how variations in singular values influence each other across multiple timesteps. If variations in a singular value can induce changes in several others, it indicates that the singular value has a broader influence domain and is of higher importance.
To intuitively reveal the influence domain of singular values, we visualize their distributions across various datasets, as shown in the following PDF. These distributions exhibit similar characteristics, with deeper layers and FFN (Feedforward Neural Network) layers having a larger influence domain. This aligns with findings from previous studies [3, 4], which indicate that deeper layers and FFNs are more significant in the model.
------
[1] LLM-Pruner: On the Structural Pruning of Large Language Models. Xinyin Ma and Gongfan Fang and Xinchao Wang, 2023. In NeurIPS.
[2] LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery. Chen et al., 2023. In arXiv preprint.
[3] LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning. Zhang et al., 2024. In ACL.
[4] AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. Zhang et al., 2023. In ICLR.
Pdf: /pdf/6811dd3506ca195b6fa19ccb3e384cea38b2e034.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient | Accept (poster) | Summary: This paper introduces DDiffPG for online reinforcement learning with multi-modal behaviour discovery. DDiffPG consists of two parts: 1) a new policy improvement method to stabilise the diffusion policy by cloning a target action; 2) a mode discovery mechanism to train mode-specific and intrinsic Q functions. In their experiments, the authors have shown that DDiffPG can achieve comparable performance with the baselines while producing multi-modal behaviours, which provides a series of benefits like avoiding mode collapse.
Strengths: The paper has introduced an interesting idea. To the best of my knowledge, this is the first work that allows diffusion policy to learn multi-modal behaviours during online RL. According to the experiments, the proposed method has produced a reasonable performance with nice multi-modal behaviours. Besides, the paper has provided nice visualisations and discussions to help understand the proposed approach.
Weaknesses: There are several main weaknesses of the paper.
- The paper is hard to follow and the presentation has a certain room for improvement.
- In section 3, the formal theoretical derivation of the newly introduced policy improvement objective is missing. Although it shows that this method worked empirically, it remains unclear how the resulting policy theoretically maximises the expected return in general.
- I feel the paper is a bit over-claiming for certain aspects. In section 5.3, the authors claimed that DDiffPG can *overcome* local minimum issues and encourage exploration. However, the exploration comes from the use of RND when learning $Q_\mathrm{explore}$, rather than the architecture itself. In addition, it is a very strong claim that DDiffPG **overcomes** the local minimum issues. The experiments are conducted on only 8 state-based tasks, from my point of view, which is insufficient to support such a general claim. I understand that by capturing multi-modal distributions, DDiffPG allows better generalisation, but I would suggest the authors moderate the claims a bit.
Minor issues:
- In line 157, is this a typo? In $r^\mathrm{intr}(s, a, s’) = \max(\mathrm{novelty}(s’) - \alpha \mathrm{novelty}(s’), 0)$, should this be $r^\mathrm{intr}(s, a, s’) = \max(\mathrm{novelty}(s’) - \alpha \mathrm{novelty}(s), 0)$?
Technical Quality: 3
Clarity: 2
Questions for Authors: - In section 4.2, line 183, I’m not fully convinced that the RL objective can skew the policy towards a single mode. Suppose we have a Q function that nicely captures two modes. During policy improvement, let’s say we sampled a batch of trajectories that equally captures both modes, and we perform policy improvement by $\max \mathbb{E}\left[Q(s, a)\right]$. Given that our Q function already nicely captures both modes, why does such an objective cause mode collapse? Could you provide more explanations? Considering the success of DDiffPG on capturing the multi-modality in the policy space, is this really because of the way you perform policy improvement in Eqn. 1, or is it because the DDiffPG used multiple Q functions for separate modes, which just better fits the multi-modal distribution?
- Regarding the use of mode-specific Q functions, it is a bit unclear to me how to stabilise the training. One issue is that during online exploration, the dataset is continuously being updated and modes are being updated. In this case, how do we fix the correspondence between the Q functions being learned and the mode? Besides, according to line 167, DDiffPG requires no predefined number of clusters, and the number of modes could be dynamic. However, we have to initialise a fixed number of Q functions. This seems a bit contradictory to me. How to define the number of Q functions during training?
- It seems to me the exploration is only guaranteed by the training of $Q_\mathrm{explore}$ using RND. However, how do we balance the exploration and exploitation during RL?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - In line 175, the definition of DTW requires task privileged information, e.g., object positions. This is a major issue and unrealistic. In more realistic scenarios, people normally have no access to such information, and as a result, this clustering approach is not applicable.
- The paper only conducted experiments on 8 simple state-based environments. It is unclear how the approach could generalise to more realistic environments and tasks with visual observations. Although at the current stage, I understand that the current results reasonably demonstrate the capability of the proposed method, more realistic tasks will better support many claims made by the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Due to the rebuttal limit and the number of questions, our responses are concise. We are happy to provide more detailed answers during the discussion period.
> The paper is hard to follow ... for improvement.
Based on the reviewer's feedback, we will:
1. Moderate our claim about overcoming local minima and clarify DDiffPG's continuous exploration ability in Sec. 5.3.
2. Fix the typo in the intrinsic reward definition.
3. State clearly scenarios where an imperfect Q-function can lead to mode collapse.
4. Highlight the reclustering process in Sec. 4.2 and the exploration-exploitation balance in Sec. 4.3.
5. Clarify the information used in DTW computation, such as robot positions.
> In section 3, ... in general.
In our method and concurrent works (Yang et al. 2023, Psenka et al. 2024), the policy update for a diffusion process cannot follow the usual deterministic policy gradient derivation. The stochastic nature of diffusion models poses significant challenges in deriving formal theoretical guarantees of policy improvement. However, our approach updates target actions under the guidance of the Q-function. As the Q-function improves, these target actions are refined accordingly, which in turn trains the diffusion policy on these improved targets. While a formal proof remains challenging, this intuitive process reflects an alignment between the policy and the evolving Q-function, suggesting effective policy improvement in practice.
> I feel the paper ... moderate the claims a bit.
We apologize for any ambiguity in our claim. **We mean that the continuous exploration capability of DDiffPG helps increase state coverage and overcome local minima**. This capability comes from the exploratory mode, not from RND alone, as the **same intrinsic reward** was used **for all baselines**. DDiffPG maintains an exploratory mode throughout training, allowing it to discover versatile behaviors continuously. Unlike baselines that quickly converge to the first explored solution, DDiffPG keeps exploring, which is crucial for learning multimodal behaviors online. Fig. 6 and 12 support our claim, showing that DDiffPG achieves the highest state coverage and, rather than settling at the lower-reward goal, reaches the top-left goal with the higher reward, escaping the local minimum.
> In line 157, is this a typo? ...
Yes, the intrinsic reward is defined as the novelty difference between $s'$ and $s$, which encourages the agent to visit the boundary of the explored region.
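As a toy illustration of this novelty-difference reward (the `novelty` estimator below is an assumed stand-in for the actual RND network, not the paper's implementation):

```python
def intrinsic_reward(novelty, s, s_next):
    """Novelty difference between s' and s: positive when the agent
    moves toward less-explored states, i.e., the exploration boundary."""
    return novelty(s_next) - novelty(s)

# Toy novelty estimate: states further from the origin count as less explored.
toy_novelty = lambda s: abs(s)

print(intrinsic_reward(toy_novelty, 1.0, 3.0))  # positive: outward step is rewarded
```

An inward step (from 3.0 back to 1.0) would receive a negative reward under the same toy estimator.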
> In section 4.2, ... distribution?
We agree that a perfect Q-function nicely capturing both modes could prevent mode collapse. However, in practice, achieving a perfect Q-function is unlikely since both the policy and Q-function are learned from scratch. This often leads to:
1. the policy being guided towards the first explored goal.
2. the policy converging to the solution with higher Q-values due to the discount factor, even if two modes are initially captured.
DDiffPG's design mitigates mode collapse through mode-specific Q-functions, each focused on a single mode, thereby isolating their Q-values. By sampling actions from the replay buffer rather than relying solely on policy-inferred actions, we ensure balanced improvement across all modes. However, **policy improvement alone is not sufficient for multimodal behavior learning**, as confirmed by the results of DIPO (Tab. 3 and 4).
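To make the batching idea concrete, here is a minimal, hypothetical sketch of a balanced multimodal batch that draws equally from every mode's buffer; the paper's actual sampler is not reproduced, and the buffer contents are stand-ins:

```python
import random

def multimodal_batch(mode_buffers, batch_size):
    """Draw an equal share of transitions from every mode's buffer so that
    minority modes stay represented in each policy update."""
    per_mode = batch_size // len(mode_buffers)
    batch = []
    for buf in mode_buffers.values():
        batch.extend(random.sample(buf, min(per_mode, len(buf))))
    return batch

# Two toy mode buffers; a batch of 8 takes 4 transitions from each mode.
buffers = {0: list(range(10)), 1: list(range(10, 20))}
batch = multimodal_batch(buffers, 8)
```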
> Regarding the use of mode-specific ... during training?
Yes, we re-cluster trajectories every $F$ iterations, updating modes continuously. After re-clustering, new clusters inherit Q-functions and $a^\text{target}$ from the most overlapping cluster in the previous iteration. If the number of clusters increases, we initialize new Q-functions. This ensures a one-to-one correspondence between modes and Q-functions without needing to predefine a fixed number of Q-functions. This process is explained in Sec. 4.2 (lines 210-214), with further details in Alg. 2 and App. C. The implementation can also be found in the code we provided to the Meta-reviewer, under `ddiffpg/utils/Q_scheduler.py`.
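The inheritance rule described above can be sketched as follows, with trajectories represented as sets of hypothetical trajectory ids; this is an illustrative sketch, not the actual `Q_scheduler.py` logic:

```python
def inherit_q_functions(old_clusters, new_clusters, old_q, new_q_factory):
    """Give each new cluster the Q-function of the most overlapping old
    cluster; clusters with no overlap get a freshly initialized Q-function."""
    new_q = {}
    for cid, trajs in new_clusters.items():
        best_id, best_overlap = None, 0
        for old_id, old_trajs in old_clusters.items():
            overlap = len(trajs & old_trajs)
            if overlap > best_overlap:
                best_id, best_overlap = old_id, overlap
        new_q[cid] = old_q[best_id] if best_id is not None else new_q_factory()
    return new_q

old = {0: {1, 2, 3}, 1: {4, 5}}
new = {0: {1, 2}, 1: {4, 5, 6}, 2: {7}}
qs = inherit_q_functions(old, new, {0: "Q0", 1: "Q1"}, lambda: "new_Q")
# Clusters 0 and 1 inherit Q0 and Q1; the novel cluster 2 gets a fresh Q.
```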
> It seems ... during RL?
We balance exploration-exploitation by adjusting the proportion of the exploratory mode during data collection. As explained in Sec. 4.3 (lines 230-233), we condition the diffusion policy on mode-specific embeddings, including an exploratory mode trained by $Q_\text{explore}$. We set the proportion for exploration to $p = \max(0.3, 1/\text{modes})$, ensuring at least 30% of actions are exploratory. Implementation details can be found in `ddiffpg/algo/ddiffpg.py` (lines 104-129).
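The stated schedule can be expressed in one line; this is a sketch of the rule as described, not the cited implementation:

```python
def exploration_proportion(num_modes):
    """At least 30% of collected actions come from the exploratory mode;
    with few discovered modes, exploration gets an even larger share."""
    return max(0.3, 1.0 / num_modes)

# With 2 modes, half of the actions are exploratory; with 10, the 30% floor applies.
```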
> In line 175, ... not applicable.
We apologise for any confusion. **We only use the robot position for clustering, i.e., the Ant's position and the robot end-effector's position, which are accessible in real-world scenarios**. We do not use any information about objects or goals in the environment. In the PDF, we provide an alternative clustering approach by training a VQ-VAE to learn trajectory representations and using the codevectors for clustering, which learns meaningful representations suitable for our approach.
> The paper only ... by the paper.
We want to emphasize that the 8 robot locomotion and manipulation tasks are nontrivial. They are all **high-dimensional and continuous control** tasks with **sparse rewards**. In the AntMaze tasks, the agent must solve the Ant-locomotion task through joint control, a notoriously difficult continuous control problem, while also learning to steer locomotion gaits to find goals in complex mazes. All tasks are **goal-agnostic**. The agent must explore to find the goal, making it challenging to learn multimodal behaviors due to the vast exploration space. As the first work exploring online multimodal diffusion policies, we see our current tasks as a strong starting point and plan to extend our approach to long-horizon and vision-based tasks with greater multimodality in future work.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed responses by the authors. Most of my concerns have been addressed. I will increase my rating to 6. | Summary: This paper addresses the challenges associated with employing diffusion policy in online reinforcement learning (RL), particularly the intractability of policy likelihood approximation and the bias towards a single mode. The author introduces the Deep Diffusion Policy Gradient (DDiffPG) method, which decouples exploration from exploitation. For exploration, novelty-based intrinsic motivation and hierarchical clustering are utilized to identify modes, while for exploitation, the author describes the mode-specific Q-function and a multimodal data batch. Empirical evaluations demonstrate that DDiffPG effectively masters multimodal behaviors.
Strengths: + The application of diffusion policy for multiple modes in an online setting is promising and addresses a previously unexplored area in the literature.
+ The introduction of a diffusion-based policy gradient method is novel and represents a significant contribution to the field.
+ The work is well-motivated, and the visualization of multimodal behaviors using antmaze examples effectively enhances understanding and illustrates the practical utility of the approach.
Weaknesses: + Several claims require additional support. For instance, the author asserts that standard exploration-exploitation strategies may easily converge towards a single mode (Lines 25-27) without providing theoretical or experimental evidence. Similar issues are present in Lines 35-36 and Lines 52-53. These statements are crucial for constructing the paper's motivation and thus require more substantial support to enhance their reliability.
Technical Quality: 3
Clarity: 3
Questions for Authors: + In Lines 263-267, the author explains that DDiffPG could learn suboptimal paths. Is this statement intended to justify the suboptimal performance compared to TD3? The author suggests that this suboptimal issue can be mitigated by the mode embeddings. It would be more effective to present the best performance and use the suboptimal trajectories as ablations, specifically when blocking the optimal path, to highlight the significance of multiple trajectories.
+ Why does directly using the action gradient to optimize the policy lead to vanishing gradients and instability? Is this due to the large denoising steps? Including corresponding ablation studies would provide a better illustration.
+ Unlike the offline setting where trajectories are stable, the replay buffer with the updated Q function results in changed pairs of $(s, a^{target})$. Does training the diffusion model with a supervised framework on continually changing pairs lead to instability in learning? (Lines 126-129)
+ Why does DIPO, which uses the original $a$ from the buffer, not know the true outcome? I understand that the replay buffer contains the past trajectories $(s,a,r,s')$ (Lines 134-137).
+ Are the mode-specific Q-functions also applicable to other standard policies?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: --
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and insightful comments. We want to address the reviewer's concerns and questions as follows.
> Several claims require additional support. ... enhance their reliability.
We thank the reviewer for the constructive comment. First, given the RL objective of maximizing the expected return, policies in goal-oriented tasks will converge to a solution once it is discovered. Tab. 3 and 4 experimentally show that traditional RL baselines fail to learn multimodal behaviors even when multiple solutions exist.
Second, DDiffPG maintains an exploratory mode throughout training, allowing it to continuously discover versatile behaviors. Fig. 6 demonstrates that DDiffPG achieves the highest state coverage, supporting our claim. Additionally, Fig. 12 illustrates a local minima scenario where the ant faces two goals. Baseline models often get trapped pursuing the easier, lower-reward goal, whereas DDiffPG continues to explore and eventually learns to reach the higher-reward goal, helping to escape the local minimum.
Third, DDiffPG achieves explicit mode discovery via hierarchical clustering with the DTW metric, preserves the multimodal action distribution by constructing multimodal training batches for policy updates, and improves the modes collectively with mode-specific Q-functions.
> In Lines 263-267, ... significance of multiple trajectories.
We agree with the reviewer that presenting the best performance is beneficial. DDiffPG achieves the same success rate as TD3 in Fig. 4 but usually requires more steps to reach the goal in Tab. 3 and 4. In response, we provide below the episode lengths when executing only the mode with the shortest path. We will include these results in Tab. 3 and 4 in the revised version. Note that further training of DDiffPG would lead to further optimizing all paths. We report here the numbers obtained by the policies reported in the paper.
||DDiffPG|TD3|
|-|-|-|
|Maze-v1|65.3|59.2|
|Maze-v2|40.9|35.6|
|Maze-v3|79.8|83.4|
|Maze-v4|103.5|88.1|
|Reach|19.1|18.0|
|Peg-insertion|5.0|4.7|
|Drawer-close|20.5|22.0|
|Cabinet-open|16.8|14.3|
> Why does directly ... better illustration.
We thank the reviewer for the insightful suggestions. Directly backpropagating the action gradient through the diffusion model for policy optimization is challenging and can cause instability due to the Markov chain in the diffusion process and its stochastic nature [1, 2, 3]. This issue is evident in the high variance of the baseline Diffusion-QL in Fig. 4. Additionally, we have included plots in the attached PDF to measure the gradient in Diffusion-QL, confirming the vanishing gradient issue.
[1] Psenka, Michael, et al. "Learning a diffusion model policy from rewards via q-score matching." ICML, 2024.
[2] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." CVPR, 2024.
[3] Pascanu, Razvan, et al. "On the difficulty of training recurrent neural networks." ICML, 2013.
> Unlike the offline setting ... (Lines 126-129)
In the proposed diffusion policy gradient, we can tune the step size $\eta$ of the action gradient for stability. In Section 5.4, we provided a hyper-parameter analysis on the step size and found that a step size of 0.03 can generally work well. However, an aggressive value, e.g., 0.05, could speed up learning at the early stage but increases the variance. Consequently, we used a step size of 0.03 for all tasks without any further tuning.
> Why does DIPO, ... (Lines 134-137).
According to Algorithm 2 in [4], DIPO stores transition $(s_t, a_t, s_{t+1}, r(s_{t+1}|s_t, a_t))$ into the replay buffer, performs gradient ascent on $a_t$, and replaces $a_t$ in the original buffer. Therefore the transition $(s_t, a_t, s_{t+1}, r(s_{t+1}|s_t, a_t))$ no longer aligns with the current MDP dynamics and reward function, due to the replacement of $a_t$. Given that DIPO is an off-policy algorithm, the reuse of these replaced transitions for training the Q-function could be problematic, as the agent is training values of actions that have not actually been played out in the environment, and their true outcomes (reward and next state) are unknown.
[4] Yang, Long, et al. "Policy representation via diffusion probability model for reinforcement learning." arXiv, 2023.
> Are the mode-specific ... standard policies?
We agree with the reviewer that the mode-specific Q-functions may be applicable to other candidate model parameterizations. The most straightforward example is to learn a separate unimodal actor for each Q-function. However, we advocate for the benefits of learning a unified model:
1. Learning a single model allows information (e.g., representations) to be shared across modes, which is particularly beneficial for future research, e.g., on image-based observation tasks [5]. Our multimodal policy learning also draws an analogy to multitask RL, in which the objective is to solve multiple tasks with a single policy rather than separate policies per task, benefiting from knowledge sharing [6].
2. Learning separate unimodal policies would significantly increase the computational time and agent interactions with the environment. With separate policies, we need to iteratively update them; however, only one backpropagation is needed for a single multimodal policy.
3. When a new mode is discovered, our diffusion model continues learning, adding this mode to its landscape without forgetting previous knowledge. However, if we have separate policies, we have to initialize a new policy and learn from scratch; otherwise, additional techniques have to be introduced to transfer knowledge from previous training.
[5] Kalashnikov, Dmitry et al. “QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation.” CoRL, 2018.
[6] Hendawy, Ahmed et al. “Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts.” ICLR, 2024.
---
Rebuttal Comment 1.1:
Comment: I have reviewed the authors' rebuttal and their responses to the other reviewers, which have addressed my concerns. I believe this paper has a good contribution to online diffusion policy learning, particularly in multimodal policy. Given the engineering focus, I agree that extensive proofs are not necessary. I will adjust my score accordingly. | Summary: This paper aims to solve online RL problems with diffusion policy. It includes 1. a diffusion policy optimization method for diffusion online training. 2. A combination of intrinsic rewards motivated skill discovery method and model-seeking Q-learning to facilitate exploration and prevent mode-collapse behavior. 3. Several self-designed environments where there might be multiple optimal solutions and thus require expressive exploration policy.
Strengths: 1. The paper shows diffusion policy has a big potential in online RL because it enables multimodal exploration.
2. The self-designed environments are a good contribution to the research field by showcasing the necessity of diffusion exploration.
3. Performance clearly surpasses several baselines.
follow up:
The experiments basically support comments in the paper. The paper sets out to handle the single-mode exploration problem in online RL, and the self-designed environments, unlike most previous classics, allow diverse optimal behaviors and can benefit from multimodal exploration. The experiments show that the proposed method outperforms several classic baselines including some diffusion-based methods.
Weaknesses: 1. The proposed diffusion training objective seems handcrafted and requires a lot of tuning. This may limit the algorithm's further application.
2. Besides the diffusion optimization method, the other proposed techniques are more like a good combination of previous work. This indicates limited theoretical novelty.
3. Code is not provided. For this style of paper, I think code quality is essential, and a mere promise to release the code is not convincing.
follow up:
1. The ablation studies are not strong enough to prove the improved performance number actually comes from multimodal exploration. I cannot be certain which part of the method works from the experiments. More visualization/empirical results/analyses should be given.
2. The formatting of the table/figure can be greatly improved. For instance, the title of Figure 4 is wrong/incomplete. Table 3/4 is referenced as the main results in the paper but only put in the appendix.
3. The diffusion optimization results also lack very strong novelty. The loss function is basically a supervised learning loss adapted for online RL, without strong convergence or policy improvement guarantees. Still, diffusion + online RL theory is a known unsettled and hard problem, so this kind of exploration is fine and meaningful.
Technical Quality: 2
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback and insightful comments. We address the reviewer's concerns and questions as follows.
> The proposed diffusion training objective ... further application.
We proposed diffusion policy gradient, a method that combines RL and behavioral cloning (BC) for stable online diffusion policy updates. Inspired by methods like the Deterministic Policy Gradient (DPG) [1], the policy is optimized to follow the action-gradient $\nabla_{a} Q(s, a)$.
Directly backpropagating the action gradient through the diffusion model for policy optimization is known to be difficult and may cause the gradient vanishing problem and instability [2, 3]. This issue was evident in the high variance of the baseline Diffusion-QL in Fig. 4. We have also included additional plots to visualize this issue in the attached PDF. To mitigate this issue, we imitate the $a^{target}$ obtained via action gradient ascent. This way, we obtain an action that follows the updated Q and serves as the target for our BC objective. The solution we provided is general and requires no tuning. This can also be seen in the codebase we provided to the Meta-reviewer.
We validated our method on 8 challenging robotic tasks with high-dimensional and continuous action spaces, requiring no handcrafting of the policy update objectives. The proposed objective only introduces an additional hyper-parameter, which is the step size $\eta$ of the action gradient. In Sec. 5.4, we provided a hyper-parameter analysis on the step size and found that a step size of 0.03 can generally work well. As a result, we used this value for all tasks without any further tuning, proving the generality of the proposed method.
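As an illustration of the two-step update (action-gradient ascent under Q, then behavioral cloning toward the result), here is a toy one-dimensional sketch with an assumed quadratic Q; it is not the DDiffPG implementation:

```python
def improve_action(a, q_grad, eta=0.03):
    """One action-gradient step: a_target = a + eta * dQ/da(s, a).
    a_target then serves as the behavioral-cloning target for the policy."""
    return a + eta * q_grad(a)

# Toy Q with its maximum at a* = 1.0: Q(a) = -(a - 1)^2, so dQ/da = -2(a - 1).
q_grad = lambda a: -2.0 * (a - 1.0)

a = 0.0
for _ in range(100):
    a = improve_action(a, q_grad)
# After repeated refinement, the target action approaches the Q-maximizer a* = 1.
```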
> Besides the diffusion ... limited theoretical novelty.
While we acknowledge that some concepts used in our paper have been explored in prior works, **the combination of these techniques in an online setting to learn multimodal behaviors using diffusion policy has not been studied before**. For instance, DIPO [4] and QSM [2] focused only on the diffusion policy update objective. They do not study how to take advantage of the diffusion model to represent multimodal action distributions, nor do they consider the additional exploration challenges for discovering and learning multiple behavioral modes online. Our experiments show that naively learning a diffusion policy does not yield multimodal policies and might even be an "overkill" if multimodality is not a concern. Additionally, we highlighted the issue of the greedy RL objective in learning multimodal behaviors, which was not examined in prior works.
> Code is not provided. ... the code is not convincing.
We agree with the reviewer on the importance of open-source code for the community. We have sent the code link of an anonymous repo to the AC, and kindly suggest reaching out to them for access to review it.
> The ablation studies ... should be given.
First, we would like to emphasize that our main contribution is learning diffusion policies that capture multimodal behavioral modes. Tab. 3 and 4 show that our approach uniquely masters diverse behaviors. Despite exploring a larger space, Fig. 4 demonstrates that DDiffPG achieves comparable sample efficiency to baselines across all 8 tasks.
Second, we included DIPO as a baseline, which also serves as an ablation of our framework. As discussed in Sec. 3, Lines 130-137, we modified DIPO for consistency in MDP dynamics and reward function. This modified DIPO lacks clustering, mode-specific Q-functions, and the multimodal batch. As a result, DIPO fails to explore and learn multimodal behaviors, demonstrating the necessity of our design choices.
Third, we appreciate the reviewer's constructive suggestions and provide additional ablations on mode-specific Qs and the multimodal batch. The attached PDF shows that a single Q-function cannot address the greedy RL objective, guiding the policy toward the first explored solution. Moreover, in the ablation of multimodal batch, we find that while the diffusion policy learns multiple solutions, the modes are imbalanced due to the lack of enforced diversity in the batch, causing minority modes to appear less frequently. Thus, both design choices are crucial for learning multimodal behaviors.
> The formatting ... in the appendix.
We thank the reviewer for pointing out these inconsistencies. We will update the caption of Figure 4 to "Performance of DDiffPG and baseline methods in the four AntMaze and robotic manipulation environments." Due to space limitations, we have placed Tables 3 and 4 in the appendix. Since the camera-ready version allows for one additional page, we will include these tables in the main body upon acceptance.
> The diffusion ... fine and meaningful.
In our methodology and concurrent works [2, 4], policy update for a diffusion process cannot follow the usual deterministic policy gradient derivation. The stochastic nature of diffusion models poses significant challenges in deriving formal theoretical guarantees of policy improvement. However, intuitively, our approach updates target actions under the guidance of the Q-function. As the Q-function improves, these target actions are refined accordingly, which in turn trains the diffusion policy on these improved targets. While a formal proof remains challenging, this intuitive process reflects an alignment between the policy and the evolving Q-function, suggesting effective policy improvement in practice.
[1] Silver, David, et al. "Deterministic policy gradient algorithms." ICML, 2014.
[2] Psenka, Michael, et al. "Learning a diffusion model policy from rewards via q-score matching." ICML, 2024.
[3] Wallace, Bram, et al. "Diffusion model alignment using direct preference optimization." CVPR, 2024.
[4] Yang, Long, et al. "Policy representation via diffusion probability model for reinforcement learning." arXiv, 2023.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I thank the authors for giving me detailed responses, which have resolved some of my concerns.
Overall, I think the paper is above the acceptance threshold, though marginally. I thus maintain my score. | null | null | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their feedback and constructive suggestions on our manuscript. We are glad that the reviewers find our paper to be:
* introducing a novel and interesting idea (Reviewer HLNL, Reviewer TVD7)
* having informative and good-quality visualizations and experiments (Reviewer U1rA, Reviewer HLNL, Reviewer TVD7)
* showcasing superior performance against baselines (Reviewer U1rA, Reviewer TVD7)
* having practical impacts on the community (Reviewer U1rA, Reviewer HLNL)
In response to the reviewers' concerns, we add the following responses:
### Code Release
As Reviewer U1rA suggested, we sent an anonymous repo to the meta-reviewer to be distributed internally.
### Additional experiments in attached PDF
1. As Reviewer U1rA suggested, we conducted additional ablations on mode-specific Q-functions and multimodal training batch. Based on the state visitation that demonstrates the behavior of the policy during training and the behaviors in evaluation, we found that a single Q-function cannot address the issue of greedy RL objective. For the ablation of the multimodal batch, while the diffusion policy learns multiple solutions, the modes are imbalanced due to the lack of enforced diversity in the batch, causing minority modes to appear less frequently. Thus, both design choices are crucial for learning multimodal behaviors.
2. As Reviewer HLNL suggested, we measure the actor gradient norm between our approach and Diffusion-QL to verify the gradient vanishing problem. As shown in the figure, the actor gradient of Diffusion-QL is very small and particularly, the high variance in the results reported in Fig. 4 indicates that the gradient of certain seeds remains zero throughout the training.
3. As Reviewer TVD7 suggested, we implemented an alternative clustering approach on the high-dimensional state space by training a vector-quantized variational autoencoder (VQ-VAE) to learn trajectory representations. We then performed clustering using the codevectors. The visualized cluster performance and projected embedding space verify that the VQ-VAE learns meaningful representations and can be used in our approach as an alternative clustering method. We would like to point out that any unsupervised clustering method can be coupled with our approach.
The figures and results are included in the attached PDF. Next, we will address each reviewer’s concerns individually.
Pdf: /pdf/38c572d33e38e086d3298a24c9115d5fb133d58a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models | Accept (poster) | Summary: The paper "AdaNeg: Adaptive Negative Proxy Guided OOD Detection with Vision-Language Models" presents a novel approach to out-of-distribution (OOD) detection using pre-trained vision-language models (VLMs). The primary innovation is the introduction of adaptive negative proxies, which are dynamically generated during testing by exploring actual OOD images. This method addresses the semantic misalignment issues of previous approaches that use static negative labels. AdaNeg utilizes a feature memory bank to cache discriminative features from test images, creating task-adaptive and sample-adaptive proxies that better align with the specific OOD datasets. The approach combines static negative labels with adaptive proxies to enhance the performance of OOD detection, achieving significant improvements in benchmarks like ImageNet. The method is training-free, annotation-free, and maintains fast testing speeds.
Strengths: 1. Innovative Approach: The introduction of adaptive negative proxies to address semantic misalignment is a significant advancement. This dynamic generation of proxies during testing offers a novel solution to improve OOD detection.
2. Effective Use of Vision-Language Models: Leveraging VLMs to integrate textual and visual knowledge enhances the robustness and accuracy of OOD detection.
3. Performance Improvement: The method shows substantial improvements in standard benchmarks, particularly a 2.45% increase in AUROC and a 6.48% reduction in FPR95 on the ImageNet dataset.
4. Training-Free and Annotation-Free: AdaNeg does not require additional training or manual annotations, making it highly efficient and practical for real-world applications.
5. Scalability and Efficiency: The method maintains fast testing speeds and can dynamically adapt to new OOD datasets without significant computational overhead.
6. Comprehensive Evaluation: Extensive experiments and analyses demonstrate the effectiveness and robustness of the proposed approach across various benchmarks.
Weaknesses: 1. Potential Overhead in Memory Management: The implementation of a memory bank for caching features may introduce significant overhead in memory management, especially when dealing with large-scale datasets or high-dimensional feature spaces.
2. Generalization to Other Domains: Although the approach demonstrates promising results on existing public datasets, its effectiveness in other domains or with different types of data remains uncertain and requires further investigation.
3. Testing Phase Dependency: It is unclear whether the approach can maintain the same level of reliable performance when only a small number of images are tested in practical applications. This dependency on the number of test images warrants additional examination.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weakness.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: 1. Generalization to Other Domains
2. Dependency on test data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer *eV4A*,
We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below.
### **Q1: Potential Overhead in Memory Management: The implementation of a memory bank for caching features may introduce significant overhead in memory management, especially when dealing with large-scale datasets or high-dimensional feature spaces.**
**A1:** Thanks for the question. Indeed, the introduction of a memory bank introduces additional memory overhead, which we have briefly discussed in Section 5 of the main paper. However, we clarify that our memory overhead does not continuously increase with the scale of the dataset. This is because we drop image features with high prediction entropy when the memory bank is full, as detailed in Line 184 and Appendix A.2 of our submission.
For a typical high-dimensional feature of 512 dimensions and a maximum memory length $L$ of 10 for each class, our memory bank occupies a storage space of 214.75MB when using the ImageNet dataset as ID. This storage requirement is negligible compared to the memory consumption of the CLIP model during forward passes.
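A minimal sketch of the entropy-based eviction described above, with assumed helper names and a toy two-class probability vector; the actual AdaNeg implementation is not reproduced:

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of a predictive distribution (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def cache_feature(slot, feature, probs, max_len=10):
    """Add a test feature to a class-wise memory slot; once the slot is full,
    keep only the max_len lowest-entropy (most confident) entries."""
    slot.append((prediction_entropy(probs), feature))
    slot.sort(key=lambda entry: entry[0])
    del slot[max_len:]
    return slot

# Fill a slot with confident features; an uncertain one is evicted once full.
slot = []
for i in range(10):
    cache_feature(slot, f"feat_{i}", [0.99, 0.01])
cache_feature(slot, "uncertain", [0.5, 0.5])
```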
### **Q2: Generalization to Other Domains: Although the approach demonstrates promising results on existing public datasets, its effectiveness in other domains or with different types of data remains uncertain and requires further investigation**
**A2:** Thanks for the nice suggestion. We further validate our method on the BIMCV-COVID19+ dataset [a], which includes medical images, following the OpenOOD setup [68, 71]. Specifically, we selected BIMCV as the ID dataset, which includes chest X-ray images CXR (CR, DX) of COVID-19 patients and healthy individuals. For the OOD datasets, we follow the OpenOOD setup and use CT-SCAN and X-Ray-Bone datasets. The CT-SCAN dataset includes computed tomography (CT) images of COVID-19 patients and healthy individuals, while the X-Ray-Bone dataset contains X-ray images of hands.
As illustrated in the table below, our AdaNeg method consistently outperforms NegLabel on this medical image dataset. We will add these analyses in the revision.
| Methods | CT-SCAN AUROC$\uparrow$ | CT-SCAN FPR95$\downarrow$ | X-Ray-Bone AUROC$\uparrow$ | X-Ray-Bone FPR95$\downarrow$ | Average AUROC$\uparrow$ | Average FPR95$\downarrow$ |
|------------------|:------------------------:|:--------------------------:|:--------------------------:|:----------------------------:|:------------------------:|:--------------------------:|
| NegLabel | 63.53 | 100 | 99.68 | 0.56 | 81.61 | 50.28 |
| **AdaNeg (Ours)**| **93.48** | 100 | **99.99** | **0.11** | **96.74** | **50.06** |
[a] BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients
### **Q3: Testing Phase Dependency: It is unclear whether the approach can maintain the same level of reliable performance when only a small number of images are tested in practical applications. This dependency on the number of test images warrants additional examination.**
**A3:** Thank you for your suggestion. We examined the dependency of our approach on the number of test images by evaluating its performance across different scales of test samples. As the number of test samples increases (from 900 to 90K), the cached feature data also increases, leading to an improvement in our method's results, as shown in the table below. Even with a small number of test samples (e.g., 90 and 900), our method significantly reduces FPR95 compared to NegLabel, demonstrating its robustness across different numbers of test images.
Note that with only 90 test images, the task of distinguishing between ID and OOD samples degenerates into a simpler task since the number of test images is even smaller than the number of classes (e.g., 1000 for ImageNet). Consequently, both NegLabel and our method achieve lower FPR95 in such an easier scenario.
We will add these findings to the revised paper.
| Num. of Test Images | 90 | 900 | 9K | 45K | 90K |
|---------------------|------|------|------|------|------|
| NegLabel | 14.00| 20.44| 20.71| 20.51| 20.53|
| **AdaNeg** | **6.00** | **10.12** | **9.78** | **9.66** | **9.50** |
**Table Caption:** FPR95 ($\downarrow$) with different numbers of test images, where test samples are randomly sampled from ImageNet (ID) and SUN (OOD) datasets while maintaining their relative proportions.
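FPR95, reported throughout these tables, is the false positive rate on OOD samples at the threshold where 95% of ID samples are correctly retained (lower is better). A minimal sketch of how such a metric is typically computed; the function name `fpr_at_95_tpr` and the score convention (higher = more ID-like) are illustrative assumptions, not taken from the paper's code:

```python
def fpr_at_95_tpr(id_scores, ood_scores):
    """Fraction of OOD samples scored as ID when the detection threshold
    is set so that 95% of ID samples are (correctly) kept as ID.
    Assumes higher scores indicate 'more in-distribution'."""
    s = sorted(id_scores)
    # Approximate 5th percentile of ID scores: 95% of ID samples lie above it.
    threshold = s[int(0.05 * len(s))]
    # An OOD sample scoring at or above the threshold is a false positive.
    return sum(o >= threshold for o in ood_scores) / len(ood_scores)
```

With ID scores spread uniformly over [0, 1), an OOD set of `[0.0, 0.01, 0.1, 0.2]` yields an FPR95 of 0.5, since two of the four OOD scores exceed the 5th-percentile ID threshold of 0.05.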
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thanks for your rebuttal, which has solved my concerns.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank this reviewer for the positive feedback!
Authors of paper 9248 | Summary: In this paper, the authors propose AdaNeg, a test-time adaptation method for CLIP-based post-hoc OOD detection. AdaNeg is an extension of NegLabel and introduces a class-wise memory bank for each ID and negative label. The memory bank is gradually filled with ID and OOD features during model deployment. The authors design a margin-based approach to select positive and negative samples with high confidence, and they propose a cache elimination mechanism to update the memory bank. Besides, AdaNeg uses cross-attention between the input sample and the memory bank to reweight the cached features. The experimental results show the proposed method outperforms the baseline methods on various benchmarks.
Strengths: <1> AdaNeg uses dynamic OOD proxies instead of the static design of NegLabel, achieving SOTA performance in CLIP-based zero-shot OOD detection.
<2> The multi-modal score is an interesting design and explanation that demonstrates the improvement brought by using both text and image encoding capabilities in a multi-modal model.
<3> The paper is well organized and easy to follow.
Weaknesses: **Major concerns**
<1> AdaNeg is a test-time adaptation approach that caches features w.r.t. ID labels and negative labels by maintaining a class-wise memory bank. For OOD detection, the biggest problem with test-time adaptation methods is that the arrival time of OOD samples is uncertain. Compared with non-TTA methods, AdaNeg has greater uncertainty in its performance during the deployment phase and may even risk causing model collapse.
For example, when the model is deployed in a closed-world environment, almost all input samples are ID samples (I believe this is a very common scenario). In this case, the memory banks of negative labels will gradually be filled with ID samples (in long-term deployment, there will always be misclassified ID samples that enter the negative memory banks). Since the number of negative labels is much greater than that of ID labels, more and more ID samples will be misclassified as OOD over time. I suggest the authors conduct an experiment using the 1.28M training-set images of ImageNet-1k as input (this still meets the zero-shot setting of CLIP) and observe how the proportion of samples misclassified as OOD changes with the number of input samples. If the 1.28M images are repeatedly input over multiple rounds, will the misclassification rate increase further? The opposite case is when OOD samples far outnumber ID samples: will this cause a greater false-positive risk? I hope the authors can test their method with different ID-to-OOD sample mixture ratios, such as 1:100, 1:10, 1:1, 10:1, 100:1.
In summary, I suggest the authors further study the setting of TTA in OOD detection to strengthen the motivation of the work, since the input samples may come from two different distributions, ID and OOD. How to ensure the stability of a TTA OOD detection algorithm when the input stream is a non-stationary process is a problem worth studying.
<2> The negative labels provide the initial memory bank slots for AdaNeg, but it seems to me that the negative labels are not necessary. This suggests that we need to rethink AdaNeg's motivation for negative labels. Why do samples that are judged as negative need to be placed in the memory bank w.r.t. the negative label? What if the authors directly use the MCM score to judge negative samples and then let them organize themselves into OOD proxies? The authors need to provide a more detailed analysis (preferably theoretical analysis) to prove that the **negative label-based** memory bank design is necessary.
Further, negative labels simply select words that are semantically far from ID labels. For some OOD samples, they may be far away from both ID labels and negative labels. According to the mechanism of AdaNeg, they cannot enter the memory bank of negative labels. Is this a negative impact of designing a memory bank based on negative labels?
**Minor concerns**
<1> The authors need to provide more detailed experimental settings. The paper mentions that memory banks are task specific. When evaluating the model, taking the ImageNet-1k benchmark as an example, do the authors maintain an independent memory bank for each OOD dataset (precisely, each ID-OOD pair), or did the four OOD datasets share one memory bank?
<2> There seem to be some typos and symbol issues in the paper.
a) L247: temperature $\tau = 100$ seems to be $\tau = 0.01$ because $\tau$ is in the denominator.
b) The subscript NL is case-inconsistent, e.g., in Eq. (4) and Eq. (8).
Technical Quality: 3
Clarity: 3
Questions for Authors: see Cons
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer *Pmx9*,
We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below.
### **Q1: Analyses on the stability of our method with different ID and OOD sample mixture ratios**
**A1:** Many thanks for the detailed and valuable comments. Following this reviewer's suggestion, we investigated the stability of our method by constructing test sets with various mixture ratios of ID and OOD samples. Specifically, we adopted the 1.28M ImageNet training data as ID and randomly sampled 12.8K and 1.28K instances from the SUN OOD dataset to construct the 100:1 and 1000:1 ID:OOD settings, respectively. To construct the 1:100, 1:10, 1:1, and 10:1 ID:OOD settings, we used the full 40K SUN dataset as OOD and randomly sampled 400, 4K, 40K, and 400K instances from the ImageNet training data. We did not validate the setting where the test set contains only ID data, as the absence of OOD data makes it impossible to calculate evaluation metrics (e.g., FPR95 and AUROC).
As shown in the table below, our method outperforms NegLabel across a wide range of mixture ratios (from 1:100 to 100:1), validating the robustness and reliability of our approach. As pointed out by this reviewer, unbalanced mixture ratios do pose a challenge to our method. Our approach performs the best in scenarios with a balanced mixture of ID and OOD samples, reducing the FPR95 by 11.18\%. As the mixture ratio becomes increasingly unbalanced, the improvement brought by our method gradually decreases. When the unbalanced ratio reaches 1000:1, our method shows some negative impact. We will include these analyses in the limitation part of our revised manuscript, and attempt to address this challenging setting in future work.
| ID:OOD Ratio | 1:100 | 1:10 | 1:1 | 10:1 | 100:1 | 1000:1 |
|--------------|-------|------|------|------|-------|------|
| **NegLabel** | 22.42 | 21.11 | 20.99 | 20.92 | 21.48 | **23.69** |
| **AdaNeg** | **21.00** | **12.49** | **9.81** | **15.61** | **20.71** | 26.28 |
**Table Caption**: FPR95 ($\downarrow$) with different mixture ratios of ID and OOD samples.
### **Q2: The necessity of negative labels in the initialization of the memory bank.**
**A2:** Thanks for the insightful questions. Our use of negative labels to extend the memory bank is carefully designed for effective implementation. Initially, our memory bank is empty, containing only zero values. Consequently, the derived negative proxies are also zero vectors. In other words, during the early stages of testing, it is impossible to generate effective negative proxies solely based on the memory bank. To enable our method to be operational from the beginning of the testing phase, we extend the memory bank with negative labels. It is important to note that while negative labels play a crucial role during the early stages, their influence diminishes as the memory bank gradually accumulates data, and the negative proxies will be progressively dominated by cached negative images.
Regarding the reviewer's suggestion to use the MCM score to judge negative samples and organize them into OOD proxies, this is indeed feasible. One could cluster the negative samples selected by the MCM score and use the cluster centers as OOD proxies. However, this method has a significant drawback: it requires detecting a portion of negative samples before applying our method. In other words, this approach cannot be applied online to the initial test data. From this perspective, we utilize the negative labels as predefined initial cluster centers in our AdaNeg.
During testing, these cluster centers are gradually refined as more image features are cached into the memory.
Considering that negative labels are words semantically distant from ID labels, if OOD samples are similarly distant from both ID and negative labels, they typically fall into the intermediate hard examples between ID and negative labels. Handling such intermediate hard examples is a long-standing challenge for all methods, including NegLabel and our AdaNeg. Our AdaNeg partially addresses these hard examples via an easy-to-hard strategy. Specifically, our method first caches easy OOD samples, which are closer to negative labels and farther from ID labels, into the memory. These easy OOD samples, being closer to the intermediate hard OOD examples, can serve as bridges to facilitate the detection of hard OOD samples. The effectiveness of our method is validated by the improved OOD detection performance shown in Table 2 of the main paper.
### **Q3: More detailed experimental settings.**
**A3:** Thanks for the kind suggestion. In our experiments, we maintain an independent memory bank for each OOD dataset (i.e., each ID-OOD pair). In implementation, we clear the memory bank when switching to a new OOD dataset.
We validated performance using a shared memory bank across four OOD datasets: iNaturalist, SUN, Places, and Textures. In this setup, the memory bank retains features from previous OOD datasets when testing a new one. As shown in the table, shared memory banks outperform independent ones, suggesting that cached features aid in recognizing new OOD datasets. However, shared memory banks may leak information between datasets, which can be problematic in practice. Therefore, we use independent memory banks by default.
| Types of Memory Bank | iNaturalist | SUN | Places | Textures | Average |
|----------------------|:-----------:|:----:|:------:|:--------:|:-------:|
| Independent | **0.59** | 9.50 | 34.34 | 31.27 | 18.92 |
| Shared | **0.59** | **9.13** | **32.68** | **29.08** | **17.87** |
**Table Caption**: FPR95 ($\downarrow$) with different types of memory banks.
### **Q4: Typos and symbol issues**
**A4:** Thank you for the kind correction. We will correct $\tau$ = 0.01 in L247 and unify the subscript NL as $S_{nl}$ in the revision. We will further proofread the manuscript carefully.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses. The authors have addressed most of my issues. I think sample imbalance is an important challenge for TTA. Therefore, I suggest that the authors may add an adaptive module to AdaNeg to alleviate the ID-OOD imbalance. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Pmx9,
Thank you for highlighting the issue of model instability due to ID-OOD imbalance.
**Following your suggestion**, we have implemented an adaptive gap (AdaGap) strategy to adjust the memorization selection criteria dynamically. This approach builds on the observation that as the score $S_{nl}$ increases/decreases, the probability that a sample is ID/OOD also increases accordingly. By enforcing a stringent selection criterion, we can effectively minimize the inclusion of misclassified samples in our memory. Specifically, we first online estimate the ratio of ID to OOD samples in the test data using a First-In-First-Out queue, which caches the ID/OOD estimation (cf. Eq. 8) of the most recent N samples:
**MR = (Estimated ID Number) / (Estimated ID Number + Estimated OOD Number)**,
where the ID and OOD numbers are calculated within the queue.
Leveraging the estimated mix ratio (MR), we can dynamically adjust the gap g in memory caching to minimize the presence of misclassified samples within the memory. For instance, if ID samples predominate in the test samples (i.e., MR > 0.5), this could lead to an increased proportion of ID samples in the OOD memory. To counteract this, we refine the selection criterion for OOD memorization to cache only those OOD samples with higher confidence. This refinement involves modifying the selection criterion for memorization in Equation (8) as follows:
**Negative:** from $S_{nl}(v) < \gamma - g\gamma$ **to** $S_{nl}(v) < \gamma - \max(g, \text{MR})\,\gamma$
**Positive:** from $S_{nl}(v) \geq \gamma + g(1-\gamma)$ **to** $S_{nl}(v) \geq \gamma + \max(g, 1-\text{MR})(1-\gamma)$
where $g = 0.5$ is the default gap analyzed in Figure 3(b).
In this way, our method remains consistent with our original version under balanced ID/OOD conditions (e.g., MR = 0.5). However, if the proportion of ID samples is higher in the test sample estimation (e.g., MR > 0.5), we increase the threshold for storing negative samples in the memory. In the extreme case where MR = 1, we estimate that there are no OOD samples among the test samples; thus, we stop storing test samples in the negative memory and only selectively cache test samples into the positive memory. We adjust our approach conversely when the MR value is lower than 0.5. This strategy enhances the robustness of our method against ID-OOD imbalance, as demonstrated in the table below:
| ID:OOD Ratio | 1:100 | 1:10 | 1:1 | 10:1 | 100:1 | 1000:1 |
|--------------|-------|------|-------|-------|-------|-------|
| NegLabel | 22.42 | 21.11| 20.99 | 20.92 | 21.48 | 23.69 |
| AdaNeg | 21.00 | 12.49| 9.81 | 15.61 | 20.71 | 26.28 |
| AdaNeg (With AdaGap) | **20.50** | **12.22** | **9.73** | **12.98** | **15.61** | **18.43** |
**Table Caption:** FPR95 (↓) with different mixture ratios of ID and OOD samples.
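As a rough, self-contained sketch of the AdaGap selection logic described above; the class name `AdaGapSelector` and the toy scores are illustrative assumptions, since the rebuttal does not give implementation details:

```python
from collections import deque

class AdaGapSelector:
    """Sketch of AdaGap: the gap used when caching samples into the
    positive/negative memory is widened based on the mix ratio (MR)
    estimated online over the most recent N test samples."""

    def __init__(self, gamma=0.5, gap=0.5, queue_size=10_000):
        self.gamma = gamma  # score threshold separating ID from OOD
        self.gap = gap      # default gap g (Figure 3(b) of the paper)
        # FIFO queue of recent ID/OOD estimates (True = estimated ID).
        self.queue = deque(maxlen=queue_size)

    def select(self, s_nl):
        """Return 'positive', 'negative', or None (sample not cached)."""
        self.queue.append(s_nl >= self.gamma)
        mr = sum(self.queue) / len(self.queue)  # estimated ID ratio
        # Negative criterion: S_nl(v) < gamma - max(g, MR) * gamma
        if s_nl < self.gamma - max(self.gap, mr) * self.gamma:
            return "negative"
        # Positive criterion: S_nl(v) >= gamma + max(g, 1 - MR) * (1 - gamma)
        if s_nl >= self.gamma + max(self.gap, 1.0 - mr) * (1.0 - self.gamma):
            return "positive"
        return None
```

With the defaults, a very high score is cached as positive, a very low score as negative, and intermediate scores are left uncached; as MR drifts away from 0.5, the corresponding criterion tightens, matching the behavior described above (e.g., at MR = 1, no sample can enter the negative memory).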
Please kindly note that the MR is estimated with the most recent N test samples, allowing for dynamic online adjustment of the selection criterion. We set N=10,000 by default. This dynamic adjustment ensures that our memory caching strategy remains responsive to the evolving nature of the test sample distribution, thereby optimizing memory utilization and enhancing the accuracy of our domain distinction process. We will include these analyses and the AdaGap module in the revision. | Summary: This paper introduces a new algorithm for Out-Of-Distribution (OOD) sample detection. First, it analyzes the shortcomings of previous Vision-Language OOD detection methods and proposes improvements based on these findings. Specifically, the paper presents a scheme for online updating of the memory bank during testing to design better negative proxies. The authors conducted experiments on datasets such as ImageNet and CIFAR. According to the experimental results, the newly proposed method can enhance OOD detection performance.
Strengths: 1. Currently, vision-language models are developing rapidly, and using them for OOD sample detection is a promising direction. Approaching from this perspective may yield better results.
2. The experiments in this paper are relatively thorough, encompassing both large datasets based on ImageNet and smaller datasets based on CIFAR. According to the authors' experimental results, the newly proposed method can improve the accuracy of OOD detection.
Weaknesses: 1. The motivation in this paper is not very clear. Specifically, in Figure 1(a), it is not evident why the newly proposed AdaNeg is better than NegLabel. On the contrary, the distribution of OOD samples seems to be closer to NegLabel.
2. The method proposed in this paper is based on the features and results of test samples during testing, which limits the upper bound of the method. In my opinion, the effectiveness of the proposed method relies on the vision-language model's strong inherent OOD detection capability, meaning that most test samples can be correctly processed. Based on these correctly processed samples, the method can further improve the detection accuracy of other samples. However, if in a certain scenario, the model itself cannot correctly estimate most of the samples, this method might actually make the results worse.
3. This paper merely performs optimizations based on the NegLabel framework, without many innovative points. The novelty of this improvement is insufficient to support a NeurIPS paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: As shown in weakness. What is the meaning of Figure 1, and is the proposed method effective when the base model predicts most samples incorrectly?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Potential negative societal impact is not applicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer *yxd9*,
We sincerely thank you for the constructive comments! We hope our following responses can address this reviewer's concerns.
### **Q1: The motivation in this paper is not very clear. Specifically, in Figure 1(a), it is not evident why the newly proposed AdaNeg is better than NegLabel. On the contrary, the distribution of OOD samples seems to be closer to NegLabel.**
**A1:** Thank you for pointing out this issue. In the visualization of our submission, the adaptive negative proxies are not $L_2$ normalized, whereas the features of ID, OOD, and NegLabel are normalized vectors. This makes the visualization not very clear. We have corrected this issue by revising the visualization with normalized adaptive negative proxies. As expected, the distribution of adaptive negative proxies is much closer to the ground truth OOD labels than to the negative labels in NegLabel.
We have attached an **updated Figure in the PDF of the Global Response** and will revise Figure 1(a) in the manuscript.
### **Q2: The method proposed in this paper is based on the features and results of test samples during testing, which limits the upper bound of the method. In my opinion, the effectiveness of the proposed method relies on the vision-language model's strong inherent OOD detection capability, meaning that most test samples can be correctly processed. Based on these correctly processed samples, the method can further improve the detection accuracy of other samples. However, if in a certain scenario, the model itself cannot correctly estimate most of the samples, this method might actually make the results worse.**
**A2:** Thanks for the comments.
We agree that our method leverages the inherent strong capability of vision-language models (VLMs). However, we would like to emphasize that our method is highly robust to misclassified samples.
For instance, we investigate a challenging task setting with ImageNet as the ID dataset and SSB-hard [58] as the OOD dataset, where NegLabel achieves an FPR95 of 77.26%.
With a threshold $\gamma = 0.5$ and a gap $g = 0$, about 44\% of OOD samples were incorrectly stored in memory slots for ID classes. In this scenario, our method achieves an FPR95 of 72.90\%, showing a 4.36\% improvement over NegLabel. This robustness comes from the weighted combination of cached features used to form the proxies, which resists interference from a few misclassified features.
In a worst-case scenario, where test samples are randomly cached into ID or OOD memories (50\% OOD misclassified), our method achieves an FPR95 of 77.69\%, comparable to NegLabel's 77.26\%. Such extreme confusion is unlikely in real-world scenarios, where there are usually some clues to distinguish between ID and OOD samples.
### **Q3: This paper merely performs optimizations based on the NegLabel framework, without many innovative points. The novelty of this improvement is insufficient to support a NeurIPS paper.**
**A3:** We appreciate this reviewer's comments; however, we'd like to argue that our method introduces significant improvements to the NegLabel framework and makes notable contributions to negative proxies guided OOD detection. Our contributions and distinguished features are as follows:
- We identify the label space misalignment between existing negative-label-based proxies and the target OOD distributions. To address this issue, we dynamically generate adaptive negative proxies to align with the OOD label space more effectively. This is a "novel approach" and a "significant advancement," as recognized by Reviewers ACdF and eV4A.
- We construct adaptive negative proxies using a feature memory bank that incorporates carefully designed write and read strategies. Additionally, we propose a novel multi-modal score that combines complementary textual and visual knowledge, which is an "interesting design" as highlighted by Reviewer Pmx9.
- Our method is simple yet effective (cf. Reviewer ACdF), training-free, and annotation-free (cf. Reviewer eV4A). It exhibits good scalability (cf. Reviewers ACdF and eV4A), has been comprehensively evaluated (cf. Reviewers yxd9 and eV4A), and achieves substantial performance improvements (cf. Reviewers yxd9 and eV4A).
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. The authors have addressed most of my concerns. I will raise my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and your dedicated time to review our paper.
Authors of paper 9248 | Summary: The authors introduce a new approach to leverage the pre-trained vision-language model for identifying out-of-distribution (OOD) samples. Compared to prior works that employ consistent negative labels across different OOD datasets, they introduce adaptive negative proxies to dynamically generate text labels during testing by exploring actual OOD images, thereby aligning more closely with the underlying OOD label space. Empirically, the proposed method demonstrates state-of-the-art performance across various OOD detection benchmarks especially on the large-scale ImageNet benchmark.
Strengths: - Dynamically generating negative proxies is a simple and effective strategy.
- The setting studied is very natural and this paper can easily stimulate further research in the area.
- The proposed approach performs well, particularly on large-scale datasets such as ImageNet, effectively demonstrating its scalability.
- The paper is nicely written.
Weaknesses: - While the proposed AdaNeg shows clear improvements over training-free baselines, its overall performance on ImageNet still lags behind training-based methods. This raises the question of whether there are opportunities for complementarity between the two approaches.
- Can the dynamic update of the memory bank and refinement of OOD proxies during the testing stage be considered a form of test-time training? The authors are requested to clarify the inherent connections and distinctions, especially from the perspectives of training versus training-free approaches.
- If negative proxies can directly identify true out-of-distribution (OOD) test images during the testing phase, is it possible to use the identified OOD samples to update the model parameters online?
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer *ACdF*,
We sincerely thank you for the constructive comments and recognition on our work! Please find our responses below.
### **Q1: While the proposed AdaNeg shows clear improvements over training-free baselines, its overall performance on ImageNet still lags behind training-based methods. This raises the question of whether there are opportunities for complementarity between the two approaches.**
**A1:** Thanks for the nice suggestion!
We validated the complementarity between our AdaNeg and the existing state-of-the-art method NegPrompt [31], which is characterized by learnable negative prompts.
We reproduced the results of NegPrompt with the author released codes. As shown in the table below, our method brings significant performance improvements over NegPrompt, validating the complementarity between our approach and training-based methods. We will include these analyses in the revision.
| Methods | iNaturalist | SUN | Places | Textures | Average |
|----------------|:-----------:|:-----:|:------:|:--------:|:-------:|
| **NegPrompt** | 6.76 | 23.41 | 28.32 | 34.57 | 23.27 |
| + **AdaNeg** | **3.87** | **11.35** | **25.45** | **29.79** | **17.62** |
**Table**: FPR95 ($\downarrow$) of AdaNeg based on the training-based NegPrompt method.
### **Q2: Can the dynamic update of the memory bank and refinement of OOD proxies during the testing stage be considered a form of test-time training? The authors are requested to clarify the inherent connections and distinctions, especially from the perspectives of training versus training-free approaches.**
**A2:** Thanks for the suggestions.
Our online update of the memory bank and refinement of OOD proxies is a kind of training-free test-time adaptation. Unlike existing training-required test-time adaptation methods [13, 15, 69], which typically require test-time optimization and subsequently slow down the testing process, our approach is optimization-free. It introduces only a lightweight memory interaction operation, enabling rapid and accurate testing, as analyzed in Table 4.
We will further clarify this point in the revision.
### **Q3: If negative proxies can directly identify true out-of-distribution (OOD) test images during the testing phase, is it possible to use the identified OOD samples to update the model parameters online?**
**A3:** Thank you for the comments.
We performed experiments to use the identified OOD samples to update the text prompt with cross-entropy loss, similar to the training objective in [1]. However, it is unstable and often leads to model collapse, especially in challenging settings. For example, in the ImageNet (ID) and SSB-hard (OOD) setup, around 44\% of OOD samples were misclassified as ID, disrupting model learning.
Our training-free method is more robust to such misclassifications due to the weighted combination of cached features. Additionally, updating model parameters online would significantly slow down the testing process. In contrast, our method ensures rapid and accurate testing with only lightweight memory interaction.
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. My previous concerns have been well addressed. After carefully reviewing the other reviewers' comments and the authors' replies, I believe the paper has no significant flaws, and therefore, I choose to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and your dedicated time to review our paper.
Authors of paper 9248 | Rebuttal 1:
Rebuttal: ## **Common Responses to All Reviewers**
**Dear Reviewers, Area Chairs, and Program Chairs:**
We are grateful for the constructive comments and valuable feedback from the reviewers. We are glad that the reviewers found our idea novel (Reviewers ACdF and eV4A) and our design interesting (Reviewer Pmx9), and we appreciate their recognition of the wide scalability (Reviewers ACdF and eV4A), comprehensive evaluation (Reviewers yxd9 and eV4A), and substantially improved performance (Reviewers yxd9 and eV4A) of our method.
To address the reviewers' concerns, we have attached an updated Figure 1(a) in the enclosed PDF file and provided additional experiments and analyses on training-based methods, misclassified samples, different setups of test data, and medical image datasets. Please find our itemized responses to all reviewers' comments below, and we sincerely hope our responses can fully address the reviewers' concerns.
Best regards,
Authors of Paper9248
Pdf: /pdf/57576f31c5bbdf969890ee98545fb2447b495a77.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Supra-Laplacian Encoding for Transformer on Dynamic Graphs | Accept (poster) | Summary: This paper introduces a new method called Supra-Laplacian Encoding for spatio-temporal Transformers (SLATE) to deal with dynamic graph challenges. Its core approach is to enhance the graph transformer (GT) architecture by integrating spatio-temporal information more efficiently. It deploys a new technique to convert discrete-time dynamic graphs into multilayer graphs and exploit the spectral properties of their associated supra-Laplacian matrices. SLATE also implements a cross-attention mechanism to explicitly model pairwise relationships between nodes. With this implementation, SLATE can capture the dynamic nature of graphs more accurately. SLATE provides a powerful tool for applications ranging from social network analysis to understanding complex biological networks.
Strengths: 1. SLATE applies spectral graph theory to the dynamic graph domain in a novel way.
2. The quality of this study is evident in the rigorous experimental setup and the comparison with SOTA methods. It is able to outperform many existing models on nine datasets.
3. The authors provide a detailed explanation of the method and the underlying theoretical concepts, and the open-source code and instructions for reproducing the results enhance clarity and accessibility.
Weaknesses: 1. The experimental results on CanParl in Table 2 are not very good.
2. The permutation setting of SLATE may limit its ability to generalise to unseen nodes and large-scale graph data.
Technical Quality: 4
Clarity: 4
Questions for Authors: How does the model perform as the size of the graph increases?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank reviewer n3Tf for their meaningful and valuable comments.**
**Q.1) How does the model perform as the size of the graph increases?**
Scaling our SLATE method to graphs with ~10,000 nodes has been one central objective in our submission. Through engineering techniques like using FlashAttention and a temporal window, we were able to achieve strong performance on the HepPh (15k nodes) and Flights (13k nodes) datasets, outperforming the baselines on both.
Following reviewer n3Tf's question about applying SLATE to even larger graphs, we conducted several experiments using Performer, a method that approximates attention computation with linear complexity in the number of nodes instead of the quadratic complexity of a classic Transformer. This allowed us to significantly reduce memory usage while maintaining stable performance, as shown in the results presented in ***Table 2*** of the global answer. We show that SLATE/Performer was still able to significantly outperform state-of-the-art results on USLegis and Trade.
We also successfully scaled our model on the Facebook dataset containing ~45k nodes, with a memory usage of 31 GB on our NVIDIA RTX A6000 (49 GB) card. Due to the dataset's size, we only had time to run a single trial of our model, achieving an AUC of 77.5\%. This is a very promising result, which still outperforms strong baselines such as EvolveGCN and DySat on this large-scale dataset (as per the results reported for those methods in the HTGN paper [11]).
**W.1) Regarding performance on CanParl:**
While SLATE's performance on the CanParl dataset is not the highest, it consistently outperforms other models on the remaining datasets, such as USLegis, Flights, Trade, and UNVote. Overall, SLATE achieves an average performance that is 13 points higher than the second-best baseline across all datasets. This demonstrates SLATE's robustness and effectiveness in capturing the dynamics of various graph structures.
[11] Menglin Yang, Min Zhou, Marcus Kalander, Zengfeng Huang, and Irwin King. Discrete-time Temporal Network Embedding via Implicit Hierarchical Learning in Hyperbolic Space. ACM SIGKDD 2021. | Summary: This paper proposes SLATE, a novel method for link prediction in dynamic graphs. SLATE transforms dynamic graphs into multi-layer networks and generates a unified spatio-temporal encoding by leveraging the spectral properties of the supra-Laplacian matrix. It uses a fully connected transformer architecture to capture long-range dependencies between nodes across multiple time steps. The authors introduce a cross-attention-based edge representation module for dynamic link prediction. They claim that SLATE significantly outperforms existing state-of-the-art methods on several benchmark datasets
Strengths: 1. The idea of transforming dynamic graphs into multi-layer networks and utilizing the supra-Laplacian is innovative.
2. Extensive experiments were conducted on various datasets and baselines.
3. The method shows superior performance compared to state-of-the-art approaches on multiple datasets.
Weaknesses: W1. The explanation for adding temporal connections in the supra-Laplacian construction stage seems insufficient.
W2. The description of how to construct the supra-Laplacian is not comprehensive enough.
W3. The characteristics of the SLATE model are not clearly defined. For example, the necessity of each step (a)-(d) in Figure 2 lacks convincing arguments.
W4. This paper discloses all data and code upon acceptance, which limits the ability to verify the reproducibility of this paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1. The depiction of adding temporal connections in Figure 2 is difficult to understand. The explanation for adding temporal connections seems inadequate, especially the description of AddTempConnection in Algorithm 1.
Q2. When adding virtual nodes, are multiple virtual nodes created? The explanation regarding virtual nodes appears to be insufficient.
Q3. What are the theoretical advantages of Supra-Laplacian encoding compared to existing graph positional encoding methods?
Q4. What would be the impact of using sparse attention mechanisms instead of fully connected transformers?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This paper discloses all data and code upon acceptance, which limits the ability to verify the reproducibility of this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank reviewer G1C1 for their questions and remarks, we hope the explanations below will answer their concerns.**
**Q1 & W1.) The explanation for adding temporal connections seems inadequate, especially the description of *AddTempConnection()* in Algorithm 1. The explanation for adding temporal connections in the supra-Laplacian construction stage seems insufficient.**
Having temporal connections between a node and itself at previous timesteps allows us to build a connected graph, whose spectral analysis provides relevant information about the spatio-temporal structure of the dynamic graph. If this multi-graph is connected, a single eigenvector, the one associated with the first non-zero eigenvalue, already incorporates information about the dynamics of the graph (see Figure 1, left). In the simple case where there are no isolated nodes, this amounts to stacking the adjacency matrices diagonally, as illustrated in Equation 7 (Appendix A.1). In practice, we only add a temporal connection if the nodes are not isolated; that is the role of *AddTempConnection()* in Algorithm 1 (Appendix A.2). We have verified the correctness of this algorithm.
Additionally, we have included a visualization (numbered 3) in the PDF attached to the rebuttal to better illustrate the importance of temporal connections. This visualization shows that the spectrum associated with the discrete dynamic graph does not capture spatio-temporal structure, unlike when inter-layer connections are added to model temporal dependencies between nodes.
More generally, the core of our method is to transform a discrete dynamic graph (DTDG) into a multilayer graph in order to exploit its rich spectral properties, as motivated in lines 42 to 45 of our paper. A dynamic graph can be seen as a multilayer graph: each snapshot of the dynamic graph is represented by a layer. In such graphs, identical nodes are connected to each other [8]; in a dynamic graph, this is represented by a connection between a node and its past.
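The transformation described above can be sketched as follows. This is our own minimal illustration with NumPy, not the authors' implementation: it stacks the $W$ snapshot adjacency matrices block-diagonally and adds an inter-layer link between the two copies of a node in consecutive snapshots only when the node is non-isolated in both, mirroring the stated role of *AddTempConnection()*.

```python
import numpy as np

def build_supra_adjacency(snapshots):
    """Sketch of a supra-adjacency construction (illustrative, not the
    authors' code): block-diagonal stacking of W snapshot adjacency
    matrices (each N x N), plus temporal self-connections between
    consecutive copies of each non-isolated node."""
    W = len(snapshots)
    N = snapshots[0].shape[0]
    supra = np.zeros((W * N, W * N))
    for t, A in enumerate(snapshots):
        # intra-layer (spatial) edges of snapshot t on the diagonal block
        supra[t * N:(t + 1) * N, t * N:(t + 1) * N] = A
    for t in range(W - 1):
        for v in range(N):
            # temporal connection only if node v is active (non-isolated)
            # in both snapshots t and t+1
            if snapshots[t][v].any() and snapshots[t + 1][v].any():
                supra[t * N + v, (t + 1) * N + v] = 1
                supra[(t + 1) * N + v, t * N + v] = 1
    return supra
```

With this structure, the supra-Laplacian of a connected multi-graph has a single zero eigenvalue, so its low-frequency eigenvectors carry joint spatio-temporal information.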
**W2.) The description of how to construct the supra-Laplacian is not comprehensive enough.**
The construction of the supra-Laplacian is detailed in section 3.1 of the paper and illustrated on Figure 2. We give theoretical and technical precisions in Appendix A (especially sections A1 and A2), and we also resort to common background knowledge on (multi-layer) graphs, *e.g.*, the definition of a supra-laplacian matrix [8,9].
**Q2.) When adding virtual nodes, are multiple virtual nodes created?**
As indicated on lines 151-152, we add as many virtual nodes as there are snapshots; if $W = 3$, then we add 3 virtual nodes. The purpose of these virtual nodes is to make each snapshot connected in case it contains several clusters. We have illustrated the need for virtual nodes in Figure 2.a (1-page PDF, global response): at time $t$, there are two clusters, so adding a virtual node connects them. We also show in ***Table 5*** of the answer to RN2C2 the experimental improvement brought by adding virtual nodes.
**W3.) The characteristics of the SLATE model are not clearly defined. For example, the necessity of each step (a)-(d) in Figure 2 lacks convincing arguments.**
Figure 2 is the main figure describing the details of the method. Each step is detailed in the text, with references to the precise sections at the top of the columns on the Figure (*e.g.*, step (c) is detailed in section 3.2). In each of those sections, we motivate the necessity of each step (*e.g.*, in section 3.3, we explain how we compute an edge representation from the output of the fully-connected transformer).
**Q3.) What are the theoretical advantages of Supra-Laplacian encoding compared to existing graph positional encoding methods?**
As detailed in lines 142-145 in the submission, we remind that the main objective of our pre-processing steps is to obtain a connected graph. This ensures that the spectral analysis of the supra-Laplacian matrix provides relevant information about the global dynamics of the graph, which can then be utilized as a powerful spatio-temporal encoding mechanism in graph transformers.
Our Supra-Laplacian encoding offers a unified approach that captures both spatial and temporal information simultaneously, unlike previous graph positional encoding methods, which separately computed spatial and temporal encodings (*e.g.*, Taddy [10] in the context of anomaly detection). As demonstrated in our ablative experiment (***Table 3*** of the paper), this unified spatio-temporal encoding significantly enhances performance.
**W4.) This paper discloses all data and code upon acceptance, which limits the ability to verify the reproducibility of this paper.**
All datasets used in our study come from public sources, such as those presented in [6]. In our paper, we have also taken care to detail all the parameters we optimized (see Table 8 in the appendix). SLATE’s source code will be released to the community. We are currently working on providing the Area Chair with an anonymized and easily executable version of the code to verify the performance of our model.
**Q4.) What would be the impact of using sparse attention mechanisms instead of fully connected transformers?**
As indicated in l-262, we have compared our method to state-of-the-art approaches using sparse attention mechanisms, either discrete models such as DySAT, or continuous ones, such as TGAT, TGN and TCL.
Our experimental results show that SLATE performs significantly better than those methods (*e.g.*, see ***Tables 1 and 9*** in the paper for SLATE *vs* DySat, and ***Tables 2 and 12*** for SLATE *vs* TGAT).
[8] Radicchi, et al. Abrupt transition in the structural formation of interconnected networks. Nature Physics 2013
[9] Cozzo et al. Multilayer networks: metrics and spectral properties. arxiv preprint (2015)
[10] Liu et al. Anomaly Detection in Dynamic Graphs via Transformer. IEEE Transactions on Knowledge and Data Engineering 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response.
However, I still have questions regarding Q2:
In the figures provided, it's not clear which nodes are virtual nodes. Could you please identify specifically which node numbers in the visualizations represent virtual nodes? Additionally, while Algorithm 1 in Appendix mentions `AddVirtualNode()`, it doesn't provide details on how this is done. Could you explain the specific algorithm or rules for adding virtual nodes? For instance, how do you determine where to add these nodes, and how many are typically added?
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply to our rebuttal and we hope we've been able to clarify as many of your questions as possible.
**About the figures :**
Due to space constraints (1 page) and for better clarity in the figures, we did not include virtual nodes in the visualizations. A virtual node is connected to all other nodes in the snapshot, which can overcrowd the figures and make them visually complex.
It is important to note that the purpose of the visualization is to highlight the importance of having a connected graph. Figure 2 demonstrates the significance of removing isolated nodes (*RemoveIsolatedNodes*), and Figure 3 shows the impact of adding temporal connections (*AddTemporalConnections*). Due to space limitations, we were unable to include a fourth figure illustrating the benefit of a virtual node in a snapshot with multiple connected components. However, we plan to add these visualizations using the toy dataset in the appendix of the final paper if it is accepted.
**Clarification on virtual nodes:**
*where to add these nodes?*: We add one virtual node per snapshot (l-152).
*How many are typically added?* : If we have $W$ snapshots, we’ll add $W$ virtual nodes.
Explanation of the *AddVirtualNode()* function using an example:
Assume a dynamic graph $\mathcal{G} = [G_1,G_2]$ containing two snapshots, where $G_1 = (V_1,E_1)$ and $G_2 = (V_2,E_2)$. Let $V_1 = \{v_1,v_2,v_3,v_4\}$ and $E_1 = \{(v_1,v_2),(v_3,v_4)\}$. Now let us add a virtual node $v_{n_1}$ to the graph $G_1$: we have $V_1^{'} = V_1 \cup \{v_{n_1}\}$.
A virtual node is a node connected to all the other nodes in $G_1$, so we add the following edges to $G_1$: $E_1^{'} = E_1 \cup \{(v_{n_1},v_1),(v_{n_1},v_2),(v_{n_1},v_3),(v_{n_1},v_4)\}$. We apply the same procedure to the next snapshot ($G_2$): we add a second virtual node $v_{n_2}$ and connect it to the nodes of $G_2$. In the end, we will have added $v_{n_1}$ to the graph $G_1$ and $v_{n_2}$ to the graph $G_2$. In the adjacency matrix, as shown in Figure 2a, this adds a new row and column with 1's in each position, indicating that the virtual node is connected to all existing nodes.
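In matrix form, the example above amounts to appending one row and column of ones per snapshot. The following is our own hypothetical sketch of such an *AddVirtualNode()* step (the authors' exact implementation may differ):

```python
import numpy as np

def add_virtual_node(A):
    """Hypothetical sketch of AddVirtualNode(): append one virtual node
    connected to every existing node of a snapshot, i.e. a new row and
    column of ones (zero on the diagonal) in the adjacency matrix."""
    n = A.shape[0]
    A_aug = np.zeros((n + 1, n + 1))
    A_aug[:n, :n] = A
    A_aug[n, :n] = 1  # virtual node -> all existing nodes
    A_aug[:n, n] = 1  # all existing nodes -> virtual node
    return A_aug
```

On the two-component example $E_1 = \{(v_1,v_2),(v_3,v_4)\}$, the augmented snapshot becomes connected, since every node reaches every other through the virtual node.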
We hope this example helps to clarify your understanding on virtual nodes.
We hope that we have addressed all your concerns, and that you’ll be able to increase your rating. We remain at your disposal.
---
Rebuttal 2:
Comment: Thanks for answering my follow-up question. The additional explanation helped me understand your virtual nodes. I think it would be good if you could revise the paper to make this part a little more clear. I raise my score to 4.
---
Rebuttal Comment 2.1:
Comment: Thank you for engaging in the discussion and raising your score. We'll be updating the final paper to make the explanations of the various steps as clear as possible. We'll also include the rebuttal visualizations and an additional visualization of the virtual nodes in the supplementary. | Summary: This paper proposes a spatial-temporal encoding for transformers on dynamic graphs. Specifically, graphs at each time step are treated as a single multilayer graph and packed into a larger adjacency matrix, with temporal self-connections between each node and its past. Eigenvectors of the constructed Laplacian are used as positional encoding and concatenated with node features. A standard transformer encoder layer then generates all representations for each node at each time step within a selected time window. To predict a link between two nodes, the model does cross-attention between their representations within the time window. Experimental results show that the proposed model performs better than existing methods.
Strengths: * The positional encoding proposed by this paper aims to jointly model spatial and temporal dependencies, which is a plausible improvement over existing methods.
* The proposed model shows strong empirical performance compared with existing approaches. In particular, the proposed positional encoding works better than simply concatenating LapPE and sin/cos encodings.
Weaknesses: * Scalability/efficiency may still be a concern on large graphs, though the paper shows good engineering (e.g., Flash-Attention) can help.
* Some finer-grained ablation studies are missing. For example:
* Instead of removing isolated nodes in preprocessing, can we keep the disconnected graph and just use eigenvectors corresponding to non-zero eigenvalues?
* The transformer itself can already get global information and I see no strong reason to use virtual nodes additionally. How would the model behave without virtual nodes? How would "virtual nodes + GNN" behave?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see my 2nd point in Weaknesses
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank reviewer nC2C for their meaningful and valuable comments.**
**W1 on scalability:**
Like any Transformer model, SLATE’s main scalability bottleneck is the quadratic complexity of the attention matrix. However, as you pointed out, we use FlashAttention to mitigate this issue. Based on this efficient engineering technique, we manage to apply SLATE to dynamic graphs with more than 10k nodes on a broad-audience GPU device (NVIDIA RTX A6000 with 49 GB memory). We show in ***Table 1*** of the global response on Flights (13,169 nodes) that the memory demand of SLATE is comparable to several state-of-the-art methods for dynamic link prediction, *e.g.*, EvolveGCN, Dysat, and ROLAND-GT with FlashAttention.
To further challenge the scalability of SLATE to larger-scale graphs and explicitly address reviewers’ concerns, we propose to apply attention approximations to reduce our memory consumption. Specifically, we explore the use of the Performer method [2], which approximates the softmax calculation using Random Fourier Features, thus reducing quadratic complexity to linear complexity. Using Performer, SLATE is able to scale to datasets like Facebook containing over 45,000 nodes with the same NVIDIA RTX A6000 (49 GB). We conducted preliminary experiments on this Facebook dataset using Performer’s default hyperparameters and reached a 77.5% AUC score, which is already above EvolveGCN and Dysat. We expect to significantly improve these results with more careful parametrization.
We also apply Performer on several datasets used in the submission, *i.e.*, AS733, USLegis and Trade, and show that the performance of SLATE/Performer remains close to the performance of SLATE, and remain significantly above state-of-the-art results on those datasets (see ***Table 2*** of the global answer). We believe that the application of such attention approximation is a very promising way of scaling SLATE to larger datasets, while maintaining excellent performances. We would be glad to include these new results and discussion in the final paper if accepted.
**W2 on ablation studies:** Following RnC2C’s request, we provide finer-grained ablation studies regarding supra-Laplacian’s computation. We would be glad to include those results in the final paper.
**W2.1 on pre-processing isolated nodes:**
This is an excellent question, and one that we've studied extensively. To fulfill *RNc2c’s* request, we evaluated the spectral analysis obtained when keeping isolated nodes but only considering eigenvectors associated with non-zero eigenvalues. Qualitatively, we provide visualizations (see the visualization 2 of the pdf file attached to the rebuttal) showing that the resulting spectrum is significantly more noisy than that obtained after removing isolated nodes, especially because many eigenvectors encode a projection on the kept isolated nodes. Those projections do not capture meaningful information on the spatio-temporal structure of the dynamic graph.
To consolidate the comparison, we also provide a quantitative ablation study in ***Table 4*** below on the USLegis dataset: we can see that there is a huge drop in performance (~30 pts in AUC) when keeping isolated nodes but ignoring 0-eigenvalue eigenvectors (‘SLATE with isolated nodes 0’) compared to removing them in SLATE. This clearly validates the importance of this pre-processing step.
***Table 4: AUC Performance of SLATE Compared to SLATE with Isolated Nodes, Considering Only Eigenvectors Associated with Strictly Positive Eigenvalues.***
| Models | Colab | USLegis |
| :---- | :---- | :---- |
| SLATE with isolated nodes 0 | 86.73 | 66.57 |
| SLATE | **90.84** | **95.80** |
**W2.2 on virtual nodes:**
Transformers can indeed capture global information without virtual nodes (VNs). Although recent works use virtual nodes with static GNNs to get global information [7], the purpose and motivation in our paper is fundamentally different. Indeed, we use virtual nodes as one step to ensure that our multi-graph is connected, which is crucial for designing a relevant spatio-temporal encoding based on the supra-Laplacian matrix, enabling us to extract meaningful information from the spectral analysis of the supra-Laplacian. In this context, the second smallest eigenvalue represents the graph's dynamics. This is particularly important in cases where a snapshot contains multiple clusters of nodes and remains disconnected even after removing isolated nodes.
Based on RNc2c’s recommendation, we experimentally demonstrate the importance of virtual nodes. As shown in the ***Table 5*** below, this approach yields a gain of +1.21 points in AUC on the Enron dataset.
***Table 5: AP and AUC Performance of SLATE With and Without Virtual Nodes on the Enron dataset.***
| Models | AP | AUC |
| :---- | :---- | :---- |
| SLATE w/o VN | 93.74 | 95.18 |
| SLATE | **95.40** | **96.39** |
To answer RNc2c’s request on "virtual nodes + GNN", we implemented the addition of a virtual node per snapshot on a standard DTDG architecture already included in the article, namely GRUGCN, which combines a spatial GCN and a temporal RNN. However, in contrast to results reported on static graphs, we were not able to improve performances over GRUGCN in our dynamic context.
[7] Cai et al. On the Connection Between MPNN and Graph Transformer. ICML 2023
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and additional experimental results. I think they are good additions to the paper and I suggest they are included in the updated version. I'm willing to increase my rating to 6
---
Reply to Comment 1.1.1:
Comment: We would like to thank you for your reply and for the positive response to our work. We are delighted that the additional experiments and the rebuttal were able to provide you with answers. We'll be adding the scalability experiments as well as the detailed preprocessing experiments for the supra-adjacency matrix. | Summary: This work introduces Supra-Laplacian encoding for spatio-temporal Transformers (SLATE) which aims to learn both spatio and temporal information in a dynamic graph with a transformer architecture. The key is to convert Discrete Time Dynamic Graphs into multi-layer networks and then extract the spectral features of their supra-Laplacian matrix to improve upon existing dynamic graph transformer designs. Additionally, SLATE employs a cross-attention mechanism to accurately model nodes' pairwise relationships, improving dynamic link prediction. The proposed SLATE model performs competitively to both CTDG and DTDG methods on discrete graphs.
Strengths: - **originality**: connecting DTDGs into a multi-layer graph and then compute spectral properties of a Supra-Laplacian matrix is a novel approach in the literature. The empirical performance also demonstrates that this approach can outperform existing methods with its spatio-temporal reasoning capabilities.
- **extensive evaluation**: The proposed SLATE method compares favorably to both CTDG and DTDG methods on discrete datasets with existing evaluation of testing 1 positive edge against 1 negative edge. In addition, model analysis experiments and ablation studies provides insights into the model components and choices. Additional experiments with hard negative samples are also included in the appendix.
- **clear presentation**: the paper is easy to follow and the main idea is presented well
Weaknesses: - **scalability**: my main concern is the scalability of the method, as the authors also pointed out as a limitation. Even with the time window (which truncates the history of the temporal graph), the $N^2 w^2$ complexity remains very high and is only feasible for networks with up to thousands of nodes. In addition, there is a large amount of precomputation needed for the supra-Laplacian and computing its eigenvectors.
- **window size**: one of the core hyperparameters of SLATE is the choice of window size, as the study in Figure 4 shows that there are some common optimal window sizes for the CanParl, Colab, and USLegis datasets. These datasets mostly contain a small number of snapshots, which might be why 4 is a good choice. In practice, though, it might be difficult to tell which window size is optimal without extensive experiments to select it. It would also be interesting to see if the length of the window is related to other factors in the architecture: size of the multi-layer network, transformer dimension, etc.
Technical Quality: 3
Clarity: 4
Questions for Authors: - In the ROLAND paper, which is a close competitor to this work, the MRR metrics is used for evaluation, why is the AUROC adapted in this work instead as it has been shown to be very limited and biased towards the 1 negative sample that is compared against each positive test edge.
- there are typos in the paper, for example "loose" on line 5 should be "lose"
- how are the CTDG methods applied on the discrete datasets, some performance looks low.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations are discussed sufficiently in the paper. More discussion on negative societal impact might be included, such as downstream applications of anomaly detection or recommendation systems for temporal graph learning methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **We thank reviewer kt1z for their meaningful and valuable comments.**
**W1 on scalability:**
Like any Transformer model, SLATE's primary scalability bottleneck is the quadratic complexity of the attention matrix. However, as noted by reviewer nC2C, we mitigate this with FlashAttention, allowing SLATE to handle dynamic graphs with over 10k nodes. As shown in ***Table 1*** of the global response, SLATE's memory usage on Flights is comparable to that of several state-of-the-art dynamic link prediction methods, *e.g.,* EvolveGCN and DySAT.
To further test SLATE's scalability on larger graphs, we use Performer (applied for static graphs in GraphGPS), which approximates softmax calculations with Random Fourier Features and reduces attention's complexity from quadratic to linear. With Performer, SLATE scales to datasets like FB with over 45k nodes. In preliminary experiments using Performer's default hyperparameters, SLATE achieved a 77.5% AUC score on the FB dataset, above EvolveGCN and DySAT. We expect to improve these results with more careful parameter tuning.
We also applied Performer to several datasets used in the submission, such as AS733, Legis, and Trade. As shown in ***Table 2*** of the global response, SLATE/Performer's performance remains close to SLATE's and significantly exceeds state-of-the-art results on these datasets. We believe attention approximation is a promising way to scale SLATE to larger datasets while maintaining excellent performance. We would be glad to include these new results and discussions in the final paper if accepted.
Regarding the scalability of the eigenvector calculation, its complexity is $\mathcal{O}(k^{2}N^{'})$ [3], with $k$ the number of kept eigenvectors and $N^{’}$ the number of nodes in the supra-adjacency matrix after removing isolated nodes. Note that $k$ is relatively small in our experiments (around 10-15). In practice, the eigenvector calculation accounts for a small fraction of SLATE's computation: on the Facebook dataset, our model ran in 303 seconds per epoch ($k=6$) without pre-computing the eigenvectors, compared to 577 seconds for VGRNN.
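For intuition on what this eigenvector step computes, here is a minimal dense sketch (our assumption, not the authors' code; a practical implementation would use a sparse eigensolver to reach the $\mathcal{O}(k^2 N')$ cost): build the unnormalized supra-Laplacian $L = D - A$ and keep the $k$ smallest non-trivial eigenvectors as a joint spatio-temporal positional encoding.

```python
import numpy as np

def supra_laplacian_encoding(supra_adj, k):
    """Sketch of a supra-Laplacian positional encoding (illustrative):
    return the k eigenvectors of L = D - A with the smallest non-zero
    eigenvalues, skipping the trivial constant 0-eigenvector of a
    connected graph."""
    A = np.asarray(supra_adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A        # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vecs[:, 1:k + 1]               # drop the constant eigenvector
```

Each row of the returned matrix gives a $k$-dimensional encoding for one (node, timestep) copy, which can then be concatenated with node features before the transformer.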
**W2 on time window selection**
The time window is indeed an important hyper-parameter; however, we obtained stable results using a fixed window size $W$ of 3 for all experiments. An extensive grid search across all datasets could improve performance; for example, on CanParl, a window size of 4 yields better results.
To explicitly fulfill the reviewer's request regarding the impact of W on datasets with a larger number of snapshots, we conducted several experiments on UNVote, which contains 72 snapshots. As shown in the ***Table 3*** below, we observed similar trends for SLATE than those on Figure 4 in the paper. The optimal window size in these runs is 3.
To address the reviewer's questions about the relationship between model and window size, we evaluated models of a different nature and complexity from SLATE. We observed a similar trend among the different models, *i.e.,* a local temporal window is preferable to keeping the full history; *e.g.,* DySAT's performance dropped by 9 points with $W=\infty$, where $\infty$ means considering all snapshots for predictions. We would be glad to elaborate on this interesting discussion in the final paper.
***Table 3: AUC Performance Based on Temporal Window Size on the UNVote Dataset***
| Model | # of parameters | W \= 1 | W \= 2 | W \= 3 | W \= 4 | W \= 5 | W \= $\infty$ |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| SLATE | 2.1 M | 96.68 | 98.77 | 98.74 | 95.90 | 95.79 | 92.24 |
| EGCN | 1.8M | 86.96 | 86.48 | 86.74 | 87.66 | 85.26 | 86.14 |
| Dysat | 1.8M | 83.93 | 81.90 | 86.15 | 88.71 | 80.08 | 77.04 |
**Q1 on MRR:**
The choice of evaluation metrics is crucial and reflects the specific task we are addressing. In our work, we focus on dynamic link prediction as a classification task, while the Mean Reciprocal Rank (MRR) metric is typically used for ranking tasks. Dynamic link prediction aims to classify whether a link between nodes exists. For such tasks, metrics like AUROC are preferred because they provide a threshold-independent measure of model performance across all classification thresholds [4,5]. This is the standard evaluation of models like TGAT or TGN.
MRR is suitable for ranking tasks, such as user-item recommendation (bipartite graphs), when the objective is to rank nodes according to some relevance measure. Papers like JODIE employ MRR to evaluate dynamic link prediction within bipartite graphs like Wikipedia or LastFM, where ranking tasks differ from our classification focus.
Extending SLATE's evaluation to other tasks, such as link ranking, is greatly insightful. Due to the short duration of the rebuttal period, it was challenging to obtain results for these additional tasks. For link ranking, we would need to compute $N \times (N-1)$ links per node. However, we look forward to exploring this in future work.
**Q2 on typo:**
Thank you for catching that typo. We have corrected it.
**Q3 on CTDG performances:**
In the evaluation of CTDG models, as described in [4,6], discrete datasets like CanParl and USLegis are treated as continuous datasets, with timestamps at regular intervals. For instance, in a dataset with 3 snapshots, timestamps might be $t_1=100$, $t_2=200$, and $t_3=300$. CTDG models (e.g., TGAT, DyRep) use these integer values to calculate temporal embeddings, similar to how they process continuous datasets. The performance of CTDG models on these discrete datasets was taken from [6], as mentioned in l-243 to 247 of our paper. We followed the same evaluation protocol.
[3] Kreuzer et al. Rethinking Graph Transformers with Spectral Attention. NeurIPS 2021
[4] Poursafaei et al. Towards Better Evaluation for Dynamic Link Prediction. NeurIPS 2022
[5] Yang et al. Evaluating Link Prediction Methods. Knowledge and IS 2015
[6] Yu et al. Towards Better Dynamic Graph Learning: New Architecture and Unified Library. NeurIPS 2023
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I thank the authors for their detailed rebuttal. Here are comments from my end:
- W1: scalability. I agree that using transformers with linear complexity can address the scalability challenge to a certain extent. Some larger datasets with millions of nodes, such as those seen in OGB [1] and TGB [2], might still prove to be a problem. Nonetheless, my concern here is mostly addressed.
- W2: addressed.
- Q1: I understand the limitation on time, and that proper evaluation of ranking links requires significant computational time. Despite many prior works treating temporal link prediction as binary classification due to the low computational cost of this evaluation, it is not a realistic measure for practical applications. In any recommendation system, there is no guarantee of having half positive links and half negative links (as assumed in binary classification). I hope the authors can include ranking results in future versions of the paper.
Overall, I decide to retain my current score.
[1] Hu W, Fey M, Zitnik M, Dong Y, Ren H, Liu B, Catasta M, Leskovec J. Open graph benchmark: Datasets for machine learning on graphs. Advances in neural information processing systems. 2020;33:22118-33.
[2] Huang S, Poursafaei F, Danovitch J, Fey M, Hu W, Rossi E, Leskovec J, Bronstein M, Rabusseau G, Rabbany R. Temporal graph benchmark for machine learning on temporal graphs. Advances in Neural Information Processing Systems. 2024 Feb 13;36.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response to our rebuttal and for the positive opinion of our work. We are delighted to have been able to clarify certain concerns about our method.
We prioritized the experiments on scaling, temporal windows, and preprocessing, as these were requested by multiple reviewers. We are pleased that these additional experiments have addressed your questions. The experiments for the ranking task are currently underway, and we plan to include them in the final paper. | Rebuttal 1:
Rebuttal: # Global response to reviewers
We would like to thank all the reviewers for their excellent feedback, their relevant questions, the enthusiastic reception of our SLATE method and their encouragement.
We would like to clarify here some key points raised by the reviewers along the two main lines of concerns: i) the scalability of SLATE and ii) the construction of the multilayer graph from the discrete dynamic graph. We also individually respond to each reviewer below their reviews.
### Scalability (RNc2c, Rn3Tf, Rkt1z)
Reviewers raised concerns on the scalability of our SLATE model due to the quadratic complexity of the Transformer's attention matrix computation.
Firstly, we want to highlight that, thanks to efficient engineering techniques, especially FlashAttention [1], we manage to apply SLATE to dynamic graphs with more than 10k nodes on a widely available GPU (NVIDIA RTX A6000 with 49 GB memory). In this setting, the memory demand is comparable to several state-of-the-art methods for dynamic link prediction, *e.g.*, EvolveGCN, DySAT, ROLAND-GT with Flash attention, as shown in ***Table 1*** (below).
***Table 1 : Memory demand for different models on the Flights dataset (13k nodes), with NVIDIA RTX A6000 (49 GB)***
| Models | Memory usage | Time / epoch |
|---|:---:|:---:|
|EvolveGCN | 46 GB | 1828 s |
|DySAT | 42 GB | 1077 s |
|VGRNN | 21 GB | 931 s |
|ROLAND-GT w/o Flash | OOM | - |
| ROLAND-GT | 44 GB | 1152 s |
| SLATE w/o Flash | OOM | - |
|SLATE - Flash | 48 GB | 1354 s |
|**SLATE - Performer** | 17 GB | 697 s |
To further challenge the scalability of SLATE to larger-scale graphs and address reviewers’ concerns, we propose to apply attention approximations to reduce memory consumption. Specifically, we explore the Performer method [2], which approximates the softmax calculation using Random Fourier Features, thus reducing quadratic complexity to linear complexity. Using Performer, SLATE scales to datasets like Facebook with over 45,000 nodes on the same NVIDIA RTX A6000 (49 GB), while all tested methods but VGRNN are out of memory on this device. Preliminary experiments on this Facebook dataset using the default Performer’s hyperparameters reached a 77.5% AUC score, which is already above the EvolveGCN and DySAT results reported in the HTGN paper [11]. We expect to improve these results with careful parametrization. We also apply Performer on several datasets used in the submission, *i.e.*, AS733, USLegis, and Trade, and show SLATE/Performer performance remains close to SLATE and significantly above state-of-the-art results on those datasets (see ***Table 2*** below). We believe such attention approximation is a promising way of scaling SLATE to larger datasets, while maintaining excellent performance. We would be glad to include these results and discussion in the final paper if accepted.
***Table 2 : SLATE with a Transformer encoder vs a Performer encoder***
| Models | AS733 | Legis | Trade |
| --- | --- | --- |--- |
| SLATE-Transformer | 97.46 | 95.80 | 96.73 |
| SLATE- Performer | 95.28 | 95.02 | 96.49 |
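For readers unfamiliar with the mechanism, here is an illustrative sketch (not our implementation, and using plain Gaussian random features rather than Performer's FAVOR+ positive orthogonal features) of how a random-feature map turns quadratic softmax attention into a linear-time computation:

```python
import math
import random

def feature_map(x, omegas):
    # Random-feature approximation of the softmax kernel:
    # phi(x)_r = exp(w_r . x - ||x||^2 / 2) / sqrt(R); always non-negative.
    sq = sum(v * v for v in x) / 2.0
    R = len(omegas)
    return [math.exp(sum(wi * xi for wi, xi in zip(w, x)) - sq) / math.sqrt(R)
            for w in omegas]

def linear_attention(Q, K, V, num_features=64, seed=0):
    """O(n) attention: compute phi(Q) @ (phi(K)^T V) instead of softmax(Q K^T) V."""
    rng = random.Random(seed)
    d = len(Q[0])
    omegas = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(num_features)]
    phiK = [feature_map(k, omegas) for k in K]
    dv = len(V[0])
    # Accumulate phi(K)^T V and phi(K)^T 1 once -- linear in sequence length.
    S = [[0.0] * dv for _ in range(num_features)]
    z = [0.0] * num_features
    for pk, v in zip(phiK, V):
        for r in range(num_features):
            z[r] += pk[r]
            for c in range(dv):
                S[r][c] += pk[r] * v[c]
    out = []
    for q in Q:
        pq = feature_map(q, omegas)
        denom = sum(p * zr for p, zr in zip(pq, z))
        out.append([sum(pq[r] * S[r][c] for r in range(num_features)) / denom
                    for c in range(dv)])
    return out
```

Because the feature map is non-negative, each output row remains a convex combination of the value rows, mirroring softmax attention; approximation quality improves with `num_features`.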
### Clarifications on supra-Laplacian computation (RG1C1, RNc2c)
To compute the supra-Laplacian matrix from our dynamic graph, we apply 3 main pre-processing steps, as shown in Figure 2a: removing isolated nodes, adding virtual nodes, and adding temporal connections. We emphasize that the main objective of these pre-processing steps is to obtain a connected graph, so that the spectral analysis of the supra-Laplacian matrix contains relevant information on the global graph's dynamics, to be used as a powerful spatio-temporal encoding in graph transformers (l. 142-145 in submission).
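As an illustrative sketch of the temporal-connection step (this is not our code, and it omits the isolated-node removal and virtual nodes described above), a supra-Laplacian can be assembled from per-snapshot edge lists by stacking the snapshots block-diagonally and linking each node to its own copy at the next time step:

```python
def supra_laplacian(snapshots, n, w_temporal=1.0):
    """Supra-Laplacian L = D - A of a discrete dynamic graph with n nodes
    and T snapshots; node i at time t maps to row/column t*n + i.
    snapshots: list of undirected edge lists [(i, j), ...], one per time step."""
    T = len(snapshots)
    N = T * n
    A = [[0.0] * N for _ in range(N)]
    for t, edges in enumerate(snapshots):
        for i, j in edges:                       # intra-layer (spatial) edges
            A[t * n + i][t * n + j] = A[t * n + j][t * n + i] = 1.0
    for t in range(T - 1):                       # inter-layer (temporal) edges
        for i in range(n):
            A[t * n + i][(t + 1) * n + i] = A[(t + 1) * n + i][t * n + i] = w_temporal
    # Laplacian: degree on the diagonal, minus adjacency elsewhere.
    return [[(sum(A[r]) if r == c else 0.0) - A[r][c] for c in range(N)]
            for r in range(N)]
```

The eigenvectors of this matrix are what carry the joint spatio-temporal information; without the inter-layer edges the matrix decomposes into independent per-snapshot blocks.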
The reviewers raised questions related to the importance of those 3 pre-processing steps.
In the specific answer to RG1C1, we provide clarifications regarding the computation for adding temporal connections. To provide additional insight into the importance of these temporal connections, we add a one-page PDF with a visualization of the spectrum of a toy dynamic graph, which complements Figure 1 of the submission. There, we show that omitting temporal connections yields a spectrum whose eigenvectors are unable to capture the spatio-temporal structure of the dynamic graph.
RNc2c requested finer-grained ablation studies to validate the importance of removing isolated nodes and adding virtual nodes.
Regarding the removal of isolated nodes, and fulfilling RNc2c's request, we evaluated the spectrum obtained when keeping isolated nodes but only considering eigenvectors associated with non-zero eigenvalues. Qualitatively, we provide new visualizations (see the PDF file attached to the rebuttal) showing that the resulting spectrum is significantly noisier than the one obtained after removing isolated nodes, especially because many eigenvectors encode a projection onto the kept isolated nodes. Those projections do not capture meaningful information on the spatio-temporal structure of the dynamic graph. We also provide a quantitative ablation study on the USLegis dataset that shows a massive drop in performance (30 points) when keeping isolated nodes but ignoring zero eigenvectors (see the specific response to RNc2c).
Regarding virtual nodes, we clarify in the response to RNc2c that their motivation is fundamentally different from that of works adding them to GNNs, and that this pre-processing is thus not redundant with the full connections in transformers. To further validate the importance of virtual nodes, we provide new ablation studies showing degraded performance when they are not included.
We hope that the work done in this rebuttal helps to clarify the reviewers’ concerns. We remain at their disposal for further discussions on this work.
[1] Dao et al. FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness. arXiv preprint (2022)
[2] Choromanski et al. Rethinking Attention with Performers. ICLR 2021
Pdf: /pdf/db17295e41b2bc95be128d2a07f2282f97670f3d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Large Pre-trained time series models for cross-domain Time series analysis tasks | Accept (poster) | Summary: Training large time series (TS) models is often limited by the scarce data available for a specific application. Existing pretraining methods use a simplistic tokenization scheme where the TS is cut up into equally sized parts, independent of its content. The newly proposed method *Large Pre-trained Time-series Models*, therefore, adaptively segments the input time series into (potentially) overlapping tokens depending on the TS itself. It shows very good forecasting performance in zero-shot and finetuning settings. It can also be used for classification.
Strengths: - The relevance of the problem and motivation for adaptive segmentation is convincing.
- The method of adaptive segmentation is an interesting solution to the issue.
- LPTM is compared against a plethora of appropriate and challenging baselines.
- It shows promising empirical results.
Weaknesses: - I am under the impression that this paper may have found a strong method yet does not sufficiently investigate *why* it works. The interplay between learning the scoring function and training the encoder is not very clear. See below.
- A lot of experimental claims are not adequately substantiated. It is claimed in question 6 of the checklist that error bars are provided and that statistical significance tests are performed, yet I did not find them. See below.
- The overall presentation (language and formatting) should be improved.
- The provided implementation is not accessible. (Error: "The repository is expired") In the current state, results are not reproducible since key hyperparameters are missing. The authors claim in question 6 of the checklist that they state how hyperparameters were chosen, yet I could not find it in the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The problem statement (l. 96-98) only considers in-domain applications. Is it also interesting to consider domain generalization settings?
2. The term "Large Pre-trained Time-series Models (LPTM)" (l. 48-51) is extremely generic and better describes the emerging field than a single method. Would it make sense to focus more on the "Adaptive Segmentation" contribution in the title and exposition?
3. Why did you choose Eq. (2) to be that way? Are all parameters learned?
4. The scaling of the method is not discussed sufficiently; could you comment on that? There are potentially a lot of tokens -- in the worst case, as many as there are input timesteps. Even more, the number of evaluations of $s(i,j)$ scales quadratically in the input length. While the heuristic for selecting a good set of segments is defined well, a discussion of why it is sensible is missing. See also question 7.2.
5. Why did you use masked reconstruction as the two pretraining tasks? Could you explain why this is more desirable than alternatives, like contrastive methods?
6. The interplay of training the segmentation strategy and the encoder simultaneously requires a more nuanced discussion. To what extent can this cause something like a "mode collapse" (as in Generative Adversarial Networks), where the $s(i,j)$ always chooses the same segments since they were found to be beneficial at some point and stops "exploring" others? This should (1) be discussed in more depth and (2) may be a significant limitation of the method.
7. This leads to the experimental evaluation.
1. Following (6), can you provide insights into the relative convergence speeds of the scorer and the encoder/decoder? How good is the score function $s$ at predicting the final encoder loss?
2. How many segments are there typically? A histogram would help judge the typical number of tokens resulting from the adaptive segmentation.
3. L. 281: "2x to over 10x smaller than other pre-trained time-series models" -> Where do you provide that comparison of model sizes?
4. Section 5 only discusses (almost) exclusively the forecasting setup. Why were the specific classification datasets chosen? Section 6 mentions 35 datasets (UCI has more time series classification datasets), but Table 4 only contains 32. Were three datasets removed at some point?
5. L. 305: "We observe that LPTM has highest mean rank" -> Please provide that rank in (or alongside) Table 4.
6. The caption of Table 4 talks about statistical significance. What are the test's results? Which significance test did you even perform? How were results aggregated (if it was the arithmetic mean, what is their std. dev.)?
Minor comments:
- The paper would benefit from a language pass.
- Improvements to formatting: References are formatted confusingly, e.g., in lines 22, 23f, 36f, etc. This problem occurs throughout the paper. Table 4 in the Appendix is too wide. REVIN -> RevIN. Formatting of LASTMASK, etc., is inconsistent (l. 163 vs. l. 335), ...
- The example in line 40ff could be given more bluntly and convincingly by discussing the same event in different sample rates.
- L. 91, what is R? The same as in l. 152?
- L. 89, isn't $\mathcal{D}_\text{pre}$ the union over the individual datasets?
- The abbreviation SSL has never been introduced.
- Red and green are difficult to distinguish for people with common color deficiencies. Therefore, Figure 1 could be improved with a different color palette.
- Table 3 should mention that it is about finetuning. Are the baselines trained from scratch or merely finetuned as well?
- The authors should check whether they intentionally want to cite many preprints (e.g., from Arxiv) or their published variants.
- The capitalization of the title is inconsistent.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Depending on the answers to the questions above, possible limitations mentioned could be made more transparent. For example, the insights into the newly induced biases are currently limited.
The answer to section 2. in the checklist is neither sufficient nor truthful. For example, "multivariate" is never mentioned in Section 7.
The discussion of the societal impact could also consider a possible data leakage from one private application (e.g., in the medical domain) to another one. Even a rather mundane problem, like a feasible membership inference attack, could be problematic in privacy-sensitive scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We will address them as follows:
**Domain-generalization settings**
We wish to emphasize that we generalize to a wide range of domains, but we need to learn a segmentation module for each domain.
Generalizing to unseen domains is an important research question to tackle for general pre-trained models.
**Title**
We chose LPTM as the title since our innovation, using adaptive segmentation for cross-domain pre-training, enables state-of-the-art performance across multiple datasets with heterogeneous patterns. We are happy to add adaptive segmentation to the title.
Moreover, other pre-trained baselines also have generic names such as TimesFM (Time-series foundational model).
**Eq 2**
The intuition for Eq 2 is that, since the GRU captures the information at timesteps i and j in the context of the entire time-series, the segment score function uses this information, encoded in z^(i) and z^(j), to derive the segment score. The exact form of the equation was chosen empirically and is similar in form to the Bahdanau attention function. All the parameters (W_1, W_2, and b) are learnable.
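To illustrate the Bahdanau-style form (a hypothetical sketch only; the exact Eq. 2 in the paper may differ, and the output projection vector `v` here is our assumption), the score of a candidate segment (i, j) could be computed as:

```python
import math

def additive_score(z_i, z_j, W1, W2, b, v):
    """Bahdanau-style segment score s(i, j) = v . tanh(W1 z_i + W2 z_j + b).
    W1, W2: hidden x input weight matrices; b, v: hidden-sized vectors.
    Illustrative form only; the projection vector v is our assumption."""
    hidden = [math.tanh(sum(W1[r][k] * z_i[k] for k in range(len(z_i)))
                        + sum(W2[r][k] * z_j[k] for k in range(len(z_j)))
                        + b[r])
              for r in range(len(b))]
    return sum(vr * hr for vr, hr in zip(v, hidden))
```

In training, W1, W2, b (and any output projection) would be learned jointly with the encoder, as the rebuttal describes.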
**Scaling and number of tokens**
Actually, we found that adaptive segmentation does not contribute significantly to the overall overhead of the model, since the transformer backbone consumes the bulk of the compute time. While the adaptive segmentation module's runtime is quadratic in the length of the time-series, empirically we found that the average segment size across datasets is around 5.2, with some domains such as ETT having larger average lengths (13.4) and others, such as behavioral datasets, having lower average lengths (3.1).
Regarding the heuristic, solving for the optimal set of segments that maximizes the score while covering the entire input is a non-trivial optimization problem whose complexity could be up to exponential in the number of time-steps. Therefore, we use a simple heuristic of eliminating the lowest-scored segments as long as the remaining segments still cover the entire time-series (l. 12-14).
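A minimal sketch of such an elimination heuristic (our illustration, not the paper's exact procedure) could look like:

```python
def select_segments(segments, scores, length):
    """Greedily drop the lowest-scoring candidate segments, keeping a segment
    only when removing it would leave part of [0, length) uncovered.
    segments: list of (start, end) half-open intervals; scores: dict keyed by segment."""
    def covers(segs):
        covered = [False] * length
        for s, e in segs:
            for t in range(s, e):
                covered[t] = True
        return all(covered)

    kept = list(segments)
    for seg in sorted(segments, key=lambda s: scores[s]):  # lowest score first
        trial = [s for s in kept if s != seg]
        if covers(trial):
            kept = trial
    return kept
```

Each candidate is tested once, so the selection itself is cheap relative to scoring all O(n^2) candidate segments.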
**Choice of SSL tasks**
Random masking and last token prediction are popular SSL pre-training tasks that are widely adopted in LLMs and in some time-series methods such as PatchTST. We also point out that these simple tasks already provide state-of-the-art results.
While we could use more sophisticated methods, the focus of our work is on using adaptive segmentation to enable a multi-domain pre-trained time-series model. LPTM is flexible enough to adopt other SSL loss functions, and doing so to improve performance could be an interesting extension of our work.
**Stability of training and mode collapse**
We did not observe instability or mode collapse during pre-training. We found that the training of the model was stable across different domains and datasets. This was observed across multiple runs of the model as well. The frequency of updating the segmentation loss relative to SSL loss is key in ensuring the stability of training.
The overall objective of self-supervised pre-training involves a discrete objective of choosing an optimal segmentation strategy along with the prediction of masked segments. Similar to other works in graph learning [1] and reinforcement learning on discrete action spaces [2], which involve learning over discrete structures that cannot be trivially optimized with gradients, we use a bilevel optimization approach and periodically optimize the segmentation while learning on the SSL tasks.
Improving the efficiency and stability of this approach and more tightly coupling the two objectives could be an important research direction to improve model performance.
**Convergence of scorer and encoder**
It is not straightforward to compare the SSL loss with the loss for the score function, since they reflect different objectives. We found that the SSL loss may be slightly perturbed when there is an update to the score function loss, but it generally follows a relatively smooth downward trend to convergence. The scoring function loss, while trending lower over time, does not converge to zero and hence is not a predictor of the SSL loss; Eq. 5 only forces the scoring function to update in the direction of decreasing SSL loss.
**Model Sizes**
LPTM has around 100M parameters, which is 2x smaller than TimesFM, the smallest of the pre-trained baselines, and around 10x smaller than Chronos, which has around 1B parameters.
**Classification tasks**
While we focused on forecasting for most of the experiments similar to other general pre-trained time-series models, we also showcased the flexibility of LPTM to perform classification due to our architectural choice.
Table 4 contains only 32 datasets; this is a typo. We have added the other 3 datasets in the additional PDF.
**Mean rank of Models**
We post the mean ranks in the additional PDF. LPTM has the best average rank of 2.5 and is significantly better than other models.
**Significance test**
We used the standard t-test to determine statistical significance. We have added the standard deviations across 10 runs in the additional pdf.
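For concreteness, the t statistic for a paired comparison of two models' per-run scores can be computed as follows (an illustrative stdlib-only sketch; in practice a library routine such as `scipy.stats.ttest_rel` would also give the p-value):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(a, b):
    """t statistic of a paired t-test on matched per-run scores of two models
    (e.g., metrics from the same 10 seeds for each model)."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```

The statistic would then be compared against the t distribution with n-1 degrees of freedom to decide significance.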
**Regarding Limitations**
We thank the reviewer for the suggestions. We will add the discussion on multivariate datasets and issues regarding data privacy to discussions.
**Code link**
Sorry. We didn’t realize the link was expired during the review process. The code can be found in the LPTM.zip file at https://osf.io/vy3sc/?view_only=c10508d340ba468287c984496cf57be1
---
Rebuttal Comment 1.1:
Comment: Thank you for your insightful responses. I have no further major questions. I hope the above discussion and improvements to the language and general presentation make it into the camera-ready version.
**I raised my score from 3 to 5.**
As a side note, providing Critical Difference diagrams for the classification results would improve the rigor of the analysis.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging our response and are glad to have addressed their concerns. We are grateful for increasing their scores. | Summary: The paper introduces a new approach for creating pre-trained models for time-series data, similar to those used in language and vision tasks. The authors propose a model called Large Pre-trained Time-series Models (LPTM), which includes an innovative adaptive segmentation module to handle diverse time-series data from multiple domains.
Key contributions include:
- Developing a framework for pre-training time-series models on multi-domain datasets, using a novel adaptive segmentation module to tokenize inputs effectively. This is achieved via a self-supervised learning objective.
- Demonstrating that LPTM performs as well or better than state-of-the-art domain-specific models when fine-tuned for various time-series tasks, such as forecasting and classification, with less training data and compute time.
- Proving that LPTM achieves superior results in both zero-shot and fine-tuned settings across diverse domains like epidemiology, energy, and economics, requiring up to 40% less data and 50% less training time compared to existing methods.
Strengths: The paper has the following strengths:
- Well-written, clear, easy to follow. Algorithm is a nice plus.
- Baseline choice reasonable: most recent methods are considered.
- Experimental results good, when considered on the set of datasets chosen (more points on that in the weaknesses section).
Weaknesses: - It's a bit hard to get a good feel for the relative advantage of the proposed method. In table 2, the approach is clearly better, but we are left to infer that from that fact that it is commonly second or first in the rankings. Could the authors maybe add some for of aggregate metric, e.g. the average rank across datasets of a given method?
- Despite mentioning code is available, the link does not work (subscript 3 on page 7, time of access 2024-07-12, and previously): "The repository is expired".
- For a paper dealing in large part with forecasting, I was surprised by the absence of almost all of the classical long-term forecasting datasets used by other papers: traffic, electricity, weather, illness... Given that these are by far the most heavily studied ones in the literature, including them (as proposed in the questions section). While I don't find it a critical (but still important) concern, I strongly advise the authors to consider adding them as it will help avoid concerns other readers might have about cherry-picking of results.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Can the authors add error bars (in the appendix possibly) for their experiments? They mention that they already run 10 experiments per setup so these should be readily available and would give a good idea of the robustness of the findings.
- Can the authors ensure that code to reproduce their experiments is available as stated?
- Can the authors run the same experiments on the "standard" long-term forecasting datasets, as listed in the weaknesses section?
Note: I feel the paper is definitely interesting and makes valid contributions. Addressing my questions, in particular the one about the long-term forecasting datasets would be a strong argument for me to raise my score.
Edit: I've read the rebuttal provided by the authors, and since my open questions have been addressed I'm raising my score to 7.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for comments and suggestions. We address them as follows:
**Aggregate rank metric**
We thank the reviewer for the suggestion. We will add the average rank of each model as:
| Model | Average Rank |
|-----------------------|-------------|
| AutoARIMA | 21.388889 |
| Informer | 14.055556 |
| Autoformer | 12.000000 |
| PatchTST | 6.833333 |
| N-HITS | 11.388889 |
| MICN | 7.277778 |
| TimesNet | 11.111111 |
| LLM-Time | 12.777778 |
| TimesFM | 12.277778 |
| Lag-LLAMA | 17.444444 |
| Chronos | 13.888889 |
| STEP | 9.111111 |
| EpiFNP | 14.666667 |
| ColaGNN | 16.500000 |
| TS2Vec | 19.111111 |
| SimMTM | 15.611111 |
| TS-TCC | 18.111111 |
| LPTM | 2.500000 |
| LPTM-NoSegment | 12.055556 |
| LPTM-NoPreTrain | 10.444444 |
| LPTM-NoLinProb | 6.222222 |
| LPTM-OnlyRandMask | 7.555556 |
| LPTM-OnlyLastMask | 3.666667 |
LPTM has the best average rank of 2.5 with significant lead over second best model.
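The aggregation behind these numbers can be sketched as follows (illustrative only; ties are broken arbitrarily here rather than given fractional ranks):

```python
def average_ranks(results):
    """results: {dataset: {model: error}}, where lower error = better.
    Returns each model's rank (1 = best) averaged over datasets.
    Note: ties are broken arbitrarily, not given fractional ranks."""
    totals, counts = {}, {}
    for scores in results.values():
        for rank, model in enumerate(sorted(scores, key=scores.get), start=1):
            totals[model] = totals.get(model, 0) + rank
            counts[model] = counts.get(model, 0) + 1
    return {m: totals[m] / counts[m] for m in totals}
```

Applied over all datasets in Table 2, this yields the single aggregate number per model shown above.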
**Code link**
Sorry. We didn’t realize the link was expired during the review process. The code can be found in the LPTM.zip file at https://osf.io/vy3sc/?view_only=c10508d340ba468287c984496cf57be1
**Long-term forecasting benchmarks**
We have benchmarked on ETT and M4, two popular long-term forecasting benchmarks in the paper. We have added additional benchmarks on ILI, Electricity and Exchange datasets (similar setup to [1]):
| Dataset/Model | AutoARIMA | PatchTST | LLM-Time | TimesFM | Lag-Llama | Chronos | SimMTM | LPTM |
|---------------|-----------|----------|----------|---------|-----------|---------|--------|------|
| Electricity | 0.6 | 0.48 | 0.93 | 0.51 | 0.63 | 0.82 | 0.81 | 0.74 |
| Exchange | 1.23 | 0.94 | 1.72 | 0.86 | 1.51 | 1.54 | 0.93 | 0.91 |
| ILI | 1.95 | 0.97 | 1.11 | 1.83 | 2.11 | 1.85 | 0.84 | 0.96 |
| Traffic | 2.14 | 1.66 | 1.94 | 1.85 | 2.56 | 1.98 | 1.84 | 1.52 |
**Error bars**
We have added the standard deviations across 10 runs in the additional PDF.
### References
[1] Nie, Yuqi, et al. "A time series is worth 64 words: Long-term forecasting with transformers." arXiv preprint arXiv:2211.14730 (2022).
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for responding to my comments and questions. I feel that the added points strenghten the case for accepting the paper, and have raise the score to 7 as a result.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: We are glad that our response has addressed your comments and strengthened your case for the paper. We are thankful for the increase in score.
We would be glad to answer any more of your questions. | Summary: The paper proposes Large Pre-trained Time-series Models (LPTM), a novel method designed to improve the efficiency and performance of time-series analysis across multiple domains.
The key contribution is an adaptive segmentation module that automatically identifies optimal segmentation strategies for diverse datasets during pre-training.
This approach aims to overcome the limitations of fixed-length segmentation, which may not adequately capture the temporal patterns of heterogeneous time-series data.
LPTM demonstrates superior forecasting and classification performance, requiring up to 40% less data and 50% less training time compared to state-of-the-art models.
Strengths: S1. This paper focuses on the time series segmentation problem.
As the basic semantic unit in time series is not as clear as in text, a proper segmentation is a promising direction towards better series modeling.
S2. The proposed segmentation method is adaptively calculated over each specific input series.
S3. The experiments are extensive.
Weaknesses: W1. Although time series have a weaker semantic structure than natural language, they have a closer connection to images.
In both time series and images, a semantic unit, e.g., a small item or a texture in an image, can have different lengths and scales.
This raises a challenge to the main motivation: if a full self-attention-based architecture works for images (e.g., ViT), why does segmentation need to be done explicitly for time series?
It would be interesting if the authors could further discuss this problem and provide their intuitions.
W2. The introduction of the adaptive segmentation module seems to bring instability in the initial model training, as well as requiring longer training time (although the authors propose to backpropagate the gradients every 10 batches).
Specifically, the loss function for segmentation is a hard loss based on the selected subset of best segments.
However, the parameters seem to be randomly initialized, which could provide highly random "best" segments.
Hence, the convergence stability and the training time with and without the dynamic segmentation modules should be discussed.
W3. The dynamic segmentation modules seem not to be fine-tuned with specific attention.
However, as the author(s) mentioned, different datasets could have very different best segmentation.
Hence, it would be interesting to discuss why this is sufficient and provide theoretical or empirical evidence.
Technical Quality: 3
Clarity: 4
Questions for Authors: Q1 (cr. W1). Please provide intuitions and empirical evidence on why full self-attention-based architectures work for images while an explicit segmentation module is required for time series.
Q2 (cr. W2). Please discuss the convergence stability, especially the initial convergence stability and the influence of random initialization, as well as the influence of the adaptive segmentation modules on the pre-training speed.
Q3 (cr. W3). Please discuss the influence of fine-tuning on the adaptive segmentation, e.g., whether the current framework can make a large change to the existing segmentation.
Experiments with and without fine-tuning the adaptive segmentation would be interesting to report.
Q4. It would be interesting if the authors may show some case studies on the adaptive segmentation results, i.e., whether and how much the adaptive segmentation results conform to the source domain of the dataset, whether some periodic patterns can be well preserved after the adaptive segmentation module.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: L1. There is a lack of explanability and interpretability in the adaptive segmentation results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and we address them as below:
**Intuition on segmentation**
We note that even for vision models such as ConvNets and ViT, images are ingested as fixed-size patches (such as 16 x 16), which have some similarity to segments in time-series. Moreover, we argue that there is much more heterogeneity in time-series than in images, since images, even across domains, share important properties such as edges and shapes that are useful for the model to capture.
For time-series, in contrast, each domain can have a wide range of domain-specific patterns along with cross-domain information. Further, these patterns can differ across time. Therefore, segmentation that adapts to these patterns across time can better capture these semantic structures and handle cross-domain downstream tasks.
**Stability of adaptive segmentation**
First, we did not observe instability or mode collapse during training. We evaluated our model over multiple runs with different random initializations and did not experience large variation in stability or performance.
Since the overall objective of finding optimal segments while pre-training the backbone weights on SSL tasks has discrete components, we use a bi-level optimization to learn both objectives simultaneously. We update the segmentation loss every 10 epochs to keep the SSL training stable. This kind of training strategy is used in other bi-level optimizations over discrete structures, such as graph structure learning and reinforcement learning on discrete action spaces [1,2].
**Fine-tuning adaptive segmentation**
We fine-tuned adaptive segmentation to provide an optimal segmentation for each domain's datasets, since the patterns observed across different domains are very heterogeneous.
For example, while we need to capture monthly or yearly patterns for demand datasets, behavioral datasets require encoding patterns occurring on the order of seconds or minutes.
Training a single segmentation strategy across all domains would therefore be highly suboptimal. In fact, we empirically observed that training a single segmentation strategy across all domains led to unstable training, which is why we did not explicitly include it as a baseline.
**Interpretability of segmentation**
We also show some visualizations of the segmentations for different domains in Appendix Section D. We can get some intuitive understanding from these visualizations such as LPTM capturing smaller segments near epidemic peak to encode fine-grained variations as well as segment lengths correlating with variance of time-series in general.
### References
[1] Hu, Minyang, et al. "Learning Continuous Graph Structure with Bilevel Programming for Graph Neural Networks." IJCAI. 2022.
[2] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018.
---
Rebuttal Comment 1.1:
Comment: We again thank the reviewer for their valuable comments that helped improve our work. We hope we have addressed their questions and concerns.
Since the discussion period is soon ending, we hope the reviewer can acknowledge our response. We would also gladly answer any further questions or clarifications. | Summary: This paper proposes a novel contribution to pretrained time series models for forecasting and classification by paying attention to the fact that currently several transformer models take time series segmentations of the same size, regardless of the particular characteristics of the time series in consideration. For instance, time series that have yearly frequency or minute frequency might require different segmentation lengths, or it might be that dynamics are more complex in certain time intervals requiring a more detailed segmentation. Based on this observation the authors proposed a model that can find a suitable segmentation schema that later on allows to observe where are the time intervals where more complex dynamics are shown.
The authors perform several experiments and claim empirically that the proposed approach is at least competitive with the state of the art.
Strengths: The authors study a clearly interesting problem: how to provide a suitable segmentation scheme for time series so that different time regions are segmented in different ways, depending on their complexity and amount of information. The motivation for this is well stated by the authors, leading to a novel approach to achieve this.
The authors further set this up in a self-supervised learning setting, and consider multiple datasets to pretrain their model and further provide several evaluations. This is interesting because depending on the field/topic/area of time series a different segmentation scheme might be more suitable.
Weaknesses: Some of the main limitations are as follows:
- The proposed framework is not differentiable. The authors have acknowledged this in the paper and propose a workaround for this, basically to update the segmentation scores every 10 batches. Yet, this poses challenges like the interpretation of the training loss, and discontinuities in the test loss.
- It is unclear if the proposed approach is able to handle missing values. If not, is there any way to overcome this? Missing values are very often present in practice, and having a sound way to handle them is relevant.
- It is unclear if the current evaluation is fair. The authors present a corpus of datasets on which they pretrained the proposed model, but it is unclear which datasets were held out from pretraining. This is relevant because several of the pretrained models considered might not have been exposed to these datasets, which gives an unfair advantage to the proposed model. Further, since the amount of pretraining data is rather limited, there is the possibility that the proposed model is overly specialized to these datasets, whereas other models, like (Ansari 2024) and (Woo 2024), were trained on a larger corpus of datasets.
Technical Quality: 2
Clarity: 2
Questions for Authors: Question:
* Eq-1: is the GRU applied entry-wise to the time series? Does this imply that we apply $GRU_1$ to each entry of $y$ (which has $t$ entries), and then the resulting $t$ values constitute the hidden embeddings?
* Eq. 2: what is $z_i$? So far we have talked about $z^{(i)}$.
* Missing closing parenthesis in fig 1: $S(y^{(1...T)$
* Eq-1: the larger the value of $S(i,j)$, the better? Does it mean that the correlation between $z_i$ and $z_j$ is high, or that $z_i$ and $z_j$ are related somehow?
* Eq-3: index $k$ is never used in this definition of output embeddings.
* Eq-5: as pointed out by the authors, the selection of segments is not differentiable and hence cannot be directly integrated into the loss function. Does it mean that the segments are updated every 10 batches? This means that the loss will not be continuous, and hence it will be unclear whether there is progress in terms of the training loss. Is this correct? I guess what can nevertheless hint at improvement is the test loss.
* In line 225: why are time series with missing values removed? is the proposed model able to handle missing values?
* The authors claim that their model is a pretrained model. What datasets were used to pretrain the model? Are all datasets also used for evaluation in Table 1? If yes, then the comparison is not fair: several of the pretrained models considered might not have been exposed to those datasets in pretraining, giving an unfair advantage to the proposed approach. Further, pretraining on such a small number of datasets gives additional advantage to the proposed model, as a larger corpus potentially means less exposure to each individual dataset.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have acknowledged limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. We address them as follows:
**The proposed framework is not differentiable**
As stated in the paper, we did not observe any instability in training.
Yes, the segmentation of a time series is a discrete operation. LPTM tackles this challenge via a novel differentiable scoring module that learns the best segmentation strategy based on the pre-training datasets.
We note that updating the segmentation module only every 10 batches is key to stable training, since changing it every iteration would alter the SSL objective function too frequently, leading to instability. Similar strategies are used in many papers that encounter bilevel optimization over discrete operations, such as graph structure learning [1] and reinforcement learning over discrete action spaces [2].
**Handling missing values**
Handling missing values is not in the scope of our current work. However, LPTM could be extended to handle missing values using a method similar to the random masking task, by masking the segments associated with missing values. One issue to tackle would be selectively masking only part of a segment.
**Regarding pre-trained data**
For pre-trained models such as Lag-LLAMA and TimesFM, we start from the publicly available model weights and pre-train on the same datasets we use to pre-train LPTM. Therefore, all baselines are pre-trained with at least as much data as LPTM, providing a fair comparison. In fact, other pre-trained models such as TimesFM and Chronos have already been pre-trained on much larger corpora. In spite of this, LPTM achieves SOTA performance. For fine-tuning, all models are fine-tuned on the same train-test splits.
Further, we use all the datasets described in lines 222-244 for pre-training. Only the unseen test split was used for evaluation in Tables 1 and 2.
**is the GRU applied entry-wise to the time series?**
Yes, for each $y^{(i)}$ we get an embedding $z^{(i)}$ from the GRU.
**what is z_i?**
This is a typo, we meant $z^{(i)}$.
**Value of s(i,j)**
Yes, the larger the value of $s(i,j)$, the more important the segmentation module believes the segment between $i$ and $j$ to be. The intuition for Eq. 2 is that, since the GRU captures the information at timesteps $i$ and $j$ in the context of the entire time series, the segment score function uses this information to derive the segment score.
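The mechanics can be illustrated with a toy sketch (not the LPTM implementation; the recurrence, weights, and scorer below are hypothetical simplifications): a recurrent encoder produces one embedding $z^{(i)}$ per timestep, and the segment score $s(i,j)$ is computed from the two endpoint embeddings.

```python
import math

def encode(series, w_in=0.5, w_rec=0.3):
    """Toy recurrent encoder standing in for the GRU of Eq. 1:
    returns one hidden embedding z_i per timestep y_i."""
    z, h = [], 0.0
    for y in series:
        h = math.tanh(w_in * y + w_rec * h)  # simple scalar recurrence
        z.append(h)
    return z

def segment_score(z, i, j, w=(1.0, 1.0)):
    """Toy stand-in for s(i, j) of Eq. 2: a learned function of the
    endpoint embeddings z_i and z_j (here, just a weighted sum)."""
    return w[0] * z[i] + w[1] * z[j]

series = [0.1, 0.9, 0.8, 0.2, 0.05]
z = encode(series)          # one embedding per timestep
s = segment_score(z, 1, 3)  # importance score for the segment [1, 3]
```

Because the recurrent states at $i$ and $j$ summarize the series around those endpoints, the score can compare candidate segments without enumerating their interiors.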
**Index k in Eq 3**
$k$ is just the indexing variable used to aggregate the output embeddings of the self-attention over timestamps from $i$ to $j$. We will clarify this.
**Regarding Eq 5**
Due to the discrete problem of finding the optimal set of segments, the actual objective function during SSL is not continuous. However, we overcome this with a bi-level optimization approach, periodically re-optimizing the segmentation learned on the SSL tasks. This provides more stable training when we encounter discrete objectives, similar to strategies used in many papers involving bilevel optimization, such as graph structure learning [1] and reinforcement learning [2].
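The periodic schedule described above can be sketched as a schematic loop (hypothetical names; the real inner/outer updates are gradient and segmentation steps, abstracted to counters here):

```python
SEG_UPDATE_EVERY = 10  # segmentation module refreshed every 10 batches

def train(num_batches):
    """Schematic bilevel schedule: the SSL objective is optimized every
    batch (inner level), while the discrete segmentation is re-derived
    only periodically (outer level), keeping the SSL objective
    piecewise-stationary between refreshes."""
    ssl_steps, seg_refreshes = 0, 0
    for batch in range(num_batches):
        ssl_steps += 1                      # inner: one SSL gradient step
        if batch % SEG_UPDATE_EVERY == 0:
            seg_refreshes += 1              # outer: recompute segments
    return ssl_steps, seg_refreshes

ssl_steps, seg_refreshes = train(100)  # 100 SSL steps, 10 segment refreshes
```

Holding the segmentation fixed between refreshes is what keeps the SSL loss well defined within each window, even though it jumps at refresh boundaries.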
**Missing values removed (line 225)**
Many time series in Project Tycho are very sparse, with many missing values. We remove these time series since LPTM and the other baselines cannot handle them. However, as noted earlier, extending LPTM to time series with missing values is an interesting and important direction for future research.
### References
[1] Hu, Minyang, et al. "Learning Continuous Graph Structure with Bilevel Programming for Graph Neural Networks." IJCAI. 2022.
[2] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018.
---
Rebuttal Comment 1.1:
Title: Response to Reviewer
Comment: We again thank the reviewer for their valuable comments that helped improve our work. We hope we have addressed their questions and concerns.
Since the discussion period is soon ending, we hope the reviewer can acknowledge our response. We would also gladly answer any further questions or clarifications. | Rebuttal 1:
Rebuttal: Tables for the standard deviation of RMSE across 10 runs, mean rank of the models and 3 additional classification tasks.
Pdf: /pdf/523e5175f69e0f08c5403ba64aa64981b1c4d2e4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Weakly-Supervised Cortical Surfaces Reconstruction from Brain Ribbon Segmentations | Reject | Summary: The submission presents a deep learning-based approach for cortical surface reconstruction (CSR) from brain MRI data using weak supervision derived from cortical brain segmentation maps. The claimed contributions are:
1. Weak Supervision Paradigm: The authors introduce a new weakly supervised paradigm for reconstructing multiple cortical surfaces, significantly reducing the reliance on pseudo ground truth (pGT) surfaces generated by conventional CSR methods.
2. New Loss Functions: Two novel loss functions are designed to optimize the surfaces towards the boundaries of the cortical ribbon segmentation maps. Regularization terms are also introduced to enforce surface uniformity and smoothness.
3. Evaluation and Performance: The proposed method is extensively evaluated on two large-scale adult brain MRI datasets and one infant brain MRI dataset, demonstrating comparable or superior performance to existing supervised DL-based CSR methods.
Strengths: 1. The paper presents an approach to leverage weak supervision from segmentation maps instead of relying on pGT surfaces, which is a significant departure from traditional methods.
2. The methodology is explained and the experimental setup is described. The authors conduct evaluations on multiple datasets, evaluating the efficacy and efficiency.
3. The paper is well-structured, with clear descriptions of the problem, methodology, and results. The figures and tables effectively illustrate the performance and comparisons.
4. The approach addresses a critical bottleneck in CSR by reducing the dependency on time-consuming and error-prone pGT surfaces, potentially broadening the applicability of CSR methods to more diverse datasets and clinical scenarios.
Weaknesses: Method
1. It seems that this work combines [1] and [2], and thus has limited technical novelty. The architecture in Figure 1 and the circle consistency loss (Eq. 5) are almost identical to CoCSR [1]. The boundary surface loss and inter-mesh normal consistency loss (Eq. 3-4 and Figure 2) are very similar to the loss functions proposed by [2].
2. Additionally, the customized edge length loss (Eq. 6) has also been proposed by [3]. Considering the large individual differences across human brains, how did the authors choose the area A without knowing the pGT cortical surfaces?
3. It is confusing that the ribbon segmentations are used as both input and pGT. The authors claimed that the ribbon segmentations are inaccurate weak supervision, but still generated the initial surface based on ribbon segmentations according to Figure 1.
4. The velocity field defined in Eq. 1 is time dependent. How did the authors learn non-stationary velocity fields through a 3D U-Net?
5. In line 156, a bijective mapping with continuous inverse is called homeomorphism. A diffeomorphism is defined as a smooth/differentiable bijection with smooth/differentiable inverse.
6. As shown in Figure 2 (b), it is clear to observe that the WM and pial surfaces do not have the same normal directions in some regions. The inter-mesh normal consistency loss could cause inaccurate surface reconstruction. Could the authors provide more insights to solve this problem?
Results
1. The experimental results are unreliable and unconvincing. After careful comparison, it seems that the baseline results (CorticalFlow++, CortexODE, Vox2Cortex, DeepCSR) on the ADNI and OASIS datasets in Table 1 were directly copied and pasted from Table 2 in [1]. This leads to unfair comparisons.
2. Furthermore, as reported in Table 1, SegCSR produced no more than 0.061% of self-intersecting faces (SIF), whereas the authors claimed in line 264 that there are ∼0.3% on average for both white and pial surfaces. This is confusing. Which result is correct?
3. In line 263, the authors claimed that DeepCSR and U-Net produced a large number of SIFs without post-processing. However, the Marching Cubes algorithm only produces topological errors such as holes, not SIFs.
4. The BCP dataset only includes 19 test subjects. Cross-validation should be conducted to ensure fair evaluation of the performance.
5. The flow ODE was integrated using the forward Euler method with T=5 steps. Such a large step size could cause unstable ODE solutions and failure in preventing self-intersections. The value of the Lipschitz constant should be reported to examine the numerical stability of the ODE solver.
6. The authors reported that SegCSR requires only 0.37s of runtime per brain hemisphere. However, SegCSR adopted a topology correction algorithm, which may take several seconds to a few minutes, to create an initial midthickness surface for each subject. This should be included in the total runtime. A breakdown of runtime should be reported and compared to SOTA baseline approaches.
[1] Zheng, H., Li, H. and Fan, Y. Coupled reconstruction of cortical surfaces by diffeomorphic mesh deformation. Advances in Neural Information Processing Systems, 2023.
[2] Ma, Q., Li, L., Robinson, E.C., Kainz, B. and Rueckert, D. Weakly Supervised Learning of Cortical Surface Reconstruction from Segmentations. arXiv preprint arXiv:2406.12650
[3] Chen, X., Zhao, J., Liu, S., Ahmad, S. and Yap, P.T. SurfFlow: A Flow-Based Approach for Rapid and Accurate Cortical Surface Reconstruction from Infant Brain MRI. MICCAI, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors elaborate on the key differences between their approach and [1,2,3], particularly in terms of methodology and experimental setup?
2. How does the proposed boundary surface loss function improve upon the traditional bi-directional Chamfer loss used in existing methods?
3. Can the authors provide more details on the computational efficiency and runtime comparisons with existing CSR pipelines?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The authors have addressed some limitations, but further clarity on the following aspects would be beneficial:
1. The efficacy of SegCSR is influenced by the quality of pGT segmentations. More discussion on how to handle low-quality segmentations would be helpful.
2. The constraint on inter-mesh consistency of deformation might affect the anatomical fidelity of pial surfaces. Further exploration of this trade-off is necessary.
3. The method could be tested on more diverse cohorts to demonstrate its efficacy across various imaging qualities and subject demographics.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: Limited novelty: this work combines [1] and [2]. Explain methodology and exp setup of [1-3]**
**A1**:
Our SegCSR framework is model-agnostic, and we choose CoCSR [1] as the baseline because it is the SOTA method and is able to reconstruct multiple cortical surfaces simultaneously. SegCSR is weakly supervised by pGT segmentations, while [1] is a supervised method. Our other contributions include the loss functions and regularizations.
Seg2CS [2] is a concurrent study that was published on arXiv in mid-June, while our paper was submitted to NeurIPS in mid-May. Their findings should not diminish the significance of our contributions. Technically, their boundary loss is similar to our Eqs. 2 & 3, their inflation loss forces the deformation along the normal direction of the initial WM surface, and it is unclear whether they reconstruct surfaces sequentially or simultaneously. In contrast, our method uses the midthickness surface to bridge the inner and outer surfaces and computes the inter-mesh normal consistency loss between the deformed WM and pial surfaces, which is stable in training and requires no careful gradient computation as in [2]. Moreover, our method reconstructs multiple cortical surfaces simultaneously.
SurfFlow [3] deforms a sphere template into WM and pial surfaces sequentially and validates on an infant dataset. It is a supervised flow-based method similar to CF++. Its main contributions are starting from a sphere template, the recurrent network design, and regularization terms.
In terms of experimental settings, [1] and [3] (and other supervised methods) use the pGT surfaces from conventional pipelines for training and testing. For [2] and our method, pGT segmentations are used for training and pGT surfaces from conventional pipelines for testing. However, [2] did not report the fully supervised results of their model, so their performance gap is unclear, while CoCSR can be seen as the upper bound of our method.
**C2: The edge length loss (Eq. 6) is proposed by [3]. How is the area A chosen?**
**A2**: Eq. 6 consists of two terms, and only the edge length loss is inspired by [3]. The other term promotes the surfaces' smoothness. We will highlight this in the revision.
The area A is estimated on the surfaces generated from the pGT segmentations by computing the average facet area of the WM and pial surfaces, respectively. Although there is no GT for A, this term only serves to force the triangular faces to be uniformly distributed, and we use a smaller weight on it.
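One illustrative way to estimate such a target area A from a surface extracted from the pGT segmentations (not the authors' exact procedure) is to average the facet areas of the triangle mesh:

```python
import math

def triangle_area(a, b, c):
    """Area of a 3-D triangle via half the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mean_facet_area(vertices, faces):
    """Average facet area of a triangle mesh; a plausible estimate of
    the target area A when no GT surface is available."""
    areas = [triangle_area(*(vertices[i] for i in f)) for f in faces]
    return sum(areas) / len(areas)

# Unit square split into two triangles: every facet has area 0.5.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
faces = [(0, 1, 2), (1, 3, 2)]
A = mean_facet_area(verts, faces)  # 0.5
```

A per-face penalty like $(\text{area}(f) - A)^2$, summed over faces, would then push the triangulation toward uniform facet sizes.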
**C3: Confusing: ribbon segs are used as both input and pGT.**
**A3**: The initialization is only an approximation. Closeness to the true surfaces and topological correctness are the two major requirements, and such an initialization satisfies both.
The cortical ribbon segmentations contain structural/semantic info about the cortical sheet and can act as an attention guide for extracting informative features around its boundaries, thus supplementing the feature extraction along with the raw image and SDFs. Our preliminary experiments validate its effectiveness, and we will add the results to the revision.
**C4: The VF in Eq. 1 is time-dependent. How to learn non-stationary VFs through a 3D U-Net?**
**A4**: Eq. 1 describes the SVF framework if the vector field $v$ is constant over time; if $v$ is time-varying, Eq. 1 becomes the LDDMM model. In this paper, the integration is computed by adding sampled velocities step by step, which can be seen as sampling from a series of SVFs over the corresponding intervals.
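This step-wise view can be sketched in one dimension (illustrative only, not the paper's implementation): forward-Euler integration in which each interval may use its own velocity field, with the stationary (SVF) case recovered by repeating a single field.

```python
def integrate(x0, velocity_fields, h):
    """Forward-Euler flow x_{k+1} = x_k + h * v_k(x_k).
    One field per interval approximates a time-varying (LDDMM-style)
    flow; repeating the same field gives the stationary (SVF) case."""
    x = x0
    for v in velocity_fields:
        x = x + h * v(x)
    return x

# Stationary case: the same field at every step (SVF).
svf = [lambda x: -x] * 5
# Time-varying case: a different field per interval.
tv = [lambda x, k=k: -x / (k + 1) for k in range(5)]

x_svf = integrate(1.0, svf, h=0.2)  # contracts by factor 0.8 each step
x_tv = integrate(1.0, tv, h=0.2)    # contraction weakens over the intervals
```

In the surface-deformation setting, `x` would be the mesh vertex coordinates and each `v` the velocity sampled from the network for that interval.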
**C5: Line 156, homeomorphism $\neq$ diffeomorphism.**
**A5**: Thanks. We will update the terminology in Line 156 to accurately reflect the use of diffeomorphism and homeomorphism.
**C6: Fig. 2(b), the WM and pial surfaces do not have the same normal directions in some regions. The inter-mesh normal consistency loss could cause inaccuracy. More insights to solve this problem?**
**A6**: We agree that the WM and pial surfaces are not 100\% parallel, especially considering the nuanced differences between corresponding vertices. This loss term aims to solve the problem that the surface cannot be deformed into deep sulcal regions due to severe partial volume effect (PVE). We also propose the intensity gradient loss to position the surface and incorporate the mesh quality loss to further improve mesh quality (smoothness and edge length).
**C7: Exp results are unreliable and unconvincing. The baseline results on the ADNI and OASIS datasets in Table 1 are the same as those in Table 2 of [1]. Unfair comparisons.**
**A7**: The authors of [1] kindly provided us with their dataset split, pre-processing and program code, and pre-trained models. We replicated their results, which are very close to those reported in [1], and we conducted all experiments on the same datasets.
**C8: Table 1, SegCSR produced < 0.061% SIF; Line 264 “∼0.3% on avg for both surfaces”. Confusing. Which one is correct?**
**A8**: 1) Line 264 describes the adult datasets only; 0.061% is for the infant data. 2) The average should be no larger than 0.01%. We will correct this.
**C9: Line 263, “DeepCSR and U-Net produced a lot of SIFs without post-processing”. But MC algo produces topological errors but no SIF.**
**A9**: While the MC algorithm may not directly produce SIFs, the output from DeepCSR and U-Net can contain such artifacts, which require additional post-processing to correct. We will clarify this in the revision.
**C10: BCP dataset only has 19 test subjects. Cross-validation is needed for fair evaluation.**
**A10**: The authors of [3] kindly provided us with their dataset split, pre-processing code, and program code. They already employed stratified sampling by age to construct the dataset and maintain balance. We will use k-fold cross-validation to further evaluate our method.
---
Rebuttal 2:
Title: Additional responses
Comment: **We appreciate Reviewer LNir's effort in providing more than a dozen comments. Due to the 6,000-character limit, we cannot address all of them in the rebuttal. We hope the reviewers and ACs will consider our additional responses provided separately.**
**C11: The ODE solver has T=5 steps. Such a large step size could cause unstable solutions and SIF. Report the Lipschitz constant to examine the numerical stability of ODE solver.**
**A11**: Thanks. CF++ uses the rule of thumb $hL \leq 1$ and checks it for all considered examples; CortexODE ensures $\eta(h,L)<1$, where $h=\frac{1}{T}$ and $L$ is the Lipschitz constant. In practice, their results with T=10 are satisfactory. Compared to their deformation length from the initial surface, our method starts from the midthickness surface, which reduces the need for a large T. We also conducted a preliminary experiment with T=10; the performance was saturated compared to T=5. Thus, we empirically choose T=5 to strike a balance between efficiency and efficacy.
We will include these results and discuss the implications of the Lipschitz constant for the ODE solver's stability.
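The rule of thumb mentioned above can be made concrete with a toy helper (`euler_is_stable` is a hypothetical name; the values are illustrative):

```python
def euler_is_stable(L, T):
    """CF++-style rule of thumb: with step size h = 1/T and velocity
    field Lipschitz constant L, require h * L <= 1 for the forward
    Euler flow to remain well behaved (illustrative check only)."""
    h = 1.0 / T
    return h * L <= 1.0

# For v(x) = -L*x the Euler update multiplies x by (1 - L/T), which
# stays in [0, 1) exactly when L/T <= 1.
assert euler_is_stable(L=4.0, T=5)        # h*L = 0.8: stable
assert not euler_is_stable(L=12.0, T=5)   # h*L = 2.4: may oscillate or diverge
```

Starting from the midthickness surface shortens the deformation trajectory, which is why a smaller T can satisfy the same bound in practice.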
**C12: Report SegCSR’s runtime 0.37s/hemisphere. Topology correction time should be included. A breakdown of runtime should be reported and compared to SOTA methods.**
**A12**: Summary of the breakdown of runtime in inference.
|Time (s)|DeepCSR|3D U-Net|CF++|CortexODE|CoCSR|Ours|
|-|-|-|-|-|-|-|
|Pre-processing|N/A|N/A|N/A|2.93|2.93|2.93|
|Main framework|125|0.81|1.84|0.49|0.22|0.24|
|Post-processing|N/A|0.14|N/A|N/A|N/A|N/A|
For our SegCSR, the reported 0.37s includes the MC step and network forward propagation. Topology correction takes 2s and segmentation-map generation takes 0.8s.
**C13: More details on the computational efficiency and runtime comparisons with existing CSR pipelines?**
**A13**: Please refer to our responses “A4 to Reviewer Guru” and “A6 to Reviewer DDcT”.
**C14: How does the proposed boundary surface loss function improve upon existing bi-directional Chamfer loss?**
**A14**: The major difference concerns the pial surface reconstruction (Eq. 3 & Fig. 2-c). With the traditional bi-directional Chamfer loss, the model overfits to the noisy pGT segmentation boundary (Fig. 2-c1, orange) and fails to deform into deep sulcal regions. In contrast, with our uni-directional Chamfer loss, the model drags the pial surface towards the deep sulci. With the help of the other loss terms and regularization, the model finds a balance and addresses the PVE of the pGT segmentations (Fig. 3-d vs. c). We will rectify the typo in Eq. 3 and elaborate further.
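The distinction can be made concrete on toy 2-D point sets (a minimal sketch, not the paper's exact Eq. 3): a noisy point on the segmentation boundary inflates only the bi-directional loss, so the uni-directional term lets the surface ignore it.

```python
def _nn_sqdist(p, points):
    """Squared distance from point p to its nearest neighbor in points."""
    return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in points)

def chamfer_uni(pred, target):
    """Uni-directional term: pulls every predicted point toward the
    target boundary; target points exert no pull back on the surface."""
    return sum(_nn_sqdist(p, target) for p in pred) / len(pred)

def chamfer_bi(pred, target):
    """Traditional bi-directional Chamfer loss: both directions, so a
    noisy boundary point drags the surface toward it."""
    return chamfer_uni(pred, target) + chamfer_uni(target, pred)

pred = [(0.0, 0.0), (1.0, 0.0)]                # predicted surface points
target = [(0.0, 0.1), (1.0, 0.1), (0.5, 2.0)]  # (0.5, 2.0) is a noisy outlier

uni = chamfer_uni(pred, target)  # small: surface sits near the clean boundary
bi = chamfer_bi(pred, target)    # large: the outlier dominates the reverse term
```

Dropping the target-to-prediction direction is what prevents noisy boundary voxels from exerting a pull on the reconstructed pial surface.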
**C15: Limitations
(1) SegCSR depends on pGT segs. More discussion on low-quality segmentations.
(2) The inter-mesh consistency might affect the anatomical fidelity of pial surfaces. More exploration on the trade-off.
(3) The method could be tested on more diverse cohorts to show its efficacy across various imaging qualities and subject demographics.**
**A15**: We will add more discussion on limitations in Sect. 5.
(1) Please refer to our responses “A5 & A8 to Reviewer DDcT”.
(2) Please refer to our response “A6” above. And we will add results to Supplementary Materials.
(3) Please refer to our response “A4 to Reviewer zPL4”.
**C16: Ethics review needed since the work involves human subjects**
**A16**: We have reviewed and conform to the NeurIPS Code of Ethics. We want to highlight:
1) We use well-established datasets from other resources with their consent. We did not perform experiments on human subjects directly.
2) These datasets should have gone through IRB approvals at the corresponding institutions (Mayo Clinic and UNC); we are neither responsible for nor capable of conducting additional reviews.
3) In Sect. 5, we have clarified the Societal Impact and stressed that the deployment of the model in clinical settings should be approached with caution and under human supervision.
---
Rebuttal Comment 2.1:
Title: I read the rebuttal but still have concerns on novelty and contribution
Comment: My primary concern remains that the contribution is not novel enough to stand out in the context of the existing literature. Despite the distinctions the authors have highlighted, the similarities with prior work suggest that this approach may not offer sufficiently new insights. This could make it difficult for the paper to make a significant impact within the community.
---
Reply to Comment 2.1.1:
Title: Response to further address the remaining concerns
Comment: Thank you for taking the time to review our rebuttal. Hopefully, the following response will help address your remaining concerns about the novelty and contributions of our work.
Based on a **thorough review** of cortical surface reconstruction (CSR) in “Sect. 2: Related Works”, including **1)** traditional CSR methods (FreeSurfer, BrainSuite, HCP, dHCP, and iBEAT V2.0) and **2)** DL-based CSR methods (SegRecon, DeepCSR, PialNN, TopoFit, Vox2Cortex, the CorticalFlow series, SurfFlow, CortexODE, and CoCSR), we would like to highlight:
1. A novel weak supervision framework. Unlike existing DL-based methods that rely heavily on pre-computed pGT surfaces from traditional pipelines, our approach introduces a weakly supervised framework that reduces this dependency. This is particularly important in scenarios where traditional methods struggle, such as **infant data**, offering an alternative for CSR that is more flexible and generalizable.
2. New loss functions and regularizations. We have introduced uni-directional chamfer loss in Eq. 3, inter-mesh normal consistency loss in Eq. 4, intensity gradient loss, and regularization techniques (mesh quality loss). These innovations directly address the challenges of utilizing pGT segmentation maps as weak supervision, which have not been explored in the existing literature.
- We chose CoCSR as our backbone because it is the SOTA method and can reconstruct multiple cortical surfaces simultaneously. It serves as a fully supervised upper bound for our weakly supervised method, providing a clear benchmark for future studies in weakly supervised CSR tasks. To the best of our knowledge **at submission**, we are the first to study this problem and address this bottleneck in CSR.
- Regarding mesh quality loss, while inspired by [3] for the customized edge length loss, we **further** enhanced it by combining it with normal consistency loss to improve the quality of the reconstructed surfaces. Both terms are useful for improving mesh quality. We will highlight the differences between ours and those in [1] and [3] in the revised paper.
- Concerning the concurrent work [2] mentioned by Reviewer Lnir, it was posted on arXiv **one month after** our submission, making it **impossible** for us to compare with them **at that time**. Importantly, our method **differs from [2] in four key aspects**:
**a)** inter-mesh normal consistency loss, which is stable in training and needs no cautious gradient computation as [2];
**b)** simultaneous multiple CSR;
**c)** more thorough experimental comparison with both fully supervised and weakly supervised methods on various adult and infant datasets;
and **d)** more validations on downstream tasks (e.g., reproducibility on the same subject, cortical thickness estimation).
3. Broad applicability across datasets. Our method has been tested on both adult and infant datasets, demonstrating its adaptability across different imaging conditions. This broad applicability underscores the potential impact of our work in various real-world scenarios, where traditional methods may fall short.
We will stress the differences between our method and existing literature to highlight its unique contributions and potential for impact.
In summary, we believe this study has demonstrated the potential of weakly supervised CSR and hope it will ignite future research in this direction, as well as in other mesh reconstruction tasks (e.g., heart, bone, as mentioned by Reviewer DDcT). | Summary: The authors proposed a novel new method to jointly reconstruct multiple cortical surfaces using weak supervision from brain MRI ribbon segmentation results, which deforms midthickness surface deformed inward and outward to form the inner (white matter) and outer (pial) cortical surfaces. The proposed method is evaluated on two large-scale adult brain MRI datasets and one infant brain MRI dataset, demonstrating comparable or superior performance in CSR in terms of accuracy and surface regularity.
Strengths: 1. Propose a new weakly supervised paradigm for reconstructing multiple cortical surfaces, reducing the dependence on pGT cortical surfaces in training, unlike existing DL methods.
2. Design two loss functions to optimize the surfaces towards the boundary of the cortical ribbon segmentation maps, along with regularization terms to enforce the regularity of surfaces.
3. Conduct extensive experiments on two large-scale adult brain MRI datasets and one infant brain MRI dataset.
Weaknesses: 1. The manuscript seems to overclaim. The ‘pseudo’ ground-truth surface mentioned in the manuscript is actually the ground-truth mesh used in other approaches, obtained by Marching Cubes/FreeSurfer. Since the Chamfer distance is used to guide network training, why do the authors claim the proposed method is weakly supervised?
2. It is not clear how the original images are overlaid with the predicted mesh. Is any registration used? Details are missing.
3. It seems the main contribution of the proposed SegCSR is the boundary loss function?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Why not use the full ADNI dataset for network training, as was done in previous research like DeepCSR and Vox2Cortex?
2. How the predicted meshes are overlaid on the original images? Details should be given.
3. What do the ‘L-Pial Surface’ and ‘L-WM Surface’ in the tables mean? The pial and WM surfaces of the left hemisphere? Why not also present the results for the right hemisphere?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: Overclaim. The pseudo ground truth (pGT) surface mentioned in the manuscript seems the GT mesh in other approaches, obtained by Marching Cubes (MC)/FreeSurfer. Why is the proposed method weakly supervised?**
**A1**: We have summarized the supervision signals used by our method and others in Table 4 of the Supplementary Materials. Mainstream supervised methods rely on conventional pipelines (e.g., FreeSurfer, iBEAT) to generate pGT cortical surfaces (**pGT surfs**) for both training and testing. In contrast, the main novelty of our work is that SegCSR is weakly supervised using pGT segmentations (**pGT segs**), without requiring pGT cortical surfaces generated by these conventional pipelines.
As discussed in the Introduction, conventional pipelines are time-consuming for extracting pGT surfaces, especially on large datasets. Moreover, they may fail to produce acceptable pGT surfaces, e.g., on infant MRI. In contrast, pGT segmentations are relatively easy to acquire, e.g., using fast DL-based methods.
SegCSR only utilizes pGT segmentations, from which surfaces are extracted using the MC algorithm. However, these surfaces provide **noisy and weak supervision**, particularly for the pial surface, which is significantly affected by the partial volume effect (PVE) in the segmentations and fails to capture deep cortical sulci (Fig. 2-c). To address this, we propose the novel uni-directional loss function and other regularization terms to predict high-quality cortical surfaces (Fig. 3). We will highlight these contributions in the revision.
**C2: How are the predicted meshes overlaid on the original images? Is any registration used?**
**A2**: The reconstructed surfaces are in the same coordinate system as the original images, resulting in an exact match. No additional registration or post-processing is required.
This is because the ribbon segmentations are generated from the original images – naturally aligned with the images. The initial surface is extracted from these segmentations and remains aligned, while the deformation of the cortical surfaces occurs within the same space as the image and the initial surface. Thus, the resulting surfaces align perfectly with the original images.
For visualization, we simply load the reconstructed surfaces and the original image into FreeSurfer and capture screenshots for the 2D projection. We will include a more detailed description of the visualization process in the revision.
**C3: It seems the main contribution of the proposed SegCSR is the boundary loss function?**
**A3**: Our contributions are
1) Weak Supervision Framework: We propose a novel weakly supervised learning approach that leverages pGT segmentations instead of relying on fully annotated surfaces. This approach reduces the dependence on labor-intensive data preparation and addresses issues with conventional methods, such as difficulty in generating accurate pGT surfaces for certain datasets like infant MRIs.
2) Loss Functions: In addition to the uni-directional boundary loss function, we introduce other novel loss functions (e.g., intensity gradient loss) to handle the specific challenges of CSR, particularly the PVE and the inability to capture deep cortical sulci. These loss functions help ensure accurate and high-quality surface prediction.
3) Regularization Terms: We incorporate additional regularization terms that contribute to the accurate prediction and smoothness of cortical surfaces, enhancing the overall quality of the reconstructed surfaces.
4) Evaluation: We conduct extensive experiments on 2 large-scale adult brain MRI datasets and 1 infant brain MRI dataset. Our new method achieves comparable or superior performance compared to existing DL-based methods.
**C4: Why not use the total ADNI datasets for training like DeepCSR and vox2cortex?**
**A4**: ADNI is a large-scale longitudinal study and data are collected in different batches. We utilize the ADNI-1 baseline/screen official release (Line 227), consisting of 817 1.5T T1-weighted brain MRIs from subjects aged 55 to 90, including normal controls (NC), mild cognitive impairment (MCI), and Alzheimer’s disease (AD). This official release dataset is representative, covering a wide age range and balanced target population, including various subject conditions (NC, MCI, AD), ensuring stable model training, and facilitating fair comparisons across methods.
Previous methods have used varying amounts of data from ADNI: DeepCSR and CF++ used 3876 MRIs, vox2cortex used 1647 MRIs, cortexODE used 524 MRIs. The data selection criteria for these methods are not always clear, but it appears they may have combined multiple scans of the same subjects from ADNI or other resources. We chose to follow CoCSR to use the official release of ADNI-1 (817 MRIs) for fair comparison.
Additionally, we utilized the OASIS-1 dataset, consisting of 413 T1w scans from subjects aged 18 to 96 years (including NC and AD subjects); the BCP dataset, consisting of 121 subjects aged 2 weeks to 12 months; and the Test-retest dataset, consisting of 120 T1w scans from three subjects aged 26 to 31.
In summary, our method has been evaluated on MRI scans of diverse subjects. We will expand our evaluation to include more ADNI batches in future work.
**C5: ‘L-Pial Surface’ & ‘L-WM Surface’ mean the pial & WM surfaces of the left hemisphere? Why not also present the results for the right hemisphere?**
**A5**: Yes, 'L-Pial Surface' refers to the pial surface of the left hemisphere, and 'L-WM Surface' refers to the WM surface of the left hemisphere.
The results for the right hemisphere are reported in the Supplementary Materials. We observed that the surfaces of both hemispheres are relatively symmetric, and the results are similar. Thus, we presented only the left hemisphere results in the main paper to conserve space. We can briefly mention the right hemisphere results in the paper or provide additional details if necessary.
---
Rebuttal 2:
Title: Did we address your concerns?
Comment: Dear Reviewer zPL4,
Thank you for taking the time to review our manuscript. We hope that our detailed responses have adequately addressed your concerns and clarified the merits of our work.
If you find that we have resolved the issues raised, we kindly request that you reconsider our paper and your final rating. Your reassessment would be greatly appreciated and would help reflect the improvements made based on your valuable feedback.
Should you have any further questions or require additional clarifications, please do not hesitate to comment. With the few hours remaining for the author-reviewer discussion, we will do our best to provide any information needed.
Thank you once again for your consideration.
Authors of Submission-6996 | Summary: The paper presents a deep learning approach to jointly reconstruct multiple cortical surfaces using weak supervision from brain ribbon segmentations derived from brain MRIs. The method leverages the midthickness surface and deforms it inward and outward to fit the inner and outer cortical surfaces by jointly learning diffeomorphic flows. Regularization terms are included to promote uniformity, smoothness, and topology preservation across the surfaces. Experiments are conducted on large-scale adult and infant brain MRI datasets.
Strengths: - The approach is novel in its use of weak supervision from readily available segmentation datasets, which reduces the burden of preparing pseudo-ground truth surfaces.
- The paper is well-written and structured, with a clear motivation for the method.
- The methodology is explained in detail, and the experiments are comprehensive.
- The approach has the potential to democratize the use of deep learning in cortical surface reconstruction by leveraging existing segmentation datasets.
Weaknesses: - The paper's central contribution of weak supervision is undermined by the fact that the model is trained on pseudo ground truth surfaces for white matter and pial surfaces.
- The experimentation is limited to brain cortical surfaces and MRI images. Broader experiments involving different anatomies (e.g., bone cortical surfaces, heart walls) and imaging modalities would enhance the paper's impact.
- Results lack statistical significance analysis to validate sub-millimeter reconstruction errors.
- There is no evidence showing that improvements in mesh reconstructions correlate with enhanced performance in downstream analysis tasks.
- The robustness of the method regarding input noise/perturbation and images from multiple centers is not evaluated.
- There is no analysis of the computational complexity, including the resources and time savings provided by the proposed weak supervision.
- There is no sensitivity analysis on the choice of weights used to weigh the different components of the overall loss.
- The impact of ribbon segmentations quality (e.g., voxel spacing) as weak supervision is not investigated.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can you provide evidence or analysis showing that improvements in mesh reconstructions lead to enhanced performance in downstream analysis tasks?
2. How does the method perform with input noise or perturbations? What is the expected performance under domain shifts?
3. What are the computational resources and time requirements saved by using weak supervision compared to traditional methods?
4. How does the quality of ribbon segmentations (e.g., voxel spacing) impact the reconstruction accuracy?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: The paper's central contribution of weak supervision is undermined by the fact that the model is trained on pseudo ground truth (pGT) surfaces for WM and pial surfaces.**
**A1**: Previous DL methods typically rely on pGT surfaces from conventional pipelines as optimization targets, which we refer to as supervised methods. In contrast, our method utilizes only cortical ribbon segmentations for supervision, making it a weakly-supervised approach in terms of the accuracy and quality of the supervision signal. Although we generate pGT from these segmentations, the supervision remains coarser and less accurate compared to traditional supervised methods. This approach significantly reduces the computational cost for preparing pGT surfaces, making it more efficient.
**C2: Experimentation is limited to CSR from MRIs. Broader experiments in more anatomies (e.g., bone, heart) and imaging modalities would enhance the paper's impact.**
**A2**: We appreciate the reviewer's suggestion. Experimenting with different anatomies and imaging modalities would be valuable.
Our current focus on cortical surfaces, which have complex structures, has driven us to develop advanced network architectures, loss functions, and regularizations. We believe our method could potentially generalize to simpler structures (e.g., bone, heart). However, different modalities and anatomical challenges (e.g., the non-watertight topology of the heart) may necessitate specialized model designs. We will discuss this in the revision and plan to explore these areas in future studies.
**C3: Results lack statistical significance analysis to validate sub-millimeter reconstruction errors.**
**A3**: We have conducted an independent t-test to assess the statistical significance of our results compared to other baseline models. E.g., on ADNI dataset, our method shows statistically significant improvements over DeepCSR, 3D U-Net, CF++, and vox2cortex, demonstrating the effectiveness of our method. We will report this in the revision.
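For reference, the kind of significance test described above can be sketched as follows. This is an illustrative implementation of Welch's two-sample t-test (a common variant of the independent t-test that does not assume equal variances); the per-subject ASSD arrays are made-up placeholders, not the paper's data.

```python
import math

def welch_t_test(a, b):
    """Welch's two-sample t-test; returns the t statistic and the
    Welch-Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb                        # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Illustrative per-subject ASSD values (mm); NOT the paper's actual numbers.
ours = [0.32, 0.35, 0.31, 0.33, 0.36, 0.30]
baseline = [0.52, 0.55, 0.49, 0.58, 0.51, 0.54]
t, df = welch_t_test(ours, baseline)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t statistic is compared against the t-distribution with `df` degrees of freedom to obtain a p-value (e.g., via `scipy.stats`).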
**C4: Provide evidence or analysis showing that improvements in CSR lead to enhanced performance in downstream analysis tasks?**
**A4**: In Sect. 4.4, we conducted a reproducibility analysis, which is vital for cortical morphology studies as it assesses the consistency of measurements over time. Our SegCSR showed superior performance compared to DeepCSR and comparable results to the SOTA methods like cortexODE and CoCSR. This indicates that SegCSR can be reliably used for studying cortical thickness changes in patients.
Furthermore, we performed an experiment on cortical thickness estimation across a group of 200 subjects, comparing the results obtained from SegCSR with those from FreeSurfer. The high correlation (R = 0.94) between the two methods demonstrates our framework's capability to accurately capture cortical thickness, making it an alternative to both traditional and deep learning-based methods. We will include these results in the revision.
**C5: The robustness of the method regarding input noise/perturbation and images from multiple centers? Expected performance under domain shifts?**
**A5**: We appreciate the reviewer's insightful comment.
1) Our method has been evaluated on two adult datasets and one infant dataset. Notably, the infant data has a smaller region of interest and a lower quality compared to adult MRIs, yet our method performs reasonably well, demonstrating adaptability to different image qualities.
2) The ADNI-1 dataset used in our study includes images collected from different scanners over time, resulting in a range of image qualities. This diversity helps assess the robustness of our method across various imaging conditions.
3) We conducted an experiment on the OASIS dataset by adding Gaussian noise ($\sigma^2=5$) to the images. The results showed that the CSR performance did not degrade significantly (Pial-surf, ASSD: 0.321 vs. 0.329), indicating robustness to input noise.
4) Due to limited rebuttal time, we could not complete a comprehensive evaluation using more diverse images from the ADNI, OASIS, and HCP datasets. We plan to conduct further experiments to assess the method's robustness across a broader range of imaging modalities and conditions.
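A minimal sketch of the noise perturbation described in point 3 above, assuming intensities are perturbed voxel-wise with zero-mean Gaussian noise of variance $\sigma^2=5$; the intensity range and clipping behavior are our assumptions, not stated in the rebuttal.

```python
import math
import random

def add_gaussian_noise(intensities, var=5.0, lo=0.0, hi=255.0, seed=0):
    """Add zero-mean Gaussian noise with the given variance to each voxel
    intensity, clipping to [lo, hi] (clipping range is an assumption)."""
    rng = random.Random(seed)
    sigma = math.sqrt(var)
    return [min(hi, max(lo, v + rng.gauss(0.0, sigma))) for v in intensities]

# A flat toy "image": the mean of the noisy values stays close to the original.
noisy = add_gaussian_noise([100.0] * 1000)
print(sum(noisy) / len(noisy))
```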
**C6: No analysis of the computational complexity (resources & time) of the proposed SegCSR.**
**A6**: Please refer to my response "A4 to Reviewer Guru" for a detailed comparison of time efficiency. The GPU memory is as follows:
|GPU (GB)|DeepCSR|3D U-Net|CF++|cortexODE|vox2cortex|CoCSR|Ours|
|- |-|-|-|-|-|-|-|
|Training|3.2|9.2|11.7|5.8|9.8|8.7|8.7|
|Inference|1.5|3.0|3.1|2.0|3.8|2.9|2.9|
**C7: There is no sensitivity analysis on the choice of weights used to weigh the different components of the overall loss.**
**A7**: Please refer to my response "A5 to Reviewer Guru" for weights of loss functions.
**C8: The impact of ribbon segmentations quality (e.g., voxel spacing) as weak supervision is not investigated.**
**A8**: We have conducted an experiment on a subset of the OASIS dataset (100 samples for training and 30 for testing) by resampling the data to a 2mm resolution. The results of the ASSD on both surfaces are summarized below. We will include a comprehensive experiment on the entire dataset in the revision.
| L-Pial | DeepCSR | 3D U-Net | Ours |
| - | - | - | - |
| 1mm | 0.685 | 0.363 | 0.357 |
| 2mm | 1.795 | 1.030 | 0.496 |
| L-WM | DeepCSR | 3D U-Net | Ours |
| - | - | - | - |
| 1mm | 0.646 | 0.256 | 0.249 |
| 2mm | 1.414 | 0.597 | 0.471 |
---
Rebuttal Comment 1.1:
Comment: Thanks for the clarifications and additional results. Authors have addressed my questions. I will raise my score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for taking the time to thoughtfully review our paper and consider our clarifications and additional results. We appreciate your willingness to reassess our work and raise the score. | Summary: The paper presents a novel deep learning method for the reconstruction of cortical surfaces from 3D MRI. The proposed method follows an approach learning explicit surface deformations, in which a CNN is used to predict three velocity fields, corresponding to the pial, white matter and midthickness surfaces. Unlike previous techniques which use cortical surface pseudo ground truth (e.g., generated using FreeSurfer), the proposed method trains the network with faster-to-obtain segmentation pseudo ground truth. In addition to the standard surface prediction losses (based on Chamfer distance), the method uses 1) an Inter-Mesh Normal Consistency loss that encourages the pial and WM surface to be locally parallel, 2) an Intensity Gradient loss that place the surfaces at regions of high intensity gradients, 3) a Cycle Consistency loss enforcing inverse consistency between the midthickness-to-pial deformation and the midthickness-to-WM one, and 4) a Mesh Quality loss that helps having regular surface meshes (uniform sized triangles and smoothly varying normals). The method is evaluated on the ADNI, OASIS and BCP datasets, where its performance is compared to that of implicit and explicit approaches. Results show that the method obtains a better reconstruction accuracy compared to other techniques trained in a weakly supervised setting (pGT segmentation mask), but a lower performance than those trained with pGT cortical surfaces.
Strengths: * The proposed method differs from previous approaches that learn explicit surface deformations by predicting a midthickness surface and incorporating additional loss terms that compensate for the weak supervision of pGT segmentation.
* Experiments, involving three different datasets and comparing against several recent baselines, as well as including various ablation variants, are well designed. Results indicate superior performance in the weakly supervised setting.
Weaknesses: * The main motivation of the proposed method is doubtful. Authors motivate the need for their weakly-supervised cortical reconstruction method by the "prolonged processing time for generating pGT surfaces". However, as the pGT cortical surfaces can be generated automatically in an offline step, I believe the argument is weak. Moreover, recent pipelines for brain image processing, such as FastSurfer, can extract surfaces with comparable accuracy in a fraction of the time.
* The accuracy of the proposed method is considerably lower than approaches which train on cortical surfaces. Furthermore, while it produces fewer topological artifacts like self-intersecting faces, those can be removed via post-processing in implicit methods like DeepCSR. Combined with my previous comment, the advantages of the method are unclear.
* The ablation study in Table 2 indicates that most of the proposed loss terms have limited impact on the overall performance. For example, adding the Mesh quality loss seems to actually degrade performance in terms of CD, ASSD and HD.
Technical Quality: 2
Clarity: 3
Questions for Authors: * How does your method compare to other approaches in terms of training and inference time ?
* The proposed method has several hyper-parameters (lambda1-5) that need to be tuned. How were the values selected for these hyper-parameters, and how sensitive is the method to the chosen values?
* In Figure 2, why is the pial surface represented with two different colors (orange and purple)?
* In Eq (4), how do you compute the pial and WM surface normals if the point is on the midthickness surface?
* p6: "where npG and npW are the normal vectors of the deformed vertex p on SM and SG respectively": Do you mean on S_G and S_W ?
* p6: "segmentaions"
* Section 4.2: Do you mean Table 1 ?
* p8: "nromal"
* p9: "Also, We can"
See weaknesses for main comments to answer.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitations are reasonably identified in the Conclusions section of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **C1: Main motivation (prolonged time for generating pGT surfaces) is doubtful because the pGT surfaces can be generated automatically offline. Recent pipelines, e.g. FastSurfer, can extract surfaces in a fraction of the time**
**A1**: The lengthy time to generate pGT surfaces is not our only motivation. Our key motivations are:
1) Conventional pipelines involve multiple processing steps, leading to lengthy processing time.
2) Each pipeline requires meticulously tuned parameters, posing challenges for generalization across diverse data domains, age groups, or acquisition protocols.
3) DL-based methods have improved efficiency but rely on pGT surfaces from conventional pipelines, increasing computational cost of training data preparation.
We will highlight all these aspects in revision.
Although pGT surfaces can be computed offline, users often need to try different pipelines for different types of data. E.g., FreeSurfer (FS), designed for adult MRIs, does not perform well on infant BCP data. In contrast, segmentations offer a cohort-independent way for CSR and are a well-studied field with established tools. E.g., SynthSeg [1] can robustly segment diverse brain images.
FastSurfer’s recon-surf pipeline is largely based on FS and incorporates a spectral spherical embedding for CSR. Although faster, it still takes 1~1.5h of runtime and has specific image quality requirements (e.g., an equivalent voxel size no worse than 1mm).
Our DL-based method aims to reduce reliance on pGT surfaces from conventional pipelines and is as fast as other DL alternatives.
We will emphasize these points more clearly in revision.
[1] Robust machine learning segmentation for large-scale analysis of heterogeneous clinical brain MRI datasets
**C2-1: The accuracy is lower than that of methods trained on pGT cortical surfaces.**
**C2-2: While the SIF is lower, the artifacts can be removed via post-processing as in DeepCSR.**
**C2-3: The advantages of the method are unclear.**
**A2-1**: Our weakly supervised CSR method is trained with segmentation supervision and evaluated by comparing to surfaces from conventional pipelines. In contrast, supervised methods are trained and tested on surfaces from conventional pipelines. It is expected that our method may not outperform all supervised methods in terms of accuracy. However, our method outperforms DeepCSR & 3D U-Net (weakly sup.) as well as CF++ and vox2cortex (sup.), performs comparably to cortexODE (sup.), and is only inferior to CoCSR (sup.). These baselines are diverse, representative, and strong, making our comparisons reliable and comprehensive.
**A2-2**: 1) The post-processing in DeepCSR is on the level set, but our method directly deforms the mesh. 2) Mesh-based post-processing is possible. Our method is flexible enough to incorporate such steps if desired. 3) In terms of SIF, our method is either superior or comparable to all DL baselines.
**A2-3**: The advantages are: 1) Our weakly supervised paradigm for CSR doesn't depend on pGT cortical surfaces for training, unlike existing DL methods. Segmentation is much easier to obtain. 2) New loss functions to optimize and regularize surfaces, facilitating easy training and inference. 3) Our method outperforms weakly supervised methods and some supervised methods, significantly narrowing the performance gap w.r.t. supervised methods.
**C3: Tab 2, ablation study, loss terms have limited impact on the overall performance. E.g., adding the mesh quality loss degrades performance (CD, ASSD, HD).**
**A3**: The loss terms are designed to complement each other, balancing accuracy and mesh quality (Lines 284-302). While $\mathcal{L}_{qua}$ leads to slightly degraded CD, ASSD, and HD, it helps reduce the SIF, improving the quality of the reconstructed surfaces. We will provide more visual results to show their impact.
**C4: Compare your method with other methods on training and inference time.**
**A4**: The runtime for 1 iteration across different methods is as follows:
|Time (s)|DeepCSR|3D U-Net|CF++|cortexODE|vox2cortex|CoCSR|Ours|
|- |-|-|-|-|-|-|-|
|Training|0.98|1.97|4.63|1.8|2.2|2.1|2.5|
|Inference|125|0.96|1.84|0.49|0.37|0.22|0.24|
**C5: How were the values selected for the hyper-parameters of loss terms? How sensitive are they?**
**A5**:
**Selection** First, we identified reasonable ranges for the hyper-parameters based on prior work and preliminary experiments. For example, we found that $\lambda_1$ and $\lambda_4$ should be of the same order and generally larger than other regularization terms. Second, we used cross-validation on a subset of our dataset, incrementally adjusting the values to find an optimal set.
**Sensitivity** Our preliminary results indicate that the method is relatively robust within certain ranges. Suppose $\lambda_1=1$, performance is stable if $\lambda_2$ is within [$10^{-4}$, $10^{-1}$], $\lambda_3$ within [$10^{-2}$, $10^{-1}$], $\lambda_4$ within [$10^{-2}$, 1], and $\lambda_5$ within [$10^{-2}$, $10^{-1}$]. Outside these ranges, particularly for $\lambda_2$ and $\lambda_4$, we observed a more noticeable impact on overall performance. We will include more detailed analyses and discussions in the revised paper.
**C6: Fig 2, why pial surface in two colors?**
**A6**: The purple surface corresponds to the pGT pial surface from conventional methods, while the orange surface represents the pial surface generated from the GM segmentation map.
**C7: Eq. 4, how to compute the pial and WM surface normals?**
**A7**: There is a one-to-one correspondence among vertices of deformed surfaces. The normals, $n_{p_{G}}$ and $n_{p_{W}}$, are computed for the corresponding deformed vertex $p$ on the pial surface $\mathcal{S}_G$ and the WM surface $\mathcal{S}_W$, respectively (Line 191).
**C8: 1) Line 191, do you mean S_G and S_W? 2) Sect. 4.2, do you mean Table 1? 3) Other typos.**
**A8**: 1) Yes. 2) Yes. 3) Thanks. We will fix all of them.
---
Rebuttal Comment 1.1:
Title: Thanks for answers
Comment: I thank the authors for carefully answering my comments. After reading the rebuttal, I think the paper brings interesting contributions in terms of methodology; however, I am still not fully convinced about the method's usefulness in a real-life application. From my understanding, the main advantage of the method is that it avoids the need to wait for existing pipelines like FreeSurfer or FastSurfer to generate the pGT surfaces. However, since this is done in a pre-processing step (and volumes can be processed in parallel on a server), it seems like a small price to pay for a better accuracy during inference. Authors mention that these pipelines are sensitive to hyper-parameters, hence their method could be more robust in some cases, but do not really demonstrate this in their paper. Based on this, I give a final score of borderline accept.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer Guru
Comment: Thank you for your thoughtful feedback and for recognizing the contributions of our methodology. We appreciate your concerns about the real-life applicability of our method compared to established pipelines.
While existing pipelines like FreeSurfer and FastSurfer can generate pGT surfaces in a pre-processing step, our method offers more than just time savings. It aims to reduce dependency on pre-computed pGT surfaces, which is beneficial in cases where varying data types or acquisition protocols might lead traditional pipelines to fail or produce suboptimal results.
In our preliminary experiments, we observed that FreeSurfer, which is optimized for adult MRIs, does not perform well on infant brain images in BCP data. Our method, leveraging robust segmentation results, which can be obtained by well-established tools (e.g., pre-trained segmentation models or SynthSeg), can handle such diverse datasets effectively and reconstruct cortical surfaces with high accuracy and desired topology. This application setting demonstrates our method’s flexibility and potential for handling data with varying qualities and types.
We acknowledge that additional experiments, particularly cross-dataset validations, are needed to fully substantiate these claims, which is a current limitation of our paper. However, the promising results on both adult and infant datasets provide us with confidence in the practical advantages of our method. We are committed to exploring these aspects further and will provide more comprehensive evidence in future work.
Thank you again for your valuable feedback and for considering our paper. | Rebuttal 1:
Rebuttal: **We thank all reviewers for their efforts in reviewing our paper and providing comments.**
**1. Motivation (Reviewer Guru)**
- Conventional pipelines involve multiple processing steps, leading to lengthy processing time.
- Each pipeline requires meticulously tuned parameters, posing challenges for generalization across diverse data domains, age groups, or acquisition protocols.
- DL-based methods have improved efficiency but rely on pGT surfaces from conventional pipelines, increasing computational cost of training data preparation.
We will highlight all these aspects in revision.
**2. Contribution (Reviewers Guru, DDcT, zPL4, and LNir)**
- Weak Supervision Framework: We propose a novel weakly supervised learning approach that leverages pGT segmentations instead of relying on fully annotated surfaces. This approach reduces the dependence on labor-intensive data preparation and addresses issues with conventional methods, such as difficulty in generating accurate pGT surfaces for certain datasets like infant MRIs.
- Loss Functions: In addition to the uni-directional boundary loss function, we introduce other novel loss functions (e.g., intensity gradient loss) to handle the specific challenges of CSR, particularly the PVE and the inability to capture deep cortical sulci. These loss functions help ensure accurate and high-quality surface prediction.
- Regularization Terms: We incorporate additional regularization terms that contribute to the accurate prediction and smoothness of cortical surfaces, enhancing the overall quality of the reconstructed surfaces.
- Evaluation: We conduct extensive experiments on 2 large-scale adult brain MRI datasets and 1 infant brain MRI dataset. Our new method achieves comparable or superior performance compared to existing DL-based methods.
**3. Computation and time efficiency (Reviewers Guru, DDcT, and LNir)**
Please refer to tables in my responses "A4 to Reviewer Guru" and "A6 to Reviewer DDcT".
**4. Weights of loss functions (Reviewers Guru and DDcT)**
Please refer to my response "A5 to Reviewer Guru".
**5. Impact of segmentation results (Reviewers DDcT and LNir)**
Please refer to my responses "A5 & A8 to Reviewer DDcT".
---
---
Below are my additional responses to Reviewer LNir in case my comments cannot be displayed.
---
---
**C11: The ODE solver has T=5 steps. Such a large step size could cause unstable solutions and SIF. Report the Lipschitz constant to examine the numerical stability of ODE solver.**
**A11**: Thanks. CF++ uses the rule of thumb $hL \leq 1$ and checks it for all considered examples; cortexODE ensures $\eta(h,L)<1$, where $h=\frac{1}{T}$ and $L$ is the Lipschitz constant. In practice, their results using T=10 are satisfying. Compared to the longer deformation trajectories of their initial surfaces, our method starts from the midthickness surface, which shortens the deformation and reduces the need for a large T. We also conducted a preliminary experiment using T=10; the performance is saturated compared to using T=5. Thus, we empirically choose T=5 to strike a balance between efficiency and efficacy.
We will include these results and discuss the implications of the Lipschitz constant for the ODE solver's stability.
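The rule of thumb cited above ($hL \leq 1$ with $h = 1/T$) amounts to a one-line check. The helper below is an illustrative sketch; the Lipschitz constants used are made-up values, not measurements from the paper's networks.

```python
def stable_step(T, L):
    """Heuristic stability check for a fixed-step ODE integrator:
    with step size h = 1/T and velocity-field Lipschitz constant L,
    require h * L <= 1 (the rule of thumb cited from CF++)."""
    h = 1.0 / T
    return h * L <= 1.0

# Illustrative values only: a smoother field (smaller L) tolerates fewer steps.
for T in (5, 10):
    for L in (2.0, 8.0):
        print(f"T={T}, L={L}: stable={stable_step(T, L)}")
```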
**C12: Report SegCSR’s runtime 0.37s/hemisphere. Topology correction time should be included. A breakdown of runtime should be reported and compared to SOTA methods.**
**A12**: Summary of the breakdown of runtime in inference.
|Time (s)|DeepCSR|3D U-Net|CF++| cortexODE| CoCSR| Ours |
| - | - | - | - | - | - | - |
| Pre | N/A | N/A | N/A | 2.93 | 2.93 | 2.93 |
| Main framework | 125 | 0.81 | 1.84 | 0.49 | 0.22 | 0.24 |
| Post | N/A | 0.14 | N/A | N/A | N/A | N/A |
For our SegCSR, the reported 0.37s includes MC and network forward propagation. The topology correction takes 2s and segmentation map generation takes 0.8s.
**C13: More details on the computational efficiency and runtime comparisons with existing CSR pipelines?**
**A13**: Please refer to my responses “A4 to Reviewer Guru” and “A6 to Reviewer DDcT”.
**C14: How does the proposed boundary surface loss function improve upon existing bi-directional Chamfer loss?**
**A14**: The major difference is on the pial surface reconstruction (Eq. 3 & Fig. 2-c). If using the traditional bi-directional Chamfer loss, the model will overfit to the noisy pGT segmentation boundary (Fig. 2-c1, orange) and fail to deform into deep sulci regions. In contrast, using our uni-directional Chamfer loss, the model will drag the pial surface towards the deep sulci. With the help of other loss terms and regularization, the model will find a balance and address the PVE of pGT segmentations (Fig. 3-d vs c). We will rectify the typo in Eq. 3 and explain more.
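To make the bi- vs uni-directional distinction concrete, here is a toy point-set version of the two losses. This is a sketch, not the paper's implementation: which direction is retained in the uni-directional variant is the paper's design choice, and the prediction-to-target direction is kept here purely for illustration.

```python
def _nearest_sq(p, pts):
    """Squared Euclidean distance from point p to its nearest neighbor in pts."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in pts)

def chamfer_bidirectional(pred, target):
    """Standard bi-directional Chamfer: both point sets pull on each other,
    so noisy target boundary points also drag the predicted surface back."""
    a = sum(_nearest_sq(p, target) for p in pred) / len(pred)
    b = sum(_nearest_sq(q, pred) for q in target) / len(target)
    return a + b

def chamfer_unidirectional(pred, target):
    """Uni-directional Chamfer: only the pred-to-target term is kept, so the
    mesh is not pinned to every noisy target point and can deform past them."""
    return sum(_nearest_sq(p, target) for p in pred) / len(pred)

pred = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # a "perfect" prediction
target = pred + [(5.0, 0.0, 0.0)]           # target with one noisy outlier
print(chamfer_unidirectional(pred, target), chamfer_bidirectional(pred, target))
```

With the noisy outlier in `target`, the bi-directional loss is non-zero even for a perfect prediction, mirroring the overfitting argument above; the uni-directional loss is zero.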
**C15: Limitations
(1) SegCSR depends on pGT segs. More discussion on low-quality segmentations.
(2) The inter-mesh consistency might affect the anatomical fidelity of pial surfaces. More exploration on the trade-off.
(3) The method could be tested on more diverse cohorts to show its efficacy across various imaging qualities and subject demographics.**
**A15**: We will add more discussion on limitations in Sect. 5.
(1) Please refer to our responses “A5 & A8 to Reviewer DDcT”.
(2) Please refer to our response “A6” above. And we will add results to Supplementary Materials.
(3) Please refer to our response “A4 to Reviewer zPL4”.
**C16: Ethics review needed since involving human subjects**
**A16**: We reviewed and conform to the NeurIPS Code of Ethics. We would like to highlight:
1) We use well-established datasets from other sources with their consent. We did not perform experiments on human subjects directly.
2) These datasets should have undergone IRB approval at the corresponding institutions (Mayo Clinic and UNC); we are neither responsible for nor in a position to conduct additional reviews.
3) In Sect. 5, we have clarified the Societal Impact and stressed that the deployment of the model in clinical settings should be approached with caution and under human supervision. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM | Accept (poster) | Summary: The paper presents a novel approach to handling the inherent ambiguities in the SAM used for image segmentation. SAM, despite its robustness, often exhibits sensitivity to slight variations in prompts and object granularity, leading to inconsistent predictions. The authors propose a new framework leveraging a conditional variational autoencoder to model these ambiguities probabilistically. This approach enables SAM to produce diverse and reasonable segmentation outputs by adapting to the inherent ambiguities in the data. The paper details extensive experiments demonstrating the effectiveness of this framework across various practical scenarios involving ambiguous segmentations.
Strengths: 1. This work addresses a critical challenge in image segmentation, especially in medical imaging and other fields where ambiguous data is common. By turning SAM's sensitivity into an advantage, the paper contributes to the advancement of robust and adaptable segmentation models.
2. Provides a thorough analysis of SAM's sensitivity to prompt variations and object granularity, backed by detailed experiments and statistical evaluations.
3. The paper is well-structured, with clear definitions and explanations of the proposed methods. The use of figures and tables enhances the understanding of the framework and its performance.
Weaknesses: 1. The paper primarily tests the framework on specific medical imaging and synthetic datasets. There is a lack of diverse real-world datasets, such as those from different domains (e.g., natural scenes, industrial applications), which might exhibit different types and degrees of ambiguity.
2. I have a concern that the framework might be overfitted to the specific characteristics of the tested datasets. This concern is evidenced by Table 6, where the "No Prompt Ambiguity" configuration demonstrated metrics comparable to those of A-SAM. Would it be possible that the test datasets might be biased, exhibiting little ambiguity in prompts?
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Equation 13 mentions learning weights to assemble multiple masks into a final output. Where are these weights predicted from? Does the method obtain multiple results through random sampling or a weighted averaging process? If it's the latter, how does it learn multiple sets of weights? If it's random, how does it correspond to the ground truth?
2. What is the average inference speed for the entire dataset? What percentage of the images contain reasonable masks?
3. Can you elaborate more on why those specific datasets were being chosen?
4. Please refer to the weakness section, can you be more specific on what datasets were used in Ablation and Robustness studies?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have addressed their limitations and discussed the broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing our paper as addressing a critical challenge and contributing to the advancement of robust and adaptable segmentation models. We provide pointwise responses to your concerns below.
## Q1. Applicability to real-world non-synthetic and non-medical datasets
As shown in Fig. 3 & Tab. 1 in initial submission, we evaluated our method on the Sim10k dataset, featuring synthetic non-medical street scenes with ambiguous semantics. This challenging context demonstrates our approach's broader applicability beyond medical imaging.
Following your suggestion, we further conducted experiments on the KINS dataset for instance segmentation in real-world street scenes. More details and results are shown in Global Table 2 and Global Response 2. Our method performs competitively on KINS, still achieving superior performance. We will include the results in the revision. Thank you for your insightful suggestions!
## Q2. In Table 6, the performance of "No Prompt Ambiguity" is comparable to the full model?
We respectfully point out that this is a factual error based on the empirical evidence presented in our paper. In Table 6, the results for the "No Prompt Ambiguity" variant are significantly lower across all metrics compared to our full model $\mathcal{A}$-SAM (Ours), clearly demonstrating the necessity and effectiveness of all components in our proposed design (e.g., GED↓: 0.308 vs. 0.228; HM-IOU↑: 0.674 vs. 0.717). These substantial differences, consistently observed across various performance indicators, provide strong support for the critical role of each element in our model, including the prompt ambiguity mechanism.
## Q3. Where are the ensembling weights in Eq. 13 obtained?
The original SAM outputs multiple predictions for each prompt to address ambiguity. We manually design an ensemble weight for each prediction and make this weight vector learnable. This approach allows our model to dynamically adjust the importance of each prediction, optimizing the combination of multiple outputs to produce more accurate final segmentations for ambiguous cases.
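As an illustration of this weighted ensembling (a minimal NumPy sketch with hypothetical helper names; in our actual model the weight vector is learned end-to-end during fine-tuning):

```python
import numpy as np

def ensemble_masks(masks, weights):
    """Weighted sum of n candidate masks (n, H, W) with one weight per candidate."""
    masks = np.asarray(masks, dtype=float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * masks).sum(axis=0)

# SAM-style: n candidate mask logits at different granularities for one prompt.
n, H, W = 3, 4, 4
masks = np.stack([np.full((H, W), i + 1.0) for i in range(n)])
w0 = np.full(n, 1.0 / n)         # weights initialized to 1/n, then fine-tuned
out = ensemble_masks(masks, w0)  # with uniform weights this is just the mean
```

Fine-tuning then shifts the weights away from uniform so the model can emphasize the granularity most appropriate for each ambiguous case.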
## Q4. Inference speed
Thank you for your valuable suggestion! We've added the inference speed for different methods, as shown below. We find that our method achieves better performance while using less or comparable inference time, demonstrating the superiority and practicality of our approach. This balance of high accuracy and computational efficiency highlights the real-world applicability of our $\mathcal{A}$-SAM model.
| Method | GED ↓ | HM-IOU ↑ | D_max ↑ | Avg. Inference Time (s) |
|:--|--:|--:|--:|--:|
| Prob UNet | 0.324 | 0.423 | - | 0.0277 |
| PixelSeg | 0.243 | 0.614 | 0.814 | 3.6093 |
| Mose | 0.234 | 0.623 | 0.702 | 0.0955 |
| $\mathcal{A}$-SAM (Ours) | **0.230** | **0.763** | **0.959** | 0.0847 |
## Q5. What percentage of the images contain reasonable masks?
These diverse ambiguous masks are obtained from expert multi-annotations provided with the established datasets. Currently, there are no existing standards or methods to evaluate their reasonableness. This lack of standardized evaluation metrics for ambiguous annotations is a recognized challenge in the field. Exploring methods to assess the validity and quality of ambiguous masks could indeed be valuable future work. We appreciate your insightful suggestion, as it highlights an important area for further investigation in ambiguous image segmentation.
## Q6. Why select these datasets?
Following the common setting established in existing works [1-4], we selected datasets specifically designed for evaluating ambiguous segmentation tasks. This approach ensures our work aligns with the standard practices in the field and allows for meaningful comparisons with existing methods. These datasets are widely recognized in the research community for their ability to challenge models with ambiguous segmentation scenarios.
[1] A probabilistic u-net for segmentation of ambiguous images.
[2] A hierarchical probabilistic u-net for modeling multi-scale ambiguities.
[3] Phiseg: Capturing uncertainty in medical image segmentation.
[4] Stochastic segmentation networks: Modelling spatially correlated aleatoric uncertainty.
## Q7. What datasets used in Ablation and Robustness studies?
Ablation and Robustness studies are conducted on the LIDC dataset. This dataset was chosen for its comprehensive collection of lung CT scans, each annotated by multiple expert radiologists, providing a rich source of ambiguous segmentations. These studies evaluate the contribution of each model component and assess performance stability under various conditions.
---
Rebuttal 2:
Comment: Dear Reviewer V2bw,
We would greatly appreciate it if you could review our response by Aug 13 AoE. After that date, it might be challenging for us to engage in further discussions. If you have any follow-up questions, please don't hesitate to reach out. We deeply value your expertise and time.
Best,
---
Rebuttal Comment 2.1:
Comment: Thanks the authors for the responses. They've addressed my concerns. Thus I'll raise my rating.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer V2bw,
We sincerely appreciate your prompt response, valuable suggestions, and raising the rating for our work. We look forward to including the suggested changes and hope the paper can inspire a broader audience thanks to your constructive feedback!
Yours,
Authors | Summary: This paper proposes a SAM-based framework to address the ambiguous image segmentation problem. The authors present an optimization framework based on a conditional variational autoencoder, which simultaneously models the prompt and the granularity of the object using a latent probability distribution. This approach allows the model to adaptively perceive and represent the real ambiguous label distribution, enabling SAM to controllably produce a series of diverse, convincing, and reasonable segmentation outputs. Experiments on multiple datasets and metrics demonstrate the effectiveness of the method.
Strengths: 1. To the best of my knowledge and as indicated by the authors, this paper is the first work that leverages the inherent properties in vision foundation models (SAM) for ambiguous image segmentation.
2. The experimental results demonstrate impressive advantages. Compared to the original SAM, the proposed method shows significantly better performance in the presence of prompt shifts. This high level of robustness is extremely valuable in practical applications.
Weaknesses: 1. The task setup of ambiguous image segmentation in this paper is somewhat confusing for me. I have read some referenced and comparative works cited in the paper, such as [a], and found that their task objective is providing multiple segmentation hypotheses for ambiguous images. However, this paper seems to focus more on increasing the accuracy and stability of the model's output when the input prompt has noise or shifts. More explanation about the task setup is needed. Accordingly, it is recommended to include a section in the main text that introduces the task setup, which can help readers who are not experts in this research area understand the paper better.
2. The comparison with conventional ambiguous segmentation models seems unfair because most of the compared methods do not use a network structure as large as SAM. Therefore, it is unclear whether the performance advantage comes from the increased number of network parameters in SAM or from the innovative designs proposed in this paper. I noticed that some of the compared methods, such as [b], can be applied with any encoder-decoder-based segmentation models. Thus, the results of these methods using SAM as the segmentation model should also be reported and compared. This would help evaluate whether the effectiveness of the proposed model is solely due to SAM's larger number of parameters.
3. The writing structure of the paper is somewhat unclear, making it a little difficult to read. For example, the inference method is illustrated in Section 3.1, but the training method is introduced in Section 3.4. It is recommended to create a section titled “Training and Inference,” which contains two subsections that respectively introduce the training and inference methods.
Minor Problem:
1. In Line 169, `Previous research indicates that...' should have corresponding citations added.
[a] A Probabilistic U-Net for Segmentation of Ambiguous Images
[b] MODELING MULTIMODAL ALEATORIC UNCERTAINTY IN SEGMENTATION WITH MIXTURE OF STOCHASTIC EXPERTS
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Will the proposed method perform better than the original SAM if there is no point/box/mask shift?
2. Why is the proposed method trained from scratch using randomly initialized weights? Would it be better to finetune from the pre-trained SAM?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Please see the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are very glad and appreciate that you had a positive initial impression. Thanks for appreciating our paper as the first work that leverages the inherent properties in vision foundation models for ambiguous image segmentation, demonstrating impressive advantages and value in practical applications. We provide pointwise responses to your concerns below.
## Q1. Task setting of ambiguous segmentation
Your assessment of the ambiguous segmentation task setting is entirely correct, namely, providing multiple segmentation hypotheses for ambiguous images.
The key insight of this paper stems from revealing an inherent characteristic of SAM: its non-robust output in response to input prompt perturbations. We innovatively leverage this apparent weakness, transforming it to serve the objective of ambiguous segmentation tasks. Relevant experiments are presented in Figures 3, 4 and Tables 1, 2, 3, 4, 5 in our submission.
Furthermore, we discovered that this design for ambiguous segmentation tasks also incidentally improves the model's robustness to prompt perturbations. Related experiments are shown in Figure 5 of our submission.
## Q2. Performance comparison with Mose
Thank you for your valuable suggestion! We implemented Mose [1] in conjunction with SAM as you recommended. The results of the quantitative comparison for ambiguous segmentation on the LIDC dataset are presented below. We also include the qualitative comparison in **Figure 1 in the uploaded PDF of the Global Response**.
We can see that our method still achieves superior performance compared to this approach, demonstrating the advantages of our strategy design beyond the use of SAM.
| Method | GED ↓ | HM-IOU ↑ | D_max ↑ |
|:--|--:|--:|--:|
| Prob UNet | 0.324 | 0.423 | - |
| CAR | 0.252 | 0.549 | 0.732 |
| PixelSeg | 0.243 | 0.614 | 0.814 |
| CIMD | 0.234 | 0.587 | - |
| Mose | 0.234 | 0.623 | 0.702 |
| Mose + SAM | 0.236 | 0.662 | 0.777 |
| $\mathcal{A}$-SAM (Ours) | **0.230** | **0.763** | **0.959** |
[1] MODELING MULTIMODAL ALEATORIC UNCERTAINTY IN SEGMENTATION WITH MIXTURE OF STOCHASTIC EXPERTS
## Q3. Writing structure
Thank you for your suggestion. We agree with this perspective and will reorganize the paper structure as recommended. We will create a new section titled "Training and Inference" in the revision.
We will ensure this new section clearly articulates both the training and inference processes, highlighting the connections between them while maintaining the integrity of each part. This change will facilitate readers' understanding of our approach.
We appreciate your valuable feedback once again, as it will significantly enhance the overall quality and clarity of our paper.
## Q4. Results of original SAM without point/box/mask shift?
Table 1 (Sec. 4.2) compares SAM variants adapted for ambiguous segmentation. As suggested, we also compared these with the original SAM, as shown in **Global Table 1 in Global Rebuttal**. Results show the original SAM's multi-output capability is insufficient for complex, diverse scenarios, leading to suboptimal performance.
## Q5. Trained from scratch or finetuned?
We strongly agree with your assertion that fine-tuning a pre-trained SAM would be superior to training from scratch, as it would be impractical to expend enormous resources to build a large model like SAM from the ground up. Therefore, our approach indeed involves fine-tuning a pre-trained SAM.
---
Rebuttal 2:
Comment: Dear Reviewer 2dZP,
We would greatly appreciate it if you could review our response by Aug 13 AoE. After that date, it might be challenging for us to engage in further discussions. If you have any follow-up questions, please don't hesitate to reach out. We deeply value your expertise and time.
Best,
---
Rebuttal Comment 2.1:
Title: Thank you for your response
Comment: Thank you for your detailed response. A minor question: Line 213 of main paper indicates that "The model is trained from scratch using randomly initialized weights". So is this a writing mistake?
---
Rebuttal 3:
Comment: Dear Reviewer 2dZP,
Thank you for your comment. Regarding line 213 of the main paper, when we stated "training the model from scratch with randomly initialized weights," we meant to convey that we train all weights outside the SAM component (e.g., PGN, IGN, posterior PGN, posterior IGN, etc.) from scratch, while the weights used for the SAM component (e.g., prompt encoder, image encoder, mask decoder, etc.) are initialized with SAM's original weights. We will clarify this point in the revision. Thank you for bringing this to our attention!
Best,
Authors of Submission 830
---
Rebuttal Comment 3.1:
Comment: Thanks for the response. I think the method is good, but as also indicated by other reviewers (vXP5, YPmC), the writing of this paper needs further improvement. Therefore, I keep my rating unchanged.
---
Rebuttal 4:
Comment: We appreciate your **acknowledgment of our method** and that this is **the first work leveraging the inherent properties of vision foundation models (SAM) for ambiguous image segmentation**. We will make the following revisions in our updated submission:
* Following your suggestions, we are committed to **refining the structure of the Method section** by **integrating the related contents of our training and inference pipelines into a unified sub-section ``Training and Inference''**.
* We are also committed to **modifying the statement in Line 213** to: *"We train all weights from scratch except for the SAM components, including the modules of PGN, IGN, posterior-PGN, and posterior-IGN. For the components including prompt encoder, image encoder, and mask decoder in SAM, we initialize them with SAM's original weights before commencing training."*
We sincerely look forward to **incorporating these suggested changes from you** and hope that this paper will **inspire a broader audience thanks to your constructive feedback**.
Yours,
Authors | Summary: This paper builds a framework for amigous object segmentation on top of SAM prompted with bounding boxes, which is known to be sensitive to small prompt changes.
The framework is based on a VAE, and the main idea is to jointly model the prompt and the object granularity with a latent probability distribution to gain more control over SAM’s output. In practice, the prompt embeddings and image embeddings (controlling granularity) are formulated as a distribution.
The method is evaluated on 3 medical imaging datasets and on a synthetic driving dataset, showing superior performance over the baselines.
Strengths: 1. The method is the first to use a promptable large-scale pretrained model like SAM for ambiguous image segmentation
2. The methodology is in general clearly written and easy to follow, figure 2 provides a great overview of the method
3. Extensive evaluation and ablations were performed, showing the method’s superior performance compared to baselines on all of the datasets. (the method is not evaluated on any non-medical real dataset though, see weaknesses)
4. The joint modeling of promts and image embeddings of the proposed method is efficient since the probability sampling is only performed after the SAM encoder and thus the image embedding needs to be computed only once (SAM decoder is lightweight)
Weaknesses: 1. The paper contains several unclear statements or missing details, which make the reproducibility of the method difficult.
2. The evaluation is carried on a niche domain (medical) or on synthetic datasets only. It is hard to judge the performance of this method in general real-world setting.
Technical Quality: 3
Clarity: 2
Questions for Authors: ## General remarks
1. Evaluation on a real-world (not synthetic) non-medical dataset would help to show the generality of the method.
2. It would help readability if it was mentioned that the evaluation metrics are defined in the appendix, also it would help to see the related references in the main paper
3. Is there some intuition/more details on why the granularity is modelled within the image embedding?
## Reproducibility
3. How were the trade-off coefficients tuned?
4. More details on how the three masks from overlapping instances on the SIM10k dataset were obtained should be provided.
5. How was the best checkpoint selected? Was there any hyper-parameter tuning?
6. What does 'achieving significant segmentation outcome‘ mean on line 97? Improvement in segmentation performance over SAM without adapter?
## Fig. 1:
7. SAM outputs multiple predictions for a prompt, how is this handled in Fig. 1a and 1b?
8. Medical domain is not in the training domain of SAM so higher uncertainity/instability of prediction is expected, maybe it is not the best example to showcase the behaviour.
9. What are canonical box prompts from description of Fig. 1? Ground truth bounding boxes?
10. I assume granularities in 1c correpsond to the three output masks of SAM, what is full granularity then?
11. The prompt variation experiment depicted in Figure 1 includes bounding boxes that do not cover the whole region to be segmented. It is not unrealistic to control for that in real scenarios, and it would be interesting to see how the figure would change since SAM seems to be quite sensitive to whether the whole object is covered or not – making the bounding box smaller than an object impacts segmentation more than making it larger.
12. It would be helpful to see how the experts annotate the example
## Fig 2:
13. Why is image embedding concatenated with the IGN sample, but the prompt embedding is the output of PGN directly?
14. Incomplete description – ‚by jointly probabilities‘‘
## Add Weakness 1. – unclear statements and missing details
15. How were the trade-off coefficients set?
16. How exactly was the three masks generated from overlapping instances on the SIM10k dataset?
17. How was the best checkpoint selected? Was any hyper-parameter tuning performed (if yes, on what data)?
18. Line 153 – parameters of axisymmetric Gauss. Distribution „including mean and std“ – the gauss. distirbution does not have any other parameters.
19. What does 'achieving significant segmentation outcome‘ mean on line 97? Improvement in segmentation performance over SAM without adapter?
20. What is meant by the 'final integrated SAM output that integrates multiple candidates‘ on line 44? The only part of SAM that integrates multiple predictions I am aware of is SamAutomaticMaskGenerator class provided by the authors (it features non-maxima suppression) but it prompts SAM with a uniform grid of points while the paper discusses bounding box prompts.
21. The explanation of GT generation for the datasets is confusing since it is incomplete in the paper, it would be nice to at least have a link to the appendix for more details.
22. On lines 38-42, it would help to see an example of such behaviour – what is meant by SAM amalgamating the candidates at different granularities? AFAIK, SAM outputs multiple predictions for each prompt specifically to deal with ambigous prompts.
23. What does 'diminutive adapters‘ on line 93 mean?
24. What is meant by encoder lenght in line 123?
## Add Weakness 2. – evaluation
25. Evaluation on a real-world (not synthetic) non-medical dataset would help to show the generality of the method.
26. Why is original SAM not included in the comparison from subsection 4.2?
## Typos:
27. Line 83 – promotable instead of promptable
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are addressed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your positive initial impression and valuable feedback. We look forward to revising our manuscript based on your suggestions. Below are our point-by-point responses to your concerns. For brevity, we address recurring issues only once.
## Q1. General remarks
**<Applicable to real-world non-synthetic and non-medical datasets?>**
As shown in Fig. 3 & Tab. 1 in initial submission, we evaluated our method on the Sim10k dataset, featuring synthetic non-medical street scenes with ambiguous semantics. This challenging context demonstrates our approach's broader applicability beyond medical imaging.
Following your suggestion, we further conducted experiments on the KINS dataset for instance segmentation in real-world street scenes. More details and results are shown in **Global Table 2 and Global Response 2**.
Our method performs competitively on KINS, achieving superior performance.
**<More Details about metrics>**
We appreciate your suggestion. In the revision, we will include the following content in the appendix:
**Generalized Energy Distance (GED)**: A metric for ambiguous image segmentation comparing segmentation distributions:
\begin{equation}
D^2_{GED}(P_{gt},P_{out}) = 2\mathbb{E}[d(S,Y)]-\mathbb{E}[d(S,S')]-\mathbb{E}[d(Y,Y')]
\end{equation}
where $d(x, y) = 1 - IoU(x, y)$, $Y, Y'$ are samples from $P_{gt}$, and $S, S'$ from $P_{out}$. Lower energy indicates better agreement between prediction and ground truth distributions.
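A minimal sketch of how such a GED estimate can be computed from finite sample sets (illustrative helper names only; our evaluation follows the standard formulation above):

```python
import numpy as np

def iou_dist(a, b):
    """d(x, y) = 1 - IoU(x, y) for binary masks; define d = 0 when both are empty."""
    union = np.logical_or(a, b).sum()
    return 0.0 if union == 0 else 1.0 - np.logical_and(a, b).sum() / union

def ged_squared(preds, gts):
    """Estimate D^2_GED from finite sets of predicted and ground-truth masks."""
    cross = np.mean([iou_dist(s, y) for s in preds for y in gts])
    pp = np.mean([iou_dist(s, s2) for s in preds for s2 in preds])
    gg = np.mean([iou_dist(y, y2) for y in gts for y2 in gts])
    return 2 * cross - pp - gg

# Two disjoint candidate masks: a prediction set that matches the GT
# distribution scores 0, while collapsing onto a single wrong mode scores 2.
A = np.array([[1, 1], [0, 0]], dtype=bool)
B = np.array([[0, 0], [1, 1]], dtype=bool)
```

For example, `ged_squared([A, B], [A, B])` is 0 (distributions agree), whereas `ged_squared([A, A], [B, B])` attains the maximum of 2.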
**Maximum and Mean Dice Matching ($D_{max}, D_{mean}$)**: For medical diagnoses, we define Dice score as:
\begin{equation}
Dice(\hat{Y},Y) = \begin{cases}
\frac{2 |Y \cap \hat{Y}|}{|Y|+|\hat{Y}|}, &\text{if } Y \cup \hat{Y} \neq \emptyset \\
1, & \text{otherwise}
\end{cases}
\end{equation}
We calculate $D_{max}$ for each ground truth $Y_i$ as:
\begin{equation}
D_{max} = \max \\{Dice(\hat{Y}_1,Y_i), Dice(\hat{Y}_2,Y_i), \ldots, Dice(\hat{Y}_N,Y_i) \\}
\end{equation}
**Hungarian-Matched Intersection over Union (HM-IoU)**: This metric calculates the optimal 1:1 match between annotation and prediction using the Hungarian algorithm, better representing sample fidelity. It uses $IoU (Y, Y')$ to determine similarity between samples.
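For small sample sets, the two matching-based metrics can be sketched as follows (illustrative code; brute-force enumeration of permutations stands in for the Hungarian algorithm, which scales better for larger sets):

```python
import itertools
import numpy as np

def iou(a, b):
    union = np.logical_or(a, b).sum()
    return 1.0 if union == 0 else np.logical_and(a, b).sum() / union

def dice(a, b):
    s = a.sum() + b.sum()
    return 1.0 if s == 0 else 2.0 * np.logical_and(a, b).sum() / s

def d_max(preds, gt):
    """Best match: highest Dice of any predicted sample against one ground truth."""
    return max(dice(p, gt) for p in preds)

def hm_iou(preds, gts):
    """Optimal 1:1 matching by brute force (assumes len(preds) >= len(gts))."""
    return max(
        np.mean([iou(preds[p], gts[i]) for i, p in enumerate(perm)])
        for perm in itertools.permutations(range(len(preds)), len(gts))
    )

A = np.array([[1, 1], [0, 0]], dtype=bool)
B = np.array([[0, 0], [1, 1]], dtype=bool)
```

Here `hm_iou([A, B], [B, A])` is 1.0 because the optimal assignment pairs each prediction with its identical annotation, regardless of ordering.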
**<Insights of modeling granularity within image embedding?>**
As noted in Lines 185-189, SAM generates multiple candidate masks at different granularities, demonstrating inherent ambiguity related to object granularity. This inspired us to enhance the model's perception of ambiguous objects. We introduce an image generation network (IGN) for object embedding to model this distribution, injecting granularity-aware priors.
## Q2. Implementation
**<Trade-off coefficients>**
As noted in Line 238, the trade-off coefficients are set as $\alpha_P = \alpha_I = 1$.
**<How three masks from SIM10k dataset were obtained?>**
As noted in Lines 236-237, for the SIM 10k dataset, we select images where pixels from two instances overlap (e.g., instances A and B) and derive the three potential segmentation masks (e.g., only region A, only region B, and the union region of A and B).
**<Model selection>**
Following current best practices [1], we employ a 6:2:2 ratio for training, validation, and test sets. Results are reported based on best performance on the validation set. We will include this information in our revision.
[1] A probabilistic u-net for segmentation of ambiguous images
## Q3. Fig1
**<What’s full granularity?>**
Full granularity refers to the ensemble of various granular outputs within SAM. This approach leverages the multi-scale nature of SAM's internal representations, allowing for a more comprehensive and nuanced segmentation result.
**<Bboxs that are not covering whole region are not considered?>**
We respectfully disagree, as our approach accounts for this scenario. Due to potential bounding box offsets, the complete segmentation region may not always be fully contained. Our method ensures robust performance even when the initial box does not perfectly encapsulate the entire region of interest.
## Q4. More details
**<How GT generated?>**
We will add the following details to the revised appendix:
The LIDC dataset features manual annotations from four domain experts, accurately reflecting CT imaging ambiguity. Twelve radiologists provided annotation masks. We use the version after the second reading, where experts reviewed and adjusted annotations based on peer feedback, ensuring comprehensive and diverse expert opinions.
The BraTS 2017 dataset includes 285 cases of 3D MRI images (155 slices each) in four modalities (T1, T1ce, T2, Flair). Expert radiologists annotated four classes: background, non-enhanced/necrotic tumor core, oedema, and enhanced tumor core. We overlay these annotations to create binary masks, simulating ambiguous segmentation scenarios.
The ISBI 2016 dataset contains 900 training and 379 testing dermoscopic images. A dermatology expert annotated lesion boundaries in all images, providing a gold standard for segmentation tasks.
For the SIM 10k dataset, we focus on images where pixels from two instances overlap (e.g., instances A and B). From these, we derive three potential segmentation masks: the region exclusive to A, the region exclusive to B, and the union region of A and B.
**<What’s SAM amalgamating different granularities? >**
We acknowledge that the original SAM outputs multiple predictions per prompt to address ambiguity, without integration. Ensemble integration was introduced in Per-SAM [1]. We will clarify this distinction in our revision.
[1] Personalize segment anything model with one shot.
## Q5. Evaluation
**<Why original SAM not included in the comparison from Sec. 4.2?>**
Table 1 (Sec. 4.2) compares SAM variants adapted for ambiguous segmentation. As suggested, we also compared with the original SAM, as shown in **Global Table 1 in Global Response 1**. Results show the original SAM's multi-output capability is insufficient for complex, diverse scenarios, leading to suboptimal performance.
---
Rebuttal 2:
Comment: Dear Reviewer YPmC,
We would greatly appreciate it if you could review our response by Aug 13 AoE. After that date, it might be challenging for us to engage in further discussions. If you have any follow-up questions, please don't hesitate to reach out. We deeply value your expertise and time.
Best,
---
Rebuttal Comment 2.1:
Comment: Thank you for the response!
The following concers remain/arise from your response:
* C1: Full granularity refers to the ensemble of various granular outputs within SAM - how is this ensemble computed?
* C2: I do not think I stated 'Bboxs that are not covering whole region are not considered' in any part of my review so I am quite confused about this reponse
* C3: I do not understand the methodology of creating masks for Sim10k and KINS... . 'the region exclusive to A, the region exclusive to B, and the union region of A and B' - why is the intersection of A, B excluded from the first two masks? I thought the prompt is ambiguous because pixels on the intersection belong to both instances, but here you choose to exclude those pixels... Maybe some examples would help.
I think the work is interesting and the method seems to perform well on different datasets. What prevents me from raising my score is that overall, I am worried that the amount of clarifications and rewriting needed is too big.
---
Rebuttal 3:
Comment: Thank you for your thoughtful comments and the opportunity to clarify our work. We are pleased that our initial response addressed some of your concerns. We will now respectfully address the points you further emphasized.
> C1: Full granularity refers to the ensemble of various granular outputs within SAM - how is this ensemble computed?
Regarding the full granularity ensemble within SAM, as shown in **Eq. (13) and Lines 199-205**, we implement an ensemble strategy that $\textcolor{blue}{\textbf{computes a weighted sum of the multi-granular outputs of SAM}}$ to obtain an ensembled output map. The text in the initial submission reads as follows: “*Specifically, given multiple candidate outputs from SAM, represented as {M¹, M², ..., M^n}, where n is the number of scales, we introduce a set of learnable mask weights W = {w₁, w₂, ..., w_n} ∈ R^n. The final mask output is obtained through a weighted sum calculation:
$$
\tilde{M}=\sum_{i=1}^n w_i \odot M^i,
$$
where w₁, w₂, ..., w_n are initialized to 1/n and subsequently fine-tuned to enable the model to effectively perceive object scales. By adaptively integrating masks at multiple scales, the model's perception and modeling capabilities for complex target diversity are further enhanced.*”
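To make the weighted-sum ensemble in Eq. (13) concrete, here is a minimal numpy sketch. It is illustrative only: in the paper the weights $w_i$ are learnable and fine-tuned, whereas here they are fixed at their $1/n$ initialization, and the function name and toy masks are ours.

```python
import numpy as np

def ensemble_masks(masks, weights):
    """Weighted sum of n candidate mask maps, as in Eq. (13).

    masks:   array of shape (n, H, W) -- multi-granular candidate outputs
    weights: array of shape (n,)      -- mask weights w_1, ..., w_n
    """
    masks = np.asarray(masks, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Broadcast each scalar weight over its H x W mask and sum over scales.
    return np.tensordot(weights, masks, axes=1)

# Three toy candidate masks at different granularities (2 x 2 maps).
masks = np.stack([np.full((2, 2), v) for v in (1.0, 2.0, 3.0)])
w = np.full(3, 1.0 / 3.0)  # initialized to 1/n before fine-tuning
fused = ensemble_masks(masks, w)  # each pixel: (1 + 2 + 3) / 3 = 2.0
```

In training, `w` would be a learnable parameter updated by backpropagation rather than a fixed array.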
> C2: I do not think I stated 'Bboxs that are not covering whole region are not considered' in any part of my review so I am quite confused about this reponse
We apologize for the confusion. This response actually **corresponds to Question 11 in your initial review regarding Figure 1**: “The prompt variation experiment depicted in Figure 1…making the bounding box smaller than an object impacts segmentation more than making it larger”. We initially misinterpreted your point. You were inquiring about the **impact of bounding box scaling (both enlargement and reduction) on SAM's segmentation performance, particularly when the box may or may not cover the entire object**, correct?
First, in **Table 1 of the initial submission**, both SegGPT and SAM use box shifts that include random pixel offsets in all directions and size scaling in [0.8, 1.2], as stated in **Lines 253-254 and 259-260**. We found that our method $\mathcal{A}$-SAM achieves superior results compared with them.
To further specifically address your interest in **how box scaling affects segmentation performance when the box may not fully cover the object**, we conducted additional experiments on the LIDC dataset (following the same setting as Fig. 1 in the initial submission). We scale the canonical box prompt for the foreground in each image down and up by scaling factors from 0.80 to 1.20. The results are as follows. We observed that **when the box does not fully cover the entire object (i.e., $\textcolor{blue}{\textbf{scaling factor < 1}}$), the performance change is more dramatic compared to the standard box**. This aligns with intuition. We appreciate your insightful suggestion!
| Scaling Factor of Box Prompt | 0.80 | 0.85 | 0.90 | 0.95 | 1.00 (Canonical prompt) | 1.05 | 1.10 | 1.15 | 1.20 |
|----------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| Seg. IOU | 0.227 | 0.284 | 0.352 | 0.450 | 0.592 | 0.524 | 0.471 | 0.413 | 0.372 |
> C3: I do not understand the methodology of creating masks for Sim10k and KINS... . 'the region exclusive to A, the region exclusive to B, and the union region of A and B' - why is the intersection of A, B excluded from the first two masks? …
Regarding the methodology for creating masks for Sim10k and KINS, we refer to the region of instance A, the region of instance B, and their union A∪B. We did not extensively discuss the intersection of A and B in the last response because, for segmentation, there typically are no strictly overlapping regions at the pixel level. **If a local region of A spatially overlaps and occludes a local region of B, only the local region of A is visible**. Thus, we approach this from a 2D segmentation (pixel classification) perspective when referring to 'the region exclusive to A, the region exclusive to B, and the union region of A and B'. From a spatial relationship viewpoint, **$\textcolor{blue}{\textbf{the overlapping region belongs to the occluder}}$ rather than the occluded object**. Thank you for pointing this out!
Title: Further Clarification for Emphasized Issues
---
Rebuttal 4:
Title: Further Clarification for Emphasized Issues
Comment: **Last but not least**, we must acknowledge **the challenge of elucidating all details of the proposed method regarding data, model and metrics** within the **confined space of a submission**. Nevertheless, we would like to point out that the majority of the content addressed in this rebuttal was $\textcolor{blue}{\textbf{already present in our initial submission}}$, both in the **main text and the appendix**. **For instance**, questions regarding **<more details about metrics>** were addressed in the $\textcolor{blue}{\textbf{appendix, Section A.2, Lines 513-525}}$. The inquiry about **<How multiple GT generated?>** was covered in the $\textcolor{blue}{\textbf{appendix, Section A.2, Lines 485-500}}$. It's worth noting that most of the datasets used in this paper, including LIDC, BraTS, ISBI, etc., are widely recognized benchmark datasets for ambiguous segmentation, following **established practices [1-5]**. We simply utilized the pre-defined multiple GTs provided by these datasets. Additionally, **<Trade-off coefficients>** were mentioned in $\textcolor{blue}{\textbf{Line 238}}$ in the main text. **Our responses to these queries primarily $\textcolor{blue}{\textbf{emphasize where they are in the original text}}$ rather than introducing new clarification or content.**
Hence, we sincerely believe that the **areas requiring further clarification in the revision are primarily limited to the following several aspects**:
* Adding a definition & clarification of **full granularity** in **Figure 1's caption and in the introduction**.
* Including **a sentence in Section 4.1 (Dataset, Line 230)** to tell the readers to **refer to more details in the appendix**.
* Expanding **Section A.2 in the Appendix (Dataset, after Line 500)** with **a sentence to account for how the ambiguous segmentation masks are made**. We will also add a figure to clarify potential misunderstandings about spatial overlap.
$\textcolor{blue}{\textbf{Most other clarifications are extracted from the existing content in the initial submission and appendix.}}$ Furthermore, we sincerely appreciate $\textcolor{blue}{\textbf{your “Applause” that this paper is interesting}}$ and we sincerely hope that **some “Flaws” related to clarification issues in this paper can be effectively addressed** thanks to your constructive feedback.
Yours,
Authors
[1] A probabilistic u-net for segmentation of ambiguous images.
[2] Phiseg: Capturing uncertainty in medical image segmentation.
[3] Stochastic segmentation networks: Modelling spatially correlated aleatoric uncertainty
[4] Pixelseg: Pixel-by-pixel stochastic semantic segmentation for ambiguous medical images.
[5] Ambiguous medical image segmentation using diffusion models
---
Rebuttal Comment 4.1:
Comment: Thank you for the detailed response and additional evaluation - after reading it and after careful consideration, I am raising my score.
The authors have shown they are willing to work on improving the clarity of the paper and I think it can benefit the community.
I would like to point out that because of the amount of things that were not clear, I had a hard time reading the paper and couldn't have reproduced it. This seems to be a common concern among the reviewers, it is not only about the points raised by me specifically. I trust the authors to significantly improve the writing based on their responses in the rebuttal/during the discussion.
Final note re non-medical: Thanks, now it is much clearer. Even though it was established by prior work, I am still not sure why merging two objects that are overlapping in 2D is useful, the task seems better motivated within the medical domain to me.
---
Reply to Comment 4.1.1:
Comment: Dear Reviewer YPmC,
We sincerely appreciate your prompt response, valuable suggestions, and raising the rating for our work. We look forward to including the suggested changes and modifications and hope the paper can inspire a broader audience thanks to your constructive feedback!
Yours,
Authors | Summary: This paper aims to convert the flaws in the vision foundation model (e.g., SAM) into advantages for ambiguous object segmentation. To this end, the authors propose a novel framework that employs latent distribution and an optimization architecture. The authors validated the performance of the proposed methods through comprehensive experiments.
Strengths: Unlike existing approaches that aim to stabilize the sensitivity to ambiguous objects in SAM, this paper suggests leveraging the vulnerability for ambiguous object segmentation. The proposed approach seeks to harness SAM's sensitivity, deemed a weakness, to address ambiguous and uncertain predictions.
Weaknesses: 1. The explanations are unclear and hard to follow. Specifically, it needs further explanation of how to extract the mean and standard deviation from the convolution blocks and how to utilize the ground truth labels in the posterior version of the prompt generation network.
2. Some symbols are used without explanation (e.g., Θ, Φ, N_i, N_p).
3. Missing reference: Previous research at line 169.
4. Since this paper focuses on clinical scenarios for ambiguous object segmentation, it seems unfair to compare the performance without including existing medical segmentation methods such as OM-Net [1], DC-UNet [2], and CE-Net [3].
[1] https://arxiv.org/pdf/1906.01796v2
[2] https://arxiv.org/pdf/2006.00414v1
[3] https://arxiv.org/pdf/1903.02740v1
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. What is the difference between PGN and posterior PGN?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors address limitations of this work and broader impact properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for appreciating our paper as harnessing SAM's sensitivity, deemed a weakness, to address ambiguous and uncertain predictions. We provide pointwise responses to your concerns below.
## Q1. Method details
**<How to extract the mean and standard deviation from networks?>**
As noted in Lines 155-157, the mean and standard deviation of the Axis Gaussian distribution are the two vectors output by the neural network. The input to the network can be the image or prompt embedding, and the network is trained to directly output the mean and standard deviation parameters. This allows for efficient extraction of the distributional statistics from the network outputs, which is crucial for downstream probabilistic modeling tasks.
**<How to utilize the ground truth labels in posterior prompt generation network?>**
As described in Eq.(7), for each iteration of the training process, the method randomly samples one of the possible ambiguous labels from the label set. This sampled label is then input into the posterior prompt generation network, which is responsible for producing prompts that capture the semantics of the given ambiguous label. By iteratively sampling from the set of ambiguous labels and feeding them to the prompt generation network, the model is able to learn to produce effective prompts that can handle the inherent ambiguity in the training data.
## Q2. Symbol details
Thank you for pointing this out. The parameters Θ and Φ that you mentioned represent the hyperparameters characterizing two distributions, such as the mean and variance of a Gaussian distribution. $N_i$ and $N_p$ are hyperparameters denoting the dimensions of the vectors. We will provide a more detailed description of these parameters in the revision.
We appreciate your attention to detail and the opportunity to clarify this point.
## Q3. Missing reference
Thank you! The authors will add the line "Previous research [1] has described..." at Line 169 to provide more context on how the current approach builds upon prior work in this area, as outlined in the referenced publication [1].
[1] A Probabilistic U-Net for Segmentation of Ambiguous Images
## Q4. Comparison with more existing models
We appreciate your insightful suggestion! It's worth noting that our method is designed for ambiguous segmentation, while the comparison approach you proposed is designed for conventional deterministic segmentation. To enable a fair comparison, we have adapted our method for deterministic segmentation by ensemble averaging the segmentation results from three sampling iterations into a single output.
We have conducted further comparisons on the overlap dataset and the BraTS 2017 dataset, as mentioned in the references OM-Net [1], DC-UNet [2], and CE-Net [3] you cited. The results of these comparisons are listed as below. As we can see from these results, our method still achieves superior segmentation performance under this setting. We believe this additional evaluation addresses your concern and provides a more comprehensive assessment of our method's performance across different segmentation paradigms.
| Metrics | Dice↑ | | | Hausdorff95↓ | | |
|:----------|:--------:|:------------:|:--------:|:------------:|:--------:|:------:|
| **Category** | **Core** | **Enh. Core** | **Whole** | **Core** | **Enh. Core** | **Whole** |
| OM-Net | 0.842 | 0.785 | 0.907 | 7.561 | 3.299 | 4.382 |
| nnU-Net | 0.819 | 0.776 | 0.903 | 8.642 | 3.163 | 6.767 |
| Wang et al. | 0.838 | 0.786 | 0.905 | 6.479 | 3.282 | 3.890 |
| Kamnitsas et al | 0.797 | 0.738 | 0.901 | 6.560 | 4.500 | 4.230 |
| $\mathcal{A}$-SAM (Ours) | **0.863** | **0.803** | **0.921** | **6.463** | **3.084** | **3.741** |
[1] https://arxiv.org/pdf/1906.01796v2 [2] https://arxiv.org/pdf/2006.00414v1 [3] [https://arxiv.org/pdf/1903.02740v1](https://arxiv.org/pdf/1903.02740v1)
## Q5. Difference between PGN and posterior PGN?
As noted in Lines 171-173, a posterior version for the prompt generation network $F^{post}_{PGN}$, parameterized by $\Theta^{\mathcal{T}}$, is further introduced during the training process. This posterior prompt generation network learns to generate the effective distribution for the prompt embedding when **accessing the ground-truth label distribution**. The introduction of this additional network component allows the model to better capture the semantics of the ground-truth labels and generate more targeted prompts, improving the performance of models.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the response. They have addressed all my concerns. Thus I will raise my initial rating.
---
Rebuttal 2:
Comment: Dear Reviewer vXP5,
We would greatly appreciate it if you could review our response by Aug 13 AoE. After that date, it might be challenging for us to engage in further discussions. If you have any follow-up questions, please don't hesitate to reach out. We deeply value your expertise and time.
Best, | Rebuttal 1:
Rebuttal: ## Global Response 1. Results of original SAM
As suggested by **Reviewer YPmC** and **Reviewer 2dZP**, we have added the results of the original SAM for comparison. As shown in the table below, SAM (Point) and SAM (Box) represent the results of the original SAM obtained using different prompts. Compared to these added original SAM baselines, our $\mathcal{A}$-SAM still exhibits superior results.
**Global Table 1. Comparison with different SAM variants for ambiguous segmentation.**

| Metric | GED↓ | HM-IOU↑ | D_max↑ | D_mean↑ | GED↓ | HM-IOU↑ | D_max↑ | D_mean↑ |
|:--|--:|--:|--:|--:|--:|--:|--:|--:|
| Method | LIDC | | | | BRATS | | | |
| SAM (Point) | 0.385 | 0.347 | 0.664 | 0.335 | 0.274 | 0.167 | 0.341 | 0.225 |
| SAM (Box) | 0.376 | 0.372 | 0.695 | 0.248 | 0.248 | 0.235 | 0.368 | 0.238 |
| SAM w/ Point shift | 0.377 | 0.365 | 0.650 | 0.337 | 0.252 | 0.169 | 0.334 | 0.238 |
| SAM w/ Box shift | 0.361 | 0.380 | 0.673 | 0.253 | 0.239 | 0.242 | 0.344 | 0.246 |
| $\mathcal{A}$-SAM (Ours) | **0.228** | **0.717** | **0.948** | **0.356** | **0.193** |**0.610**| **0.864** | **0.423** |
| Method | ISBI | | | | Sim10K | | | |
| SAM (Point) | 0.538 | 0.793 | 0.891 | 0.709 | 0.294 | 0.162 | 0.249 | 0.197 |
| SAM (Box) | 0.526 | 0.806 | 0.899 | 0.713 | 0.312 | 0.176 | 0.261 | 0.208 |
| SAM w/ Point shift | 0.513 | 0.782 | 0.886 | 0.681 | 0.265 | 0.155 | 0.229 | 0.189 |
| SAM w/ Box shift | 0.491 | 0.792 | 0.896 | 0.685 | 0.255 | 0.160 | 0.239 | 0.199 |
| $\mathcal{A}$-SAM (Ours) | **0.276** | **0.835** | **0.926** | **0.904** | **0.233** | **0.637** | **0.851** | **0.327** |
## Global Response 2. Further results on real-world non-synthetic non-medical dataset.
We appreciate the suggestions from **Reviewer vXP5** and **Reviewer YPmC** to evaluate our method on a real-world non-synthetic non-medical dataset. To address this, we conducted additional experiments on the KINS dataset [1], which is specifically designed for amodal instance segmentation. KINS is derived from the KITTI dataset [2] and includes instance-level semantic annotations. The dataset comprises 7,474 training images and 7,517 testing images across seven object categories.
We followed the same data processing method as in our initial submission to create ambiguous image-label pairs, selecting images with pixel overlap between two instances and creating three potential masks (instance 1, instance 2, and their union). We then evaluated our approach on KINS using the same metrics. The results are presented in the table below:
**Global Table 2. Comparison results on a real-world non-synthetic non-medical dataset, KINS.**
| KINS | GED↓ | HM-IOU↑ | D_max↑ | D_mean↑ |
|----------------------|-------|-------|-------|-------|
| SegGPT w/ Point shift| 0.427 | 0.584 | 0.691 | 0.396 |
| SegGPT w/ Box shift | 0.385 | 0.640 | 0.758 | 0.472 |
| SEEM w/ Mask shift | 0.362 | 0.695 | 0.812 | 0.518 |
| SAM w/ Point shift | 0.340 | 0.612 | 0.735 | 0.445 |
| SAM w/ Box shift | 0.254 | 0.482 | 0.587 | 0.331 |
| $\mathcal{A}$-SAM (Ours)| **0.237** | **0.633** | **0.839** | **0.445** |
These results demonstrate that our method performs competitively on the KINS dataset, a real-world non-synthetic non-medical dataset. Notably, our method achieves the best performance in terms of GED and D_{max} metrics, while maintaining comparable results with other top-performing methods in HM-IOU and D_{mean} metrics.
[1] Amodal instance segmentation with kins dataset, CVPR, 2019
[2] Vision meets robotics: The kitti dataset, The International Journal of Robotics Research
## Notes on added Figures in the uploaded PDF
As suggested by **Reviewer 2dZP**, we include the visualized comparison between the proposed method and Mose+SAM (equipping Mose [3] with the SAM backbone) as **Figure 1 in the uploaded PDF**. We can see that, compared to Mose+SAM, the segmentations produced by our proposed A-SAM preserve a higher degree of exact object detail, particularly boundary details, and provide a distinctive visual representation of potential diversity.
[3] Modeling multimodal aleatoric uncertainty in segmentation with mixture of stochastic expert, in ICLR, 2023
Pdf: /pdf/4858816bd8e630e42aff29014de0b85ac12c0c16.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Benefits of Balance: From Information Projections to Variance Reduction | Accept (poster) | Summary: This paper introduces a technique called iterative data balancing—altering data distributions to match predefined marginal distributions—that can lead to variance reduction in model predictions. The authors highlight its utility for self-supervised learning, which has been used to train several foundation models. The results demonstrate that iterative rebalancing of data leads to improvements in zero-shot learning performance and a reduction in variance among the empirical marginals with more than one iteration (k>1) of their technique.
Strengths: The paper has theoretical contributions that include the derivation of non-asymptotic bounds that quantify the variance reduction achieved through their data balancing technique. The authors also present empirical studies that demonstrate the effectiveness of their proposed balancing technique. The authors discuss the utility of data balancing across different tasks, such as image-caption pair matching and self-supervised clustering, identifying the utility of their approach. Their approach has the potential for adoption in various domains, including in the training of foundation models.
Weaknesses: The authors could expand the range of experiments to include a more diverse set of tasks, which in turn could enhance the generalization of the findings. Furthermore, their iterative data balancing technique relies heavily on predefined (uniform) target marginal distributions (see questions about this in next section). Finally, the iterative nature of the proposed data balancing technique may introduce significant computational demands. The paper could benefit by a more comprehensive overview of how the iterative technique computational overhead is impacted by very large datasets and/or models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In your work, your target marginals were uniform; how would your method respond to non-uniform marginals?
2. Your target marginals were accurately specified (as uniform). What if the target marginals of the two distributions were less accurately specified (i.e., the true underlying distributions are not well-known)? How do you think that this would influence the empirical results of your technique (e.g., zero-shot average per-class recall)?
3. Are there existing methods for variance reduction and data balancing? If so, why did you not include an empirical comparison to existing methods?
4. You mention that the zero-shot evaluation metrics are difficult to produce intervals for (i.e., you are missing error bars). Why is this the case?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I don't feel the authors addressed the limitations of their work significantly, aside from noting that their methods were different as those "in practice". As already mentioned in the questions section, how robust are your results to imbalance and uncertainty in the target marginals? Furthermore, are there situations where your technique fails to improve the empirical results? If so, what are they?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We address them below.
>**The authors could expand the range of experiments to include a more diverse set of tasks.**
Thank you for raising this point. We also show performance on an image retrieval task in Figure 8 of the attached PDF using the Pascal VOC benchmark. We show mean average precision (MAP) across 30 queries in a 17k image database. Qualitatively similar findings hold, in that the additional iteration of balancing (over the original CLIP objective) yields performance improvements and warrants further investigation.
>**Data balancing technique relies heavily on predefined (uniform) target marginal distributions...**
We first emphasize that neither the theoretical results nor the method described require the marginals to be uniform. The matrix scaling/Sinkhorn algorithm (as balancing is called in other contexts) has been used across domains for arbitrary marginals. The choice of uniform marginals in the experiments was motivated by the fact that leading SSL methods currently in use can be shown to be exactly, mathematically equivalent to balancing with uniform marginals.
However, as far as inaccurately specified marginal distributions, it is an interesting question what the balanced measure converges to ($n \rightarrow \infty$) when $(P_X, P_Y)$ are not the marginals of the true probability measure $P$. This is outside the scope of this work but is a natural follow-up investigation.
>**Finally, the iterative nature of the proposed data balancing technique may introduce significant computational demands.**
Thank you for identifying this point. The balancing technique is not a proposal of this paper; it is already in use in large foundation models such as CLIP and DINO, as described in Section 2. In particular, the approach applies to minibatches, and simply reduces to row scaling and column scaling of a matrix. The batch sizes for training the largest CLIP models are on the order of $10^4$, so taking row and column sums of this matrix is usually not the computational bottleneck of these training workloads.
>**Are there existing methods for variance reduction and data balancing?**
Thank you for raising this important point. Another approach for variance reduction through marginal fitting, used in stochastic simulation and one-dimensional optimal transport, is the [quantile transform](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.quantile_transform.html). That is, assuming that $X_1, \ldots, X_n$ are real-valued, we may replace them by $F_n(X_1), \ldots, F_n(X_n)$, where $F_n$ is the empirical CDF (in other words, $F_n(X_i)$ is the rank of $X_i$ divided by $n$). These random variables are uniformly distributed. By applying $F_X^{-1}$, the resulting (discrete) variables are now approximately distributed according to $F_X$. This is an approximation of the inverse CDF sampling technique, as $F_X^{-1}(U)$ for a continuous uniform random variable $U$ will have CDF $F_X$. We call this the “Inverse Probability (IP) Weighting” method and compare it to the empirical distribution and the balanced distribution in a simulation (see Figure 9 of the attached PDF). Across sample sizes, balancing improves on mean squared estimation error. While the values near $s=0$ imply that there is “more information” in the marginals, the inherent variability of the distribution also increases, explaining how various methods do not achieve perfect variance reduction.
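The rank-based transform described above can be sketched in a few lines of numpy. This is an illustrative version only (the function name is ours, and we shift ranks into $(0,1)$ by dividing by $n+1$ so the inverse CDF stays finite); here the target marginal is taken to be a standard exponential as an example.

```python
import numpy as np

def ip_weighting(x, inv_cdf):
    """Map samples to approximate target-distribution draws via ranks.

    x:       1-D array of samples X_1, ..., X_n
    inv_cdf: the inverse CDF F_X^{-1} of the target marginal
    """
    n = len(x)
    # Rank of X_i among the sample (1-based); F_n(X_i) = rank / n.
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (n + 1)  # shift into (0, 1) to keep inv_cdf finite
    return inv_cdf(u)

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)
# Example target marginal: standard exponential, F_X^{-1}(u) = -log(1 - u).
y = ip_weighting(x, lambda u: -np.log(1.0 - u))
```

By construction, the transformed sample `y` exactly fits (a discretization of) the target marginal, which is the source of the variance reduction being compared against balancing.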
>**You mention that the zero-shot evaluation metrics are difficult to produce intervals for. Why is this the case?**
Thank you for identifying this point of improvement. We meant that incorporating error estimates based on bootstrapping the datasets themselves would be computationally infeasible, as each point on the curve requires the evaluation of a foundation model to be used in the CLIP Benchmark suite. However, we have incorporated multiple seeds to account for uncertainty across training runs. Please see both Figure 6 and Figure 8 of the attached PDF to see that qualitatively similar trends hold for the average performance and performance across individual seeds.
>**Are there situations where your technique fails to improve the empirical results? If so, what are they?**
As mentioned in the response to the Reviewer os3n, the learned representations based on more than 2 iterations of rebalancing (Figure 6 of the attached PDF) show that performance in zero-shot tasks may no longer improve after 2 iterations. Similarly, as seen in the simulation in Figure 9, other methods for balancing may perform as well or better when the random variables in $X$ and $Y$ are nearly independent. Please see the "Example" in the response to Reviewer os3n for an intuitive explanation of this simulation. We emphasize that the experiments are meant to be illustrative of the theory established in Sections 2 and 3, and we do not make claims of state-of-the-art empirical methods.
---
Rebuttal Comment 1.1:
Title: Rebuttal Read
Comment: Thank you for taking the time to provide a detailed rebuttal. You have clarified the points I inquired about. | Summary: This paper explores the use of data balancing in various self-supervised learning (SSL) frameworks. The authors argue that this iterative algorithm, which is typically used to avoid representation collapse in SSL models, also provides a benefit of reducing the variance of empirical functionals of the distribution over data sources. The paper establishes non-asymptotic bounds quantifying this variance reduction and relates them to the eigendecays of specific Markov operators.
Strengths: 1. the paper provides a new perspective on the benefits of data balancing
2. provide different examples of data balancing in practice and prove a non-asymptotic bound on the MSE of balanced estimators.
3. The findings may have implications for improving SSL models
Weaknesses: 1.the experiments are somewhat limited in scope
2.adding more visualizations or intuitive explanations may be better for understanding the key finding of the paper.
3.will the assumptions limit the applicability of the findings?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can the findings be extended to other areas?
2. Can the authors provide more details on how the assumptions, such as the spectral gap condition, hold or when will they not hold in practical scenarios? Are there specific types of data or models where these assumptions are more likely to be satisfied?
3. For Figure 2, Can the authors provide experiment results with more iterations?
4. Are there any other variance reduction techniques and how does data balancing compare to those techniques?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: In checklist the author mentions that the work is primarily theoretical and has no societal impact. but it will be better to discuss the positive social impact. The author also mentions the limitation that the setting studied has some dissimilarities with practice but haven't addressed the limitations yet.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your helpful comments and suggestions. We address them below.
The upcoming comments concern the interpretation of the spectral gap condition (and the second largest singular value $s_2$). To facilitate this discussion, we introduce a simple example by starting with an arbitrary value of $s = s_2 < 1$ and construct a probability distribution $P$ that satisfies the condition in Eq (8) of the paper.
**Example:** For $m=2$, we have that $\mathcal{X} = \\{x_1, x_2\\}$ and $\mathcal{Y} = \\{y_1, y_2\\}$, so every element $h \in L^2(P)$ can be represented by four numbers $(h(x_1, y_1), h(x_2, y_1), h(x_1, y_2), h(x_2, y_2))$. In the case of uniform marginals, we can verify directly that Eq (8) can be satisfied by setting $\alpha_1 = \beta_1 = (1, 1, 1, 1)$, $\alpha_2 = (1, -1, 1, -1)$, $\beta_2 = (1, 1, -1, -1)$ and $P(x_1, y_1) = P(x_2, y_2) = (1+s)/4$ and $P(x_1, y_2) = P(x_2, y_1) = (1-s)/4$. Thus, as $s \rightarrow 1$, the distribution becomes “fully dependent” as $Y$ and $X$ are completely determined by one another. As $s \rightarrow 0$, $X$ and $Y$ become independent. Intuitively, the marginals contain the most information when $s = 0$, as the distribution can be computed directly from them. This idea can be generalized to construct similar distributions for $m > 2$, which we use for the simulations in the attached PDF.
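The construction above can be checked numerically. Below is a small sketch (our own, not from the paper) that computes the singular values of the marginal-normalized matrix $Q_{ij} = P(x_i, y_j)/\sqrt{P_X(x_i)P_Y(y_j)}$, a standard normalization for this kind of operator, and confirms that the top singular value is $1$ and the second is $s$ for the $2\times 2$ example with uniform marginals.

```python
import numpy as np

def second_singular_value(P, px, py):
    """Second singular value of Q_ij = P_ij / sqrt(px_i * py_j)."""
    Q = P / np.sqrt(np.outer(px, py))
    return np.linalg.svd(Q, compute_uv=False)[1]

s = 0.3  # any value in [0, 1)
# The example distribution: P(x1,y1) = P(x2,y2) = (1+s)/4, off-diagonal (1-s)/4.
P = np.array([[(1 + s) / 4, (1 - s) / 4],
              [(1 - s) / 4, (1 + s) / 4]])
px = py = np.array([0.5, 0.5])  # uniform marginals

s2 = second_singular_value(P, px, py)  # recovers s up to floating point
```

Setting `s` close to 1 makes `P` nearly a (scaled) permutation matrix, matching the “fully dependent” regime described in the example.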
>**Can the authors provide more details on how the assumptions, such as the spectral gap condition, hold or when will they not hold in practical scenarios? Are there specific types of data or models where these assumptions are more likely to be satisfied? Will the assumptions limit the applicability of the findings?**
As mentioned in Assumption 2, the condition will hold if $P(x, y) > 0$ for any $(x, y)$ such that $P_X(x) > 0$ and $P_Y(y) > 0$, by the Perron-Frobenius theorem, but it may or may not hold otherwise. As seen above, the $s=1$ cases are often pathological, such as having perfect dependence between $X$ and $Y$. The positivity assumption is standard even in classical references on this subject; see [Bickel, Ritov, Wellner (1991)](https://projecteuclid.org/journals/annals-of-statistics/volume-19/issue-3/Efficient-Estimation-of-Linear-Functionals-of-a-Probability-Measure-P/10.1214/aos/1176348251.full) Eq (P3) for an equivalent condition. Thus, it is safe to assume the condition will hold in all practical scenarios of interest.
>**…adding more visualizations or intuitive explanations may be better for understanding the key finding of the paper.**
Thank you for raising this point. Returning to the example above, we also see that because $\alpha_1 = \beta_1$, the angle between the subspaces can be measured by the angle between $\alpha_2$ and $\beta_2$. By direct computation, we can see that $\langle \alpha_2, \beta_2 \rangle = s$, which means that the cosine of the angle $a$ between the subspaces is $s = \cos a$. Thus, the geometric interpretation of $s$ is the cosine of the angle between $L^2(P_X)$ and $L^2(P_Y)$. This can be visualized in Figure 7 of the attached PDF. In fact, this holds even for non-uniform marginals. With $P(x_1) = p_X$ and $P(y_1) = p_Y$, a slightly more tedious computation will show that Eq (8) is satisfied when $\alpha_1 = \beta_1 = (1, 1, 1, 1)$, $\alpha_2 = (\sqrt{(1-p_X)/p_X}, -\sqrt{p_X/(1-p_X)}, \sqrt{(1-p_X)/p_X}, -\sqrt{p_X/(1-p_X)})$, $\beta_2 = (\sqrt{(1-p_Y)/p_Y}, \sqrt{(1-p_Y)/p_Y}, -\sqrt{p_Y/(1-p_Y)}, -\sqrt{p_Y/(1-p_Y)})$, and $P(x_1, y_1) = p_Xp_Y + s \sqrt{p_X(1-p_X)p_Y(1-p_Y)}$. In this case, we still have that $\langle \alpha_2, \beta_2 \rangle = s$, so the non-uniform marginals do not warp this angle.
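To make this computation concrete, here is a small numeric check (our own sketch; the coordinate order $(x_1,y_1), (x_2,y_1), (x_1,y_2), (x_2,y_2)$ follows the example above) of the $L^2(P)$ inner product $\langle \alpha_2, \beta_2 \rangle = \sum_{x,y} P(x,y)\, \alpha_2(x)\, \beta_2(y)$ for the uniform-marginal case:

```python
import numpy as np

s = 0.3
# Probabilities in coordinate order (x1,y1), (x2,y1), (x1,y2), (x2,y2)
P = np.array([(1 + s) / 4, (1 - s) / 4, (1 - s) / 4, (1 + s) / 4])
alpha2 = np.array([1, -1, 1, -1])  # depends only on x
beta2 = np.array([1, 1, -1, -1])   # depends only on y
cos_angle = np.sum(P * alpha2 * beta2)  # equals s = cos(a)
```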
>**Are there any other variance reduction techniques and how does data balancing compare to those techniques?**
Another approach for variance reduction through marginal fitting used in stochastic simulation and one-dimensional optimal transport is to use the [quantile transform](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.quantile_transform.html). That is, assuming that $X_1, \ldots, X_n$ are real-valued, we may replace them by $F_n(X_1), \ldots, F_n(X_n)$, where $F_n$ is the empirical CDF (in other words, $F_n(X_i)$ is the rank of $X_i$ divided by $n$). These random variables are uniformly distributed. By applying $F_X^{-1}$, the resulting (discrete) random variables are now approximately distributed according to $F_X$. This is an approximation of the inverse CDF sampling technique, as $F_X^{-1}(U)$ for a continuous uniform random variable $U$ will have CDF $F_X$. We call this the “Inverse Probability (IP) Weighting” method and compare it to the empirical distribution and the balanced distribution in a simulation (see Figure 9 of the attached PDF). Across sample sizes, balancing improves on mean squared estimation error. While the values near $s=0$ imply that there is “more information” in the marginals, the inherent variability of the distribution also increases, explaining how various methods do not achieve perfect variance reduction.
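A minimal sketch of this rank-based transform (our own illustration; the variable names are ours, and we divide ranks by $n+1$ rather than $n$ so the values stay strictly inside $(0, 1)$ before applying the inverse CDF):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)  # raw real-valued sample X_1, ..., X_n

# Empirical-CDF step: rank of each X_i (double argsort gives 1-based ranks)
ranks = np.argsort(np.argsort(x)) + 1
u = ranks / (len(x) + 1)  # discrete, approximately uniform on (0, 1)

# Inverse-CDF step: push through a target F_X^{-1}, here a standard
# exponential with F_X(t) = 1 - exp(-t), so F_X^{-1}(u) = -log(1 - u)
z = -np.log1p(-u)
```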
>**the experiments are somewhat limited in scope… Can the findings be extended to other areas?**
Thank you for raising this point. We also show performance on an image retrieval task in Figure 8 of the attached PDF using the Pascal VOC benchmark. We show mean average precision (MAP) across 30 queries in a 17k image database. Qualitatively similar findings hold, in that the additional iteration of balancing (over the original CLIP objective) yields performance improvements and warrants further investigation.
>**For Figure 2, Can the authors provide experiment results with more iterations?**
We show this experiment in Figure 6 of the attached PDF. Both from observing the performance of the $k=5$ variant as well as directly observing the marginals in Figure 3 of the manuscript, we see that the marginals stabilize very quickly (after $k=2$ iterations in most cases), and performance remains similar beyond this point.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for detailed responses and additional experiments. I've raised the score of contribution.
---
Reply to Comment 1.1.1:
Title: Response to os3n
Comment: We are happy to hear that the response to the review met your expectations. We are a bit puzzled as it appears on our end that the score is the same as before (5: Borderline accept). Does it appear to be the same for you? Thank you once again. | Summary: This work focusses on data balancing strategies in context of self-supervised learning. The main claim of the paper is that data balancing, commonly used to avoid representation collapse, has a variance reduction effect. The authors introduce an upper bound on the MSE of a balancing estimator, relating it to empirical risk minimisation. The main paper covers the key elements of the proofs, which is given in detail (and is extensive) in the appendix. Experiments are conducted to illustrate the impact of data balancing on examples described in the paper.
Strengths: This paper attempts to shed light on SSL training and the role of data balancing. The paper formalises the problem and develops extensive theory. The main results is pretty cool and insightful in the sense that the upper bound on the MSE shows that data balancing has a variance reduction effect. The topic is of interest to the community and the work is focussing on a poorly understood paradigm that is becoming dominant.
Weaknesses: I have three main concerns with this work:
1/ The theory is *very* extensive. The Appendix contains several pages of proofs that are difficult to parse and come on top of the formalism presented in the main paper. It seems like the main body could be simplified and made more to the point to convey the main gist of the contribution and make it more accessible.
2/ It is unclear how the data balancing examples in Section 2 map to the formalism introduced in Section 3. For example, what would (4) look like for example 1 and example 2?
3/ It is unclear what the experiments bring to the table and how they provide evidence to the main result. Making the link more explicit and explaining what are the key take aways from these results would help the reader.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have the following questions for the authors:
- Line 54: What do you mean by "$X$ and $Y$ are forms of the data that are related to, but distinct from, the form of $Z$" given that $Z$ is equal to $(X,Y)$?
- p4, example 1: What would the target marginals correspond to? What is $\psi_n^{(k)}$ and $P_n^{(k)}$ here?
- p4, example 2: What would the target marginals correspond to? What is $\psi_n^{(k)}$ and $P_n^{(k)}$ here?
- Why do we need \tilde{\psi}_n^{(k)} in (12) and how does it relate to \psi_n^{(k)}?
- How does (15) relate to the clip example introduced earlier in the paper and why is this a valid and sensible simplification to study?
- Does the main result have implications in practice in terms of design of algorithm?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors provided a brief discussion about future work at the end of the paper. The work presented here is theoretical in nature, attempting to provide new insights in existing approaches. There are not immediate implications in practice, but authors could discuss the scope of their work in more detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and questions. We address them below.
>**The theory is very extensive. The Appendix contains several pages of proofs that are difficult to parse and come on top of the formalism presented in the main paper.**
We provide the complete, self-contained proofs of all of our theoretical results, for reproducibility. The main analytical appendices (B, C, and D) are split by topic and include an outline for readability. As per the NeurIPS Paper Checklist, "For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?... The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition." This sketch is given between lines 235 and 259. We are happy to incorporate any suggestions to make them clearer.
>**It is unclear how the data balancing examples in Section 2 map to the formalism introduced in Section 3. For example, what would (4) look like for example 1 and example 2?**
The connection between the examples and the analysis in Section 3 is first introduced in the introduction, between lines 59 and 81. The main notational difference is that quantities are often indexed by $\theta$ in Sections 2 and 4, as they describe objectives that are evaluated at a specific parameter value. In Section 3, we drop this index as we are considering the statistical properties of balancing estimators (i.e., the loss that is actually optimized in an SSL scenario) at a fixed parameter value. Thus, the analogous value to $\psi_n^{(k)}$ in (4) is always $\sum_{x, y} h_\theta(x, y) R_\theta^{(k)}(x, y)$, which is another way of writing (2).
>**How does (15) relate to the clip example introduced earlier in the paper and why is this a valid and sensible simplification to study? Does the main result have implications in practice in terms of design of algorithm?**
The objective in Eq (15) is not a simplification but a generalization of the CLIP objective. It exactly reduces to the CLIP objective when $k = 1$. As seen in experiments, zero-shot accuracy of learned representations can improve when considering more iterations (see Figure 2 of the manuscript and Figure 6 of the attached PDF). The main result is meant to provide theoretical insight into the reasons behind the success of SSL training procedures that are widely popular, but not well-understood (as outlined between lines 23 and 48).
>**What do you mean by "$X$ and $Y$ are forms of the data that are related to, but distinct from, the form of $Z$" given that $Z$ is equal to $(X,Y)$?**
$Z$ is not necessarily equal to $(X, Y)$, except in empirical risk minimization (line 53). We use the examples from Section 2; in the case of self-labeling, $Z$ represents an image, $X=Z$, and $Y$ is a learnable cluster representation (Example 1). For contrastive learning, similarly to empirical risk minimization, we have $Z = (X, Y)$ where $X$ is an image and $Y$ is a caption (Example 2). In Appendix E.4, we have yet another example of metadata curation, in which $Z$ is an image-caption pair, $X = Z$, and $Y$ is an associated keyword.
>**example 1: What would the target marginals correspond to? What is $\psi_n^{(k)}$ and $P_n^{(k)}$ here?**
The target marginals are discrete uniform measures on $\mathcal{X}$ and $\mathcal{Y}$ (line 148). $P_n^{(k)}$ would be analogous to $R_\theta^{(k)}$ (line 132), and $\psi_n^{(k)}$ is (throughout) analogous to (2). We say analogous because the starting measure $R_\theta^{(0)}$ comes from the output of a model parametrized by $\theta$, whereas $P_n^{(0)} = P_n$ is used in Section 3 to represent the empirical measure of randomly drawn data.
>**example 2: What would the target marginals correspond to? What is $\psi_n^{(k)}$ and $P_n^{(k)}$ here?**
The target marginals are discrete uniform measures on $\mathcal{X}$ and $\mathcal{Y}$ (line 167). The other quantities have the same interpretation as in the previous example and others.
>**Why do we need $\tilde{\psi}_n^{(k)}$ in (12) and how does it relate to $\psi_n^{(k)}$?**
Because balancing iterations are only well-defined under the event $\mathcal{S}$ (line 192), $\tilde{\psi}_n^{(k)}$ is defined simply to handle the technical consideration of $\mathcal{S}$ not being satisfied.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttals by authors
Comment: I thank the authors for their response. My concern about the clarity of the material presented remains and I would have liked the authors to discuss the practical implications of their results.
However, I agree that, overall, the theory produced is non-trivial and appreciated the additional experiments conducted, which I found informative. I have raised my score as a result. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their hard work reviewing our paper and providing concrete comments! We collect the broad points made below and address other reviewer concerns in the individual responses. To summarize, our paper provides three theoretical innovations:
1. The first quantitative and non-asymptotic analysis of the statistical effect of data balancing (a.k.a. matrix scaling or biproportional fitting), which spans self-supervised learning (SSL), optimal transport, and statistics.
2. A novel interpretation of contrastive objectives relating them to data balancing with subsequent algorithmic implications.
3. A mathematical recursion formula for balanced probability measures that can be used as an independent technical tool.
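As a side illustration (our own sketch, not code from the paper), the data-balancing operation in point 1, also known as Sinkhorn or iterative proportional fitting, amounts to alternately rescaling the rows and columns of a joint measure until both marginals match a target (here uniform):

```python
import numpy as np

def balance(P, iters=200):
    """Iterative proportional fitting: alternately rescale rows and
    columns of a joint probability matrix toward uniform marginals."""
    P = P.copy()
    m, n = P.shape
    for _ in range(iters):
        P /= m * P.sum(axis=1, keepdims=True)  # each row now sums to 1/m
        P /= n * P.sum(axis=0, keepdims=True)  # each column now sums to 1/n
    return P

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P /= P.sum()     # arbitrary positive joint distribution
B = balance(P)   # both marginals now approximately uniform
```

In the paper's setting only a finite number $k$ of such iterations is applied; the sketch above simply runs enough iterations to converge numerically.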
The main contributions of the paper are of a theoretical nature, for which we provide complete proofs in the Appendix sections, and we also provide numerical experiments as illustrations. While the paper should be assessed as such, we have addressed all feedback from the reviewers.
**The main concerns across reviews were the following.**
**Additional experimentation:** The reviewers requested additional experiments, including performance on tasks other than zero-shot image classification, comparison to other balancing approaches, and extensions of the balanced-CLIP variant for more than two iterations. These experiments are described in the responses and the associated results can be found in the attached PDF.
**Intuition behind theoretical assumptions:** The reviewers also requested clarity on theoretical assumptions, such as the spectral gap condition or whether uniform marginals are needed. We address these with both intuitive explanations and companion simulation experiments.
Please see the individual responses for more details. Thank you!
Pdf: /pdf/9a2327f1f68a4c575f3bcc28e50af26b54bf4808.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Improving Subgroup Robustness via Data Selection | Accept (poster) | Summary: This paper proposes a data-centric model debiasing technique to identify and remove data which harm worst-group accuracy. This method removes fewer data than standard balancing techniques and can be adapted for settings with and without group annotations. Experiments are provided on standard group robustness benchmark datasets, and the method is shown to promote bias discovery on ImageNet in the absence of group annotations.
Strengths: 1. The differences between the full-information, partial-information, and no-information regimes are clearly delineated, and the advantages of D3M and Auto-D3M in each setting are comprehensively discussed. In the no-information regime, which is important and yet understudied in the literature, the authors propose a novel and elegant Auto-D3M algorithm based on TRAK, which I expect to be a strong baseline for future work in this setting.
2. D3M and Auto-D3M compare favorably to other common data balancing techniques such as subsampling a class-balanced dataset, removing far fewer points while achieving better WGA performance.
3. Sections 5.2 and 6 are comprehensive and very useful for developing an intuitive understanding of the proposed group alignment scores and TRAK matrix. The ability to discover spurious correlations in complicated datasets without group annotations is likely to be useful for practitioners.
4. The explanations of each algorithm -- D3M, Auto-D3M, and TRAK -- are clear and well-written. The mathematics is well-explained and sufficiently technical without being convoluted.
Weaknesses: 1. In Table 1, the only comparison to previous work provided for the no-information regime is ERM, which is generally understood to be a weak baseline for group robustness tasks. Some examples of comparisons I would expect to see in this setting include MaskTune [1], uLA [2], DivDis [3], or CB-LLR [4]. Similarly, in the partial-information regime, additional comparisons may include AFR [5] or SELF [4]. (I do not expect the authors to include all these comparisons, but it would be beneficial to discuss the most appropriate ones.)
2. In Section 6, I believe a reference and comparison to [6] is missing. Similarly to this paper, [6] uses a data-centric method to discover and mitigate spurious correlations in the ImageNet dataset.
3. Tables 1, 2, 3, and Figure 6 lack error bars. It would improve the scientific rigor of the paper to run these experiments over multiple random seeds and provide standard deviations or confidence intervals.
4. There are a couple typos and grammatical errors in the writing, e.g., on lines 482 and 484. Also, the bibtex could use an update, as some references are out of date (e.g., Kirichenko et al. -- [21] in the paper -- is listed as an ArXiv preprint but appeared at ICLR 2023).
***References***
[1] Taghanaki et al. “MaskTune: Mitigating Spurious Correlations by Forcing to Explore”. NeurIPS 2022.
[2] Tsirigotis et al. “Group Robust Classification Without Any Group Information.” NeurIPS 2023.
[3] Lee et al. “Diversify and Disambiguate: Learning From Underspecified Data.” ICLR 2023.
[4] LaBonte et al. “Towards last-layer retraining for group robustness with fewer annotations”. NeurIPS 2023.
[5] Qiu et al. “Simple and Fast Group Robustness by Automatic Feature Reweighting.” ICML 2023.
[6] Moayeri et al. “Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases.” NeurIPS 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the hyperparameter k from D3M (number of examples to remove) the same as the hyperparameter k from TRAK (dimensionality of the gradient projection)? If not, it would be helpful to use different letters and detail how the k in TRAK is chosen.
2. Why is random initialization used for CelebA, as opposed to standard ImageNet initialization? Do the CelebA comparisons in Table 1 also use random initialization?
3. In the appendices, the tables reference proposed methods TRAK and Auto-TRAK. Is this meant to read D3M and Auto-D3M respectively?
4. While not strictly necessary, I would be curious to see a qualitative comparison of the results from Section 5.2 and Figures 3 and 4 with other data selection techniques from the robustness literature. How do the data with negative alignment scores compare with data selected via misclassification [1], disagreement [2], other influence functions [3, 4], or Shapley values [5]? Are negative alignment scores perhaps more interpretable than these other techniques?
***References***
[1] Liu et al. “Just Train Twice: Improving Group Robustness without Training Group Information.” ICML 2021.
[2] LaBonte et al. “Towards last-layer retraining for group robustness with fewer annotations”. NeurIPS 2023.
[3] Koh and Liang. “Understanding Black-box Predictions via Influence Functions.” ICML 2017.
[4] Feldman and Zhang. “What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation.” NeurIPS 2020.
[5] Ghorbani and Zou. “Data Shapley: Equitable valuation of data for machine learning.” ICML 2019.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: While the limitations of group annotations are sufficiently discussed, I have an additional question about compute efficiency of the proposed method. The experiments were performed on 8x A100 GPUs, which is much more extensive than comparative methods (e.g., [1] only needs to train a linear classifier, which, being essentially a logistic regression, is very efficient). Is this level of compute necessary for D3M? I am mainly concerned about the TRAK subroutine of Auto-D3M, where in step (d) the covariance matrix of the projected gradients of the entire dataset is constructed and inverted T=100 times. How much wall-clock time and GPU VRAM does this step consume, and how does it scale with the hyperparameter k?
***References***
[1] Moayeri et al. “Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases.” NeurIPS 2023.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review, and address their questions below.
**[Further comparisons to the no-information regime]**
Our goal was to include the strongest baselines (to the best of our knowledge) and show that our methods perform better than or comparably to them. For instance, since Auto-D3M outperforms most supervised baselines, it also outperforms weaker unsupervised (i.e., no-information) baselines. However, we would be happy to include a table in the appendix in the final version of the paper with the suggested baselines.
**[Additional Reference and Typos]**
Thank you for the additional reference! We will include this in the final revision of our paper, and will additionally fix the typos you mentioned.
**[Error bars]**
We thank the reviewer for this feedback. We’ve added versions of Table 1 and Figure 6 with error bars to the global rebuttal document. Figure 2 already has a confidence region over 10 runs, but is also duplicated in the global rebuttal document.
**[Hyperparameter k]**
The hyperparameter k from D3M (number of examples to remove) is not the same as the k from TRAK (dimensionality of gradient projection). We will change the variable name for the final revision.
**[Initialization]**
Our focus in this paper is the impact of the training dataset, so we used randomly initialized models to keep the setup straightforward (though our method does not depend on the initialization scheme). We then switched to ImageNet initialization for Waterbirds specifically because we found that the base model performance is extremely poor and noisy when initialized from scratch.
**[TRAK/Auto-TRAK]**
We apologize for the confusion here: we originally called our method TRAK and Auto-TRAK, and then changed the names to D3M and Auto-D3M (but forgot to change the appendix). We will fix this for the final revision.
**[Qualitative Comparison]**
We thank the reviewer for the suggestion! While we didn’t have time in this response period to experiment with other data selection techniques as suggested, we agree that this would be an interesting avenue for further investigation.
**[Computation Runtime]**
We appreciate the reviewer’s concern about the computational expense of our approach. We use TRAK as a subroutine in our method: a detailed analysis of the wall-clock time for TRAK can be found in Figure 1 of their paper [1]. Specifically, as noted in Appendix A.4 of their paper, the construction and inversion of the covariance matrix of projected gradients is actually quite cheap (this matches our experience): indeed, more of the runtime comes from training the models for the ensemble (TRAIN_TIME in their computation). Here, as noted in Appendix E of the TRAK paper, TRAK can be further optimized by re-using multiple checkpoints of the same model for the ensemble, further tuning the projection dimension, or even by reducing the number of models from 100 (as noted in their Appendix E.1). For simplicity’s sake, we did not explore any of these optimizations in our work, but would be happy to discuss them further for the final revision.
Stepping back, TRAK is simply a subroutine called by D3M and can be replaced by more performant and efficient datamodeling/data attribution methods as they emerge. The rapid pace of data attribution as a field suggests that the computational cost of this subroutine will continue to decrease with time.
[1] Park, Sung Min, et al. "Trak: Attributing model behavior at scale." arXiv preprint arXiv:2303.14186 (2023).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the comprehensive response, and especially for the inclusion of additional figures with error bars. With the additional clarifications and discussion, I believe this paper will be interesting for the community and set a strong baseline for future work. Therefore, I recommend acceptance. I have raised my Soundness score to a 3 and overall rating to a 7. | Summary: The paper introduces a method called Data Debiasing with Datamodels (D3M) that addresses the problem of model bias (using the worst-case loss over groups as the metric). The approach leverages a process known as datamodeling to predict model behavior based on training data influence, focusing on removing data points that contribute heavily to worst-group error. The paper illustrates D3M’s effectiveness across various datasets, showing that it can outperform both traditional model and data intervention strategies. Moreover, the method is adaptable to the setting without explicit subgroup labels.
Strengths: Originality: The paper introduces an innovative approach, Data Debiasing with Datamodels (D3M), which creatively combines elements from influence functions and data-centric fairness strategies to address model bias. D3M focuses on optimizing the dataset by identifying and removing specific training instances that disproportionately skew model performance against minority subgroups. This methodological innovation brings high originality to the paper.
Quality: The authors conduct a thorough analysis across multiple datasets, effectively demonstrating how D3M enhances worst-group accuracy. The use of comparative baselines and the examination of different scenarios (including those without explicit subgroup labels) shows the robustness and reliability of D3M.
Clarity: The paper is relatively well-structured. The effective use of diagrams and necessary mathematical definitions help demonstrate the results. Moreover, case studies help readers understand the use cases of D3M.
Significance: The significance of this work is relatively substantial, addressing the issue of the subgroup biases of models. Moreover, by providing a tool that can improve model fairness without needing subgroup labels, the paper contributes to the applications where the group labels are unavailable.
Weaknesses: One weakness of the method is its exclusive focus on improving worst-group accuracy without presenting results on how it might affect the overall accuracy for all groups. This raises concerns about potential trade-offs, where enhancing fairness for the worst-performing subgroup could compromise the model's general performance. Additionally, the paper does not thoroughly explore how different model configurations might influence the outcomes. Understanding how variations in model architectures, initial parameter settings, or training procedures affect the effectiveness of the method is useful for validating its robustness and adaptability to diverse scenarios. Finally, a relatively minor weakness is that the demonstration of the paper could be more organized and coherent.
Technical Quality: 4
Clarity: 2
Questions for Authors: After improving the worst-group accuracy, does the model still maintain good overall accuracy? How does the method impact the performance across all groups? Were the results of the method tested across various model architectures to confirm its generalizability? In scenarios lacking explicit group labels, were there any experiments conducted to assess the effectiveness of the pseudo group labeling approach using the datamodel matrix in the setting or case studies in this paper?
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The paper presents relatively robust results. However, I do not see it adequately address the limitations of the methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing a thorough review of our paper. Below, we address the feedback points raised by the reviewer:
**[Focus on worst-group accuracy (WGA)]**
The reviewer raises the concern of evaluating only WGA without reporting overall accuracy. We note that we report balanced (i.e., average across all groups) accuracies for all of our experiments in Tables 2 and 3 in the appendix (we report WGA in the main paper due to space constraints). We will mention those results more prominently in the main paper.
**[Impact of model architectures and hyperparameters on D3M performance]**
We agree that investigating how model configurations change the efficacy of D3M is a promising avenue to pursue! In particular, it would be interesting to see how pre-training (e.g., on ImageNet) changes the behavior of D3M when there is a strong initial prior. For the Waterbirds dataset we do use ImageNet pre-trained models, but we don’t have a direct comparison on the same dataset between D3M’s behavior with ImageNet and randomly initialized models. We would be happy to pursue this for the final paper.
**[Paper organization]**
We will edit the camera-ready version of the manuscript to improve the organization of the paper where this is needed. We kindly ask the reviewer to elaborate on the paragraphs/sections which may need additional organizational work.
**[Effectiveness of the pseudo-labels]**
When reporting WGA for Auto-D3M (where we do not explicitly use any group labels in the method) we compute WGA using the groundtruth test group labels (e.g., in Table 1). We find that while the pseudo-labels are not exactly the same as the groundtruth group labels, they are aligned enough that performing D3M with the pseudo-labels (e.g., Auto-D3M) competitively improves worst group accuracy with respect to the groundtruth groups.
In our ImageNet case study, we similarly report WGA in Figure 6 by performing manual group assignment on the 50 test images per class (e.g., for tench, we manually label for each image for “whether a human is present”). Again we find that our pseudo-labels are aligned enough with these manual labels that performing Auto-D3M improves accuracy with respect to the manually assigned groups.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I appreciate that the authors refer to Tables 2 and 3. It would be acceptable to keep them in the Appendix, but it would be better to highlight this with one or two sentences in the main body of the paper. Regarding the organization of the paper, I feel that Section 4 could benefit from more focused editing, as it encompasses discussions from multiple perspectives. It might be helpful to provide a clearer outline at the beginning of the section, or perhaps divide it into subsections with a summary at the start of each. For different model configurations, I believe it would be a significant weakness if comparisons are not conducted. I hope this will be addressed in the final version of the paper. I look forward to the final draft. | Summary: This paper introduces a new data debiasing technique called Debiasing with Data Attribution (DDA). DDA utilizes data modelling framework to identify and eliminate training examples that negatively impact the accuracy of the worst-performing groups. Additionally, the paper presents AUTO-DDA, an extension of DDA that can identify biases even without prior knowledge of group information. The proposed methods are validated through experiments on various datasets such as CelebA-Age, CelebA-Blond, Waterbirds and MultiNLI.
Strengths: 1. The proposed approach is simple and effectively improves the performance on real-world datasets such as ImageNet.
2. The paper is presented well and easy to follow.
Weaknesses: 1. The performance of ImageNet is only reported on selected classes. How are the classes selected for evaluation? Is it based on the amount of bias present in the classes?
2. I am unsure if the proposed approach is effective when the majority of the data consists of bias-aligned points. For example, if there are only a few conflicting points and the rest are bias-aligned, how will the data be removed? I doubt the approach would still be useful for debiasing since a large part of the data is still going to be majorly biased. Even if the authors claim that the majority of the bias-aligned points will be removed, I believe the model would still overfit to the data since the final dataset would be extremely small. Analyzing the performance of the approach with varying numbers of bias-conflicting points (1%, 5%, 10% of CMNIST (10-class classification)) in the dataset would be beneficial to understand this scenario. This experiment would provide insights into how well the approach scales to real-world scenarios where the degree of bias is significantly high.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the questions in the weakness section.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback on our work. Below, we address the concerns raised by the reviewer:
**[Selection of specific classes chosen for ImageNet experiments]**
We selected classes that previous work found to have biases in the ImageNet dataset. Specifically, biases in the classes Red Wolf, Tench, Cauliflower, and Strawberry were studied in [1], and biases in the classes Snorkel, Howler Monkey, and Dog Sled were studied in Hard ImageNet [2].
**[Debiasing datasets with few bias-conflicting examples]**
As the reviewer points out, if basically all the training data points are bias-aligned points (and there are only a couple of examples that are not aligned) a dataset selection method may result in a very small training dataset which risks the model overfitting. This is a misalignment of objectives: here we are optimizing for worst group accuracy. In such a case, the worst group accuracy of the ERM model would be close to zero, since we entirely rely on the bias and thus completely fail on the non-bias aligned points. Reducing to a small dataset through dataset selection might degrade the overall accuracy, but will still likely improve the worst group accuracy.
However, even in this case, D3M is a better approach than dataset balancing, since we are more sample efficient when removing data points (removing the “worst offenders”) as demonstrated in Figure 2. Moreover, both Celeba-Blond and Waterbirds are highly bias-aligned datasets, where the smallest group constitutes a very small number of examples. As we show in Table 1, D3M is still effective in these scenarios.
[1] Jain, Saachi, et al. "Distilling model failures as directions in latent space." arXiv preprint arXiv:2206.14754 (2022).
[2] Moayeri, Mazda, Sahil Singla, and Soheil Feizi. "Hard imagenet: Segmentations for objects with strong spurious cues." Advances in Neural Information Processing Systems 35 (2022): 10068-10077.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: **[Selection of specific classes chosen for ImageNet experiments]** Thanks for clarifying this.
**[Debiasing datasets with few bias-conflicting examples]**
I request that the authors validate the claim empirically by training a simple 3-layer MLP using D3M for various percentages of bias-conflicting samples (0.05%, 1%, and 5%) on the CMNIST dataset, where the proportion of bias-conflicting samples can be easily controlled, as suggested in my initial review. Ablating over various percentages of bias-conflicting samples would also help in understanding the sensitivity of the proposed method to the number of bias-conflicting samples, which I am concerned about.
---
Rebuttal 2:
Title: Experiment is incoming!
Comment: Hi Reviewer QjGu,
We really appreciate your suggested experiment and have spent the last few days implementing it and getting the results (we are somewhat compute-constrained at the moment)! Thank you for your patience; we should have results by the end of the day today.
- Authors
---
Rebuttal Comment 2.1:
Title: Experiment with few bias-conflicting examples
Comment: Thank you for your patience! We've run the experiment you requested.
## Setup
For the MNIST data
- Let $\hat{y}$ be the binary signal according to the digit: (0 if $<5$ and 1 if $\geq 5$). The true label $y$ has a 90% chance of being $\hat{y}$ and 10% chance of being $1-\hat{y}$
- Let $y_{col} = y$ with $p_{corr}$ probability and $1-y$ otherwise. If $y_{col}$ is 1, color the image red, otherwise if 0 color the image green.
Thus the color ($y_{col}$) aligns with the true label ($y$) with $p_{corr}$ probability and the digit $\hat{y}$ with 0.9 probability. If $p_{corr} > 0.9$, the ERM model relies more heavily on the color than the digit.
We take the validation set as 10% of the training split of MNIST. We consider three training splits in which there are very few bias-conflicting examples:
- $p_{corr} = 0.95$
- $p_{corr} = 0.99$
- $p_{corr} = 0.995$
And our test set has $p_{corr} = 0.1$ (so the bias alignment is reversed).
Here, the groups are the pairs $(y, y_{col})$ (i.e., the final class and the color of the image).
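For concreteness, the sampling procedure above can be sketched in a few lines (a minimal sketch; `make_cmnist_labels` and its arguments are our names for illustration, not from our actual implementation):

```python
import random

def make_cmnist_labels(digits, p_corr, noise=0.1, rng=None):
    """Assign true labels and colors following the setup above."""
    rng = rng or random.Random(0)
    labels, colors = [], []
    for d in digits:
        y_hat = 0 if d < 5 else 1  # binary signal from the digit
        # true label y agrees with the digit signal with probability 1 - noise
        y = y_hat if rng.random() < 1 - noise else 1 - y_hat
        # color label agrees with the true label with probability p_corr
        y_col = y if rng.random() < p_corr else 1 - y
        labels.append(y)
        colors.append("red" if y_col == 1 else "green")
    return labels, colors
```

With `noise=0.1` and `p_corr` set above 0.9, the color is a stronger predictor of the label than the digit itself, which is what drives the ERM model to rely on the spurious color feature.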
## Results
| $p_{corr}$ | ERM WGA | D3M WGA | Balancing WGA (oracle) | ERM Avg. Acc | D3M Avg. Acc | Balancing Avg. Acc (oracle) | Frac Examples Removed by D3M |
|:--------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:| :-------:|
| $p_{corr} = 0.95$ | $20.52 \pm 0.85$ | $44.48 \pm 0.72$ | $78.66 \pm 1.82$ | $29.00 \pm 0.37$ | $89.79 \pm 0.06$ | $82.78 \pm 0.44$ | 0.73 |
| $p_{corr} = 0.99$ | $1.95 \pm 0.25$ | $55.15 \pm 3.53$ | $69.98 \pm 3.21$ | $12.00 \pm 0.22$ | $87.99 \pm 0.065$ | $75.86 \pm 0.66$ | 0.80 |
| $p_{corr} = 0.995$ | $0.62 \pm 0.13$ | $42.24 \pm 2.05$ | $26.16 \pm 17.62$ | $10.62 \pm 0.11$ | $87.31 \pm 0.21$ | $60.19 \pm 6.55$ | 0.83 |
**ERM vs. D3M**: We find that even at these more extreme scenarios of bias (where the vast majority of examples are removed), we can significantly improve both WGA and average test accuracy of the model using D3M over using ERM. Our performance does not significantly degrade even at $p_{corr} = 0.995$ where there are very few bias conflicting examples.
**Number of examples removed**: Also, note that to balance the dataset, more than 90% of examples would need to be removed. Since our method is more sample efficient than balancing, we can remove a smaller fraction of the examples. Our heuristic for choosing $k$ (the number of examples to remove) *overshoots* the best number. In particular, the previously majority classes are now the worst performing groups in all cases: we can thus improve the accuracies further by removing *even fewer* examples (e.g., by searching for $k$ on the validation set).
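As a back-of-the-envelope check of the fraction removed by oracle balancing (ignoring the 10% label noise, so the numbers are approximate):

```python
def frac_removed_by_balancing(p_corr):
    """Oracle balancing keeps equally many bias-aligned and bias-conflicting
    examples; with an aligned fraction of p_corr, it keeps 2 * (1 - p_corr)
    of the data and removes the rest."""
    return 1 - 2 * (1 - p_corr)
```

For $p_{corr} = 0.95$ this removes 90% of the data, and for $p_{corr} = 0.995$ it removes 99%, consistent with balancing needing to discard more than 90% of examples in these settings.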
**Comparison to balancing**: We further compare to balancing the dataset. We note that this method requires *training group annotations*, while our method does not. In this simplified setting, most of the examples are equally harmful for WGA, so naive balancing can do well. Here balancing the dataset improves WGA over D3M for $p_{corr} = 0.95, 0.99$. However, balancing more significantly degrades overall accuracy as there are very few points remaining in the dataset. For $p_{corr} = 0.995$, balancing the dataset requires such a small overall training dataset that both WGA and overall accuracy are harshly degraded compared to D3M.
We would be happy to include this result within our paper. We apologize for the delay: we worked hard over the weekend to get these results in (but were delayed due to some cluster issues). We would appreciate it if the reviewer would reconsider their score in light of this experiment.
Strengths: - Significance: This work effectively identifies and removes training samples to improve the worst-group accuracy. As demonstrated by the experiments, this method outperforms both standard model-based and data-based approaches.
- Comprehensive Datasets: A wide range of datasets is used for image and text classification tasks, with corresponding benchmarks evaluated against existing methods as listed in Appendix B, "Details of Experiments."
Weaknesses: - Writing and Format: the presentation of the paper needs readability improvement:
- Redundant section start: Line 81
- Excessive parenthetical comments and irregular format: Lines 20, 25, 28, 32, 67-71, 83, etc.
- In addition to isolating problematic training data, experiments should be conducted to assess the impact on the necessity of further hyperparameter tuning and to strengthen the case for the effectiveness of the proposed method.
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Please read rebuttal and provide more substantive comments
Comment: Dear reviewer FTVd,
Your review appears to mostly mention formatting issues. Please read the authors' response to other reviews and provide comments regarding the content of the paper, if you have any.
Thanks,
AC
---
Rebuttal Comment 1.1:
Comment: I have read the other reviews and the authors' responses. I look forward to the final draft of the paper, as the authors will be editing the camera-ready version to improve its organization. Regarding the content, I appreciate the insights of the other reviewers and would like to increase my score to 4. | Rebuttal 1:
Rebuttal: Thank you for your reviews! We respond to the questions of each reviewer individually below. We additionally include results with error bars in the attached PDF as requested by Reviewer yCP5.
Pdf: /pdf/a9bf007e68534ce595870fb5bcc21d558e941ad5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Speaking Your Language: Spatial Relationships in Interpretable Emergent Communication | Accept (poster) | Summary: This work investigates the presence of spatial deixis (e.g., spatial references in language dependent on the context of the utterance) in a signalling game within the paradigm of emergent communication.
It begins by introducing a variant of the signalling game which requires the sender to communicate the relative position of some element in a sequence of integers visible to the receiver.
Analysis of the semantics of the emergent communication is performed primarily through looking at normalized point-wise mutual information.
These analyses show that the emergent communication regularly uses spatially-referent messages (or sub-message units), validating the presence of spatial deixis in the environment.
Strengths: In conjunction with standard criteria, there are three characteristics that are particularly important for emergent communication research: reusability (how easily can another researcher use the products of this research), generalizability (how much do the findings of this research apply broadly to our knowledge of emergent communication), and directedness (does this research contribute concretely to particular questions in emergent communication research).
### Quality
- (major) The experimental design is of good quality; the methods are in line with the standards of the field.
### Clarity
- (major) The language of this paper is easy to read and illustrates the central points effectively.
### Reusability
- (major) The provided code appears to be of good quality (have not attempted to run it); this work would be very easy to reuse in subsequent experiments.
### Generalizability
- (minor) The environment is relatively simple, with few confounding factors, making it easier to draw conclusions about more general tendencies in EC.
### Directedness
- (major) This paper is directed toward an important goal in emergent communication: discovering/engineering more human-like features in EC, namely deixis or context-dependent reference.
- (minor) Secondarily, this paper also demonstrates some degree of compositional semantics and syntactic features.
Weaknesses: ### Quality
- Nothing of note.
### Clarity
- (minor) One or two more of the points in Section 5 need to be illustrated (likely with just a table); although the text is mostly clear, some tables which aggregate what is said there would make referencing the paper much easier.
- (minor) It is a little bit confusing that your hypotheses are what you expect to be false; it might be clearer to state hypotheses in positive (even if the statistical test you are using is rejecting a null hypothesis of a random baseline).
### Reusability
- Nothing of note.
### Generalizability
- (major) It is not clearly the case that the environment addresses deixis in a way that applies to emergent communication environments more generally (see first question for more).
- (major) There is not much discussion on how the deixis investigated in this paper is applicable to emergent communication more generally.
### Directedness
- Nothing of note.
Technical Quality: 4
Clarity: 3
Questions for Authors: - The biggest question regarding the paper for me is how do we make the jump for the simple example of deixis presented in the empirical investigation of the paper to a more robust form of deixis. It is not wrong (and likely correct, in fact) to have started out with a toy problem, but I believe there either needs to be empirical work and/or some light theoretical work on what exactly is meant by "deixis" in this paper, how the environment investigated satisfies that definition, and how this is relevant for further environments. I might raise my score if the authors could respond with a slightly more formal characterization of deixis and how it maps both to the current environment and more sophisticated, "natural" environments (e.g., embodied multi-agent environment).
- In the environment, are the integers actually represented as integers in the neural network or are they encoded as one-hot vectors? If they are OHVs, it is not clear that it is the case; if they are not OHVs, it seems like an odd design choice to feed a scalar into an NN when it is representing something categorical.
- How are the various inputs to the receiver agent actually fed into the network? Are they just concatenated "temporally" and given as a sequence to the GRU?
### Comments
- The fact that this paper is inducing a segmentation of emergent communication is (minor) contribution in and of itself, so I think it deserves a mention in the introduction.
- The "td" variable needs to be introduced before the example; I am assuming it is the list of distractors plus the correct answer, but it should be stated explicitly when defining the vectors earlier.
- Table 2: Don't reuse X; use other variables.
- Include the actual URL for the (Anonymous) GitHub so that it is obvious it is a link.
- I don't understand the point being made at Line 337.
- I do not think it is appropriate to specifically mention "SVO" as an interpretation of the language since there is no clear way to distinguish between nouns, verbs, subjects, or objects; I think it is fine to say that there is syntactic structure, but I am skeptical of there being evidence to make any claim further than that here.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer vZ26
We thank the reviewer for the insightful feedback and constructive criticisms. We appreciate that the reviewer found our experimental design and overall approach to be of high quality and relevant to the field of EC. We also appreciate that the code was found to be of high quality.
To address the comments of the reviewer:
## Questions
> The biggest question regarding the paper for me is how do we make the jump for the simple example of deixis presented in the empirical investigation of the paper to a more robust form of deixis. (...)
Let $O$ be the observation available to an agent. Let $O_p$ be a point in the observation from which the relative position will be described. For example, $O_p$ can represent the agent position or the position of an object. Let $O_t$ be the target object to which the relative position is being described, for example, a different object or agent. Let $d(O_t)$ denote the relative distance of $O_t$ to the reference point $O_p$. $\theta(O_t)$ is defined as the angle between $O_p$ and $O_t$, using for example the vector angle.
The spatial deixis could then be defined as a mapping of tuples $(d, \theta)$ to specific $n$-grams. For example, a tuple $(2,\pi)$ can be mapped to the $n$-gram $15$ from this paper, representing "2 to the left". This, combined with specifying the $O_t$, completes the message, which in this paper could be $[15,0,2]$ to mean "$O_p$ is 2 to the left of 15".
The version of deixis presented in this paper is a special case of this general deixis, where $O$ is a 1D tensor, $O_p$ is prescribed to be always $-1$, and the $\theta$ can only take values of $\pi$ and $0$, as we operate in 1D. However, the concepts presented can be easily extended to the case of 2D or 3D tensors and observations, and when the agent chooses both $O_p$ and $O_t$. When extended to multi-agent settings, the agents may need to additionally specify their position, or both $O_p$ and $O_t$ in their messages, since their observations may have different relative positions.
> In the environment, are the integers actually represented as integers in the neural network or are they encoded as one-hot vectors? (...)
The integers in the environment are indeed represented as scalars. Based on your comment, we have run additional tests of the OHV approach and found that it performed significantly worse. Instead of learning to communicate about their observations, the agents appeared to be memorising the dataset, increasing accuracy on the training set while staying at random on the validation dataset. In contrast, with the scalar approach, the agents learn to communicate, increasing their accuracy on the validation set. This test was performed on agents with the same hidden sizes, except for the layers processing the one-hot vectors. It may be the case that with more tuning of the layer sizes the OHV approach would work, but this is outside the scope of this paper. An additional advantage of the scalar approach is the saving in trainable parameters (98.8K vs 72K), about a 27% reduction in size.
> How are the various inputs to the receiver agent actually fed into the network. (...)
Let $B$ be the batch size, $T$ the temporal/sequence length dimension, $C$ the hidden size of the GRU and Linear layers (which are the same), $N_v$ the vocabulary size of the agents, and $N_l$ the message length.
The output of the sender agent is in the form of $[B,N_l,N_v]$ , or $B$ tensors, each representing the probability of a character being in a given position of the message, as per the Gumbel-Softmax reparametrisation.
This is fed into the receiver agent, to `GRU 1` in Figure 1, from which only the hidden states for the last character probabilities are collected, i.e., only the hidden states after each message has been processed in full. This results in a tensor of shape $[1,B,C]$, where $1$ represents the number of layers of the GRU.
This tensor can then be used as the initial hidden state for `GRU 2` in Figure 1.
`GRU 2` receives the sequences in the form of $[B,T,1]$, with $T$ being our sequence length, and the $1$ representing the scalar values. We again collect only the final hidden states for each element, resulting in the output of shape $[1,B,C]$.
The $td$ vector is passed through a linear layer, in shape $[B,5,1]$, with $5$ representing the 4 distractors and 1 correct target. Similarly to the `GRU 2` input, we use scalar values. The linear layer outputs a tensor of shape $[B,5,C]$.
Finally, the `GRU 2` output is matrix multiplied with the $td$ embeddings. We first permute the `GRU 2` output tensor, creating a tensor of shape $[B,C,1]$.
Then the two tensors are multiplied, i.e., $[B,5,C] \times [B,C,1]$, resulting in a tensor of shape $[B,5,1]$, which is then squeezed to produce the batch of target predictions $[B,5]$.
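To make the shape bookkeeping above easier to follow, here is a pure-Python trace of the tensor shapes through the receiver (a sketch: the GRUs and linear layer are represented only by the shapes of their outputs, and the sizes are illustrative, not the paper's):

```python
def matmul_shape(a, b):
    """Shape rule for a batched matrix product: [B, m, k] x [B, k, n] -> [B, m, n]."""
    (B1, m, k1), (B2, k2, n) = a, b
    assert B1 == B2 and k1 == k2, "batch and inner dimensions must agree"
    return (B1, m, n)

B, T, C, N_v, N_l = 64, 10, 128, 26, 3  # batch, seq. length, hidden, vocab, message length

sender_out  = (B, N_l, N_v)  # Gumbel-Softmax probabilities for each message position
gru1_hidden = (1, B, C)      # GRU 1 hidden state after reading the full message
gru2_hidden = (1, B, C)      # GRU 2 hidden state after reading the sequence [B, T, 1]
td_embedded = (B, 5, C)      # linear layer applied to the td vector [B, 5, 1]

query = (B, C, 1)                          # gru2_hidden permuted from [1, B, C]
scores = matmul_shape(td_embedded, query)  # -> (B, 5, 1)
predictions = scores[:-1]                  # squeeze the last dim -> (B, 5)
```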
## Comments
> I don't understand the point being made at Line 337:
We apologise for the confusion. We argue that non-compositional messages are easily compressible when transmitted in a real-world setting with limited bandwidth. For simple encodings, since these messages are monolithic, they could be compressed to a single token/character. In contrast, compositional messages require at least two tokens/characters, one for each integer/positional component. We will clarify this in the revised version.
> I do not think it is appropriate to specifically mention "SVO" as an interpretation of the language since there is no clear way to distinguish between nouns, verbs, subjects, or objects. (...)
We have removed the claim of SVO ordering from the paper. Instead, we discuss a possible syntactic structure, but make no comparisons to natural language syntax or concepts, as reviewer `VwFJ` suggested.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Regarding the response to the "generalization" point of the review starting with "Let O be the observation available to an agent", I think this is a step in the right direction but still needs more development (although it is a little hard to judge the proposed formalization out of context).
I think the main shortcoming of the sketch of formalization proposed is that it is still not quite clear what "deixis" is in a sort of environment-agnostic way, which is what I was getting at in my review (although potentially unclearly).
For example, $O_p$ is a point in the observation serving as the origin, but what does "point in the observation" mean in some environment-independent way?
Maybe one could start off by specifying that the environment has some structure which supports deictic expressions in the first place (e.g., space, time, succession, 1st-2nd-3rd person distinction).
Additionally, there needs to be some way in which the potentially deictic expression is grounded in the context; that is, we can't just have some arbitrary $O_p$, there needs to be some way in which $O_p$ is derivable from the context of the utterance.
With these in mind, we could maybe define deixis in an EC environment along the lines of the following:
- The observations of the environment support some notion of position (or something more general, if you can think of it).
- Using this notion of position, we can take some original point, apply a transformation (i.e., moving a distance in space), and derive the target point.
- The original point is derived from the context of the utterance.
I am not saying this is the best or only way to do it, but hopefully this better illustrates what I meant by "generalization". I think the account offered in the rebuttal is on the right track, but maybe a couple extra steps could be taken to make it environment-agnostic.
I am moving my recommendation from a 5 to a 6 because I think the proposed changes are close to making this paper more widely applicable to other EC environments.
Nevertheless, I probably will not move my rating to a 7 because I am not able to see a fully-developed formalization in context.
---
As an aside, regarding OHVs vs scalars, it seems odd that OHVs would perform worse since they essentially pre-process the scalar values into something the neural networks can work with---maybe this has something to do with over-parameterization when switching from scalars to OHVs, but I don't think this is all that important (aside from making it clear that scalars are being used, since one might naturally assume integers are represented as OHVs), as it does not invalidate the experiments.
Everything else not mentioned here I found was addressed suitably by the rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued engagement with our paper! We improve the formalisation of deixis below, making it more environment-agnostic.
To do so, we expand the observation to be an $m$-dimensional tensor, which can represent any possible properties of the observation.
## Formalisation of deixis
Let $O$ represent an abstract observation that an agent perceives from its environment, $O \in R^{n \times n \times \cdots \times n} = R^{n^m}$. The $m$ dimensions can represent spatial, temporal, or other positions. For example, $R^{n^3}$ could represent a 3-D observation.
Let $O_p$ be the reference point in $O$, whose coordinates are represented by an $m$-tuple of real numbers $(x_1, x_2, \dots, x_m)$, and let $O_t$ be the target point in $O$, with its coordinates represented by an $m$-tuple of real numbers $(y_1, y_2, \dots, y_m)$.
Then, the relative distance function $d(O_p,O_t)$ returns an $m$-tuple of real numbers $(z_1, z_2, \dots, z_m)$, such that $z_i = y_i - x_i$ (so that, as in the worked example, "2 to the left" corresponds to $z_1 = -2$). This relative distance function allows for unambiguous identification of the target object $O_t$, given that the position of $O_p$ is known.
A deictic expression can be defined as a mapping of the value of $d(O_p,O_t)$, the reference point $O_p$, and their context $O$, to a specific linguistic or symbolic expression that describes the relationship between $O_p$ and $O_t$. This mapping can be represented as:
$(O,d(O_p,O_t),O_p) \rightarrow \text{Expression}(O,d(O_p,O_t),O_p)$
where the resulting expression $\text{Expression}(O,d(O_p,O_t),O_p)$ is a description of the reference point $O_p$ and its relative distances to the target point $O_t$, given the context $O$.
### Examples
Consider two objects, $O_p$ and $O_t$, located at Cartesian coordinates $(x_1, x_2)$ and $(y_1, y_2)$, respectively. The relative distance function $d(O_p, O_t) = (z_1, z_2)$ captures the differences in both dimensions.
In this context, suppose the expression "2 to the left and 3 up" is used. This would correspond to $z_1 = -2$ (indicating 2 to the left) and $z_2 = 3$ (indicating 3 up) relative to the reference point $O_p$. However, without knowing the exact position of $O_p$, this information alone is insufficient to identify $O_t$.
To resolve this, the reference point $O_p$ must either be explicitly included in the deictic expression or implicitly understood by all interlocutors. For example, if it is agreed that the reference point is at $(5, 4)$, where $(5,4)$ could be an abstraction of any object, the expression "2 to the left and 3 up" clearly locates $O_t$ at $(3, 7)$. Without such an agreement, a full deictic expression is necessary, such as "2 to the left and 3 up from $(5, 4)$".
Depending on the context $O$ and the need for clarity between agents, some information provided by the relative distance function $d$ may be omitted. For example, if the target object $O_t$ is the only item of a given type in the environment to the left of the reference point $O_p$, simply stating "to the left" might be sufficient without specifying the exact distances or the $y$ direction. Similarly, if the agents are conversing about objects on a flat surface, the height difference may be irrelevant and ignored, simplifying the expression further.
The examples above could also be rewritten to showcase the ability to specify any other deictic basis, by simply changing the underlying meaning of a given dimension. The expression "5 minutes after 10:00" specifies a temporal deixis by explicitly locating an event $O_t$ relative to a reference time $O_p$. In a 3D spatial context, an expression like "4 to the left, 2 forward, and 1 down" captures the exact relative position of an object $O_t$ in Euclidean space.
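A minimal sketch of the relative distance function and the 2-D worked example above (the function names are ours; we use the sign convention of the example, where "2 to the left" corresponds to a first component of $-2$):

```python
def relative_distance(o_p, o_t):
    """d(O_p, O_t): component-wise offset of the target O_t from the reference O_p."""
    return tuple(y - x for x, y in zip(o_p, o_t))

def deictic_expression(o_p, o_t):
    """Toy 2-D realisation of the mapping (O, d(O_p, O_t), O_p) -> expression."""
    z1, z2 = relative_distance(o_p, o_t)
    horizontal = f"{abs(z1)} to the {'left' if z1 < 0 else 'right'}"
    vertical = f"{abs(z2)} {'up' if z2 > 0 else 'down'}"
    return f"{horizontal} and {vertical} from {o_p}"
```

With the reference point $O_p = (5, 4)$ and target $O_t = (3, 7)$, this recovers the expression "2 to the left and 3 up from (5, 4)".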
This formalization will be included in the revised version of our paper. | Summary: This paper proposes a new communication game in the emergent communication framework to analyze the emergence of _deictic reference_, i.e. expressions akin to demonstratives like "this" and "that". These are important expressions in natural language and especially in this emergence literature, since their meaning is context-dependent and "functional", i.e. cannot be reduced to objective properties of the object of reference. The paper also introduces an application of normalized pointwise mutual information to the analysis of the emergent communication protocol in order to identify holistic messages (where a message refers to an entire meaning) and compositional ones (where certain n-grams and/or positions refer to specific "components" of the meaning). Both of these are welcome contributions and will be of interest to many people working on emergent communication. The core idea in their game is to use integers within longer sequences as the object of reference, provide a _partial observation_ of the true context to the sender (so that absolute positional information cannot be used) and to _mask out_ the target object in a sequence (so that the integer itself cannot be used); what remains as possible information to convey are things like "two to the right of 13".
Strengths: * A carefully designed emergent communication scenario which requires something like spatial deixis to emerge for successful communication. This is an important component of human language that goes beyond what has been done in existing literature.
* Interesting and useful application of NPMI for the analysis of (non-)compositionality of the resulting messages.
* Engages well with existing literature to situate the new contribution of this paper.
* Results also show a robustness to things like random seed, which is not always the case in emergent communication.
Weaknesses: * Some experimental details could be more carefully reported and some analyses could be more systematic/quantitative (see questions below).
* The artificial messages used to validate their NPMI metric do not yield results as strong as one would like (as discussed in the Limitations section); this makes it not entirely clear that the metric does what it's intended to do.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Why did you choose a fixed-length of 3 for the messages (as opposed to either a single token, or variable-length)?
* Line 92 and 114: should "target integers" and "targets" both be singular? There's one target integer, correct? Or is the plural here just over a batch of examples? If the latter, the wording is a bit confusing since the worked case in the paper is just one example ("batch size 1" so to speak).
* Can $PMI_c$ and $PMI_{nc}$ be seen as one metric, with the latter a special case of the former (i.e. for the full tri-grams)? The discussion just before Section 5 seems to suggest so, so I would encourage more elaboration on whether these are really two separate degrees or not. For instance: does high $nc$ entail low $c$, and vice versa?
* "The analysis provided in this section is based on the messages collected from the test dataset after the training has finished". What was the train/test split here? Appendix A provides model / optimizer hyper-parameters, but what are the game/environment/data choices?
* Can the observations in Section 5.1 and 5.2 be made more quantitative? I would appreciate a more detailed analysis of the types of composition observed, their frequency, and other factors like that.
* H2 and Table 1: while the Comp-P case is above chance, if the NPMI method correctly identified "genuinely compositional" messages, we would expect nearly perfect accuracy in this case, right?
* Very minor typographic point: I think that the "n" in "$n$-gram" and similarly in the main text should be in math mode.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer JyuC
We thank the reviewer for their comprehensive comments and thoughtful critique. We appreciate that the reviewer has found our paper interesting and that the investigation of deictic references is perceived to be of high value.
To address the points raised by the reviewer:
## Weaknesses
> Some experimental details could be more carefully reported and some analyses could be more systematic/quantitative (see questions below).
> The artificial messages used to validate their NPMI metric does not yield results as strong as one would like (as discussed in the Limitations section); this makes it not entirely clear that the metric does what its intended to do.
Both answered below.
## Questions
> Why did you choose a fixed-length of 3 for the messages (as opposed to either a single token, or variable-length)?
The choice was motivated by our goal of analysing spatial deixis and by the analysis method. A single token could not be segmented and could not exhibit any compositionality. While the agents could, given a large vocabulary, use a single token to convey the same information, such tokens could not be interpreted as easily. Additionally, allowing variable-length messages increases the computational cost of the analysis: as our method checks all possible $n$-grams used by the agents, increasing the message length correspondingly increases the number of $n$-grams to be generated, and the special-case handling of each message adds overhead. However, extending both the training and analysis code to variable-length messaging is entirely feasible.
> Line 92 and 114: should "target integers" and "targets" both be singular? There's one target integer, correct? Or is the plural here just over a batch of examples? If the latter, the wording is a bit confusing since the worked case in the paper is just one example ("batch size 1" so to speak).
Thank you for pointing this out. There is indeed only one target integer. The camera-ready version will correct this wording.
> Can $PMI_c$ and $PMI_{nc}$ be seen as one metric, with the latter a special case of the former (i.e. for the full tri-grams)? The discussion just before Section 5 seems to suggest so, so I would encourage more elaboration on whether these are really two separate degrees or not. For instance: does high $nc$ entail low $c$ , and vice versa?
$PMI_c$ and $PMI_{nc}$ can be viewed as complementary metrics. High $nc$ values for certain messages do not necessarily entail low $c$. While in practice the two metrics do align in opposition, there could be cases where a compositional message that is most often used only in a specific context has both high $nc$ and $c$ values. For example, a message $[1,9,10]$, where $1$ is a positional component, and $[9,10]$ are integer components, could always be used whenever the integer represented by $[9,10]$ is in a given position. Then, even though the agents use compositional rules to create this message, it would also be classed as high $nc$, since its parts are never observed separately. It would also have a high $c$, as the aforementioned $n$-grams are also associated with the given integer and position.
As mentioned, in practice, compositional parts of the compositional messages will be frequently used in different messages, lowering the value for $nc$.
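To make the distinction concrete, here is a minimal NPMI sketch (our own illustration, not the paper's exact counting procedure): an $n$-gram that only ever co-occurs with a single meaning scores NPMI $= 1$, which is why a compositional message whose parts never appear elsewhere can still look non-compositional.

```python
import math
from collections import Counter

def npmi(pairs):
    """Normalized PMI between n-grams and meanings, computed from a list of
    (ngram, meaning) observations. Illustrative sketch only."""
    n = len(pairs)
    joint = Counter(pairs)
    ngrams = Counter(x for x, _ in pairs)
    meanings = Counter(y for _, y in pairs)
    scores = {}
    for (x, y), c in joint.items():
        p_xy = c / n
        pmi = math.log(p_xy / ((ngrams[x] / n) * (meanings[y] / n)))
        scores[(x, y)] = pmi / (-math.log(p_xy))
    return scores

# The n-gram "A" always co-occurs with the meaning "left", so NPMI = 1.
scores = npmi([("A", "left"), ("A", "left"), ("B", "right"), ("B", "right")])
```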
> "The analysis provided in this section is based on the messages collected from the test dataset after the training has finished". What was the train/test split here? Appendix A provides model / optimizer hyper-parameters, but what are the game/environment/data choices?
The datasets used are: training 200k, validation 200k, test 20k. We generate each dataset separately instead of splitting a single dataset into the three. We are confident that this approach does not cause any overlap issues, as all sequences are randomly generated: with a sequence length of 60 and no integer repetitions, the number of permutations is $60! \approx 8 \times 10^{81}$, far outnumbering our generated samples. We also confirm this empirically by checking the overlap across 1000 randomly generated datasets, finding an overlap rate of 0%.
We will provide additional information about the environmental and dataset choices in the appendices of the revised version.
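As a sketch of the overlap argument (our own illustration; the actual generation code may differ), generating each sequence as a random permutation of 60 distinct integers makes cross-dataset collisions vanishingly unlikely:

```python
import random

def make_dataset(n_samples, seq_len=60):
    """Each sample is a random permutation of the integers 0..seq_len-1,
    so there are seq_len! possible sequences. Illustrative sketch."""
    data = set()
    while len(data) < n_samples:
        data.add(tuple(random.sample(range(seq_len), seq_len)))
    return data

train = make_dataset(2000)
test = make_dataset(200)
overlap = len(train & test) / len(test)  # effectively always 0.0
```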
> Can the observations in Section 5.1 and 5.2 be made more quantitative? (...)
We assume this is similar to the comment by Reviewer `vZ26`. We provide this additional quantitative information in table format in the general response. It will also be included in the camera-ready version.
> H2 and Table 1: while the Comp-P case is above chance, if the NPMI method correctly identified "genuinely compositional" messages, we would expect nearly perfect accuracy in this case, right?
If the compositional messages were context-free and composed only by concatenation of the positional components, we would indeed expect nearly perfect accuracy.
The lower accuracy for the Comp-P case might be due to contextual information not accounted for by our creation of the messages or the possibility that some messages may require slightly more complex composing methods. For example, if a certain integer component representing $5$ is only used when $6$ is also present in the observation, our method of creating the messages would fail. Accounting for such context dependence is not encoded in the message segmentation or generation process. Similarly, if there are "synonyms" for meanings such as "1 to the left" that are only used in certain contexts, this would also not be accounted for in our methods.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed and helpful reply! I am already a fan of this paper, and these responses clarify any last remaining questions that I had. | Summary: The authors design a referential game environment intended to motivate the emergence of spatial references, cast in the form of a task where the target integer must be selected from an integer sequence. The character vocabulary for the message is smaller in size than the set of integers in the list, and this necessitates an alternative to directly specifying the target integers. Using a traditional GRU-based speaker/listener architecture, the model achieves high task accuracy. Using existing information theoretic measures the authors are able to roughly decode the semantics of the messages and show some degree of compositionality in the messages.
Strengths: - The experimental design is simple but I think straightforward and correct for what the authors want to test.
- Similarly, it appears from the analysis in sections such as Table 2 that the resulting messages do seem to exhibit a variety of communication strategies, including the desired type in some messages (compositional positional)
- The approach of decoding the meaning and segmentation of the messages via NPMI (though I would have also liked to see some discussion of where this would/would not be appropriate in terms of a general evaluation metric for EC. It seems some strong expectation over what the emergent language needs to say may be necessary? In this case, the presence of the integers, for instance)
Weaknesses: - Overall I find the biggest weakness to be in the scope of the paper and the degree to which the design of the environment caters to the type of messages the authors want to elicit here. It comes across as a toy problem, and through the lens of the field as a whole, I think it raises the question of whether there is sufficient novelty in making such small and targeted tweaks to the referential game formula.
This might be best highlighted by revisiting the motivating example, such as "a blue vase with intricate motifs on the table". Why refer to this as "the vase over there" or "that vase near you"? There are pressures in the referential game to draw out these spatial references, but these feel artificial and devoid of broader understanding about linguistic pressures when we compare them to the pragmatic concerns that would motivate a spatial reference (and what type of spatial reference) in the motivating example.
I also think claims are over-stated. The authors claim this is the first paper in EC to have syntax and make comparisons to SVO ordering. This comes across as a flimsy attempt to connect to human language and to signal a degree of progress in the complexity of ECs, but a lengthier discussion is warranted. Syntax can't be treated as something that does or does not exist, but rather, discussion of what formal language class the emergent language falls into would be relevant, and nothing here would necessitate a CFG or the degree of syntax that is meaningful when it comes to discussing natural language.
Despite the simplicity I do see value in this work, but I would have liked to have seen a less contrived environment with a more difficult learning problem / substantial scope in the necessary semantics to feel comparable to the degree of contributions typical of a paper at this venue. I think it would be far more appropriate at a more targeted venue where it can also receive appropriate attention and discussion.
Technical Quality: 4
Clarity: 3
Questions for Authors: - There are some fairly trivial solutions to this problem. It seems compositional integer style gets at this -- are there cases where compositional integer would fail? I'm not seeing the need to learn a spatial solution to this problem when it seems that a two-character code could cover all possible target integers. Does this vary as the length of sequence or size of alphabet are increased/decreased?
- Similarly, is there any reason to motivate the choice of vocabulary size with respect to latin alphabet? The chunks of the messages are more akin to words than characters. To me it just read as an attempt to have a connection to human language, but that relationship was not meaningful.
- It is mentioned that the hyperparameters for MI are optimized for translation accuracy. I can make some guesses as to what might be done here, but it wasn't clear to me exactly what is being compared here.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: Some limitations have been discussed, though limitations of the experimental design (such as 3-character messages, "nouns" being integers and therefore being able to reference without features, etc.) are not. The authors mention some doubts in previous work regarding the value of the referential game -- I think this is precisely one of the ways that these doubts emerge, i.e., "we try very hard to set up a task where success must be achieved in this way, and the model finds it". These sorts of limitations may not be addressable in the scope of the paper, but they are also not mentioned. There are no negative societal impacts from such work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer VwFJ
We would like to thank the reviewer for the insightful feedback and detailed criticism. We also appreciate that the reviewer found our experimental setup and analysis interesting.
To address the points raised by the reviewer:
## Weaknesses
> Overall I find the biggest weakness to be in the scope of the paper and the degree to which the design of the environment caters to the type of messages the authors want to elicit here. It comes across as a toy problem, and through the lens of the field as a whole, I think it raises the question of whether there is sufficient novelty in making such small and targetted tweaks to the referential game formula. (...)
We address this weakness in detail in the general response.
> I also think claims are over-stated. The authors claim this is the first paper in EC to have syntax and make comparisons to SVO ordering. This comes across as a flimsy attempt to connect to human language and to signal a degree of progress in the complexity of ECs, but a lengthier discussion is warranted. (...)
We agree that the claim of SVO ordering may be an overstatement, so we have removed it from the paper. Instead, we point to a possible syntactic structure, but make no comparisons to natural language syntax, as also suggested by reviewer `vZ26`.
## Questions
> There are some fairly trivial solutions to this problem. It seems compositional integer style gets at this -- are there cases where compositional integer would fail? I'm not seeing the need to learn a spatial solution to this problem when it seems that a two-character code could cover all possible target integers. Does this vary as the length of sequence or size of alphabet are increased/decreased?
We apologise for the confusion — the compositional integer style does include spatial referencing, where one character represents an integer, and the other the relative position of that integer. It represents a spatial solution. Simply naming the target integer is impossible, as the sender does not know it and therefore cannot transmit it. If the sender, instead of using the compositional message, always transmits the identity of the integer one to the left of the target, we would still consider this a spatial solution: the spatial deixis is implicitly agreed between the sender and the receiver. Indeed, for our setting, a two-character code that includes spatial referencing would be enough to convey all information. However, without explicit or implicit spatial referencing, we do not identify a feasible solution to this environment.
We do not observe any differences in terms of the performance of our agents when the train sequence length is increased/decreased, except for the convergence speed and need to adjust the hidden layer sizes, as noted in our response to Reviewer `YdNG`. Similarly, there is no observable difference for changes in the alphabet sizes, up to a point: there must be enough characters, and the message space must be large enough for the sender to be able to describe its observations. In the case of making the message space too small, the sender cannot convey enough information about its observations to the receiver. If the message space is too large, the sender can just encode the complete observation, bypassing the need for communication.
We will include this information in the revised version of the paper.
> Similarly, is there any reason to motivate the choice of vocabulary size with respect to latin alphabet? The chunks of the messages are more akin to words than characters. To me it just read as an attempt to have a connection to human language, but that relationship was not meaningful.
The choice of vocabulary size was arbitrary, with $26$ providing high expressivity and presenting a good starting point. If such a vocabulary is enough to convey information in natural languages like English, this should also apply to the agents. We have additionally run preliminary tests with smaller and larger sizes of the vocabulary and found no impact on the agent performance.
> It is mentioned that the hyperparameters for MI are optimized for translation accuracy. I can make some guesses as to what might be done here, but it wasn't clear to me exactly what is being compared here.
We apologise for the lack of clarity. The optimised hyperparameters are $t_c$ and $t_n$, over which a grid search is performed. We create all possible tuples ($t_c$, $t_n$), for $t_c \in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]$ and $t_n \in [1, 2, 3, 5, 10, 15]$. Then, we select the tuple of hyperparameters with the best measured translation accuracy on the test dataset. We will clarify this in the camera-ready version.
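The grid search described above can be sketched as follows; `translation_accuracy` is a hypothetical stand-in for the paper's actual evaluation:

```python
from itertools import product

t_c_values = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
t_n_values = [1, 2, 3, 5, 10, 15]

def translation_accuracy(t_c, t_n):
    # Hypothetical placeholder: the real function would translate the
    # test-set messages using dictionaries built with thresholds (t_c, t_n)
    # and score them against the ground truth.
    return 1.0 - abs(t_c - 0.4) - 0.01 * abs(t_n - 3)

# Exhaustively evaluate all (t_c, t_n) tuples, keeping the best one.
best_t_c, best_t_n = max(product(t_c_values, t_n_values),
                         key=lambda p: translation_accuracy(*p))
```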
---
Rebuttal Comment 1.1:
Comment: Regarding the simple environment, I agree and disagree. Of course there is prudence in starting from a simple, well-controlled environment, as presented in the paper. But of course the risk is that it will be difficult to scale the approach up or reconcile it with existing work. Frankly this is an all too common problem in the EC literature, which is rampant with one-off papers which each show some semblance of the target structures in emergent languages, and which are never incorporated or built upon ever again, and cited only in passing as yet another linguistic phenomena which now has a toy world which can give rise to some simple form of it.
The question should probably be, is it too much to ask for both? Is it too much to expect that an EC paper at least show some consideration for what comes next, or to attempt to show both a contrived toy scenario and a more sophisticated one which explores the limitations of the approach in the face of a more realistic and more difficult learning problem?
To this end, I greatly appreciate the prodding by Reviewer vZ26, and the author response, to formalize a more general notion of deixis, such that we could at least have some understanding of what scaling this work up might entail.
With that in mind, with the inclusion of this formalisation I am willing to increase my score to 5, on the accept side of the decision boundary, as I think the work as a whole might represent an adequately thorough investigation of this specific issue. This is a difficult decision -- I am still a bit concerned about the potential lack of impact from the work, especially when submitted to a competitive and general interest venue, and where the additions to the draft (which seek to formalize the broader phenomenon of deixis) are still a work in progress. | Summary: This paper shows that emergent communication can learn spatial references. They first create a modified referential game which requires the agents to communicate by messages that indicate the relative position of a number. The proposed agent architecture shows that the GRU-based agents can achieve good performance. The analysis uses NPMI to identify the meaning of the ngrams in the message (i.e., the correlation between the ngram and the referred positions). This paper further shows that the mapping generated by NPMI is correct by generating additional datasets based on the identified dictionaries to show that both non-compositional and compositional messages carry the intended meanings.
Strengths: - This paper proposes a novel spatial game to study the emergence of spatial references.
- This paper shows that NPMI is an effective measure to decompose the messages by finding correlations with the intended meanings.
Weaknesses: - The paper is not very easy to follow especially the definition and design of different types of sequences, examples of the messages, and how the hypotheses are tested. The presentation can still be improved.
- It is unclear how much the test set overlaps with the training set when measuring the accuracy. There is no control of generalization tests such as varying full sequence length or observation of certain patterns of sequences. So, it is hard to understand if the learned messages are effective or memorization of part of sequences in training. For example, can the ngram that means “leftmost” effectively communicate in a longer sequence?
- The design of the game puts high communication pressure on the agents. The agents need to develop messages conveying relative positions in order to succeed. How does the success relate to the communication protocol, for example, when the message length is longer, is it still necessary to develop messages that convey relative positions? It is unclear about the role of channel bandwidth, effective communication, and developed messages.
- The test in Compositional-NP is to generate the dataset by removing the positional component of the message. This is an extreme case of H2. In reality, the message is most likely to be corrupted rather than removed. To reject the null hypothesis, it will be more convincing to have a corrupted message version.
Technical Quality: 3
Clarity: 2
Questions for Authors: - In the experiment, the observation is always fixed length which makes the communication easier. What happens when the observation is longer? I can imagine if the longer sequence contains the same number at different positions, it will introduce some ambiguity in the messages.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - The experiment is based on one-dimensional sequences, the types of spatial references are limited in this case. It will be helpful if the author can discuss how it can extend to more complex types of spatial references.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Response to Reviewer YdNG
We would like to thank the reviewer for their insightful comments and feedback. We appreciate that the reviewer found our game setting novel and the measure effective.
We address the concerns and weaknesses raised below.
## Weaknesses
> The paper is not very easy to follow especially the definition and design of different types of sequences, examples of the messages, and how the hypotheses are tested. (...)
Based on your and the other reviewers' comments, we will improve the presentation for the camera-ready version of the paper. We will include the formalisations of the definitions, rephrasing the hypotheses, additional information about the dataset splits, and the types of different sequences tested.
> It is unclear how much the test set overlaps with the training set when measuring the accuracy. There is no control of generalization tests such as varying full sequence length or observation of certain patterns of sequences. (...)
The test set does not overlap with the train set or the validation set. Therefore, as our agents achieve high accuracy on the validation set, we can conclude that the results are not a case of overfitting. We will add the dataset split and overlap information in the revised version.
We tested train sequence lengths between 20 and 100 and observed that agents can still learn to achieve the same accuracy as in the original task. The only differences are that longer sequences take longer to converge and require a larger hidden size.
Longer sequences in the test dataset will be difficult to reliably evaluate after the training has finished due to the lack of trained weights for either the sender or the receiver to process given integers. This is a similar issue to extending the context windows in LLMs. However, sequences shorter than the ones present in the training dataset can be evaluated, and would contain integers that the agents would have the corresponding weights for. Our preliminary tests suggest that the agents can understand shorter sequences than the ones they were trained on. They can still achieve the same high accuracy as on the sequences from the training set. We only observe at most a 5-10% decrease in accuracy when the training and test sequences are significantly different lengths (60 vs 20 respectively). We hope this alleviates some concerns about the generalisation of our approach.
Both tests will be expanded and included in the revised version of the paper.
> The design of the game put high communication pressure on the agents. The agents need to develop messages conveying relative positions in order to succeed. How does the success relate to the communication protocol, for example, when the message length is longer, is it still necessary to develop messages that convey relative positions? (...)
Channel bandwidth will indeed play a role in how the agents learn. Increasing the channel bandwidth to the point that the agents can describe the observation in full will nullify any development of spatial referencing.
However, this is not the focus of this work. We show that the agents can communicate about spatial relationships, how such messages can be composed, and that this does not require as much bandwidth as transferring the complete observation. Therefore, analysing the bandwidth of the channel was outside the scope of this paper.
> The test in Compositional-NP is to generate the dataset by removing the positional component of the message. This is an extreme case of H2. (...)
We apologise for the confusion; we believe our test already represents your suggestion. When the positional components are removed from the messages, any part of the message that does not convey information about an integer is replaced with $0$ instead of being truncated. This ablation test is the most straightforward way of rejecting the null hypothesis that the positional components were correctly identified. We will clarify this in the camera-ready version.
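A minimal sketch of this ablation, assuming the positional components have already been identified as (start, length) spans (our own illustration; the actual implementation may differ):

```python
def mask_positional(message, positional_spans, pad=0):
    """Replace identified positional components with a padding symbol
    instead of truncating, so the message length is preserved."""
    out = list(message)
    for start, length in positional_spans:
        for i in range(start, start + length):
            out[i] = pad
    return out

# e.g. masking the first token of [1, 9, 10], identified as positional:
masked = mask_positional([1, 9, 10], [(0, 1)])
```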
## Questions
> In the experiment, the observation is always fixed length which makes the communication easier. What happens when the observation is longer? (...)
As mentioned in our response to the weaknesses, we notice no differences when the observation length is increased.
However, including integer repetitions would be challenging for the current setting and would require a careful design of the dataset. It would not provide additional information about the emergence of spatial deixis. If repetitions were allowed, there could be cases where the sender's observations could be duplicated multiple times in a single sequence. This would make the task of the receiver nearly impossible to accomplish, and would lead to training instability. Instead, to allow for repetitions, the sequence window would have to be extended, or the dataset generation would have to be designed to ensure that the sequence windows cannot be repeated while maintaining the possibility of single integer repetitions.
To illustrate this, consider a sequence $S = [1,2,3,4,5,1,2,4,4,5]$, where each integer can be repeated multiple times. If the observation to the sender is $o_s=[1,2,-1,4,5]$, then it is impossible for the receiver to correctly predict the target number, as both $3$ and $4$ are valid answers.
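The ambiguity can be checked mechanically; the sketch below (our own illustration, not the paper's code) returns every integer that could occupy the masked target slot:

```python
def candidate_targets(sequence, observation, mask=-1):
    """All integers that could fill the masked slot of the observation,
    considering every window of the sequence where the observation fits."""
    w = len(observation)
    slot = observation.index(mask)
    hits = []
    for i in range(len(sequence) - w + 1):
        window = sequence[i:i + w]
        if all(o == mask or o == s for o, s in zip(observation, window)):
            hits.append(window[slot])
    return hits

# Both 3 and 4 fit the masked slot, so the receiver cannot decide.
targets = candidate_targets([1, 2, 3, 4, 5, 1, 2, 4, 4, 5], [1, 2, -1, 4, 5])
```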
After applying such restrictions, the dataset allowing repetitions would most likely lead to the same results as the one presented in this paper. Possible differences include the sender having to refer to 2 integers in a single message to identify the referent position in the sequence precisely.
## Limitations
> The experiment is based on one-dimensional sequences, the types of spatial references are limited in this case. It will be helpful if the author can discuss how it can extend to more complex types of spatial references.
Thank you for this suggestion. We discuss the formal extension of our setting in our response to Reviewer `vZ26`.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I thank the authors for the further explanation. It is good to see it generalize to a longer sequence. However, this is not the only form of generalization; also, non-overlapping training and test sets do not mean the messages generalize to different combinations of deixis. For example, seeing "4 to the left" and "3 to the right" and generalizing to "3 to the left". These generalization tests will need to reflect whatever formal deixis definition this paper uses.
Regarding the design to avoid repetition, I understand the difficulty in identifying the referent position, but this restriction can limit the learned messages to simple ones. So, it will be helpful to discuss this limitation in the design too.
Overall, I appreciate the authors' response and it addressed some of my questions. I will increase my score to 5 and suggest a further discussion on the generalization that reflects the definition of deixis. | Rebuttal 1:
Rebuttal: # General Response
We would like to thank the reviewers for their constructive comments.
We address some common themes in the general response, with more detailed comments in each reviewer rebuttal. Where it was needed, the quoted parts of the review texts were shortened to (...) for brevity. We welcome the reviewers' comments on any major unaddressed points in the discussion. We will be able to answer anything that was missing from our rebuttal.
We have also noted any typographical and layout comments, and assure the reviewers that they will be corrected in the revised version.
## Generalisation
Reviewers `YdNG` and `VwFJ` argue that the settings presented, and the results, are simple and not generalisable.
We agree that the environment presented is quite simple. However, we disagree that this is a weakness of our results. We see it as a good starting point, with extraneous confounding factors removed, as reviewers `vZ26` and `JyuC` noted. By using a simple environment, we can show more precisely which factors affect the emergence of spatial referencing strategies. Using a more complex setting at such an early stage of research into deictic phrases in EC could lead to confusing, non-generalisable results. For example, using a vision network or complex tasks could lead to generally low agent accuracy, which would not necessarily imply that spatial deixis is impossible or requires different approaches; it could instead point to an issue with the pretrained vision network, its training, or the agent's network being too small or large for the given task. We therefore consider that starting with the simple environment and showing how such spatial references can emerge leads to useful insights, paving the way for future research in more complex settings with more complex architectures.
### Generalisation tests
We would also like to thank the reviewers for suggesting additional generalisation tests, allowing us to expand the analyses. We have run these preliminary tests, with the initial results showing that our methods generalise to training on both longer and shorter sequences. We also show that the agents can communicate even when presented with shorter sequences than the ones they were trained on. This indicates that the learned spatial references are transferable to different environments, including OOD settings. These generalisation tests will be expanded in terms of robustness and sample size, and featured in the camera-ready version of our paper.
## Presentation
All reviewers requested that the paper provide more information and formalisation, especially regarding the definitions and dataset details. We will include this in the camera-ready version. We will also be adding a more formal definition of deixis, as presented in our response to Reviewer `vZ26`.
Additionally, Reviewers `vZ26` and `JyuC` requested additional information for Section 5 of the paper. We present this in the table below. The entries in the table are average percentages across all $t_n$ and $t_c$ choices; in parentheses, we show the maximum and minimum values across those choices. The average \% of emergence is the absolute \% of runs which developed that message type or message feature. The average \% of messages of a given type, or exhibiting a given feature, is counted only over runs in which that feature emerged.
This table will also be added to the camera-ready version.
| **Message Type** | **Avg. \% Emergence** | **Avg. \% of Messages** |
|-----------------------------------------------------|----------------------------------|---------------------------|
| Non-Compositional Positional | 99.3\% (100\%-93.75\%) | 1\% (3\%-0\%) |
| Non-Compositional Positional Reserved | 18.75\% | 1\% (3\%-0\%) |
| Non-Compositional Integer | 45.1\% (100\%-0\%) | 10\% (15\%-0\%) |
| Compositional Integer | 100\% | 34\% (99.7\%-0\%) |
| Compositional Positional | 25\% (27\%-0\%) | 56\% (100\%-0\%) | | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LOVA3: Learning to Visual Question Answering, Asking and Assessment | Accept (poster) | Summary: This paper presents a data augmentation / multi-task learning technique to improve model quality for Visual Question Answering (VQA). The key idea of the paper, motivated by analogy to humans, is that asking questions and assessing answers are also key skills, apart from just answering questions. The paper seeks to train a model to "Answer, Assess, Ask" jointly, relying on existing datasets for a variety of answering and asking tasks, and deriving a new dataset called EvalQABench for the assessment task. The introduction of the EvalQABench dataset, initially created by LLMs and later filtered by experts, is another potentially valuable and lasting contribution. Multiple tasks on augmented data are implemented on the LLaVA backbone, which is an existing SOTA model. The paper compares their technique (called LOVA) to several SOTA models on a variety of datasets, showing robust gains in a multitude of settings and providing confidence in the technique's validity.
Strengths: The paper is well motivated: assessing and asking (evaluation and question generation) are tasks closely associated with question answering that can be performed on datasets easily derived from question-answering datasets. The argument that training on these closely related tasks improves generalization on question answering is intuitive, though reliant on analogy with humans, which has its own traps.
The paper evaluates their technique against a variety of SOTA models, and across a multitude of tasks, proving that the gains are robust. The paper also provides ablation studies for various components, showing their utility. In general the experiment section is detailed, extensive and is a highlight of the paper.
The paper has 100 citations, and extensive references to related work, making it easier to assess the novelty of the work.
Weaknesses: As the authors point out, due to cost considerations, the authors only evaluate the technique on smaller (relative to MLLMs) models. This is important as model size is a confounder when it comes to assessing the usefulness of data augmentation or multi-tasks. A technique useful for a 7B model is not necessarily useful for a 70B model. However, given the cost of inference of larger models, improving smaller models to be competitive with larger models has its own benefit.
There is prior work that already includes question answering and question generation, for example the InstructBLIP paper. Viewed in that sense, this paper makes an incremental contribution, adding the assessing task to the answering/asking combination that was already shown to be useful earlier. However, the EvalQABench dataset is potentially very useful for the whole subfield of visual question answering. One minor but interesting finding in the paper is that in a balanced dataset split with 50% Yes and 50% No answers, not all models predict Yes/No close to 50% of the time.
Technical Quality: 3
Clarity: 3
Questions for Authors: In section 1, there's a claim that the EvalQABench dataset is generated via a "new automatic pipeline". However, in section 3.2 the authors say "... acknowledging that the Fuyu-8B model is not flawless and recognizing that no multimodal model, including GPT-4V, is perfect, we have implemented both manual filtering and error correction...". Do the earlier claims about the pipeline being automatic overstate the case? Are they necessary?
Does the feedback add value beyond just rephrasing the answer in a longer sentence? A lot of the feedback seems trivial and already captured in the (q, a) pair. For example: "What does the woman have on her back?" "backpack" vs. "No, the woman has a backpack on her back". As another example: "What are the people doing?" "motorcycling" vs. "No, the people in the picture are motorcycling".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Smaller models sizes are an understandable limitation as mentioned in the paper, and referenced earlier in the review.
The introduction of the EvalQA dataset and the additional "assess" task is the key incremental contribution of the paper. By looking at rows 4 and 7 in Table 6, which shows ablation studies, one can discern the incremental benefit of EvalQA. The deltas in the scores are somewhat underwhelming (unclear if they are significant).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review! We sincerely appreciate your acknowledgment of LOVA3’s motivation, clarity, novelty, effectiveness, and consistent performance gains.
**W1: The model size.**
We sincerely thank you for your valuable comments. Due to limited GPU resources, it is hard for us to train models larger than 7B. Moreover, we found that the results of our model are comparable with LLaVA1.5 (13B) on VQAv2 (80.3 > 80.0), GQA (63.3 = 63.3), and MME (1552.7 > 1531.3). This result demonstrates the effectiveness of our training paradigm.
**W2: Training Data used in InstructBLIP.**
We agree that there are question generation datasets in InstructBLIP, but our GenQA has the following differences:
(1) We ask the model to generate questions and answers jointly, not only the question.
(2) We use not only generic data types but also multi-choice VQA, multi-turn VQA, and the two grounding data types REC and REG for the GenQA task. This increases the difficulty of the GenQA task and enhances the model's problem-solving abilities.
**Q1: Claim of "New automatic pipeline."**
Thank you for your suggestion. The term "New automatic pipeline" in Section 1 refers to the automatic data generation process and does not include the data filtering process, which could lead to confusion. We will revise this in the next version.
**Q2: "Does the feedback add value beyond just rephrasing the answer in a longer sentence? A lot of the feedback seems trivial …"**
Yes, the feedback is generated by rephrasing the question and answer.
Firstly, we want the model not only to give the "Yes/No" classification but also to provide a simple explanation. Thus, we use LLaMA-2 to rephrase the question and the ground-truth answer to generate feedback. During training, the ground-truth answer and negative answer do not appear simultaneously. It is not trivial to add such feedback.
Secondly, rephrasing the ground-truth answer and question is an efficient way to create data. Generating feedback for 32,000 images manually is impractical, so we utilize LLM assistance. Our preliminary study shows that this strategy ensures high accuracy in feedback generation and is effective for training the model to assess the VQA triplet.
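To make this construction concrete, the sample-building step described above, where the ground-truth and negative answers never co-occur in one training sample, could be sketched roughly as follows. The function name and field layout are hypothetical, not taken from the paper's actual code, and the `feedback` string stands in for the LLaMA-2 rephrasing step.

```python
def make_evalqa_samples(question, gt_answer, negative_answer, feedback):
    """Hypothetical sketch of EvalQA sample construction: the ground-truth
    answer and the negative answer go into two SEPARATE training samples,
    never appearing together, with LLM-rephrased feedback attached."""
    positive = {
        "question": question,
        "answer": gt_answer,
        "label": "Yes",
        "feedback": f"Yes, {feedback}",
    }
    negative = {
        "question": question,
        "answer": negative_answer,
        "label": "No",
        "feedback": f"No, {feedback}",
    }
    return [positive, negative]
```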
**Limitation: About the results of EvalQA in Table 6**
The incremental results for EvalQA data in Table 6 may be due to the following:
(1) Data Size: Compared to the other two tasks, VQA (665K) and GenQA (842K), EvalQA includes only 64K data points.
(2) Data ratio: It is crucial for training MLLMs, as highlighted by InstructBLIP and LLaVA1.5. Therefore, the smaller data size of the EvalQA task affects the results in rows 4 and 7 of Table 6. However, referring to row 3 in Table 6, one can observe improvements even with only 64K additional data included. | Summary: This paper enhances the MLLM's visual understanding capability by training it to ask questions about an image and to evaluate the correctness of given question-answer pairs about an image. To achieve this goal, new data is extracted from existing datasets and a new model is fine-tuned on the new data. The experiments show that the newly added data can improve the MLLM's understanding of images, yielding higher scores on VQA tasks.
Strengths: - The paper is generally well-written and easy to understand.
- The argument that training an MLLM to ask questions and evaluate answers can improve its visual understanding is reasonable and verified by the well-conducted experiments in the paper.
- The experiment setups are carefully designed to avoid unfair comparisons.
Weaknesses: - The three key capabilities of MLLMs covered by the paper--asking, answering, and evaluation--should be characterized in an interactive environment (like an embodied environment where the MLLM is treated as the high-level perceiver/planner/controller of robots) instead of in a static environment. Consider, for example, an MLLM doing an embodied task that needs to ask some key questions; this is where the asking capability really makes sense. However, the paper only trains and evaluates the MLLM on simple VQA problems, as in previous literature. In the paper's current state, its value is limited and, from my perspective, does not meet the bar of acceptance if only VQA tasks are considered. The scope of the paper needs to be increased to a significant extent that touches the essence of MLLMs with higher-level capabilities that incorporate iterative/interactive thinking and planning.
- The added synthesized data only gives the model a limited improvement in performance, while adding a large amount of computational overhead. In fact, if we use models like GPT-4(V) to synthesize random VQA data, the performance will increase as well [1], so I do not see a clear benefit of specifically doing the asking and evaluation data augmentation. This issue is related to the first problem: the capability added to the MLLM should not be evaluated on VQA tasks.
[1] Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs.
Technical Quality: 3
Clarity: 3
Questions for Authors: (Table 6) Why does adding GenQA-Grounding data improve ScienceQA performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper mentions some limitations of the proposed pipeline. However, as mentioned above, the biggest limitation is the limited scope of the considered setting which only involves VQA (including grounding) problems without considering the embodiment of MLLMs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and valuable review. We appreciate your suggestions about extending the research scope to other domains.
**W1-1: Why do we apply three key capabilities in the current static environment rather than in an interactive environment?**
Firstly, it should be noted that these three abilities are not exclusive to interactive environments such as embodied AI. Other research domains, such as GUI Assistant [12], AI Medical Diagnosis [13], and LLM-Agent [14, 15], also involve asking and assessment in their systems. Both static and interactive environments require the incorporation of these three capabilities. Therefore, incorporating these abilities into MLLM training can be recognized as one strength of our work that other MLLMs neglect.
Secondly, the scope of this research is about training MLLM. We are motivated by the current model training of MLLMs that primarily focuses on teaching models to answer the questions while neglecting the asking and assessing abilities. Our investigation of current MLLMs, including LLaVA, Qwen-VL [16], and CogVLM [17], revealed that while these models excel at answering questions, they perform inadequately in asking and assessing skills. Therefore, we consider that it is necessary to explore adding asking and assessing abilities in training MLLM. Our experiments in Tables 3, 4, and 5 reconfirm that these three abilities actually enhance models’ problem-solving capabilities.
**W1-2: Is it appropriate to say that the asking capability of an MLLM makes more sense in embodied tasks?**
(1) We agree that asking key questions is important for embodied tasks. However, other tasks, such as AI diagnosis and AI assistant systems, also require this ability. It is not exclusive to embodied tasks. In other words, this ability is applicable not only to embodied tasks but also to other domains and is worth exploring in MLLM training.
(2) We assume that if an MLLM is able to yield high-quality questions and corresponding answers, it indicates a stronger problem-solving ability and a deep visual understanding. This is similar to human cognition: after thoroughly understanding concepts, we can ask good questions. Referring to the ablation study in Table 6, adding GenQA data brings consistent improvements on VQA datasets. Therefore, we consider adding asking tasks to be essential in MLLM training.
(3) Highlighting the importance of the asking ability is also an affirmation of our contribution of incorporating GenQA in model training, which other MLLMs have neglected.
**W1-3: Is it acceptable to test on VQA and multimodal benchmarks?**
Firstly, we follow the previous representative MLLMs, such as LLaVA1.5, InstructBLIP, Qwen-VL, and CogVLM, to test our model on widely used VQA datasets and multimodal benchmarks so that we can directly compare our results with theirs.
Secondly, VQAv2, GQA, VizWiz, ScienceQA, and the object hallucination VQA dataset POPE are widely used VQA datasets for testing MLLMs. Additionally, we select five popular multimodal benchmarks—MME [18], SEED-Bench, MMBench [19], LLaVA-Bench, and MM-Vet [20]—which encompass a wide range of diverse and challenging multimodal questions across various domains (e.g., Math, Instance Attribute, Spatial Relation, Text Understanding, Object Recognition, Scene Understanding) especially for testing MLLMs.
Thirdly, we appreciate your suggestion to extend the scope of our research to embodied AI and consider it a potential area for future exploration.
**W2-1: The added synthesized data only gives the model a limited improvement in performance while adding a large amount of computation overhead.**
(1) The improvement is not limited, as we chose LLaVA1.5 (7B) as the baseline and trained the model following the original training recipe without tuning any hyperparameters. Additionally, we did not involve any new images in training. Therefore, the clear improvements shown in Tables 3, 4, and 5 are meaningful. Some results of our 7B model are even comparable with the larger model LLaVA1.5 (13B):
| | Size | VQAv2 | GQA | VizWiz | MME |
| --- | --- | --- | --- | --- | --- |
| LLaVA1.5 | 7B | 78.5 | 62.0 | 50.0 | 1510.7 |
| LLaVA1.5 | 13B | 80.0 | 63.3 | 53.6 | 1531.3|
| LOVA$^3$ | 7B | 80.3 | 63.3 | 53.6 | 1552.7 |
(2) The training costs are not high. We trained our model for only 24.5 hours on an 8 A100 (40G) GPU setup.
(3) As shown in Table 6, the first six rows, which present results with different data sizes, consistently outperform the baseline model LLaVA1.5. This indicates that our training paradigm performs well even with smaller data sizes, including as low as 64K data points in EvalQA.
**W2-2: If we use models like GPT-4(V) to synthesize random VQA data, the performance will increase as well.**
Synthesizing data to improve performance is non-trivial and is crucial for training MLLMs. The key challenge is how to generate essential data at a low cost. We propose a solution that does not require GPT-4(V) or new images. Instead, it leverages existing annotations to achieve a significant performance improvement. For the GenQA task, the training data is derived from the original training datasets used in LLaVA1.5, such as VQAv2 and GQA. For the EvalQA task, we use VQAv2 as the source for data generation. As shown in the ablation study in Table 6, there are robust gains by adding GenQA and EvalQA data. These results indicate the effectiveness of our created data.
**Q1: Why does adding GenQA-Grounding data improve ScienceQA performance?**
(1) Training with GenQA-Grounding data enhances the ability to understand object positions deeply, which is beneficial for fully leveraging visual information for reasoning.
(2) Many images in ScienceQA [21] (https://scienceqa.github.io/) are from natural scenes. Therefore, it is reasonable that enhanced visual understanding would improve reasoning accuracy.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
Regarding your first, second, and third arguments: other agents, like GUI Assistant/digital-device agents and LLM-Agents, still need an interactive environment in the loop, so you still need to change your evaluation settings. My opinion is not restricted to robotics tasks. It is not appropriate to say that "previous works use VQA tasks for evaluation, so we use them as well"; a good paper should choose/create the most appropriate settings that can validate the main claims.
I keep the score.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear reviewer, we greatly appreciate your time in reviewing our response. We hope that your concerns are addressed with our rebuttal. Please let us know if there are any further questions that need clarification.
---
Rebuttal 3:
Title: Author Response to Reviewer 4XZr
Comment: Dear Reviewer 4XZr,
We deeply appreciate the time you have devoted to reviewing our manuscript. In our rebuttal, we have carefully considered and addressed your concerns. As we have not yet received your feedback during the discussion period, we would like to summarize our rebuttal below to help clarify the key points.
Regarding your main concerns about evaluating the answering, asking, and assessment capabilities on generic multimodal tasks rather than embodied tasks, we would like to express our viewpoint:
- **Multimodal instruction tuning is a crucial and foundational area that can significantly benefit downstream tasks, including embodied tasks and GUI assistance.**
- **We would like to emphasize that evaluating MLLMs on VQA tasks is a widely accepted practice, as demonstrated by other models such as LLaVA1.5, Qwen-VL, and CogVLM. This evaluation is essential to assessing the overall capabilities of MLLMs.**
- **Applying questioning and assessment abilities is equally important in both general MLLM research and embodied AI research. Moreover, our experiments reaffirm that adding the two additional abilities brings consistent and robust gains. Therefore, conducting research with MLLMs tested on VQA tasks is a non-trivial and essential endeavor.**
We trust these responses effectively address the concerns you raised. As the discussion period deadline approaches, we eagerly await any further feedback you may have.
Once again, thank you for your dedication to the review process.
Best regards,
Authors of Paper 9008
---
Rebuttal 4:
Title: Author Response to Reviewer 4XZr
Comment: Dear Reviewer 4XZr,
Firstly, there is no need to change our current settings. **There is no clear evidence to suggest that asking and assessing are exclusively applicable to interactive environments.** We then demonstrate that incorporating the two additional tasks is **beneficial** for MLLM training. Tasks such as embodied AI and GUI Assistant are **downstream tasks**, whereas our focus is on **multimodal foundation model training**. Adapting to these tasks would require **entirely different methods and experiments, essentially resulting in a separate paper**. It is important to clarify that it is not inappropriate to evaluate the approach on a foundation model before focusing on downstream tasks.
Secondly, **we referred to previous works to underscore the significance of multimodal instruction tuning and to provide evidence supporting the validity of evaluating on VQA datasets and benchmarks.** These examples serve as the foundation of our argument. We respectfully disagree with your viewpoint, particularly since the **other two reviewers found our experimental setup to be reasonable, fair, and insightful.**
Thirdly, as shown in Table 7, we have already evaluated the assessing ability in comparison with other SOTA models. The results clearly demonstrate that our model significantly enhances this ability without introducing prediction bias. Our evaluation setting was conducted reasonably.
Best regards,
Authors of Paper 9008
---
Rebuttal Comment 4.1:
Comment: Thanks for the response. I understand that your focus is on multimodal foundation model training, but if your method is general and most likely to be useful for some specific task (here interactive agents), the evaluation should better reflect this point. I am not saying that the proposed method is useless, but that the paper can be largely improved and put into a more appropriate context if evaluation settings change from VQA to interactive tasks. The authors are encouraged to do so in order to make the paper much stronger to have a larger impact on the community.
On the other hand, the authors said that "Adapting to these tasks would require entirely different **methods** and experiments", but if your method is to improve the general capability of foundation models, why do you need entirely different **methods** when considering that the improved capability is closely related to the tasks?
Considering that the authors have clarified some of the points, I raise the score to 4. | Summary: The paper introduces LOVA3, a framework designed to enhance Multimodal Large Language Models (MLLMs) by incorporating not only visual question answering (VQA) but also the capabilities of generating questions (GenQA) and evaluating question-answer pairs (EvalQA). The primary objective is to improve the comprehensive multimodal understanding of AI models.
LOVA3 includes the development of EvalQABench, a benchmark with 64,000 training samples to evaluate VQA data quality. The framework uses the LLaVA-1.5 model as a base, incorporating datasets like VQAv2 and GQA to train these new tasks. Experimental results on ten multimodal benchmarks demonstrate that LOVA3 significantly improves the models' performance, highlighting the benefits of incorporating comprehensive questioning and evaluation abilities into MLLMs. The paper emphasizes the approach and robust results, despite noting the increased computational cost and the need for further testing on larger models and domain-specific tasks.
Strengths: 1. LOVA3 introduces a strategy that extends beyond traditional VQA tasks by incorporating question generation and evaluation.
2. The creation of EvalQABench provides a rigorous way to test and improve MLLMs.
3. The experimental results, presented from multiple perspectives across multiple benchmarks, provide insights into the proposed framework.
Weaknesses: 1. The paper incorporates additional tasks like GenQA and EvalQA, but these two tasks are already existing steps of visual-language instruction generation for visual question answering (e.g., SEED-Bench) or visual instruction tuning (e.g., LLaVA-Bench). Those works also used LLMs or MLLMs for dataset generation and validation. Explaining the specific novelty or contribution would be better.
2. The work doesn't provide detailed explanations of how the generated data quality is validated by humans rather than by imperfect models (LLMs or VLMs). It uses Fuyu-8B for data generation but employs a stronger MLLM (LLaVA 1.5) as the base model for instruction tuning. Since LLaVA 1.5 is stronger than Fuyu-8B, the generated negative samples would be less challenging and easier for stronger models to recognize.
3. The paper lacks a more in-depth analysis of potential data biases and strategies to mitigate them.
4. The proposed benchmark is relevant to visual question answering and to data generation for visual question answering. It would be necessary to survey and discuss recent existing datasets (e.g., VQA-GEN, CrossVQA, OVQA, STAR) and generated benchmarks (e.g., LLaVA-Bench, SEED-Bench, SOK-Bench, CinePile) more fully.
5. The paper does not provide the generated dataset for review, which is important for the validation of the work.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How about the prompt stability of the QA generation and the differences when using the different variants of prompts?
2. Why does the work apply Fuyu-8B instead of LLaVA 1.5 for the data generation and is there any comparison between the different new VLMs?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for thoroughly reviewing our work! We have carefully considered all your concerns and addressed them in the following responses.
**W1: Comparison with SEED-Bench and LLaVA-Bench in using LLMs or MLLMs.**
(1) GenQA and EvalQA are two new training tasks, whereas SEED-Bench and LLaVA-Bench are test benchmarks. For GenQA, no LLMs or MLLMs are used for data creation. We propose an efficient strategy for creating the data from existing datasets (see Appendix C). For EvalQA, we use Fuyu-8B [3] and LLaMA-2 [4] for data creation. This cannot be a weakness of our work, as creating over 32,000 samples for training the assessment ability necessitates using LLMs/MLLMs rather than human labeling. Additionally, we demonstrate the effectiveness of using non-commercial LLMs/MLLMs in data generation rather than commercial models like GPT-4(V).
(2) Novelty: To the best of our knowledge, LOVA$^3$ is the first work to imbue an MLLM with the asking and assessment abilities during training, toward a robust and intelligent model. Additionally, we are the first to propose the GenQA and EvalQA tasks for enhancing the model's problem-solving ability. Contribution: We contribute a new training pipeline, a new benchmark (EvalQABench), and an open-source model.
**W2-1: Is LLaVA1.5 stronger than Fuyu-8B?**
It is not accurate to claim that LLaVA1.5 [5] is stronger than Fuyu-8B. While Fuyu-8B is trained with large-scale VL data, LLaVA1.5 is trained with only about 1.4M samples across its whole two-stage training. The zero-shot results provided by Fuyu-8B (https://www.adept.ai/blog/fuyu-8b) show that even without training on the VQAv2 and GQA datasets (training data of LLaVA1.5), Fuyu-8B achieves exceptional performance on multiple VQA datasets.
**W2-2: Are the generated negative samples easier to recognize or of lower quality?**
Firstly, the generated samples are not simple for model training. As shown in Table 7, these state-of-the-art (SOTA) MLLMs still suffer from inferior accuracy on the generated data, indicating the generated data remains challenging.
Secondly, LLaVA1.5 is prone to predicting “Yes” (nearly 66%), as shown in Table 7, indicating that the EvalQABench samples are not easy for LLaVA1.5 to identify.
Thirdly, the EvalQA task differs from the VQA task. If we use Fuyu-8B or GPT-4 to build VQA synthetic datasets and then use the synthetic data to train the model, it may lead to model degradation. However, we only use Fuyu-8B to produce negative answers for constructing negative samples for model assessment. Thus, EvalQA data would not restrict the VQA performance.
**W3: In-depth analysis of potential data biases.**
(1) For GenQA, we build on existing annotated datasets from the original LLaVA1.5 training data. Since we incorporate not only generic VQA data but also Multi-Choice VQA, Multi-turn VQA, and grounding like REC and REG, our GenQA dataset is diverse. As shown in Table 1, we strictly follow the data ratio of LLaVA1.5 for each data type. Thus, our experimental settings have no obvious data biases in terms of data types and amounts.
(2) For EvalQA, we use the VQAv2 dataset, a subset of the original training corpus of LLaVA1.5. Therefore, no new datasets are used. Moreover, VQAv2 is already a balanced version of VQAv1. We create the EvalQABench train set by randomly selecting 100,000 samples from the VQAv2 train set, finally yielding 32,000 negative samples (see Appendix D). As shown in Table 2, the EvalQA task has 9 question types, each with manually restricted ratios for data balance.
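As an illustrative sketch of the balancing step described in (2), not the authors' actual implementation, random sampling with a per-question-type cap could look like this (the `qtype` field name and cap value are hypothetical):

```python
import random
from collections import defaultdict

def sample_balanced(samples, per_type_cap, seed=0):
    """Hypothetical sketch: shuffle the candidate pool, then keep samples
    only while their question type is under the allowed cap, so no single
    type dominates the final EvalQA training set."""
    rng = random.Random(seed)
    pool = list(samples)
    rng.shuffle(pool)
    kept, counts = [], defaultdict(int)
    for s in pool:
        if counts[s["qtype"]] < per_type_cap:
            kept.append(s)
            counts[s["qtype"]] += 1
    return kept
```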
**W4: Comparison with other datasets or benchmarks.**
Thanks for your suggestion regarding the comparison with other datasets. Here are the unique aspects of GenQA and EvalQA:
(1) Unlike the traditional Visual Question Generation (VQG) task, which focuses primarily on the generic VQA domain, GenQA is designed to generate diverse VQA data. Additionally, GenQA generates both questions and answers simultaneously, whereas traditional VQG focuses solely on question generation.
(2) EvalQABench is designed to assess VQA data rather than answer VQA questions. In contrast, other VQA benchmarks (e.g., VQA-GEN [6], CrossVQA [7], OVQA [8], STAR [9]) and multimodal benchmarks (e.g., LLaVA-Bench, SEED-Bench, SOK-Bench [10], CinePile [11]) primarily evaluate a model's ability to answer questions.
Compared to the datasets you mentioned:
(1) The data generation process of CrossVQA contains the joint question and answer generation. However, our GenQA is considered an additional task for enhancing the model's comprehension ability, not a test dataset for assessing distribution shifts.
(2) OVQA and STAR are video VQA datasets that focus on question answering.
(3) SOK-Bench (May 15, 2024) and CinePile (May 14, 2024) are contemporary works that use ChatGPT for data generation. In contrast, our EvalQABench uses only non-commercial models.
Thank you for recommending these datasets. We will add the comparison to Section 2 in the next version.
**W5: The paper does not provide the generated dataset for review.**
We created an anonymous link at https://anonymous.4open.science/r/LOVA3-9008/README.md, in accordance with the double-blind policy, containing all our created datasets of EvalQABench. Please refer to it for further details.
**Q1: The prompt stability of the QA generation**
We train the GenQA task by randomly choosing one of 58 prompts for each data sample, along with a short description of the data type, such as "Please provide a clear and direct question and answer after examining the image. This is a Multi-choice VQA task." Thus, QA generation is stable when one prompt is randomly chosen during inference.
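A minimal sketch of this prompt-selection step (the function name is illustrative, assuming the 58 instruction prompts are stored as plain strings):

```python
import random

def build_genqa_instruction(prompts, data_type, rng=random):
    """Hypothetical sketch: draw one GenQA instruction prompt at random and
    append a short description of the data type, mirroring the training-time
    prompt construction described above."""
    base = rng.choice(prompts)
    return f"{base} This is a {data_type} task."
```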
**Q2: Why apply Fuyu-8B for the data generation?**
We chose Fuyu-8B because it is a fast, open-source MLLM with exceptional performance on many complex tasks. Through our preliminary study, we found that Fuyu-8B is stronger than LLaVA1.5 with fewer hallucination issues and better visual input understanding ability.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear reviewer, we would like to thank you for your insightful feedback. We hope that your questions are addressed with our rebuttal. Please let us know if there are any further questions that need clarification.
---
Rebuttal 3:
Title: Author Response to Reviewer UkuU
Comment: Dear Reviewer UkuU,
Thank you again for your valuable reviews of our submission. As we have not yet received your feedback on our rebuttal in the current discussion period, we would like to summarize our key points below to help address your concerns.
- **Regarding the use of LLMs or MLLMs in creating EvalQABench, we believe this approach is non-trivial and essential for training MLLMs effectively.** We would like to clarify that, unlike SEED-Bench and LLaVA-Bench, we did not use commercial models like ChatGPT for data generation. **This brings insights for future works on data generation with low financial costs.** Additionally, **currently published works LLaVA [1] (NeurIPS 2023), Ferret [2] (ICLR 2024), SNIFFER [3] (CVPR 2024), Next-GPT [4] (ICML 2024), and ShareGPT4V [5] (ECCV 2024) also use GPT-4(V) to build their corresponding data** for training specialized MLLMs. It is common practice to use stronger LLMs/MLLMs for data generation.
- **While we respectfully disagree with the viewpoint that 'LLaVA1.5 is stronger than Fuyu-8B,'** we acknowledge that LLaVA1.5 achieves exceptional performance on diverse datasets and benchmarks. However, this does not necessarily indicate that it is superior to Fuyu-8B, as LLaVA1.5 was trained on foundational and relevant datasets like VQAv2 and GQA. Therefore, it is reasonable to use Fuyu-8B to generate negative answers, and the results in Table 6 demonstrate that our synthetic data is of high quality and remains challenging for SOTA MLLMs.
- Adhering to the NeurIPS policy, we created an anonymous link, **https://anonymous.4open.science/r/LOVA3-9008/README.md, that includes all our training and evaluation code, created datasets, and pre-trained weights**.
We hope these clarifications adequately address your initial questions. As the discussion period deadline approaches, we keenly await any further feedback you may have.
Once again, thank you for your dedication to the review process.
Best regards,
Authors of Paper 9008
---
References:
[1] Visual Instruction Tuning
[2] Ferret: Refer and Ground Anything Anywhere at Any Granularity
[3] SNIFFER: Multimodal Large Language Model for Explainable Out-of-Context Misinformation Detection
[4] NExT-GPT: Any-to-Any Multimodal LLM
[5] ShareGPT4V: Improving Large Multi-modal Models with Better Captions | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |