title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
TFGDA: Exploring Topology and Feature Alignment in Semi-supervised Graph Domain Adaptation through Robust Clustering | Accept (poster) | Summary: This paper proposes a novel framework named TFGDA for the semi-supervised graph domain adaptation. Considering that existing graph domain adaptation works overlook the utilization of graph structure information, this paper innovatively proposes a STSA strategy that fully leverages graph structure information to promote the learning of domain-invariant node features. Moreover, TFGDA also devises a SDA strategy to achieve a fine-grained feature distributions alignment, which maps node features onto multiple great circles and utilizes the spherical sliced-Wasserstein distance to quantify the domain discrepancy. To remedy the overfitting phenomenon caused by limited source labeled nodes, a novel RNC strategy is proposed to guide the discriminative clustering of unlabeled nodes. Comprehensive experiments on various benchmarks demonstrate that the proposed TFGDA outperforms state-of-the-art methods in semi-supervised graph domain adaptation tasks.
Strengths: 1) This paper is well-motivated and easy-to-follow. The overall structure is clear. Source graph label scarcity is indeed a common challenge encountered in real-world applications.
2) Preserving the graph structure from a topological perspective to promote the extraction of transferable node feature is a novel idea. It effectively utilizes the inherent properties of the graph to enhance model’s transfer performance, which can shed some light on future research on graph transfer learning.
3) It is a novel attempt to utilize the SDA strategy to achieve fine-grained feature distribution alignment in spherical space.
4) The motivation behind the RNC strategy is clear. The experimental results also validate that the RNC strategy effectively mitigates overfitting issues and significantly outperforms other strategies.
5) The effectiveness and SOTA transfer performance of TFGDA has been confirmed through extensive ablation studies, visualization results, and comparative experiments.
Weaknesses: I have several concerns and questions as below:
[About writing]
1) It is suggested to include some discussions on practical applications of graph transfer learning in the Sec 1, which may better emphasize the significance of the research.
[About the proposed method]
2) The proposed STSA strategy employ persistent homology to extract the topological structures of both graphs. In topological theory, topological features of different dimensions capture the structure of the underling space from different perspectives. What dimensions of topological features did the authors use in STSA strategy to align the topological structures of the input space and latent space? How was this determined?
3) What distance metric was used when constructing the Vietoris-Rips simplicial complex in the STSA strategy?
4) Previous domain adaptation works [1] have used the K-means algorithm to guide unlabeled nodes clustering. What advantages does the RNC strategy have compared to K-means?
[About the experiments]
5) Although the authors conducted detailed experiments on mainstream graph domain adaptation datasets (i.e., ACMv9, Citationv1 and DBLPv7), I hope they discuss the scalability of TFGDA on larger graph datasets, such as the recommender systems dataset AliCVR [2].
6) It is suggested to compare and discuss the proposed TFGDA with the recently introduced GIFI method [3].
Ref [1] Deng W, Liao Q, Zhao L, et al. Joint clustering and discriminative feature alignment for unsupervised domain adaptation[J]. IEEE Transactions on Image Processing, 2021, 30: 7842-7855.
Ref [2] Guo G, Wang C, Yan B, et al. Learning adaptive node embeddings across graphs[J]. IEEE Transactions on Knowledge and Data Engineering, 2022, 35(6): 6028-6042.
Ref [3] Qiao Z, Xiao M, Guo W, et al. Information filtering and interpolating for semi-supervised graph domain adaptation[J]. Pattern Recognition, 2024, 153: 110498.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See Weakness. If the authors address all my concerns, I am pleased to improve the final score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer R5oT for the thorough review of the manuscript and the valuable comments that will aid in enhancing our paper.
**Q1: Include some discussions on practical applications of graph transfer learning in the Sec.1.**
**A1:** Graph Transfer Learning can be applied in various practical scenarios, such as recommender systems [K1], molecular property prediction [K2], social and academic networks analysis [K3], cross-modal retrieval [K4], and human parsing [K5].
Ref.[K1] Learning adaptive node embeddings across graphs. TKDE 2022.\
Ref.[K2] Transfer learning with graph neural networks for improved molecular property prediction. NC, 2024.\
Ref.[K3] A comprehensive survey on graph neural networks. TNNLS, 2020.\
Ref.[K4] Cross-Domain Transfer Hashing for Efficient Cross-modal Retrieval. TCSVT, 2024.\
Ref.[K5] Graphonomy: Universal Human Parsing via Graph Transfer Learning. CVPR 2019.
**Q2: What dimensions of topological features did the authors use in STSA strategy? How was this determined?**
**A2:** As described in Section A.2 of the Appendix (lines 508-511), we use 0-dimensional topological features in our STSA strategy to align the topological structures of the input and latent spaces. Because some preliminary experiments have shown that using higher-dimension topological information does not lead to clear accuracy improvements but noticeably increases the model’s training time.\
Additionally, the topological theory [K6-K7] has demonstrated that low-dimensional topological features (e.g., 0-dimensional) can roughly reflect the topological structure of the data, while high-dimensional topological features (e.g., 3-dimensional and 4-dimensional) are responsible for capturing some complex details of topological structure.
Thus, to balance computational efficiency and model performance, we choose to retain 0-dimensional topological features in our STSA strategy.
Ref.[K6] Computing persistent homology. 2004.\
Ref.[K7] Persistent homology-a survey. 2008.
**Q3: What distance metric was used when constructing the Vietoris-Rips simplicial complex in the STSA strategy?**
**A3:** We use Euclidean distance as the distance metric to construct the Vietoris-Rips simplicial complex in our STSA strategy.
**Q4: What advantages does the RNC strategy have compared to the K-means algorithm?**
**A4:** Compared to the K-means clustering algorithm, our RNC strategy has two main advantages:
**1)** Using the K-means algorithm in neural networks results in a two-stage clustering process [K8]. Specifically, the deep features extracted by the network are first clustered using the K-means algorithm to generate pseudo-labels. These pseudo-labels are then used to guide the model training. This clustering process inevitably introduces pseudo-labels noise into the model during training, limiting the model's generalization ability. On the contrary, our proposed RNC strategy is an end-to-end clustering method that does not involve any pseudo-labels, thereby enhancing the model's robustness in face of different transfer scenarios.
**2)** When combined with representation learning, the K-means algorithm is prone to degenerate clustering solutions [K8-K10], where all unlabeled nodes are assigned to the same cluster, severely affecting the construction of decision boundaries.
The proposed RNC strategy is designed based on the mutual information theory, which effectively guides unlabeled nodes toward robust clustering and naturally avoids degenerate clustering solutions. As shown in Section A.3.3 of the Appendix (lines 566-579), we have provided a detailed theoretical analysis that explains how our RNC strategy can avoid degenerate clustering solutions. Furthermore, as mentioned in Section 4.5 (lines 234-236), the experimental results in Figure 2 and Figure 3 have also validated this point.
Ref. [K8] Deep Clustering for Unsupervised Learning of Visual Features. ECCV 2018.\
Ref. [K9] On strategies to fix degenerate k-means solutions. Journal of Classification 2017.\
Ref. [K10] How to Use K-means for Big Data Clustering. PR 2023.
**Q5: Discuss the scalability of TFGDA on larger graph datasets, such as the AliCVR dataset.**
**A5:** Based on your valuable suggestion, we follow Ref.[K1] to utilize a real-world large-scale recommender systems dataset Ali-CVR to conduct additional experiments. Specially, Ali-CVR can be divided into 3 large-scale graphs: **AN**, **A11** and **AB11**.
We report the node classification accuracy in the target graph, and the results in Table R1 demonstrate the scalability and stability of our method.
**Table R1: Node classification accuracy (%) in the target graph. Source graph label rate: 10\%. [Source graph $\rightarrow$ Target graph]**
| Method | **AN**$\rightarrow $**A11** | **AN**$\rightarrow $**AB11** | **A11**$\rightarrow $**AN** |
| ------ | ------ | ------ | ------ |
| UDA-GCN | 20.66$\_\{\pm2.41}$ | 23.47$\_\{\pm3.07}$ | 21.29$\_\{\pm1.75}$ |
| GraphAE | 23.80$\_\{\pm2.06}$ | 27.19$\_\{\pm2.53}$ | 25.04$\_\{\pm1.58}$ |
| SGDA | 27.42$\_\{\pm1.83}$ | 31.25$\_\{\pm2.20}$ | 28.36$\_\{\pm1.34}$ |
| **TFGDA** | **33.67**$\_\{\pm1.42}$ | **35.82**$\_\{\pm1.73}$ | **34.15**$\_\{\pm1.09}$ |
**Q6: It is suggested to compare and discuss the TFGDA with the GIFI method.**
**A6:** Following your suggestion, we compare the transfer performance of our method with the GIFI on the challenging transfer scenario, where only 5% of the source graph nodes are labeled. The results in the Table R2 demonstrate the effectivenss of our method.
**Table R2: Transfer performance (%) measured by Macro-F1 on some transfer tasks. Source graph label rate: 5%.**
| Method | **A**$\rightarrow $**C** | **C**$\rightarrow $**D** | **D**$\rightarrow $**A** |
| ------ | ------ | ------ | ------ |
| GIFI | 73.7$\_\{\pm0.69}$ | 69.0$\_\{\pm2.05}$ | 64.1$\_\{\pm1.08}$ |
| **TFGDA** | **78.9**$\_\{\pm0.46}$ | **72.6**$\_\{\pm1.35}$ | **64.3**$\_\{\pm0.72}$ |
---
Rebuttal 2:
Comment: Dear Reviewer R5oT:
We thank your response and appreciation of our work and rebuttal. We will make sure to incorporate the new results and discussions into our revision to enable it to be a high-quality paper.
Best Regards, Authors of 504.
Title: Response to Reviewer R5oT | Summary: This paper focus on the semi-supervised graph domain adaptation, and introduces a new framework called TFGDA. Graph usually contains complex structure information, while existing GTL studies often overlooks the importance of structure information when extracting transferable node features. TFGDA thus proposes a novel STSA strategy to utilize the topological structures information between input and latent spaces to assist GTL. To solve the instability caused by adversarial training-based domain adaptation methods, this paper also presents an SDA strategy to reduce cross-domain node feature distributions discrepancy in the spherical space. Furthermore, an innovative mutual information-based RNC strategy is proposed to address the overfitting issue by guiding the robust clustering of unlabeled target graph nodes. Extensive experimental results show that TFGDA outperforms existing state-of-the-art methods across various transfer learning tasks, indicating its superiority and stability.
Strengths: (1) The paper is well-written and has a clear structure. Compared to the widely studied unsupervised domain adaptation in GTL, the semi-supervised domain adaptation is more relevant to real-world application scenarios. Therefore, it is meaningful to explore effective solutions to address the challenges faced by semi-supervised domain adaptation. In general, the paper is quite novel and worth reading.
(2) The introduction of the STSA strategy is well-motivated. Leveraging the graph structure information to facilitate graph transfer learning is indeed an innovative attempt and shows significant transfer performance gains on multiple tasks.
(3) The SDA strategy exhibits significant superiority over existing methods in reducing node feature distributions difference. Additionally, it is an interesting idea to devise a node clustering strategy RNC from the view of mutual information. Detailed experimental results demonstrate the effectiveness of RNC in addressing the overfitting problem and enhancing model robustness.
(4) The paper provides comprehensive experiment on multiple benchmark datasets to validate the superior transfer performance over existing state-of-the-art methods.
Weaknesses: To make this paper more comprehensive, there are some concerns that I would like the authors to address.
(1) This paper mainly utilizes multiple real-world academic graphs as datasets. Further exploration can be conducted on other types of graph datasets (such as real-world graph datasets), which can more effectively validate the generalizability of the proposed method.
(2) Furthermore, I recommend the authors to include a discussion and comparison of the model’s inference efficiency in the paper.
(3) [Minor comment:] While the t-SNE visualization results in Figure 2 clearly show the advantage of TFGDA in reducing feature distribution discrepancy, I recommend the authors to include additional quantitative metrics to better demonstrate the transfer ability of the method, such as the {A}-distance [1].
(4) [Minor comment:] Since the SDA strategy contains some complex mathematical details, it is recommended to add a high-level algorithm table to summarize it.
Reference:
[1] Analysis of representations for domain adaptation. (Neurips 2006).
Technical Quality: 3
Clarity: 3
Questions for Authors: I have some doubts and hope the authors to clarify them.
(1) When the model performs inference on the target domain graph, is the shift parameter perturbation branch $\xi$ of RNC activated? Does the graph data flow through the regular branch or the perturbation branch before being fed into the classifier \mathcal{C} for inference?
(2) What is the intrinsic reason for the need to introduce structure information in graph transfer learning frameworks? Is it because the structure information of the data tends to be lost as the network layers go deeper?
(3) Some previous images based-transfer learning works [2-4] have utilized the wasserstein or the sliced wasserstein as distance metrics to minimize domain discrepancy. In the SDA strategy, why is the SSW distance better than these two distance in measuring domain differences?
Reference:
[2] Sliced wasserstein discrepancy for unsupervised domain adaptation. (CVPR 2019).
[3] Reliable weighted optimal transport for unsupervised domain adaptation. (CVPR 2020).
[4] Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation. (ECCV 2018).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have provided clear explanations of the limitations that the proposed method may encounter in the supplementary materials. This article does not have any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer byDA for the careful reading of the manuscript and the related comments, which are helpful to improve our paper.
**W1: Further exploration can be conducted on other types of graph datasets.**
**A1:** Based on your suggestion, we conduct additional experiments on a real-world recommender systems graph dataset called AliCVR to validate the generalizability of our method. Please refer to the answer **A5** to Reviewer **R5oT**.
**W2: Include a discussion and comparison of the model’s inference efficiency.**
**A2:** We compare model's inference time on the transfer task **A**$\rightarrow$**C** in the following Table R3. Due to the addition of the spherical space $\mathbb{S}\_\{r}^{d-1}$, our TFGDA exhibits a slightly longer inference time (1.02x) than the SOTA competitor SGDA [S1], which does not significantly increase the inference time but brings a clear performance gain.
Additionally, it is worth mentioning that if the spherical space $\mathbb{S}\_\{r}^{d-1}$ is removed, during inference the network architecture of our TFGDA is almost identical to that of SGDA, as we will disable the shift parameters perturbation branch $\xi\_\{s}$ and $\xi\_\{t}$.
**Table R3: Model's Inference time on Citationv1 (C).**
| Transfer Task | Method | Inference Time on **C** | Micro-F1 |
| ------ | ------ | ------ | ------ |
| **A**$\rightarrow$**C** | SGDA | 5.46s | 75.6$\_\{\pm0.57}$ |
| | **TFGDA** | 5.58s | 81.0$\_\{\pm0.34}$ |
Ref. [S1] Semi-supervised Domain Adaptation in Graph Transfer Learning. IJCAI 2023.
**W3: Include additional quantitative metrics to better demonstrate the transfer ability of the method, such as the $\mathcal{A}$-distance.**
**A3:** Following your advice, we utilize the $\mathcal{A}$-distance as a metric to evaluate the transfer ability of different methods in Figure 2. Table R4 shows the results of the $\mathcal{A}$-distance on **A**$\rightarrow$**C** task. Since our method can accurately align feature distributions of two domains at the class-level, it achieves the smallest $\mathcal{A}$-distance.
**Table R4: $\mathcal{A}$-distance on A$\rightarrow$C task.**
| Method | $\mathcal{A}$-distance |
| ------ | ------ |
| TFGDA-R | 1.67 |
| SGDA | 1.54 |
| TFGDA-TR | 1.48 |
| **TFGDA** | 1.36 |
**W4: Add a high-level algorithm table to summarize the SDA strategy.**
**A4:** Thanks for the valuable advice. We will include a a high-level algorithm table for the SDA strategy in the revised version to make the paper more comprehensive.
**Q1: During inference, is the shift parameter perturbation branch $\xi$ of RNC activated? Does the graph data flow through the regular branch or the perturbation branch before being fed into the classifier $\mathcal{C}$ for inference?**
**A5:** As mentioned in Section A.2 of the Appendix (line 513), during the inference process, we will disable the shift parameters perturbation branches $\xi_{s}$ and $\xi_{t}$.
Thus, during the inference process, the graph data will be fed through the regular branch (i.e., without any feature perturbation) into the classifier $\mathcal{C}$ for the final prediction.
**Q2: What is the intrinsic reason for the need to introduce structure information in graph transfer learning frameworks? Is it because the structure information of the data tends to be lost as the network layers go deeper?**
**A6:** Notably, unlike images and time series data, graph data usually contains rich structure information that encodes complex relationships among nodes and edges. Most existing graph transfer learning works directly use graph convolutional networks (GCNs)-based feature extractors to learn transferable node features. However, recent studies [S2-S4] have indicated that GCNs are insufficient in capturing the sophisticated structure information in graph, which means that the graph structure information may be lost or destroyed after passing through the GCNs-based feature extractor. Therefore, we propose the STSA strategy to encode these intrinsic structure information in graph (i.e., from the input space) into the latent space by aligning the topological structures of the input space and the latent space, effectively improving the model's transfer performance.
Ref. [S2] Coco: A coupled contrastive framework for unsupervised domain adaptive graph classification. ICML 2023.\
Ref. [S3] Graph Kernel Neural Networks. TNNLS 2024.\
Ref. [S4] Theoretically improving graph neural networks via anonymous walk graph kernels. WWW 2021.
**Q3: In the SDA strategy, why is the SSW distance better than the wasserstein and the sliced wasserstein distance metrics in measuring domain differences?**
**A3:** **1)** As described in lines 182-184, due to numerous nodes in the graph (e.g., ACMv9 dataset has over 9000 nodes), directly using the classical Wasserstein distance to compute feature distributions discrepancy is computationally expensive, rendering it impractical for graph transfer learning task.
**2)** To eliminate the negative influence of feature norm discrepancy on the learning of transferable node features, our SDA strategy thus guides feature distributions alignment in spherical space. As described in lines 528-530, the sliced Wasserstein distance focuses on calculating discrepancy between distributions in Euclidean space [S5], making it difficult to precisely measure feature distributions discrepancy in spherical space. While our SSW distance fully considers the manifold structure of data in spherical space when assessing distribution discrepancy [S6], thereby facilitating a more precise quantification of difference in feature distributions. Furthermore, the detailed ablative experiments results in Figure 5 of the Appendix (lines 518-532) have already validated that our SSW distance is superior to the sliced Wasserstein distance in measuring domain discrepancy.
Ref.[S5] Sliced Wasserstein Kernels for Probability Distributions. CVPR 2016.\
Ref.[S6] Spherical Sliced-Wasserstein. ICLR 2023.
---
Rebuttal 2:
Title: Looking forward to the reply
Comment: Dear Reviewer byDA:
Thanks so much again for the time and effort in our work. According to the comments and concerns, we conduct the corresponding experiments and further discuss the related points. As the discussion period is nearing its end, please feel free to let us know if there are any other concerns. We would be happy to provide more details and clarifications.
Best Regards, Authors of 504.
---
Rebuttal Comment 2.1:
Comment: Thank you to the authors for their rebuttal. I have reviewed all comments and the authors’ responses, which effectively addressed my concerns through clear explanations and additional experiments. I recommend incorporating these details into the manuscript or supplementary materials and support the acceptance of this paper.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer byDA
Comment: Dear Reviewer byDA:
Thank you for your recognition of our work. We appreciate your efforts in reviewing our work and rebuttal. We will incorporate your valuable suggestions into the revised version to ensure that it becomes a high-quality paper.
Best Regards, Authors of 504. | Summary: This paper presents a framework called TFGDA for semi-supervised graph domain adaptation (SGDA). It addresses the challenge of annotating unlabeled target graph nodes by utilizing knowledge from a source graph with limited labels. The framework incorporates three key strategies: Subgraph Topological Structure Alignment (STSA), Sphere-guided Domain Alignment (SDA), and Robustness-guided Node Clustering (RNC). These strategies collectively aim to encode topological structure information, stably align feature distributions, and guide the clustering of unlabeled nodes, thereby enhancing the model's transfer performance.
Strengths: 1. Interesting Topic: Graph domain adaptation (GDA) is a compelling topic with significant potential for advancing the field of graph-based learning.
2. SOTA Performance: The experimental results indicate that the proposed methods achieve state-of-the-art (SOTA) performance across various benchmarks.
3. Theoretical Foundation: The paper is supported by a solid theoretical foundation, leveraging topological data analysis and domain adaptation principles.
Weaknesses: 1. Limited Innovation: The innovation in the paper is limited. While the authors claim to be the first to consider graph structure, similar approaches have been explored in previous work that also utilize graph structural information.
2. Overly Optimistic Results: The experimental results seem excessively positive, with improvements of 5-10 points on certain datasets, which is highly challenging. The absence of open-source code makes it difficult to verify these results.
3. Lack of Code Availability: The paper does not provide open-source code, which hinders reproducibility and further validation of the proposed methods by the research community.
Technical Quality: 2
Clarity: 2
Questions for Authors: please refer to Weaknesses
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: please refer to Weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We extend our heartfelt gratitude to Reviewer aMJG for the careful reading of the manuscript and the valuable feedback provided, which are helpful to improve our paper. Our detailed point-by-point responses are provided below.
**W1: While the authors claim to be the first to consider graph structure, similar approaches have been explored in previous work that also utilize graph structural information.**
**A1:** Thanks for the valuable comment. We primarily showcase the innovation of our method through the following three aspects:
**1):** Although many graph convolutional networks (GCNs)-based methods have already been proposed to exploit graph structure information to promote the learning of features, these methods heavily rely on a large amount of labeled data.
Moreover, these methods overlook the substantial domain discrepancy between training and testing datasets in real-world scenarios, making them unsuitable for semi-supervised graph domain adaptation (SGDA) scenario.
Specifically, these GCNs-based models trained only with the limited source labeled nodes (i.e., SGDA task scenario) will suffer severe performance degradation when directly applied to a new target domain.
**2):** In addition, these GCNs-based methods [X1-X6] typically mine the graph structure information in the deep feature space by designing well-crafted GCN architectures or introducing some complex modules. However, recent studies [X6-X8] have pointed out that GCNs are insufficient in capturing the sophisticated structure information in graph, which means that the graph structure information may be lost or destroyed after passing through the GCNs-based feature extractor. Thus, directly mining graph structure information from the deep feature space is a suboptimal way, which affects the learning of transferable node features in our SGDA setting.
The proposed STSA strategy aims to extract the graph structure information directly from the input space and encode these powerful information into the latent spherical space by aligning the topological structures of the two spaces. This method does not lose or destroy the graph structure information during training. Furthermore, our STSA strategy does not introduce any changes to the network architecture, effectively avoiding an increase in model's complexity and ensuring its adaptability to integration with other methods.
**3):** What's more, these GCN-based methods typically model graph structure by considering the similarity between features of adjacent nodes, making it difficult to capture the sophisticated high-order structure information in graph [X5,X6].
In contrast, our work attempts, for the first time, to model the graph structure information from a persistent homology (i.e., topological data analysis) perspective, enabling a more precise capture of the complex structures inherent in graph.
In summary, our proposed STSA strategy is an innovative solution for mining graph structure information and facilitating graph transfer learning.
Ref.[X1] Distilling Knowledge From Graph Convolutional Networks. CVPR 2020.\
Ref.[X2] Graph-in-Graph Convolutional Network for Hyperspectral Image Classification. TNNLS 2022.\
Ref.[X3] Multigraph Fusion for Dynamic Graph Convolutional Network. TNNLS 2022.\
Ref.[X4] Knowledge Embedding Based Graph Convolutional Network. WWW 2021.\
Ref.[X5] Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks. NIPS 2020.\
Ref.[X6] Coco: A Coupled Contrastive Framework for Unsupervised Domain Adaptive Graph Classification. ICML 2023.\
Ref.[X7] Graph Kernel Neural Networks. TNNLS 2024.\
Ref.[X8] Theoretically Improving Graph Neural Networks via Anonymous Walk Graph Kernels. WWW 2021.
**W2&W3: The experimental results seem excessively positive. The paper does not provide open-source code.**
**A2&A3:**\
**1):** It is worth noting that semi-supervised graph domain adaptation (SGDA) is a relatively novel task scenario in graph transfer learning, first introduced in Ref.[X9]. Currently, there has been little exploration in this field. As stated in lines 248-251, the main compared methods adopted in Table 1 are some semi-supervised methods and graph domain adaptation methods.
Due to the label scarcity of source graph and the huge domain discrepancy in the SGDA scenario, these semi-supervised methods are unable to tackle these challenges, resulting in severe performance degradation on the target domain.
Although graph domain adaptation can alleviate domain discrepancy, they inevitably encounter the overfitting issue. Because these methods typically assume that the source domain is fully labeled, making it difficult to effectively utilize a large number of unlabeled nodes, as stated in lines 265-267.
Our method effectively addresses these challenges, thus achieving SOTA performance.
**2):** Regarding the pioneering work SGDA [X9], it struggles to achieve superior performance because it simply utilizes the classical adversarial learning to reduce domain discrepancy, and its clustering strategy inevitably introduces pseudo-label noise into the model (as verified in lines 307-308 and 518-532).
Ref.[X9] Semi-supervised Domain Adaptation in Graph Transfer Learning. IJCAI 2023.
**3) Source code:**
Based on your valuable suggestion, we will provide AC with the source code of our method in order to validate the authority of our work. We commit to publicly release the code after the paper is accepted.
If you have additional concerns, please let us know and we will do our best to address them. We appreciate your time and efforts in reviewing our work.
---
Rebuttal 2:
Title: Looking forward to the reply
Comment: Dear Reviewer aMJG:
Thanks so much again for the time and effort in reviewing our work. According to your valuable comments and concerns, we provide the source code and further discuss the related points. As the discussion period is nearing its end, we would like to kindly ask the reviewer if our rebuttal has addressed some of the reviewer's questions or concers and if any of them remain unaddressed. We would be happy to provide more details and clarifications.
Best Regards, Authors of 504. | Summary: This paper introduces a graph transfer learning framework called TFGDA, which leverages the structure information to enhance model’s generalization performance. The TFGDA framework includes the structure alignment strategy STSA, the feature distribution alignment strategy SDA, and the RNC strategy to address the overfitting caused by label scarcity. Experiments are conducted various benchmarks.
Strengths: 1.This paper is first attempt to utilize the intrinsic topological structure information hidden in graphs to assist graph transfer learning, which is well-motivated and novel.
2. This paper is well-written and well-organized. The figures are clear and the text is easy to follow.
3.The experiments are comprehensive. The experimental results demonstrate the superiority of the proposed framework.
Weaknesses: 1. This paper proposes to extract spherical features of subgraphs in Subgraph Topological Structure Alignment strategy. Please further explain the reasons for using spherical features and analyze the necessity.
2. The interplay between loss functions could affect the extracted features and model performance. It would be beneficial if the authors could provide a thorough analysis for the combination of loss functions and theoretically explanation of the success.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors should further clarify the motivation behind using spherical features of their proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our gratitude to Reviewer 9hdx for their comprehensive evaluation of the manuscript and the insightful feedback provided, which will greatly contribute to the improvement of our paper. Kindly refer to our specific responses outlined below.
**W1: This paper proposes to extract spherical features of subgraphs in STSA strategy. Please further explain the reasons for using spherical features and analyze the necessity. The authors should further clarify the motivation behind using spherical features of their proposed method.**
**A1:** Thanks for your valuable advice.
**1) explain the reasons:** Notably, several studies [Y1] in transfer learning have demonstrated that the domain shifts are mainly caused by the feature norm discrepancy between the source and target domains, and the model degradation on the target domain is primarily caused by its excessively smaller feature norms than the source domain. To eliminate the negative effects of feature norm differences, we map features into the spherical space to guide the alignment of feature distribution (as stated in lines 170-173), which can significantly facilitate the learning of domain-invariant features and improve the model's transfer performance.
Furthermore, Refs.[Y1-Y2] have pointed out that highly informative features typically display more significant feature norms, and task-specific features with larger norms are more transferable.
However, due to the large parameter search space, it is difficult to find the optimal spherical radius for different transfer tasks.
To address this issue, we follow the lower bound theorem from Ref.[Y3] to set an appropriate radius $r$ for the spherical space $\mathbb{S}_{r}^{d-1}$, as mentioned in lines 173-176 of the manuscript.
Ref.[Y1] Larger Norm More Transferable: An Adaptive Feature Norm Approach for
Unsupervised Domain Adaptation. ICCV 2019. \
Ref.[Y2] Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. ICLR 2018.\
Ref.[Y3] Unsupervised and Semi-supervised Robust Spherical Space Domain Adaptation. TPAMI 2022.
**2) analyze the necessity**:
According to your advice, we conduct additional ablation experiments to validate this point. As described in Table 2 of the paper, variant TFGDA-T ($\mathcal{L}\_\{cls}+\mathcal{L}\_\{stsa}$) implements the STSA strategy in spherical space $\mathbb{S}\_\{r}^{d-1}$.
Based on variant TFGDA-T, we remove the spherical space $\mathbb{S}\_\{r}^{d-1}$ and implement the STSA strategy in regular space, resulting in a new variant called TFGDA-T (w/o sphere).
The experimental results are shown in the **Table A** of the global response PDF file.
We can observe that the introduction of spherical space $\mathbb{S}\_\{r}^{d-1}$ can effectively enhance the model's transfer ability, which is consistent with the aforementioned analysis.
**W2: The interplay between loss functions could affect the extracted features and model performance. It would be beneficial if the authors could provide a thorough analysis for the combination of loss functions and theoretically explanation of the success.**
**A2:** Thanks for your valuable suggestion.
It is worth noting that our ablation study results in Section 5.3 (lines 271-285) have validated the contribution of each loss term and the complementary relationship among different loss terms (i.e., different combinations of loss terms). Moreover, the t-SNE visualization results in Figure 2 have also confirmed the importance the combination of loss terms $\mathcal{L}\_\{rnc}$, $\mathcal{L}\_\{stsa}$ and $\mathcal{L}\_\{sda}$.
We have also conducted a sensitivity analysis on the trade-off parameters of these loss terms in Section A.3.2 of the Appendix (lines 557-565).
**Theoretically Explanation:** Based on your insightful advice, we provide a theoretical analysis of our method. Due to the characters limit, we will present this theoretical analysis in the **global rebuttal area**.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' responses. I have adjusted my score accordingly, and vote for acceptance of this paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 9hdx
Comment: Dear Reviewer 9hdx:
Thank you for the positive feedback. We appreciate your efforts in reviewing our work. We will reflect your suggestions in the revised version to enable it to be a high-quality paper.
Best Regards, Authors of 504.
---
Rebuttal 2:
Title: Looking forward to the reply
Comment: Dear Reviewer 9hdx:
Thanks so much again for the time and effort in reviewing our work. As we are getting closer to the end of the author-reviewer discussion phase, we would like to kindly ask the reviewer if our rebuttal has addressed some of the reviewer's questions or concers and if any of them remain unaddressed. We would be happy to provide more details and clarifications.
Best Regards, Authors of 504. | Rebuttal 1:
Rebuttal: **1) To Reviewer 9hdx:** Dear Reviewer 9hdx, based on your insightful advice, we provide a **theoretical analysis** of our method.
The theoretical analysis of our method is based on the theory of domain adaptation (DA) [Y4-Y5].
Formally, let $\mathcal{H}$ be the hypothesis space. Given two domains $\mathcal{S}$ and $\mathcal{T}$, the probabilistic bound of error of hypothesis $h$ on the target domain is defined as:
$\\psi\_\{\\mathcal{T}}(h) \\le \\psi \_\{\\mathcal{S}}(h) + \\frac{1}{2}d\_\{\\mathcal{H}\\Delta \\mathcal{H}}(\\mathcal{S},\\mathcal{T})+\\mu^{\*\}$,
where the expected error on the target domain $\psi_{\mathcal{T}}(h)$ are bounded by three terms: **(1)** the expected error on source domain $\psi_{\mathcal{S}}(h)$; **(2)** the $\mathcal{H} \Delta \mathcal{H}$-divergence between the source and target domains $d_{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})$; **(3)** the combined error of ideal joint hypothesis $ \mu^{*} = \min_{h'\in \mathcal{H}} \psi_{\mathcal{S}}(h') + \psi_{\mathcal{T}}(h')$.
The goal of DA is to lower the upper bound of the expected target domain error $\psi_{\mathcal{T}}(h)$.
Note that in unsupervised domain adaptation (UDA), minimizing $\psi_{\mathcal{S}}(h)$ can be easily achieved with source label information, as source domain samples are completely annotated.
However, in our semi-supervised graph domain adaptation (SGDA) setting, due to the label scarcity of source domain, the model is prone to overfitting when solely relying on the source domain classification loss $\mathcal{L}\_\{cls}$ for optimization. Therefore, we introduce the **RNC** strategy ($\mathcal{L}\_\{rnc}$) to address the overfitting issue, with the aim of guiding $\psi\_\{\mathcal{S}}(h)$ towards further minimization.
Most DA methods mainly focus on reducing the domain discrepancy $d\_\{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})$, such as utilizing techniques like adversarial learning, MMD, and CORAL.
In comparison to these methods, our **SDA** strategy ($\mathcal{L}\_\{sda}$) effectively eliminates the feature norm discrepancy in spherical space $\mathbb{S}\_\{r}^{d-1}$ and guide a more stable alignment of feature distributions. Furthermore, considering that graph data contains rich structure information that encodes complex relationships among nodes and edges, and existing graph transfer learning (GTL) methods usually adopt graph convolutional networks (GCNs)-based feature extractor to learn domain-invariant node features. However, recent studies [Y6-Y8] have pointed out that GCNs are insufficient in capturing the sophisticated structure information in graph, which seriously affects the transfer of domain-invariant knowledge and consequently limits the model’s generalization ability. To solve this problem, we thus propose the **STSA** strategy ($\mathcal{L}\_\{cls}$) to align the topological structures of the input space and the spherical space, in order to facilitate the GCNs-based feature extractors to capture more domain-invariant node features.
Consequently, the combination of **SDA** ($\mathcal{L}\_\{sda}$) and **STSA** ($\mathcal{L}\_\{stsa}$) strategies further promotes the minimization of the domain discrepancy $d\_\{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})$.
Notably, $\\mu^{\*\}$ is expected to be extremely small, and therefore it is often neglected by previous methods.
However, it is possible that $\\mu^{\*\}$ tends to be large when the cross-domain category distributions are not well aligned [Y9].
In this paper, we leverage the **RNC** strategy ($\mathcal{L}\_\{rnc}$) to guide both labeled nodes and unlabeled nodes toward achieving robust clustering, effectively promoting the fine-grained alignment of category distributions and ensuring that $\\mu^{\*\}$ remains at a relatively small value.
In summary, our proposed method not only minimizes the expected error on source domain $\psi\_\{\mathcal{S}}(h)$ and the domain discrepancy $d\_\{\mathcal{H} \Delta \mathcal{H}}(\mathcal{S},\mathcal{T})$, but also keeps $\\mu^{\*\}$ at a small value, thereby ensuring a low upper bound.
Ref.[Y4] A theory of learning from different domains. Machine learning. 2010.\
Ref.[Y5] Analysis of representations for domain adaptation. NIPS 2006.\
Ref.[Y6] Coco: A coupled contrastive framework for unsupervised domain adaptive graph classification. ICML 2023.\
Ref.[Y7] Graph Kernel Neural Networks. TNNLS 2024.\
Ref.[Y8] Theoretically improving graph neural networks via anonymous walk graph kernels. WWW 2021.\
Ref.[Y9] Progressive feature alignment for unsupervised domain adaptation. CVPR 2019.
**2) To Reviewer 9hdx:** The necessity analysis Table A mentioned in A1 is displayed in the global response PDF file.
**3) To All Reviewers:** Based on the valuable suggestion of Reviewer aMJG, we will provide AC with the source code of our method in order to validate the authority of our work. You can get the anonymous link of our source code from the AC.
Pdf: /pdf/e2d13ef1965d0f797ea8a51270d8936fe8b28edf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Temporal Graph Neural Tangent Kernel with Graphon-Guaranteed | Accept (poster) | Summary: This is a theoretical paper that studies graph neural tangent kernels in the context of temporal graphs. The authors propose a temporal GNTK model and proves rigorous error bounds. This work also links the convergence of the temporal GNTK to the graphon GNTK.
Strengths: To the best of my knowledge, this is the first work that attempts to apply the NTK theory to continuous-time dynamic graphs. Despite the issues I raise below, I enjoyed the paper, which provides interesting insights into how NTK can be used to analyze temporal graph learning. However, this approach may not be SOTA.
Weaknesses: 1. The theory is an incremental modification of [7] to temporal graphs by including the time index $t$ to each graph.
1. It is unclear how this approach compares with traditional temporal graph learning methods that capture temporal relationships.
1. The assumptions made are quite strong, and some claims are dubious.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Equation (1) is confusing. Are node features not given as the input in your discussion of CTDG in Section 2? How do you deal with the node features in (1)?
1. To obtain (6), you have not mentioned that the weights $\mathbf{W}$ are initialized as i.i.d. standard normals. Furthermore
1. To obtain (6) and (13), aren't you actually training a different GNN for each time $t$?
- Your notation should be $f_{kernel}^{(t)}$ and $f_{temp}^{(t)}$, otherwise it misleads the reader. I.e., $f_{temp}^{(t)}$ (possibly based on $f_{temp}^{(t-1)}, \ldots, f_{temp}^{(1)}$) is trained using $\lbrace G_i^{(t)} \rbrace_{i=1}^n$.
- This is not the usual way temporal GNNs are trained. It also requires the strong assumption that all $G_i$ are synchronized (in terms of the inference data and task) in snapshot times $t$.
1. The kernel (12) does not capture the temporal relationships between different time instances. I am not sure how this models or compares with typical temporal graph learning approaches that may use LSTM, RNN or transformers to capture temporal interactions.
1. In relation to the above points, why do you not consider the kernel that takes the whole temporal graph sequence $G$ and $G'$ as inputs? I can understand that this is more sophisticated and complex but seems like a more fruitful approach.
1. Table 1: I fail to see why Temp-G3NTK is computationally cheaper than GraphMixer. (Also, you have not defined $K$ for GraphMixer.) For graphs with large $|V|$, Temp-G3NTK is more expensive according to your table.
1. GraphMixer is not included in Figure 1 because it is "designed to directly process continuous-time dynamic graphs, which require observing the entire length of the input temporal graph instead of processing snapshots". I don't think this claim is accurate. You can easily implement GraphMixer by restricting it to the snapshots.
Minor:
1. Theorem 5.1: should be $G_i$ has $T$ timestamps.
1. Can you explain the meaning of the model name? What is "Graphon-Guaranteed"? It also does not sound grammatically correct.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: I did not find a discussion on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! We greatly appreciate your enjoy on our paper's insights, theoretical analysis, and experiments. We address your questions in the form of Q&A below.
Moreover, for the length limit, we distillate our answer and provide brief but important answers, more detailed intermediate steps, reasonings, and derivations can be provided during the upcoming author-reviewer discussion phase. All the detailed reply and supplementary experiments will be added to our updated paper.
> **Q1. How do you deal with the node features in Equation (1)?**
The reason we omit node features in Equation (1) is that prevalent benchmarks only support time-aware edge features.
But if node features are given, our framework still works. Theoretically, Equation (1) can be extended as follows: $h_v(t) = c \cdot \sum_{(u, \bar{t})}[\mathbf{t}\_{enc}(t - \bar{t}) || \mathbf{e}\_{uv}(t) || \mathbf{x}_{u}(\bar{t})]$, where $\mathbf{x}_u(\bar{t})$ is the node features of $u$ at time $\bar{t}$.
Adding node features will not affect our conclusions and framework, as the construction of our kernel depends on $h_v(t)$ as a general term, instead of its components, such as node / edge features. Moreover, Theorem 5.1 and Theorem 5.2 still hold if node features are given.
> **Q2. How weights W in Equation (6) are initialized**
For Equation (6), the weights $\mathbf{W}$ are initialized i.i.d from the normal distribution $\mathcal{N}(0, 1)$.
> **Q3. To obtain (6) and (13), aren't you actually training a different GNN for each time $t$?**
We want to clarify that our method does not involve training GNNs.
The kernel function in Equation 6 is just a starter and illustrated by a temporal graph neural network for easy interpretation, because the temporal graph neural network exists earlier than the proposed temporal graph neural tangent kernel (i.e., ours).
The derivation of our kernel function is given in Sec. 4, especially in Equation (12), which just relies on the covariance matrix computation through Equation (11), to Equation (10), to Equation (9), to Equation (8), and finally to Equation (7).
A typo is found during the rebuttal. For the last term in Eq (11), $l-1$ should be $l$.
We will strengthen the above derivation in the updated paper.
> **Q4. The kernel (12) does not capture the temporal relationships between different time instances. I am not sure how this models or compares with typical temporal graph learning approaches ...**
Temporal dependencies are captured in Equation (1), where we construct the temporal node representations for a node $v$ at time $t$ by aggregating information (node, edge features, and time difference) from its previous temporal neighbors $(u, \bar{t}), \bar{t} < t$. Intuitively, this models the current state of $v$'s neighborhood at time $t$ based on its own neighborhood at the previous time $\bar{t}$. Since we would want a vector representation for $v$, we perform vector summation over all information of temporal neighbors $(u, \bar{t})$.
It is true that there is an established line of works in temporal graph learning using recurrent architecture and attention to learn temporal dependencies like TGN and TGAT, but there are works employing simpler architectures, such as GraphMixer, which aggregates information from recent temporal neighbors and processes them with MLP-Mixer layers.
> **Q5. Why do you not consider the kernel that takes the whole temporal graph sequence and as inputs? I can understand that this is more sophisticated ...**
If we understand your question correctly, your concern is about Equation (13) that current time prediction only depends on other samples at the current time.
We admit that taking the historical behavior as you suggested is a promising approach, and actually this is what we have done in the paper. Instead of taking each past snapshot individually, recall our Equation (1), temporal dependencies are captured, where we construct the temporal node representations for a node $v$ at time $t$ by aggregating information (node, edge features, and time difference) from its previous temporal neighbors $(u, \bar{t}), \bar{t} < t$. Hence, the current time representation is the cumulation of the past. In this angle, the sequence of the temporal graph is observed by our method.
> **Q6. Table 1: I fail to see why Temp-G3NTK is computationally cheaper than GraphMixer. (Also, you have not defined $K$ for GraphMixer.) ...**
First, $K$ for GraphMixer is the padding length, i.e., the maximum number of recent interactions a node can sample.
The theoretical complexity compassion is not that straightforward by only looking at the power to number of nodes. As we stated in the paper, GraphMixer needs the complex neural computation like gradient descent and back propagation, but our method does not. That’s why we claim our method is faster.
To in-depth verify our claim, we added empirical running time comparison of all baselines across all datasets. The results are reported as Table 1 in the 1-page pdf in “general rebuttal”. According to the result, our method is roughly 10x-20x faster than GraphMixer, and also faster than most baselines.
We will include these new results in our paper to in-depth verify my hypothesis.
> **Q7. GraphMixer is not included in Figure 1 because it is "designed to directly process continuous-time dynamic graphs, which require observing the entire length of the input temporal graph instead of processing snapshots". I don't think this claim is accurate. You can easily implement GraphMixer by restricting it to snapshots.**
Technically, GraphMixer can process discrete time dynamic graphs, by only considering part of the past snapshots as the continuous dynamic graphs, although it was originally designed for continuous graphs.
Since we can use snapshot timestamp as edge timestamp for each edge in this snapshot, then we are now running GraphMixer and will add its plot during the author-reviewer discussion phase.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. Can you address the weakness on "It is unclear how this approach compares with traditional temporal graph learning methods that capture temporal relationships."? E.g., ROLAND, DGNN, SSGNN, EvolveGCN, etc.
---
Reply to Comment 1.1.1:
Title: [Comparison with Four Traditional Temporal Learning Works] [Part I]
Comment: Thanks for your follow-up question! We are very glad to answer this question.
Below, we first give a brief answer that highlights the important differences between our method and the methods you recommended.
Second, we then give each one-by-one detailed illustration (in chronological order) for each of your recommended works to support our first point.
Third, we base our analysis on a recent survey’s taxonomy and point out where our method stands.
Answering this question helps expand the scope of our paper's literature review, and we will add all the answers to our paper.
---
### **1. Brief but important difference**
**1.1 Neural representation learning vs. Neural tangent kernel**
The traditional temporal graph learning methods you recommended, like DGNN, EvolveGCN, ROLAND, and SSGNN, rely on neural network training (e.g., stacking neural layers, gradient descent, and backpropagation) to obtain neural representations to support corresponding graph downstream tasks.
Our temporal graph neural tangent kernel method does not rely on neural network structure but can achieve the expressive power of graph neural networks, as our Theorems and experiments demonstrate.
**1.2. Complexity and neural architecture**
This point is the extension of the last point.
To be specific, DGNN, EvolveGCN, ROLAND, and SSGNN belong to the category of recurrent graph neural architectures that handle temporal information (e.g., on the learnable weight level like EvolveGCN or hidden representation level like ROLAND). This is indeed an effective direction, but it requires heavy time complexity.
Facing this problem, an emerging direction appears, i.e., MLP-Mixer on Static Graphs [1] or GraphMixer on temporal graphs [2]. Especially, GraphMixer aggregates information from recent temporal neighbors and processes them with MLP-Mixer layers. Motivated by this direction, we propose our temporal graph neural tangent kernel. Also, we would like to note that, even without recurrent neural architectures, temporal information can also be preserved in our method.
To be more specific, in our method, temporal dependencies are captured in Equation (1), where we construct the temporal node representations for a node $v$ at time $t$ by aggregating information (e.g., node features, edge features, and time difference) from its previous temporal neighbors $(u, \bar{t})$. And the entire process does not involve neural training but just depends on mathematical time kernel functions. In other words, this process records the current state of $v$ based on its own neighborhood at the previous time $\bar{t}$ and can be retrieved for future state computation. Besides theoretical derivation, especially, Figure 1 visualizes our method’s effectiveness.
To support our above statement, we list the detailed illustration below.
---
### **2. Detailed illustration of each recommended work in chronological order**
**2.1. DGNN: SIGIR 2020**
- Task:
* Targeting on link prediction (Ous is mainly for temporal graph classification and can be easily adapted to temporal node classification)
- Details in Methodology:
* There are two types of nodes: interacting nodes and influenced nodes
* If two nodes are involved in a (directed) interaction (u, v, t), then “u, v” are interacting nodes and nodes that are nearby this interaction are referred to as “influenced nodes”.
* If two nodes $u$ and $v$ interact at a certain time, then DGNN updates their temporal representation. First, process each of $u$ and $v$ separately by employing **recurrent architecture** to their previous temporal representations and finally combine with time encoding the difference between the current time and the last interaction time of that node. Then, merge the two representations and obtain two new representations for $u$ and $v$.
* If two nodes interact, nearby nodes (“influenced nodes”) would be affected. DGNN also updates “influenced nodes”, i.e., applying **recurrent architecture** on their previous representation, combining with two representations from interacting nodes, and the time encoding of difference between current time and last interacting time.
**2.2. EvolveGCN: AAAI 2020**
- Task:
* Mainly on link prediction and node classification
- Details in Methodology:
* Operates on graph snapshots. Use **recurrent architecture** (e.g., LSTM) to update the weight of each neural layer across time.
* At the time $t$, get node representation by applying the current snapshot’s adjacency matrix, learnable weights, and representation from the previous recurrent layer.
---
Rebuttal 2:
Title: Required additional experiments for adding GraphMixer plot to Figure 1 of the paper
Comment: We are happy to report that the additional running experiments required for GraphMixer with the rest 7 baselines are now finished.
Due to the rebuttal policy, we can not add an external link to direct you to check the figure. However, we describe the overall pattern and provide the important data points in table format as follows. The new figure and corresponding analysis are promised to be added to our paper.
- After involving GraphMixer, the general pattern in Figure 1 is not changed, i.e., our method still outperforms baselines, and the gap is obvious.
- For providing specific data points, we extract the important comparison, i.e., GraphMixer vs. Ours, and made the tabular representation as below. According to the table, we can see our method outperformed GraphMixer in most cases.
---
_Table. For involving GraphMixer in Figure 1 of the paper, the temporal graph classification accuracy on different stages of temporal graphs._
| Dataset | Method | 1 / 5 | 2 / 5 | 3 / 5 | 4 / 5 | 5 / 5 |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Infectious | Temp-G $^3$ NTK (Ours) | **0.640** | **0.620** | **0.650** | **0.740** | **0.590** |
| | GraphMixer | 0.525 | 0.545 | 0.480 | 0.495 | 0.500 |
| Facebook | Temp-G $^3$ NTK (Ours) | **0.600** | **0.700** | 0.564 | **0.620** | 0.530 |
| | GraphMixer | 0.539 | 0.546 | **0.580** | 0.564 | **0.561** |
---
Rebuttal 3:
Comment: Thanks for your appreciation, and we are glad to answer your follow-up question!
> **Performance of complex neural architectures and simple neural architectures**
According to the most recent temporal graph learning work, e.g., MLP-Mixer [1] and GraphMixer [2], there is a trend to replace the complex neural architecture in the representation learning process, but using simplified neural architecture to maintain or even boost the performance.
According to GraphMixer [2], learning positional encoding plus simple MLPs can achieve very competitive performance. For example, in the link prediction task on Reddit datasets, GraphMixer achieved 99.93% precision and outperforms the 99.30% of JODIE [3] (using recurrent architecture) and the 99.80% of TGN [4] (also using recurrent architecture).
Motivated by this direction, we designed our temporal graph neural tangent kernel. On the one hand, we even simplified the complexity of SOTA temporal graph neural networks like GraphMixer [2], TGN [4] (using recurrent arch), and DyRep [5] (using recurrent arch), and outperformed no matter
- in the theoretical complexity (as shown in Table 1 of the paper);
- or in the empirical complexity (as shown in Table 1 of the 1-page PDF file in “general rebuttal”);
- or in the theoretical effectiveness (as shown in Theorem 5.1 and entire Appendix B for proof);
- or in the empirical effectiveness (as shown in Table 2 of the paper for graph-level tasks and in Table 3 of the paper for node-level tasks).
Moreover, besides our two existing baselines that already used recurrent architectures (TGN [4] and DyRep [5]), we also added your recommended work to our comparison, and the result is shown below.
Table. Temporal Graph Classification Accuracy on Infectious Dataset
| Method| Accuracy |
| :--- | :---: |
| WL-Subtree | 0.600 $\pm$ 0.044 |
| Shortest Path | 0.670 $\pm$ 0.075 |
| Random Walk | 0.670 $\pm$ 0.073 |
| Graph2Vec | 0.565 $\pm$ 0.081 |
| NetLSD | 0.625 $\pm$ 0.061 |
| GL2Vec | 0.545 $\pm$ 0.051 |
| GraphMixer | 0.500 $\pm$ 0.000 |
| TGN | 0.520 $\pm$ 0.019 |
| _EvovleGCN_ | _0.521 $\pm$ 0.093_ |
| TEMP-G$^{3}$NTK (OURS) | **0.740 $\pm$0.058** |
From the above table, we can see that EvolveGCN performs similarly with another recurrent architecture baseline TGN, which aligns with the analysis [1,2] that complex neural architectures can be replaced by simple architectures in the temporal graph learning community. Since it is replaceable, we next expand the possible trade-off.
> **Possible Trade-off**
Here, we would like to share the trade-off of this evolution trend in the temporal graph learning community. To simplify the neural architecture but most importantly maintain (or even boost) the performance, it requires the researchers to devote time and efforts to propose new paradigms, just like what we did in the paper to replace the neural computing with exact mathematical expressions and ensure the theoretical and empirical effectiveness.
On the other hand, our method still triggers the future direction for the temporal graph learning community. For example, even though we already achieved theoretical and empirical effectiveness and efficiency, we discerned that the room still exists to do better for future research. For example, in our Equation (13), making the decision for the testing data, it usually needs a bunch of observed training data. Even though our computation is simple and not related to neural computation like gradient descent and back propagation, we still wish to decrease the number of training data to meet the specific case if the training data is inadequate.
Here, we would like to share a possible solution. We can involve the prototype learning in a non-neural way, such that a bunch of training data can be presented by a single prototype representation. Hence, the number items in Equation (13) can be decreased and the efficiency can be largely increased. This is an interesting topic, and we would like explore it as our future research direction.
---
Reference
[1] A Generalization of ViT/MLP-Mixer to Graphs, ICML 2023
[2] Do We Really Need Complicated Model Architectures For Temporal Networks, ICLR 2023
[3] Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks, KDD 2019
[4] Temporal graph networks for deep learning on dynamic graphs, arXiv 2020
[5] Dyrep: Learning representations over dynamic graphs, ICLR 2019
---
Rebuttal Comment 3.1:
Comment: Thanks for your new results, which have clarified my doubts. I am happy to increase the score.
---
Reply to Comment 3.1.1:
Comment: Thanks again for your appreciation! Answering your insightful questions helps us improve the quality of the paper. We will include the above answer in our paper. Thanks very much for your time and review! | Summary: This paper proposes a graph neural tangent kernel method that extends the advantages from static graphs to temporal graphs. Theoretical analyses are conducted to prove the transferability and robustness of the model. Experiments are performed on graph-level tasks as well as node-level tasks.
Strengths: 1: The motivation for the paper is strong. Temporal graphs are quite a hot topic these days, and transferring graph tangent kernel method to temporal graphs shows clear intuition.
2: The theoretical analysis regarding the generalization bound, convergence and time complexity are comprehensive.
3: Experiments show the competitive results, which demonstrate the potential and effectiveness of the proposed method.
Weaknesses: The related work section is a bit condensed, which may decrease the persuasiveness of the paper.
The explanation regarding the simplicity and interpretation ability of the proposed method is unclear. I wonder if the authors can further clarify.
Technical Quality: 3
Clarity: 3
Questions for Authors: This is good work. Is it possible to use more graph neural architecture-based methods in experiments for two-level tasks? Also the datasets selection can be more inclusive.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! We greatly appreciate your acknowledgment of our paper's motivation, theoretical analysis, and experiments. We address your questions in the form of Q&A below.
Moreover, for the length limit, we distillate our answer and provide brief but important answers, more detailed intermediate steps, reasonings, and derivations can be provided during the upcoming author-reviewer discussion phase. All the detailed reply and supplementary experiments will be added to our updated paper.
> **W1. Expanding the related work section**
On one hand, we expand the detailed introduction to Graphons, Graphon Neural Network, and Graphon Neural Tangent Kernel.
For example, following details will be added to Section C.1 of our Appendix.
```
Graphons. A graphon is a symmetric, bounded, and measurable function $W : [0, 1]^2 \rightarrow [0, 1]$ that acts as the limit object of dense graph sequences and defines a family of similar graphs. Specifically, given an undirected graph $\mathbf{F}$ and a graph sequence $\{G_n\}$, whose $i$-th graph in the sequence, $G_i$, has $i$ nodes, then $\{G_n\}$ converges to $W$ if $\lim_{n \rightarrow \infty} t(\mathbf{F}, G_n) = t(\mathbf{F}, W)$, where $t(\mathbf{F}, G_n)$ defines the density of homomorphisms between $\mathbf{F}$ and $G_n$, and $t(\mathbf{F}, W)$ can be defined similarly. In this way, $W$ can also be regarded as a generative models for a certain graph family.
Graphon Neural Networks (WNNs) [26]. Similar to how GNNs perform graph convolutions on a graph given the adjacency matrix, $A$, and node / edge features $H$, Graphon Neural Networks (WNNs) also perform graphon convolutions using $W$ and $X$, where $X: [0, 1] \rightarrow \mathbb{R}$ (corresponds to node-level) or $X: [0, 1]^2 \rightarrow \mathbb{R}$ (corresponds to edge-level) is graphon signals. Intuitively, we could consider WNNs as the counterpart of GNNs that operate on graphons. As proven in [26], as the graph's size of the graph sequence $\{(G_n, H_n)\}$ generated by $W, X$ grows, i.e $n \rightarrow \infty$, the output of GNNs operating on $\{(G_n, H_n)\}$ converges to the output of WNNs operating on $W, X$. More details can be found in Section C.1 in our Appendix.
```
On the other hand, we include more temporal graph representation learning methods discussion in our paper, like CAWN [1], and DySAT [2].
For example, following details will be added to Related Work of our paper.
```
* CAWN obtains the causal walks for each node, anonymize the node indices on each walk, further process these walks with recurrent architectures, and the final temporal node representation is determined by aggregating walks encodings.
* DySAT first transforms the given CTDG into discrete temporal graph snapshots, then applies Graph Attention Network (GAT) on each snapshots to obtain the spatial information, $\mathbf{X}(t)$, where $t$-th corresponds to the graph snapshot at time $t$, and denote the spatial information for node $u$ in the $t-$th graph snapshot as $[\mathbf{X}(t)]_u$. Then $u$'s temporal representation is obtained by applying Transformer on its spatial information at different timestamps: $h_u(t_k), h_u(t_{k - 1}), \dots, h_u(t_0) = \text{Transformer}([\mathbf{X}(t_k)]_u, [\mathbf{X}(t_{k - 1})]_u, \dots, [\mathbf{X}(t_0)]_u)$.
```
Finally, we would be more than willing to discuss any related work the reviewer recommend during the upcoming author-reviewer discussion phase.
Reference:
[1] Inductive Representation Learning in Temporal Networks via Causal Anonymous Walks, ICLR 2021
[2] DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks, WSDM 2020
> **W2. More explanation regrading the simplicity and interpretation of the proposed method**
For _simplicity_, as stated in the paper, our kernel method does not involve complex neural architecture or training algorithms. Moreover, all terms appearing in the method can be computed in an iterative manner as shown from Equation 7 to Equation 11.
Additionally, we added the empirical running time for comparing the efficiency of our method and 8 baselines across 4 datasets, as shown in Table 1 in the 1-page pdf in "general rebuttal". Overall, the empirical running time aligns with our theoretical analysis of time complexity in Table 1 of the paper. That is, our method belongs to the graph kernel category, where the node-wise comparison is usually inevitable, and our time complexity is lower; Compared to the neural baselines, since our method does not rely on complex neural training like backpropagation, our method is still efficient.
In brief, the _interpretation_ of our graph neural tangent kernel relates to the training dynamics of the temporal graph neural architecture that is defined in Section 3.1. Specifically, in the limit that the layer width of MLPs goes towards infinity, then the graph prediction using the architecture defined in Section 3.1 is equivalent to the prediction of using kernel regression, as stated in Equation 13.
> **Q1. This is good work. Is it possible to use more graph neural architecture-based methods in experiments for two-level tasks? Also the datasets selection can be more inclusive.**
As stated in Section 6.1, for graph-level datasets, we cover a variety of different domains from DBLP (citation graph), Facebook (social graph), Infectious (human-contact graph), and Tumblr (social graph). Moreover, in Appendix D.4, we also show how our methods can scale on larger datasets (Wiki, Reddit, MOOC, LastFM), in terms of number of nodes and edges.
Our baselines include static graph kernel methods, static GNN methods, temporal GNN methods, and ours stand for the temporal graph kernel method.
During the rebuttal, we also include all baselines into a new graph dataset from https://chrsmrrs.github.io/datasets/docs/datasets/, the performance is shown in Table 2 in 1-page pdf in “general rebuttal”. According to the new results, our method still achieves the best performance.
---
Rebuttal Comment 1.1:
Title: Response to authors' Rebuttal
Comment: I would like to thank the authors for the hard work addressing my concers and considering my opinion. I've increased my score. Good luck with the submission.
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your insightful review! We enjoy answering your questions and addressing your concerns! The above answer is promised to be added to our paper. Thanks again! | Summary: The proposed Temporal Graph Neural Tangent Kernel with Graphon-Guaranteed or Temp-G3
NTK method is a novel temporal graph learning method with provable error bounds. It extends the the simplicity and interpretation ability of Graph Neural Tangent Kernel to the temporal graph setting leading to rigorous error bounds for the temporal graph classification task. Furthermore, in the case of growing networks, the proposed method will converge in the limit to the graphon NTK value. Temp-G3NTK method achieves strong performance on both temporal graph classification and the node property prediction task.
Strengths: - **novel method with theoretical bounds**: the proposed method is one of the first methods in temporal graph learning with provable error bounds. This is novel and opens up research for more future methods with provable bounds.
- **strong performance**. The Temp-G3NTK method shows strong performance on the temporal graph classification task and competitive performance for the node property prediction task from TGB. Additional ablation studies are also provided in the appendix.
- **clear presentation**: overall the paper is clearly presented and also with discussion on the time complexity of the method.
Weaknesses: The main concern I have is the **time complexity**. The $O(n^2L|V|^2 + n|E|)$ complexity is quite high thus limiting the scalability of the method to networks with millions of nodes. The node pair wise covariance matrix would be expensive to compute.
Additionally, there is no limitation and societal impact discussion in the paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: - might be good to add plots or table showing the compute time for each dataset.
- Is it possible to adapt the method to continuous time dynamic graphs (CTDGs)?
- How would the model convergence be if the network both grows and shrinks over time. This is quite common in real world temporal graphs.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Missing discussion on limitations for current work as well as discussions on societal impact. Note that it is possible to lead to negative societal impact when temporal graph learning methods are deployed in applications such as anomaly detection, brain activity classification and more.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! We greatly appreciate your acknowledgment of our paper's novelty, presentation, and model performance. We address your questions in the form of Q&A below.
Moreover, for the length limit, we distillate our answer and provide brief but important answers, more detailed intermediate steps, reasonings, and derivations can be provided during the upcoming author-reviewer discussion phase. All the detailed reply and supplementary experiments will be added to our updated paper.
> **Q1. Might be good to add plots or tables showing the compute time for each dataset**
We have added the empirical running time of our method and 8 other baselines on 4 benchmark datasets. _The table is plotted as Table 1 in the 1-page pdf file in the “general rebuttal”_.
Overall, the empirical running time aligns with our theoretical analysis of time complexity in Table 1 in the paper. That is, our method belongs to the graph kernel category, where the node-wise comparison is usually inevitable, and our time complexity is lower. Compared to the neural baselines, since our method does not rely on complex neural training like backpropagation, our method is still efficient.
Given our method achieved the best classification accuracy, as shown in Table 2 in the paper, according to the corresponding running time reported in Table 1 of the uploaded 1-page pdf during the rebuttal, our method is (1) more than 10x - 20x faster then complex temporal graph neural network methods like GraphMixer [6] and TGN [24]; (2) similarly efficient as simple kernel methods like WL-Subtree [27] and Shortest Path [4] and embedding methods like NetLSD [30] and GL2Vec [5]; and only Graph2Vec [21] is running faster than our method, but our performance is roughly 1.4x better.
> **Q2. Is it possible to adapt our method to continuous time dynamic graphs (CTDGs)?**
The answer is yes.
In the paper, we choose discrete time just for a better illustration of the proposed method. The transformation between discrete time and continuous time is possible, as described in lines 63-65 of the paper.
In other words, given 2 CTDGs $G, G'$, suppose that we already computed the Temp-G$^{3}$NTK value between $G, G'$ at time $t$. If new links occurring at time $t', t' > t$ are added into $G, G'$, then we update the neighborhood of each node in $G, G'$, and re-compute the Temp-G$^{3}$NTK value for $(G, G')$ at time $t'$.
An extreme case needs more caution, i.e., _spare_ continuous time dynamic graph. For example, only one edge is updated per timestamp. This scenario will make our method run slower, because more snapshots are emerging. But multiple edges updated per timestamps can be adapted seamlessly. Moreover, one edge update per timestamp is very rare no matter in the real-world setting.
> **Q3. How would the model convergence be if the network both grows and shrinks over time. This is quite common in real world temporal graphs.**
In brief, our method handles the case if some graph shrinks at the intermediate timestamp, and our theoretical findings hold as long as the life-long pattern is growing.
In other words, for a graph that has some growing stages and some shrinking stages in its life, if its number of nodes follow an overall growing trend, then Theorem 5.2 holds.
To be more specific, let the number of nodes of a temporal graph snapshot at time $t$, $G^{(t)}$, be $n(t)$, then Theorem 5.2 holds if $n(t) \rightarrow \infty$ as $t \rightarrow \infty$. Intuitively, the term $||K_W(W^{(t)}, W'^{(t)}) - K_W(W, W')||$ could be decomposed and bounded by terms that depend on $||W^{(t)} - W||, ||W'^{(t)} - W'||$ and $||X^{(t)} - X||, ||X'^{(t)} - X'||$ (where $X$ is $W$'s signal function and $X^{(t)}, W^{(t)}$ denote the induced graphon and induced graphon corresponds to temporal snapshot $G^{(t)}$). Then, $||W^{(t)} - W||, ||X^{(t)} - X||$ are bounded by terms that depend on the number of nodes $1 / n(t)$, so $||W^{(t)} - W||, ||X^{(t)} - X|| \rightarrow 0$ as $n(t) \rightarrow \infty$.
Moreover, according to the real-world temporal graph dataset benchmark like (https://snap.stanford.edu/data/index.html), especially social graphs and citation graphs, a growing pattern is usually more common.
> **Q4. Possible negative societal impact**
In general, graph deep learning often involves analyzing complex networks, which can include sensitive personal information. If not managed properly, this can lead to privacy violations, especially when dealing with social networks, financial systems, or medical data. Also, if the data used to train graph models contains biases, these biases can be propagated and even amplified by the algorithms.
To the best of the authors’ knowledge, we do not discern specific negative societal impacts caused by this proposed method. However, we will add the above general discussions regarding the risks of graph deep learning in our updated paper.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: I thank the authors for their detailed discussion and addressing my concerns. I will retain my score and believe this work will be of interest to the wider temporal graph learning community.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank you for your helpful review and your appreciation of our work. We are glad that our answer addressed your concern. The above answer is promised to be added to our paper. Thanks! | Summary: This paper introduces the graph tangent kernel for temporal graph, thus enabling the learning task on time-evolving graph structures. The authors generalized the concepts of graph tangent kernel to incorporate the time domain information. They derived the generalization bound for the corresponding kernel predictor, and that their kernel approximates the graphon neural tangent kernel.
Strengths: The authors generalized the concepts of graph tangent kernel to incorporate the time domain information. They derived the generalization bound for the corresponding kernel predictor, and that their kernel approximates the graphon neural tangent kernel.
Weaknesses: See the questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Formula (1) defines the node feature. In this definition, the time instance $t$ is encoded by the function $t_{\text {enc }}$. From the construction it seems that $t_{\text {enc }}$ aims to make the time difference between $t-\bar{t}$ as a vector. However, $t-\bar{t}$ is already simple enough as a scalar. Can you explain the motivation of using $t_{\text {enc }}$ instead of $t-\bar{t}$ directly? What does $\cos (\Delta t \mathbf{w})$ indicate?
2. As the node feature is defined by formula (1), the authors suppose that there are no node features given in the beginning (from the dataset), and only edge features are accessible. Does the whole framework still work well if the node features are given by the dataset? In other words, does the conclusions in this paper rely on the specific construction method of the node features (i.e., formula (1))?
3. Formula (12) provides the definition of the proposed "kernel", but the authors did not prove that this "kernel" satisfies the usual definition of kernel, i.e., it is symmetric and semi-definite.
4. Theorem 5.1 and 5.2 assume the case that we indeed use the designed kernel and formula (13) to learn the task. However, there should be theoretical proofs that the proposed method (1) and (2) indeed converge to the Gaussian process or kernel method, like in [12].
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! We greatly appreciate your acknowledgment of our paper's theoretical analysis. We address your questions in the form of Q&A below.
Moreover, for the length limit, we distillate our answer and provide brief but important answers, more detailed intermediate steps, reasonings, and derivations can be provided during the upcoming author-reviewer discussion phase. All the detailed reply and supplementary experiments will be added to our updated paper.
> **Q1. Explain using $\mathbf{t}_{enc}$ instead of $t - \bar{t}$ directly? What does $\cos(\Delta t \mathbf{w})$ indicate?**
Theoretically, mapping a discrete scalar to a continuous high-dimensional vector space has two unique advantages. First, the embedding vector can capture more complex information like non-linear relationships. Moreover, the embedding does not hurt the original scalar representation, i.e., close time scalars still have similar embedding vectors, and the embedding vectors can also capture periodic behaviors [1]. Second, the embedding vector can jointly work with deep learning models, i.e., embedding vectors for time can be potentially used within any neural architecture’s representation learning process, which is incapable of using discrete time scalars.
In our case, $\mathbf{t}\_{enc}(\Delta t) = \cos(\Delta t \mathbf{w})$ maps a scalar $\Delta t$ to an $d_t-$dimensional vector, whose $i-$th entry is $\cos(\Delta t \cdot \alpha^{-(i-1) / \beta})$. Given two real positive numbers $p, q > 0$ such that $p > q$. As $i \rightarrow d\_t$, the $i-$th entries of $\mathbf{t}\_{enc}(p), \mathbf{t}\_{enc}(q)$ go towards $0$. Since $p < q$, it is likely that $\mathbf{t}\_{enc}(p)$ contains more $0$ entries than $\mathbf{t}\_{enc}(q)$.
Moreover, there are many choices for time encoding. However, most functions require trainable parameters that add extra computational burden. But our $\mathbf{t}\_{enc}(.)$ relies on mathematical properties and only needs fixed parameters ($\alpha, \beta$), which is suitable for kernel-based methods.
Empirically, as shown in Table 4 of our paper, using relative difference $t - \bar{t}$ (3rd line of Table 4) directly yields a lower accuracy than using $\mathbf{t}\_{enc}(t - \bar{t})$, justifying the effectiveness of $\mathbf{t}\_{enc}(.)$.
Reference: [1] Time2Vec: Learning a Vector Representation of Time
> **Q2. Does the whole framework still work well if the node features are given by the dataset?**
The reason we omit node features in formula (1) is that prevalent benchmarks only support time-aware edge features.
But if node features are given, our framework still works. Theoretically, formula (1) can be extended as follows: $h_v(t) = c \cdot \sum_{(u, \bar{t})}[\mathbf{t}\_{enc}(t - \bar{t}) || \mathbf{e}\_{uv}(t) || \mathbf{x}\_{u}(\bar{t})]$, where $\mathbf{x}\_{u}(\bar{t})$ is the node features of $u$ at time $\bar{t}$.
Adding node features will not affect our conclusions and framework, as the construction of our kernel depends on $h_v(t)$ as a general term, instead of its components, such as node / edge features. Moreover, Theorem 5.1 and Theorem 5.2 still hold if node features are given.
> **Q3. Proof of symmetric and semi-definite properties.**
We have proved the symmetric and semi-definite properties of our method. Due to the rebuttal length limit, the detailed proof will be uploaded during the author-reviewer discussion phase via the “official comment” channel.
> **Q4. However, there should be theoretical proof that the proposed method (1) and (2) indeed converge to the Gaussian process or kernel method, like in [12].**
In short, our proposed method naturally enjoys the convergence in [12].
```
First, we provide some preliminary.
```
Let $f_{nn}(\mathbf{x}, \theta)$ be the output of a fully connected neural network, with parameters $\theta \in \mathbb{R}^{p}$ and $\mathbf{x} \in \mathbb{R}^d$ as the input.
Then the Neural Tangent Kernel (NTK) corresponds to $f$, and any arbitrary input $\mathbf{x}, \mathbf{x'}$ would be
$\nabla_{\theta}f_{nn}(\mathbf{x}, \theta)^ \top \nabla_{\theta}f_{nn}(\mathbf{x}', \theta)$
Suppose that $f_{nn}$ has $L$ layers, the $l-$th layer's width is denoted as $d_l$. Theorem 1 from [12] states that in the limit of $d_1, \dots, d_L \rightarrow \infty$, the NTK converges in probability to a deterministic limiting kernel.
```
Next, we analyze these terms in our paper's context.
```
If we apply Theorem 1 from [12], then the NTK (defined in [12]) associating with a deep neural network consisting of $L$ layers applying on temporal node representation $v, v'$ converges, with high probability, to the deterministic limiting kernel $\boldsymbol{\Theta}^{(L)}(G^{(t)}, G^{(t)})\_{vv'} \otimes \text{Id}\_{dL}$, (where $\text{Id}\_{d_L}$ is the identity matrix with dimension $d_L$ and $d_L$ is the width of the $L$-layer MLP), if $d_1, \dots, d_L \rightarrow \infty$, and $\boldsymbol{\Theta}^{(L)}$ is denoted as a scalar kernel. To be clear, the kernel value between two graphs in Equation (13) is defined based on the summation of limiting scalar kernel $\boldsymbol{\Theta}^{(L)}$ taken over all node pairs $v, v'$.
Regarding Gaussian process, based on Proposition 1 from [12], the output of the deep neural network consisting of $L$ layers operating on temporal node representations defined in Section 3.1 converges to Gaussian process of covariance $\boldsymbol{\Sigma}^{(L)}$, with $d_1, \dots, d_{L - 1} \rightarrow \infty$ (as defined in Equation 9).
```
To better sum up for understanding, our method is constructed based on limiting objects of the NTK convergence theories.
```
We will add the above derivation to our updated paper.
---
Rebuttal 2:
Title: Additional proof asked for Symmetric Property
Comment: **[Symmetric proof]**
---
We need to prove $K(G^{(t)}, G'^{(t)}) = K(G'^{(t)}, G^{(t)})$.
Given our proposed kernel function, $K(G^{(t)}, G'^{(t)}) = \sum\_{v \in V(t)} \sum\_{v' \in V'(t)} \boldsymbol{\Theta}^{(L)}(G^{(t)}, G'^{(t)})\_{vv'}$, we first write down another equation, where the internal order is flipped, i.e., $K(G'^{(t)}, G^{(t)}) = \sum\_{v' \in V'(t)} \sum\_{v \in V(t)} \boldsymbol{\Theta}^{(L)}(G'^{(t)}, G^{(t)})\_{v'v}$
---
We first prove that $\boldsymbol{\Theta}^{(l)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(l)}(G'^{(t)}, G^{(t)})\_{v'v}, \forall l, 1 \leq l \leq L.$
For $l = 1$, we have
$\boldsymbol{\Theta}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \dot{\boldsymbol{\Sigma}}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} + \boldsymbol{\Sigma}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'}$
= $(h\_v(t)^\top h\_{v'}(t)) \cdot \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\sqrt{1 - \boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})^2\_{vv'}}}{2\pi}$
= $(h_v(t)^\top h_{v'}(t)) \cdot \frac{\pi - \arccos(h_v(t)^\top h_{v'}(t))}{2\pi} + \frac{\pi - \arccos(h_v(t)^\top h_{v'}(t))}{2\pi} + \frac{\sqrt{1 - (h_v(t)^\top h_{v'}(t))^2}}{2\pi}$
= $(h_{v'}(t)^\top h_{v}(t)) \cdot \frac{\pi - \arccos(h\_{v'}(t)^\top h\_{v}(t))}{2\pi} + \frac{\pi - \arccos(h\_{v'}(t)^\top h\_{v}(t))}{2\pi} + \frac{\sqrt{1 - (h\_{v'}(t)^\top h\_{v}(t))^2}}{2\pi}$
= $(h_{v'}(t)^\top h\_{v}(t)) \cdot \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G'^{(t)}, G^{(t)})_{v'v})}{2\pi} + \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G'^{(t)}, G^{(t)})\_{v'v})}{2\pi} + \frac{\sqrt{1 - \boldsymbol{\Sigma}^{(0)}(G'^{(t)}, G^{(t)})^2\_{v'v}}}{2\pi}$
= $\boldsymbol{\Theta}^{(0)}(G'^{(t)}, G^{(t)})\_{v'v} \cdot \dot{\boldsymbol{\Sigma}}^{(1)}(G'^{(t)}, G^{(t)})\_{v'v} + \boldsymbol{\Sigma}^{(1)}(G'^{(t)}, G^{(t)})\_{v'v}$
= $\boldsymbol{\Theta}^{(1)}(G'^{(t)}, G^{(t)})_{v'v}$
Then, suppose $\exists k, 1 \leq k \leq L$, we can have $\boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(k)}(G'^{(t)}, G^{(t)})\_{v'v}$
Therefore,
$\boldsymbol{\Theta}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \dot{\boldsymbol{\Sigma}}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} + \boldsymbol{\Sigma}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'}$
= $\boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\sqrt{1 - \boldsymbol{\Sigma}^{(k)}(G^{(t)}, G'^{(t)})^2\_{vv'}}}{2\pi}$
= $\boldsymbol{\Theta}^{(k)}(G'^{(t)}, G^{(t)})\_{v'v} \cdot \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(k)}(G'^{(t)}, G^{(t)})\_{v'v})}{2\pi} + \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(k)}(G'^{(t)}, G^{(t)})\_{v'v})}{2\pi} + \frac{\sqrt{1 - \boldsymbol{\Sigma}^{(k)}(G'^{(t)}, G^{(t)})^2\_{v'v}}}{2\pi}$
= $\boldsymbol{\Theta}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'}$
Hence, if $\boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(k)}(G'^{(t)}, G^{(t)})\_{v'v}$, then $\boldsymbol{\Theta}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(k + 1)}(G'^{(t)}, G^{(t)})\_{v'v}$.
Moreover, we have proven that $\boldsymbol{\Theta}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(1)}(G'^{(t)}, G^{(t)})\_{v'v}$.
Thus, by induction, we have $\boldsymbol{\Theta}^{(l)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(l)}(G'^{(t)}, G^{(t)})\_{v'v}, \forall l, 1 \leq l \leq L.$
---
Finally, we have $K(G(t), G'(t)) = \sum_{v \in V(t)} \sum_{v' \in V'(t)} \boldsymbol{\Theta}^{(L)}(G^{(t)}, G'^{(t)})\_{vv'} = \sum_{v' \in V'(t)} \sum_{v \in V(t)} \boldsymbol{\Theta}^{(L)}(G'^{(t)}, G^{(t)})\_{v'v} = K(G'(t), G(t))$
The proof is completed.
---
Rebuttal 3:
Title: Additional proof asked for Semi-Definite Property
Comment: **[Semi-definite proof]**
---
We need to prove $K(G^{(t)}, G^{(t)}) \geq 0$
Given our proposed kernel function $K(G^{(t)}, G'^{(t)}) = \sum_{v \in V(t)} \sum_{v' \in V(t)} \boldsymbol{\Theta}^{(L)}(G^{(t)}, G'^{(t)})_{vv'}$, we first prove that
$\boldsymbol{\Theta}^{(L)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0, \forall l, 1 \leq l \leq L$.
---
For $l = 1$,
$\boldsymbol{\Theta}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \dot{\boldsymbol{\Sigma}}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} + \boldsymbol{\Sigma}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'}$
= $(h_v(t)^\top h_{v'}(t)) \cdot \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\pi - \arccos(\boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'})}{2\pi} + \frac{\sqrt{1 - \boldsymbol{\Sigma}^{(0)}(G^{(t)}, G'^{(t)})^2\_{vv'}}}{2\pi}$
= $(h_v(t)^\top h_{v'}(t)) \cdot \frac{\pi - \arccos(h_v(t)^\top h_{v'}(t))}{2\pi} + \frac{\pi - \arccos(h_v(t)^\top h_{v'}(t))}{2\pi} + \frac{\sqrt{1 - (h_v(t)^\top h_{v'}(t))^2}}{2\pi}$
= $x \cdot \frac{\pi - \arccos(x)}{2\pi} + \frac{\pi - \arccos(x)}{2\pi} + \frac{\sqrt{1 - x^2}}{2\pi} ~(\text{Let } x = (h\_v(t)^\top h\_{v'}(t)))$
As visualized in Figure 1 in the uploaded 1-page PDF file in "general rebuttal", we know that $x \cdot \frac{\pi - \arccos(x)}{2\pi} + \frac{\pi - \arccos(x)}{2\pi} + \frac{\sqrt{1 - x^2}}{2\pi} \geq 0, \forall x \in [-1, 1]$
As $x$ is the input of $\arccos(.)$ function, so it must be constrained to interval $[-1, 1]$. Also, we can achieve the constraint $x \in [-1, 1]$ by normalizing $h_v(t), h_v'(t)$.
Therefore,
$\boldsymbol{\Theta}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(0)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \dot{\boldsymbol{\Sigma}}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} + \boldsymbol{\Sigma}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0, \forall v, v'$
---
Suppose $\exists k, 1 \leq k \leq L$, we can have $\boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})_{vv'} \geq 0$
Then, we can derive the three following equations that
$\boldsymbol{\Theta}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} = \boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} \cdot \dot{\boldsymbol{\Sigma}}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} + \boldsymbol{\Sigma}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'}$,
$\dot{\boldsymbol{\Sigma}}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} = \mathbb{E}_{(a, b) \sim \mathcal{N}(0, \boldsymbol{\Lambda}^{(k+1)}(G^{(t)}, G'^{(t)})\_{vv'})} [\dot{\sigma}(a) \cdot \dot{\sigma}(b)] \geq 0$, as $\dot{\sigma(x)} \geq 0, \forall x$,
and $\boldsymbol{\Sigma}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} = \mathbb{E}_{(a, b) \sim \mathcal{N}(0, \boldsymbol{\Lambda}^{(k+1)}(G^{(t)}, G'^{(t)})\_{vv'})} [\sigma(a) \cdot \sigma(b)] \geq 0$, as $\sigma(x) \geq 0, \forall x$.
Moreover, as $\boldsymbol{\Theta}^{(k)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0$, we now have $\boldsymbol{\Theta}^{(k + 1)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0$.
We also have $\boldsymbol{\Theta}^{(1)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0$, so by induction, $\boldsymbol{\Theta}^{(l)}(G^{(t)}, G'^{(t)})\_{vv'} \geq 0, \forall l \leq 1 \leq L$.
---
Finally, we have $K(G^{(t)}, G'^{(t)}) = \sum_{v \in V(t)} \sum_{v' \in V(t)} \boldsymbol{\Theta}^{(L)}(G^{(t)}, G'^{(t)})_{vv'} \geq 0$
The proof is completed. | Rebuttal 1:
Rebuttal: First of all, we want to sincerely thank the time and review of all reviewers and chairs, and we are very glad to learn the reviewers' appreciation for this paper.
Also, the reviewers' raised questions are very actionable and helpful to improve our paper. We have uploaded the answers individually to each reviewer.
Here, we upload a 1-page PDF file that contains Figures and Tables, which are required by reviewers for additional theoretical analysis and experiments.
Thanks again!
Pdf: /pdf/810ca54bcf3f0cd0b283a372e9ad69f7b4d40bd6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
You Only Cache Once: Decoder-Decoder Architectures for Language Models | Accept (oral) | Summary: This paper proposes YOCO, a hybrid model that combines gated linear attention with standard attention (SA). The model stacks efficient self attention (ESA) in the first $L/2$ layers, succeeded by another $L/2$ cross-attention layers.
Notably, the output of the last ESA is shared across subsequent CA layers, thereby achieving significant parameter reduction and enabling exceptional key-value (KV) cache compression, critical for optimizing inference.
Two ESA variants are evaluated: sliding window attention and a novel gated retention method, which incorporates data-driven head-wise decay over retention.
Upon scaling YOCO to a 3-billion-parameter model trained on a corpus of 1 trillion tokens, the authors report superior performance relative to Llama-like architectures in language modeling tasks.
They also conduct some analysis on long-seq evals and observe near-perfect performance on needle-in-haystack tests and other benchmarks like Qasper.
Strengths: 1. YOCO's hybrid structure delivers remarkable results in needle-in-haystack scenarios and demonstrates robust performance on retrieval-centric tasks, marking a pioneering achievement.
2. The proposed data-dependent gated-retention brings great improvement against retention.
3. By facilitating substantial KV cache compression relative to standard attention, YOCO exhibits superior retrieval capabilities compared to existing linear attention models. I'm very glad to see the results of YOCO scaling to larger sizes.
Weaknesses: I see no obvious disadvantages of this paper; however, the manuscript would benefit from:
1) The authors should add more comparions with exisiting linear-time / hybrid models trained on trillions of tokens, e.g., RWKV6 and TransNormer, whose checkpoints are publicly available.
2) Despite concurrent works, I suggest the authors to add discussions with Samba [1] and Mamba2 [2] in their next version.
[1] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
[2] Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Some notations are confusing: 1) Regarding Eq.7, the usage of $\beta_{iB}$ suggests an accumulation effect from preceding chunks, which may mislead readers. Additionally, the notations of $\beta_{[i]}(j,k)$ appears unused. If I understand correctly, $x_{[i]}$ is a 2-d tensor while $\beta_{[i]}$ is a scalar, it could be better to use another notation to distinguish the two. 2) Eq. 8 should be $\mathrm{head}_1,\dots,\mathrm{head}_n=\dots$
2. I'm curious if the authors have tried other linear attention variants instead of gRet, e.g., Mamba, and GLA.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your positive comments.
>Q1: add results of exisiting linear-time / hybrid models trained on trillions of tokens, e.g., RWKV6 and TransNormer
A1: We focus on the evaluation using the same training data for fair comparisons. So we use OpenLLaMA and StableLM in Table 3. We also compare with other linear-time / hybrid architectures (e.g., Mamba, and hybrid H3) in Table 5 with the same training data and settings. We can include RWKV6 and TransNormer numbers for reference results as suggested.
---
>Q2: Despite concurrent works, I suggest the authors to add discussions with Samba and Mamba2 in their next version.
A2: Samba and Mamba2 were released in arXiv after the NeurIPS submission. These two methods are complementary to this work, which is promising to use them as self-decoder in YOCO. Specifically, the ablation setting `Interleaved & Hybrid` in Table 6 is similar to the hybrid design of Samba, and both Mamba2 and gRetNet follow the similar design principles. We can include the discussions in the camera-ready version.
---
>Q3: suggestions about notations
A3: Thanks for the suggestion. We will optimize the notations in Eq. (7) and (8).
---
>Q4: I'm curious if the authors have tried other linear attention variants instead of gRet, e.g., Mamba, and GLA.
A4: We conducted experiments with sliding-window attention and gated retention in the work. The other linear attention variants are supposed to behave similarly and follow the same trend.
---
Rebuttal Comment 1.1:
Comment: Thanks for replying to my question, I have no other questions and keep my score. | Summary: The authors propose a new architecture for language models, where the top half of the transformer layers uses the KV from the bottom layer, while the bottom half applies efficient self-attention. The proposed architecture effectively reduces the KV cache size while maintaining the performance of the model, especially for long-context scenarios. Experiments also show that the method could scale up to 13B parameters.
Strengths: 1. The proposed architecture is simple and effective, which could be easily integrated into existing transformer model implementations.
2. The experiments are comprehensive and convincing. The authors prove the effectiveness of the method on a 3B model and up to 1M context.
Weaknesses: 1. As opposed to the first strength, the paper does not introduce new techniques or insights, thus limited in novelty. The authors also did not give possible explanations for the effectiveness of the proposed architecture.
2. The paper is lack of sufficient argumentation surrounding the design decisions. Though section 4.5 and 4.6 provide some preliminary analysis, further discussions are required to make the paper more convincing. For example, how the efficient self-attention and decoder-decoder structure affect the model's performance respectively.
3. The paper reports that the model outperforms the baseline transformers, but it remains unclear what contributes to the performance improvement. The main experiment is a partial comparison of the 1T token checkpoint instead of the fully trained model, so it is possible that the model is just easy to optimize (under large learning rates) but not necessarily converge to a better point. Also, the YOCO model has a different hyperparameter setting from the baseline model, with larger intermediate size (the scaling curve), which may also contribute to the performance improvement.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to **Weaknesses**
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to **Weaknesses**
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: insights and explanations for the effectiveness of the proposed architecture
A1: The key insights are summarized as follows. First, KV cache can be shared across layers without significantly affecting language modeling performance. Most previous work focuses on compressing KV cache along with the sequence dimension, while YOCO improves the cache issues from another perspective, i.e., the layers. Second, the hybrid design is competitive. After the NeurIPS submission, there were several concurrent works indicate this insight, such as Samba, and character.ai's architecture blog. Third, the early exit insight (as described in Figure 2 and Line 115) dramatically improves the prefill speed. All the above insights and advantages make YOCO novel and go beyond conventional KV compression methods, which improves deployment and user experience.
---
>Q2: How the efficient self-attention and decoder-decoder structure affect the model's performance respectively.
A2:
- The comparisons between decoder-decoder and decoder-only architectures are presented in Table 6, i.e., the settings `YOCO_[1:1]` and `Interleaved & Hybrid`, where the interleaved model is a decoder-only architecture with hybrid layers. The results show that the two layouts achieve similar performance.
- For different self-decoder choices, we conducted experiments with sliding-window attention and gated retention. Both representative design choices work well as shown in Figure 3 (i.e., model size scaling up experiments) and Table 5 (i.e., ablation studies).
- Different ratios between self-decoder and cross-decoder are also compared in Table 6.
- In order to comprehensively inspect how the proposed architecture affects performance, we conducted evaluation from diverse perspectives, including scale up training tokens (Section 4.1), scaling curves of the proposed architectures (Section 4.2), scale up the YOCO model to 1M context length and evaluate its long-sequence modeling capability (Section 4.3), compare with Transformer variants (Section 4.5), and ablation studies on various design choices (Section 4.6).
---
>Q3: what contributes to the performance improvement?
A3: The improvements mainly come from the hybrid-style architecture. Multiple recent works confirmed this point, such as Samba[1], and Jamba[2]. The trends are consistent across different learning rate schedules. Because YOCO saves the key and value projection matrices, for fair comparisons, we accordingly increase the FFN part in order to keep the overall parameter count similar across models. For example, as shown in the page 47 of the Llama 2 paper[3], this is a common practice for fair comparisons across design choices. Besides, instead of performance, we focus more on the improvements in terms of inference memory, prefill latency, and throughput.
[1] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling
[2] Jamba: A Hybrid Transformer-Mamba Language Model
[3] Llama 2: Open Foundation and Fine-Tuned Chat Models
We hope the above explanation clarifies the rationale behind our experiment designs. Thank you again for the valuable feedback.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for replying to my questions. Considering the impressive performance and hybrid architecture of this work, I would like to increase the final rating to 6. | Summary: The paper introduces YOCO, a decoder-decoder architecture designed for large language models. This architecture comprises a cross-decoder stacked upon a self-decoder, efficiently encoding global key-value caches reused by the cross-decoder. YOCO aims to reduce GPU memory demands and improve prefill latency and throughput while maintaining global attention capabilities. Experimental results demonstrate that YOCO achieves competitive performance compared to Transformer models, significantly reducing inference memory and prefill latency, and effectively extending context lengths up to 1M tokens with high retrieval accuracy.
Strengths: - YOCO's design, with its cross-decoder and self-decoder, offers a novel approach to caching key-value pairs, reducing GPU memory consumption.
- The architecture significantly reduces prefill latency and improves throughput, addressing critical bottlenecks in long-sequence language model inference.
- YOCO demonstrates effective scalability in model size and training tokens, maintaining competitive performance with other leading Transformer models.
- Extensive experiments validate YOCO's performance and efficiency gains, showing substantial improvements in memory usage and latency across various model sizes and context lengths.
Weaknesses: - Transformers with flash attention could also scale to 1m tokens (e.g. FlashDecoding, https://crfm.stanford.edu/2023/10/12/flashdecoding.html) any comparison/discussion? Additional complexity with the cross-decoder and self-decoder mechanisms may pose implementation challenges.
- While the architecture shows significant improvements in inference efficiency involving very long context lengths, it remains unclear how the fixed-size sliding window size affects the performance versus efficiency tradeoffs.
Technical Quality: 3
Clarity: 3
Questions for Authors: - The evaluation primarily focuses on memory and latency improvements. Does YOCO also bring training efficiency gains?
- Are YOCO models slower than models in Table 3? Since the context size is usually much smaller, but YOCO used fixed window size of 1024 while most task examples probably contain <1024 tokens.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper does not explicitly discuss any limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: The comparison and discussion with FlashDecoding.
A1: Flash-Decoding and kernel fusion have been used in comparison (described in L240, L121, L25), i.e., the Transformer results have been based on FlashDecoding. The contributions of YOCO and FlashDecoding are orthogonal. YOCO optimizes the pre-filling complexity and KV cache memory from the perspective of architecture design, while FlashDecoding optimizes the implementation. We can directly utilize FlashDecoding for cross-decoder without rewriting the kernel.
---
>Q2: The performance versus efficiency tradeoffs of Efficient Self-Attention (ESA)
A2: Table 3/4 and Figure 3/4 show that ESA does not harm the end-to-end performance under the YOCO architecture. We find that the window size from 1024 to 4096 of self-decoder (SWA) achieves similar end performance in our early experiments.
---
>Q3: The training efficiency in YOCO
A3: The training efficiency of YOCO and Transformer is comparable when the training length is small. When the training length becomes long, YOCO training is faster compared with Transformers because of the cost saving of self-decoder, with a speedup ratio between 1x and 2x.
---
>Q4: The efficiency comparison when the token length is very short
A4: Even for short sequences, there is still 2 times prefill speedup with YOCO. As described in Figure 2 and Line 115, we can still exit early before entering the cross-decoder during the prefill stage. The YOCO models are not slower than Transformers in Table 3.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the clarifications. I will keep my rating. | Summary: The paper introduces YOCO (You Only Cache Once), a novel decoder-decoder architecture for large language models. YOCO uses a self-decoder to generate global key-value (KV) caches, reused by a cross-decoder, reducing GPU memory usage and improving inference efficiency. The architecture achieves comparable performance to full transformers but with significantly lower memory demands. Extensive experiments show YOCO's effectiveness in scaling with more training tokens, larger model sizes, and longer context lengths, up to 1 million tokens. YOCO demonstrates substantial improvements in memory footprint, prefill latency, and throughput, making it a promising model for long-context understanding and multimodal applications.
Strengths: Overall, this is a high-quality paper.
Originality: The paper presents a novel architecture that achieves performance comparable to full transformers with only one layer storing global KV tokens.
Quality: The paper includes extensive experiments that robustly demonstrate the proposed model structure's ability to maintain excellent scaling performance while achieving good inference efficiency. The experiments are comprehensive and well support the claims made in the paper.
Clarity: The paper is well-motivated, clearly stating the problem it aims to solve. The overall model structure is also clearly explained. The experimental section is well-organized, effectively showcasing how the model scales up with more training tokens, larger model sizes, and longer context lengths. It was very enjoyable to read.
Significance: I believe this paper highlights the importance of achieving good scaling performance with only a single layer of global KV cache, including strong needle retrieval capabilities. This is a significant contribution, demonstrating the potential for efficiently handling long sequences with such models.
Weaknesses: - The paper should evaluate the in-context learning ability of the new architecture.
- I believe more ablation studies on the window size of the sliding-window attention are necessary. The paper could more thoroughly investigate several important model parameters.
- I think a significant future application for long context models is long video understanding. While this paper focuses on language modeling, it could benefit from including some discussion on extending the model to multimodal scenarios.
- There are a few typos in the paper. For example, in line 36, "early exit before entering the self-decoder" should be "cross-decoder" instead of "cross-encoder."
Technical Quality: 4
Clarity: 4
Questions for Authors: - In the ablation study, does Unstacked YOCO refer to the model without the self-decoder?
- Therefore, in the new model, will the number of layers and the number of attention heads per layer differ from the standard transformer design?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please refer to the weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review and insightful feedback.
>Q1: In the ablation study, does Unstacked YOCO refer to the model without the self-decoder?
A1: The input of Unstacked YOCO's cross-decoder is the output of **embedding layer**. In comparison, the input of YOCO's cross-decoder is the output of **self-decoder**. The model without the self-decoder is `YOCO_[0,1]` in Table 6, where the whole model is stacked with cross-decoder and the shared KV cache is word embedding.
---
>Q2: The model hyper-parameters such as the number of layers and the number of attention heads.
A2: We keep most of them the same as the standard Transformer to ensure fair evaluation, where both the model size and depth are comparable.
---
>Q3: a significant future application for long context models is long video understanding
A3: We consider multimodal scenario as one of the most important future directions. Thanks for your suggestions.
---
>Q4: Ablation studies on the window size of SWA.
A4: We find that the window size from 1024 to 4096 achieves similar end performance in our early experiments. Since a larger window size affects inference latency, we keep the default as 1024.
---
>Q5: a few typos
A5: We will fix them in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for addressing most of my concerns, I will keep my score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GREATS: Online Selection of High-Quality Data for LLM Training in Every Iteration | Accept (spotlight) | Summary: This paper proposes a novel online batch selection algorithm called GREedy Approximation Taylor Selection (GREATS) for training large language models (LLMs). The algorithm aims to improve training convergence speed and generalization performance by selecting informative and diverse examples for model updates. GREATS uses a principled formulation of the online batch selection problem and leverages greedy algorithms and Taylor expansions to approximate the utility of data points. This paper presents extensive experiments on various LLM tasks to demonstrate the effectiveness of GREATS in improving training performance.
Strengths: 1) This paper presents a novel online batch selection algorithm that addresses the limitations of existing methods and significantly improves training convergence speed and generalization performance.
2) The algorithm is based on a principled formulation and uses innovative techniques such as greedy algorithms and Taylor expansions to approximate the utility of data points.
3) This paper provides comprehensive evaluations on various LLM tasks, demonstrating the robustness and versatility of GREATS.
Weaknesses: 1) The authors only consider MaxLoss and GradNorm as baselines, they do not compare GREATS with other state-of-the-art online batch selection methods.
2) No confidence intervals in any of the tables, which makes it hard to gauge the significance of the accuracy gains.
3) The authors only report accuracy on MMLU and TYDIQA test sets, I cannot find the results of other datasets.
Technical Quality: 3
Clarity: 3
Questions for Authors: Why are uniform selection and selection-via-Proxy [*1] not included as baselines?
[*1] Selection via proxy: Efficient data selection for deep learning, ICLR 2020.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback!
**Q. [Additional Baseline Comparison.]** *"Why are uniform selection and selection-via-Proxy not included as baselines?"*
**A.** We thank the reviewer for bringing up potentially additional baselines. If we understand correctly, the “uniform selection” baseline corresponds to the “Regular” curve in our current experiments, where the training batches are sampled uniformly at random. For “Selection-via-proxy”, our understanding is that it is a general paradigm for accelerating offline data selection. We'd like to clarify that **offline data selection techniques are not competitors to online batch selection methods** like GREATS. Instead, these approaches are complementary and can be used in conjunction in the training pipeline. Offline methods can provide an initial high-quality dataset, while online methods can further optimize the training process by selecting the most informative batches during runtime. This potential synergy between offline and online data selection is an interesting direction for future research.
As suggested by the reviewers, we have expanded our comparisons to include reference model-based online batch selection techniques [1] and embedding similarity-based batch selection. Our results show that GREATS consistently outperforms these baselines across various hyperparameter settings, further validating its effectiveness. See **Q1 in global response** for a detailed discussion.
[1] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q. [Error bar for the experiment results?]**
**A.** We sincerely appreciate the reviewer's insightful suggestion. We have updated the experiment results for key benchmarks with error bars across 3 independent runs.
| | Regular | GradNorm | MaxLoss | GREATS (Ours) |
|---------------------|----------------|----------------|----------------|----------------|
| MMLU - sociology | 62.7% (±0.9) | 61.0% (±0.5) | 63.9% (±0.8) | **66.1% (±0.8)** |
| MMLU - us_foreign_policy | 60.1% (±0.8) | 62.2% (±1.4) | 60.2% (±1.2) | **65.8% (±2.2)** |
| TYDIQA | 54.3% (±0.3) | 53.4% (±0.18) | 54.7% (±0.13) | **55.0% (±0.3)** |
These updated results with error bars further solidify our findings. GREATS consistently outperforms all baselines across these key benchmarks, even when accounting for run-to-run variability. We've also added error bars to the loss curves for these experiments (see **Q1 in global response**) which show similar results. Due to the computational intensity nature of some experiments, we are in the process of extending this multiple-run analysis to the Alpaca-Samsum and OpenWebText (pretraining) settings. We're committed to providing a complete picture of GREATS' performance across all benchmarks in the revision.
**Q. [Accuracy results for other datasets?]**
**A.** We thank the reviewer for the valuable comment!
**OpenWebText Pretraining:** For our GPT2 pretraining on OpenWebText, we primarily focused on test loss as the most appropriate metric. This choice was made due to the nature of pretraining (where the model is not instruction-tuned) and our computational constraints. Test loss provides a direct measure of the model's ability to predict the next token, which is the core objective of language model pretraining.
**Alpaca-Samsum:** Since Samsum is a summarization task, we evaluated the model's performance using the ROUGE-1 score, which is a standard metric for summarization quality. Here are the additional experiment results on Samsum's test set:
| | Regular | GradNorm | MaxLoss | GREATS (Ours) |
|----------|----------|----------|----------|----------|
| ROUGE-1 | 35.78% (±0.1) | 35.91% (±0.03) | 35.65%(±0.24) | **36.74%(±0.46)** |
As we can see, GREATS outperforms all baselines on this metric, further demonstrating its effectiveness across diverse task types.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' global and individual responses and acknowledge their point about empirical results, I am happy to increase my rating. | Summary: The paper introduces a well-motivated online data selection method based on Taylor series expansions (with additional approximations) and apply it to training/fine-tuning LLMs. The paper's results show good performance improvements on fine-tuning tasks, but little gains on pre-training tasks.
Strengths: The paper is very well motivated, and tackles a problem which has clear significance to large scale model training.
The problem setup is very intuitive and straightforward: select the data points now which would maximize utility after one model update step.
The paper is also relatively well written. The method is clearly derived and explained -- most/each introduced approximation is accompanied by a clear interpretation. I most enjoyed reading section 3.2 -- splitting out the importance score of a sample and correcting it based on the scores of previously seen samples is very neat.
The paper includes several relevant experiments (most on fine-tuning) -- using a few relevant baselines. The results appear strong, in general, when compared to the selected baselines.
Weaknesses: I gave the paper a rating of 5 mostly because of the limited evaluation. Simple baselines are just missing from the paper, e.g.: (1) a classifier to detect similarity to validation data and (2) rho loss. I would be happy to increase my rating if these baselines are added to the paper.
To clarify, regarding rho loss, I do not find the argument in line 262-263 at all convincing. In a separate pass, you can just label the training data with per token losses from a pre-trained reference model. No extra training flops needed over the "Max Loss" baseline.
Note that influence functions [Koh & Liang] (in the context of LLMs too [Grosse et al.]) are not discussed anywhere, even though they are closely related with the method proposed here. The difference is that influence functions look at final performance and the paper here looks at 1-step performance. It would be great to see what happens when consider n-step unrolls, i.e., and use the influence functions formulation -- of course, this could be left specifically for future work.
The runtime complexity (i.e. number of flops) of the GREATS algorithm is missing (under every condition) and is not compared to that of regular training. Indeed runtime comparisons (i.e. in seconds) are useful, they are a bit incomplete.
A minor point is that the argument in lines 28-29 is tenuous at best ("Moreover, online batch selection operates on smaller batches of data, reducing the need for cumbersome data preprocessing and enabling more efficient use of computational resources compared to static data selection methods that process the entire dataset upfront".) It is much more convenient to pre-cache the data when training these models. At training time, one really just wants to use accelerators as efficiently as possible.
[Koh & Liang] Understanding Black-box Predictions via Influence Functions
[Grosse] Studying Large Language Model Generalization with Influence Functions
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is unclear when the Hessian is set to the identity in all GREATS experiments. I assume H=I always, as otherwise the runtime would blow up. Is this assumption correct?
- How did you choose the (up to 16) validation dataset points? Randomly?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have moderately addressed the limitations of their work. I've pointed out what else I was expecting to see in the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments!
**Q. [Additional Baseline Comparison.]**
**A.** We sincerely appreciate the reviewer's feedback and suggestions for additional baselines. We have taken these recommendations seriously and have expanded our comparisons accordingly. **(1) Reference model-based online batch selection (RHO Loss) [1]**: We have implemented and evaluated the RHO Loss method as suggested. **(2) Similarity to validation data:** Regarding the suggestion of *"a classifier to detect similarity to validation data,"* we note that our validation sets are very small, which may not be sufficient to train a robust classifier. Instead, we implemented a method that selects the training batch based on embedding similarity to the validation data, using Sentence-BERT embeddings. This approach serves as a proxy for the suggested classifier-based method while being applicable to our small validation set scenario.
Our expanded experiments show that GREATS consistently outperforms these additional baselines across various hyperparameter settings, further validating its effectiveness. We have included a detailed discussion of these results in **Q1 in global response**.
[1] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q. [Relationship to the influence function.]**
**A.** We appreciate the reviewer's insightful comment regarding the relationship between our proposed method and influence functions. Influence functions typically assume strong convexity of the loss landscape and convergence to a local optimum. In contrast, our method makes no such assumptions, making it more suitable for the highly non-convex landscapes encountered in large language model training. While influence functions focus on the impact of training examples on the final model performance, our method explicitly considers the immediate impact on the next training step. This allows us to capture the dynamic nature of the training process and adapt to the model's current state. We will incorporate this discussion into the related work section.
**Q. [Runtime complexity of GREATS?]**
**A.** We appreciate the valuable suggestion! Here’s the complexity analysis for GREATS: Assuming that the batch size is $B$ and the number of model parameters is $p$, the complexity of backpropagation is $O(Bp)$.
The additional cost of the "ghost" dot-product involves computing the pairwise dot-product between each data point’s activation (or output derivative), which are of dimension $O(\sqrt{p})$, and then taking the scalar product between the inner product results on activations and output derivatives. The step of computing the pairwise activation (or output derivative) dot-products has a runtime complexity of $O(B^2 \sqrt{p})$, resulting in a $B \times B$ matrix. Taking the element-wise product between the two $B \times B$ matrices has a runtime complexity of $O(B^2)$. Hence, the overall runtime complexity is $O(Bp) + O(B^2 \sqrt{p})$. In practice, since the batch size $B$ is significantly smaller than the number of parameters $p$, the additional runtime $O(B^2 \sqrt{p})$ is negligible compared to the backpropagation complexity $O(Bp)$. This analysis explains why our empirical runtime measurements (in seconds) show that GREATS performs comparably to regular training. We will incorporate this detailed complexity analysis into our paper to provide a more comprehensive understanding of the algorithm's efficiency. Thank you for highlighting this important aspect!
**Q.** “*A minor point is that the argument in lines 28-29 is tenuous at best ("Moreover, online batch selection operates on smaller batches of data, reducing the need for cumbersome data preprocessing and enabling more efficient use of computational resources compared to static data selection methods that process the entire dataset upfront".) It is much more convenient to pre-cache the data when training these models. At training time, one really just wants to use accelerators as efficiently as possible.*”
**A.** We appreciate the reviewer’s valuable comments. We agree with the reviewer and acknowledge that offline data selection provides the convenience for not modifying training pipeline. We will include this point and modify the Introduction accordingly. On the other hand, online batch selection offers unique advantages in terms of adaptability to the model's learning progress. This can be particularly valuable in scenarios where the relevance of data may change as training progresses. Furthermore, offline and online selection methods are not mutually exclusive and can be complementary. A hybrid approach, leveraging both pre-cached, pre-selected data and dynamic online selection, could potentially offer the best of both worlds.
**Q. [Hessian approximation used in the experiments?]**
**A.** In all of our experiments, we use identity matrix as the approximation to the Hessian matrix, and we found the performance is already good. Future work can further explore the use of more advanced techniques for Hessian approximation, such as K-FAC [1].
[1] Martens, James, and Roger Grosse. "Optimizing neural networks with kronecker-factored approximate curvature." ICML 2015.
**Q. [How to choose the validation data?]**
**A.** In the standard benchmarks such as MMLU, the validation data is being provided for each subject of size 5. For the benchmarks where the validation set size is large, we sample the validation set used in the experiments uniformly at random. We will incorporate this additional detail into our experiment setting section. | Summary: The paper presents an online adaptive subset selection framework GREATS which acts as a principled and efficient online batch selection. The authors further showcase their proposal's utility towards training for Large language models training thereby better performance during training
Strengths: - The paper for is well written, easy to follow.
- Contributions are well formulated and clearly stated.
- The paper's proposal is well-suited towards better efficient training in Large language model training as authors show extensive experiments in practical utility of their proposal towards efficient LLM training.
Weaknesses: Notation could be a bit better:
- For instance under definition of $\textbf{Utility Function}$, $U^{(t)}: \mathrm{R}^d \rightarrow \mathrm{R}^d$ denotes the utility function mapping the training set to an optimal batch.
- vector notations should be bold
Many adaptive subset selection techniques have been shown to be either submodular or weakly submodular. As in either case, submodular functions provide better functionality. It would have been interesting to see if the proposed utility function which is essentially the loss w.r.t validation data satisfies some form of submodularity. Some form of theoretical justification would be highly appreciated.
Experiment comparison wise:
- Regarding baselines, I am wondering why authors have not tried any baseline comparison against submodular subset selection techniques. Some examples: https://proceedings.mlr.press/v139/killamsetty21a. Subset selection techniques is often to show better in many cases.
I am willing to increase my rating if this questions can be addressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have clearly mentioned about their proposed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q. [Notation]**
**A.** We thank the reviewer for the suggestions on the notations, which we will incorporate into our revision. We note that the utility function $U^{(t)}: R^d \rightarrow R$ maps to the model performance change in $t$th step instead of the optimal batch.
**Q. [Is the proposed utility function close to submodular?]**
**A.**
We sincerely appreciate the reviewer's insightful question. We conducted additional experiments and in **Global response’s Figure 5 (c)**, we plot how the loss change in a single gradient update step, i.e., our utility function $U^{(t)}(S)$, against varying input batch sizes $|S|$. We observed that as batch sizes increase, the average loss change (computed across randomly sampled subsets of fixed sizes) indeed exhibits a "diminishing returns" trend. This behavior is consistent with the general properties of submodular functions. However, we also noted significant variance in the utilities, especially for smaller batch sizes. This high variance suggests that strict submodularity may not hold. The observed "diminishing returns" trend aligns with the intuition behind submodular functions and provides some justification for the effectiveness of our greedy selection approach. While our utility function may not be strictly submodular, we conjecture that it could potentially fall into the category of weakly submodular functions.
We will incorporate this analysis into our paper, which might open up interesting theoretical directions for future work.
**Q. [Discussion on submodular subset selection techniques]**
**A.** We sincerely appreciate the reviewer for bringing our attention to the relevant literature on submodular subset selection techniques.
To the best of our knowledge, gradient coreset-based online data selection algorithms, including those utilizing submodular optimization (like [1]), generally face scalability challenges when applied to the setting of foundation models. The primary limitation stems from their requirement to compute (or even store) per-sample gradient vectors, which becomes prohibitively expensive in terms of both computation (or memory) for large-scale models with millions or billions of parameters. For instance, if we understand correctly, to optimize the resulting error in [1], the computation of $Err$ requires computing per-sample gradient vectors. An interesting future avenue is to explore whether our "ghost inner product" technique can be further extended to address the computational problems in this line of work.
We will incorporate these references and discussion into our related work section in the revision.
**Additional baselines:** As suggested by other reviewers, we have expanded our comparisons to include reference model-based online batch selection techniques [2] and embedding similarity-based batch selection. Our results show that GREATS consistently outperforms these baselines across various hyperparameter settings, further validating its effectiveness. See **Q1 in global response** for a detailed discussion.
[1] Killamsetty, Krishnateja, et al. "Grad-match: Gradient matching based data subset selection for efficient deep model training." ICML 2021
[2] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer
Comment: I thank the authors for providing clarification to the questions I had. Considering the responses and doing another detailed pass at the paper, I am happy to increase my rating, | Summary: The paper proposes a new online batch selection algorithm, GREATS, which frames batch selection as an optimization problem: specifically, selecting a subset of the training data that maximally reduce the validation loss when used as part of training. It uses a Taylor approximation of the effects of training on a given data point to estimate the value of each data point for the purposes of training, including a second order Hessian term that partially accounts for potential interactions between the data points of the batch affecting their collective value as training data, then greedily selects a batch of training data to minimize validation loss. Further, the paper propose the 'ghost inner product' method of efficiently computing pairwise gradient inner products with only a single step of backpropegation.
The paper tests their method in a variety of finetuning settings on Llama-2-7B, Mistral-7B, and Llama-3-8B (trained on either LESS or Alpaca datasets and tested on MMLU, TydiQA, or SamSUM), as well as for pretraining a GPT2 base model from scratch on OpenWebText data. It uses some fairly simple baselines (the "regular" unbiased sampling of batches, sampling high loss data points, and sampling high gradient norm data points), and finds that GREATS consistently outperforms them. Finally, the authors address the questions of sensitivity to validation data set size and of runtime, finding that GREATS is pretty robust to using smaller validation sets and that their implementation of GREATS has little computational overhead.
Strengths: **Originality**:
The clear formulation of the objective for online batch selection, the Taylor approximation-based approach to efficiently optimize that objective, and the ghost inner product method all seem like potentially original and valuable contributions.
**Quality**:
The paper largely seems well put together and executed, though with some potential issues related to a lack of strong baselines.
**Clarity**:
The paper was generally clear, with some grammatical issues or weird phrasing. E.g.,
Line 13: "the extensive training times" --> "their extensive training times"
Line 59: "updating model" --> "updating the model"
Line 62: "one-step gradient" --> "one step of gradient" or "single-step gradient"
Line 87: ", and update" --> "and updates"
Line 88: "where" --> ", where"
and various other places.
However, the introduction of "utility functions" seems superfluous. It seems like the authors can just talk about the optimization objectives of various batch selection methods directly. E.g., "We search for training datapoints that minimize the validation loss" vs "We propose to set the utility function to be the validation loss, and then search for training data points that optimize this utility function".
**Significance**:
The significance of this contribution is hard to judge due to 1) potentially weak baselines, and 2) lack of engagement with more contemporary prior work.
However, it seems potentially high, due to the importance of the domain, and because the approach seems simple and intuitively well-motivated.
Weaknesses: I think the biggest weakness of this paper is the lack of stronger baselines to compare against. E.g., both of the following seem like stronger contenders than the max loss / max grad norm approaches:
- https://proceedings.neurips.cc/paper_files/paper/2023/hash/dcba6be91359358c2355cd920da3fcbd-Abstract-Conference.html
- https://proceedings.neurips.cc/paper_files/paper/2023/hash/6b9aa8f418bde2840d5f4ab7a02f663b-Abstract-Conference.html
If I'm considering using some online batch selection method from the literature, max loss / max grad norm don't really seem like they'd even be in the running. So comparing against them doesn't really help establish to be that GREATS would be the most appropriate choice.
Technical Quality: 3
Clarity: 2
Questions for Authors: Do the authors use SGD or Adam in their experiments? The limitations section mentions the ghost inner product technique doesn't work for Adam, but says "using SGD as a proxy for Adam has proved to be effective in our experiment." Does this mean the authors compute the ghost inner product for SGD, but then just use the results for Adam? (The learning rates seem to suggest so).
I have concerns about the optimality of the training setups. The paper does not appear to describe any sort of hyper parameter tuning for the finetuning or pretraining. How were these values selected? Additionally, given that SGD (or Adam?) with small batch sizes is a nonstandard choice for such experiments, how would the authors argue that their results can be expected to be relevant to a setting that better matches current SOTA (e.g., with Adam and large batch sizes)? Can we expect the method's benefits to hold for larger batch training, given that batch selection should become less impactful as batch size increases (e.g., going to zero once we reach full-batch training).
From what I understand, the authors always select batches using validation data drawn from the same distribution as the testing data. In realistic training settings, one usually does not know the exact distribution form which the downstream test data will be drawn. Thus, I wonder if the authors could address the question of what happens when the validation data used during online batch selection is partially "off distribution" for the actual test data?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Lacking an implementation of the ghost inner product method for Adam and similar state of the art optimizers may be a significant limitation of this method? It depends on whether the authors were forced to use SGD in their experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive comments!
**Q. [Relationship between online batch selection and offline data selection / data mixture optimization techniques [1, 2]? & Additional baseline comparison]**
**A.**
We appreciate the reviewer highlighting the relevant literature on offline data selection [1] and data mixture optimization [2]. We'd like to clarify that **offline data selection techniques are not direct competitors to online batch selection methods** like GREATS. Instead, these approaches are complementary and can be used in conjunction in the training pipeline. Offline methods can provide an initial high-quality dataset, while online methods can further optimize the training process by selecting the most informative batches during runtime. This potential synergy between offline and online data selection is an interesting direction for future research.
**Additional baselines:** As suggested by other reviewers, we have expanded our comparisons to include reference model-based online batch selection techniques [3] and embedding similarity-based batch selection. Our results show that GREATS **consistently outperforms** these baselines across various hyperparameter settings, further validating its effectiveness. See **Q1 in global response** for a detailed discussion.
[1] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." NeurIPS 2023.
[2] Xie, Sang Michael, et al. "Doremi: Optimizing data mixtures speeds up language model pretraining." NeurIPS 2024.
[3] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q. [Experiment with different hyperparameters]**
**A.** As suggested, we have performed experiments with different choices of learning rates and batch sizes. Our results show that GREATS **consistently performs** well across all the hyperparameter configurations we considered. This robustness suggests that GREATS can maintain its effectiveness without requiring extensive hyperparameter tuning. An ablation study on the number of validation points in Figure 3 (a)-(b) of the paper. This study demonstrates that GREATS can perform effectively even with a small number of validation points (2 to 5), which is a crucial aspect of its practical applicability.
**Q. [Potential distribution shift between validation and test distribution?]**
**A.**
We appreciate the reviewer's insightful question. **Subsampling validation data** is one solution here. This subsampling approach effectively reduces overfitting to the validation set and introduces some variability in the selection process, which can help mitigate the impact of potential distribution mismatches.
We acknowledge that the availability of a high-quality validation set is a limitation of GREATS. However, most state-of-the-art data selection (e.g., [1, 2]) and online batch selection techniques (e.g., [3]) rely on a high-quality validation set. This is a common approach in the field to guide the selection process toward data that aligns with the target domain or task. In finetuning tasks, practitioners typically have a small set of high-quality, task-specific data that can serve as a validation set. Our experiments demonstrate that **GREATS can perform well even with a very small validation set** (as few as 2-5 samples), which makes the requirement more practical to fulfill in many scenarios. We stress that while GREATS does require a validation set, it avoids other limitations of existing methods, such as the need for reference models, making it more practical for large-scale training scenarios.
[1] Xia, Mengzhou, et al. "LESS: Selecting Influential Data for Targeted Instruction Tuning." ICML 2024.
[2] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." NeurIPS 2023.
[3] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q [Do the experiments use Adam optimizer?]**
**A.** **In all of our experiments, we use AdamW as the optimizer for model training.** This aligns with common practices in training large language models. We acknowledge that our theory for GREATS is derived in terms of SGD. While the theory can be extended to Adam, the efficient "ghost" inner-product technique we developed is specifically tailored for SGD and is challenging to directly adapt to Adam due to its more complex update rule and maintained moments. Hence, we adopt a hybrid approach where we select the valuable batch based on the theory derived from SGD, but perform the actual training updates using AdamW. This approach allows us to benefit from the computational efficiency of our SGD-based selection method while still leveraging the optimization benefits of AdamW for training. Despite this theoretical mismatch, our experiments demonstrate that this hybrid approach performs well in practice. The consistent improvements over baselines across various models and tasks suggest that the SGD-based utility function provides a good proxy for data importance, even when Adam is used for training. We recognize the potential for further improvement by aligning the batch selection more closely with Adam-style updates.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for their detailed responses to the points raised.
The authors' additional experiments with their new baselines have addressed the primary limitation I saw with this work, and I have accordingly raised my soundness score from 2 to 3 and my overall rating from 6 to 7.
Although it is good to see that the proposed method is effective across a number of different hyper parameter settings, this does not address my question regarding the effectiveness of the method in the high batch size setting. From the figures provided, it looks like GREATS with batch size may be 4 a relatively larger improvement over the baselines than GREATS with a batch size of 32, and the rest of the baseline methods also become more similar to each other in the batch size 32 experiments (though this could also just be because the batch size 32 runs have far worse perplexity for some reason?). If GREATS does become less of a relative improvement as batch size increases, I strongly encourage the authors to explicitly discuss this fact in the paper (while of course highlighting that this should be an issue for any online batch selection algorithm).
The authors also suggested that they might subsample validation data to improve generalization to unknown downstream tasks. I agree it might, but there aren't really empirical results to support such a possibility. On the other hand, I acknowledge that ensuring generalization to unknown domains is challenging, so perhaps this is the best that can reasonably be done in such a timeframe. | Rebuttal 1:
Rebuttal: We thank all the reviewers for the positive assessments!
**Q1 [Additional baseline comparisons & hyperparameter choices]**
**A.** We appreciate the reviewers' suggestions for additional baseline comparisons and hyperparameter sensitivity analysis. In response, we have conducted extensive additional experiments in **Figure 1-4 in the attached pdf**.
**Additional Baseline Comparisons.**
- **Reference model-based online batch selection [1] (Reviewers 2jYA, SyBn)**: We implemented the RHO-Loss method from [1] as a representative baseline for reference model-based approaches. As suggested by Reviewer SyBn, we use a pretrained LLM as the reference model. Due to time and computational constraints, we used Llama-3.1-8B-Instruct as the reference model (with Llama2-7B as the target model), prioritizing training on data points with high RHO loss ($L(x; f_{tgt}) - L(x; f_{ref})$ where $f_{tgt}$ is the target model and $f_{ref}$ is the reference model). Surprisingly, the RHO-Loss method performed poorly, showing the worst performance across almost all settings. We hypothesize two potential reasons here: **(1) Noise and scale discrepancy:** The loss value from the current target model $L(x; f_{tgt})$ is highly noisy and may be of a different scale compared to the reference model. This discrepancy could result in $L(x; f_{ref})$ having minimal impact on the batch selection results. **(2) Reference model limitations:** Due to computational and time constraints, we used a relatively small pretrained model as the reference. This may not adequately reflect the "learnability" or "difficulty" of the training examples, which is crucial for effective data selection. These findings highlight a key advantage of GREATS: it offers a more efficient and flexible alternative that can perform well **without relying on reference models**, which may not be available in practice.
- **Similarity-based batch selection (Reviewer SyBn)**: We implemented a baseline that selects training batches based on their similarity to validation data using Sentence-BERT embeddings. This approach also did not outperform GREATS.
**Hyperparameter Sensitivity Analysis.** We repeated MMLU experiments with various hyperparameter settings, including different batch sizes and learning rates. Each experiment was run with 3 different random seeds, and we have included error bars in our plots to reflect the variability. Across these expanded comparisons and varied hyperparameter settings, GREATS **consistently outperformed** all baselines. This robust performance across different scenarios further validates the effectiveness and stability of our proposed method.
[1] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q2 [Do the experiments use Adam optimizer?] (for Reviewer 2jYA and vNzN)**
**A.** **In all of our experiments, we use AdamW as the optimizer for model training.** This aligns with common practices in training large language models. We acknowledge that our theory for GREATS is derived in terms of SGD. While the theory can be extended to Adam, the efficient "ghost" inner-product technique we developed is specifically tailored for SGD and is challenging to directly adapt to Adam due to its more complex update rule and maintained moments. Hence, we adopt a hybrid approach where we select the valuable batch based on the theory derived from SGD, but perform the actual training updates using AdamW. This approach allows us to benefit from the computational efficiency of our SGD-based selection method while still leveraging the optimization benefits of AdamW for training. Despite this theoretical mismatch, our experiments demonstrate that this hybrid approach performs well in practice. The consistent improvements over baselines across various models and tasks suggest that the SGD-based utility function provides a good proxy for data importance, even when Adam is used for training. We recognize the potential for further improvement by aligning the batch selection more closely with Adam-style updates.
Pdf: /pdf/615b8a489a52fcdc3d6dd02321c2918a5fdbce10.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes GREATS (GREedy Approximation Taylor Selection), a novel online batch selection algorithm for large language model (LLM) training. GREATS aims to improve training efficiency by dynamically selecting the most informative data points from each training batch, based on their potential to reduce validation loss. The authors formulate the batch selection problem as set function optimization and leverage Taylor expansions to efficiently approximate the impact of each data point on validation loss. To address the computational overhead of calculating per-sample gradients, they introduce a technique called "ghost inner-product." Experimental results demonstrate that GREATS consistently accelerates training convergence and improves generalization performance across various language modeling tasks.
Strengths: * GREATS is grounded in a principled set function optimization framework, aiming to directly optimize model performance on a held-out validation set. This contrasts with heuristic-based methods, providing a stronger theoretical foundation.
* Leveraging Taylor expansions to approximate the marginal gain of data points is a clever approach that significantly reduces computational complexity. This makes GREATS more practical for large-scale LLM training compared to methods requiring repeated model updates and validation evaluations.
* The proposed "ghost inner-product" technique for efficient gradient inner-product calculation is innovative and addresses a major computational bottleneck in online batch selection. This technique has the potential for broader applicability in machine learning beyond data selection.
Weaknesses: * As also pointed out by the authors, GREATS relies on a small set of clean validation data, which may not always be readily available in practical scenarios, especially for pretraining. Classifying a point as noisy and not of interest just because a similar point doesnt exist in validation might not be ideal, and I am not sure how GREATS could be handling this.
* Approximation Accuracy: While the use of Taylor expansions for approximation is efficient, the accuracy of these approximations can be affected by factors like learning rate and non-linearity of the model. Further analysis and discussion on the potential limitations of these approximations would strengthen the paper.
* Limited Comparison: The paper only compares GREATS against simple baseline methods (Regular, MaxLoss, GradNorm) and excludes comparison with reference-model-based methods. While these are computationally expensive, including such comparisons might be beneficial to the larger research community.
* The performance of GREATS is likely sensitive to various hyperparameters like the learning rate, batch size, and the number of validation data points. The paper lacks a systematic study of these hyperparameter sensitivities.
* As pointed out by the authors, the "ghost-inner product" technique works best in tandem with SGD, but not with Adam or other optimizers. Thus, it is unclear if these models used Adam while training, but just SGD for the utility function computation, or if the entire model was trained through SGD itself. I believe, Adam with weight decay has become a common optimizer for these LLMs and the findings if presented by training through SGD might not translate to the practical setting where Adam or AdamW is used. I believe further clarification might be required here.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please look at weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the nice words!
**Q [Requirement of validation set]**
**A.** We acknowledge that the availability of a high-quality validation set is a limitation of GREATS. However, most state-of-the-art data selection (e.g., [1, 2]) and online batch selection techniques (e.g., [3]) rely on a high-quality validation set. This is a common approach in the field to guide the selection process toward data that aligns with the target domain or task. In finetuning tasks, practitioners typically have a small set of high-quality, task-specific data that can serve as a validation set. **Validation data for pretraining:** We agree that for pretraining scenarios, obtaining a representative validation set can be more challenging. However, even in these cases, it's often possible to curate a small set of diverse, high-quality samples that help for data selection [4, 5, 6]. Our experiments demonstrate that **GREATS can perform well even with a very small validation set** (as few as 2-5 samples), which makes the requirement more practical to fulfill in many scenarios. Furthermore, GREATS selects points based on their potential to improve performance on the validation set instead of simply filtering data points that are similar to the validation data. It's worth noting that while GREATS does require a validation set, it avoids other limitations of existing methods, such as the need for reference models, making it more practical for large-scale training scenarios.
[1] Xia, Mengzhou, et al. "LESS: Selecting Influential Data for Targeted Instruction Tuning." ICML 2024
[2] Xie, Sang Michael, et al. "Data selection for language models via importance resampling." NeurIPS 2023
[3] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022
[4] Tirumala, Kushal, et al. "D4: Improving llm pretraining via document de-duplication and diversification." NeurIPS 2024
[5] Maini, Pratyush, et al. "Rephrasing the Web: A Recipe for Compute and Data-Efficient Language Modeling."
[6] FineWeb: decanting the web for the finest text data at scale.
**Q [Approximation accuracy of Taylor expansion]** *“While the use of Taylor expansions for approximation is efficient, the accuracy of these approximations can be affected by factors like learning rate and non-linearity of the model. Further analysis and discussion on the potential limitations of these approximations would strengthen the paper.”*
**A.** We thank the reviewer for the valuable suggestions. We conducted additional experiments and examined two key correlations in **Figure 5 in Global response**: **(a)** The correlation between the actual validation loss change in a single gradient update step and the loss change predicted by the sum of training gradients taking dot product with the validation point (i.e., the first term in Equation (4)). **(b)** The correlation between the actual validation loss change in a single gradient update step and the loss change predicted by both the gradient dot product sum and the Hessian interaction term, as shown in Equation (4). As we can see, if we do not incorporate the interaction term, the Pearson correlation coefficient is approximately 0.75. Figure (b) shows that the correlation coefficient improved by including the Hessian interaction term to approximately 0.84. These results demonstrate that our approximations capture a significant portion of the actual loss change, with the inclusion of the Hessian interaction term providing a notable improvement in accuracy.
**Q [Comparison with reference-model-based methods]**
**A.** We have conducted additional experiments using the RHO-Loss method from [1] as a representative baseline for reference-model-based approaches. Due to time and computational constraints, we used Llama-3.1-8B-Instruct as the reference model (with Llama2-7B as the target model), prioritizing training on data points with high RHO loss ($L(x; f_{tgt}) - L(x; f_{ref})$ where $f_{tgt}$ is the target model and $f_{ref}$ is the reference model). Surprisingly, the RHO-Loss method performed poorly, showing the worst performance across almost all settings. We hypothesize two main reasons here: **(1) Noise and scale discrepancy:** The loss value from the current target model $L(x; f_{tgt})$ is highly noisy and may be of a different scale compared to the reference model. This discrepancy could result in $L(x; f_{ref})$ having minimal impact on the batch selection results. **(2) Reference model limitations:** Due to computational and time constraints, we used a relatively small pretrained model as the reference. This may not adequately reflect the "learnability" or "difficulty" of the training examples, which is crucial for effective data selection. These findings highlight a key advantage of GREATS: it offers a more efficient and flexible alternative that can perform well without relying on reference models, which can be computationally expensive or may not be available in practice.
[1] Mindermann, Sören, et al. "Prioritized training on points that are learnable, worth learning, and not yet learnt." ICML 2022.
**Q [Experiment with additional baselines & different hyperparameters]**
**A.** As suggested, we have performed experiments with additional baselines and different choices of learning rates / batch sizes. The results are in **Global response Q1**. The results show that GREATS **consistently performs well** across all the hyperparameter configurations we considered. This robustness suggests that GREATS can maintain its effectiveness without requiring extensive hyperparameter tuning. An ablation study on the number of validation points in Figure 3 (a)-(b) of the paper. This study demonstrates that GREATS can perform effectively even with a small number of validation points (2 to 5), which is a crucial aspect of its practical applicability. | null | null | null | null | null | null |
G3: An Effective and Adaptive Framework for Worldwide Geolocalization Using Large Multi-Modality Models | Accept (poster) | Summary: In the paper, authors propose a novel framework, G3, for worldwide geolocalization of a given photograph anywhere on Earth. The authors address the challenges of capturing location-specific visual cues and handling variations in image data distribution across the globe. G3 utilizes a three-step process: Geo-alignment, which learns location-aware image representations, Geo-diversification, which employs multiple retrieval-augmented prompts for robust location prediction, and Geo-verification, which combines retrieved and generated location data for final prediction. The authors also introduce the MP16-Pro dataset to support location-aware visual representation learning. Experiments on the IM2GPS3k and YFCC4K datasets demonstrate the superiority of G3 over existing methods.
Strengths: * All the modules in the G3 framework: Geo Alignment, Geo Diversification and Geo Verification seem logical and rational. Three kinds of embedding coming from the vision encoder are used for retrieval. LLM is used to generate a set of plausible coordinates by providing positive and negative examples.
* The method achieves superior performance over several baselines at various levels of granularity on IM2GPS3k and YFCC4K.
* Overall, the method is interesting and novel, the writing and flow of the paper is meaningful.
Weaknesses: * The only limitation discussed is regarding the efficiency of inference. However, there is no mention of how much compute time and memory (in number) is required to geo-localize a given input image.
* There are no concrete qualitative example of failures reported in the paper. Can the system be fooled easily? For example, if an image from Italy contains a human with a flag of The Netherlands, is the system capable of correctly geolocalizing the image? How does the RAG system along with the LLM perform in such a case?
* Limited evaluation considering the state-of-the-art. No mention of recent works such as Pigeon [1] or a GeoReasoner [2].
* Why did the authors choose to use CLIP vision encoder for extracting image features? Recent works have shown that purely image-based pretrained models such as DINO-v2 are better feature extractors than CLIP. No ablation study done for the choice of the vision encoder.
* Overall, from discussion in **L316-L333** and Figure 4, it looks like the number of references provided to LLM highly depends and varies based on the image content. The performance is highly sensitive to this hyperparameter and a single value cannot guarantee optimal performance. This can make the framework highly unreliable for practical use cases.
[1] Haas, Lukas, Michal Skreta, Silas Alberti, and Chelsea Finn. "Pigeon: Predicting image geolocations." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12893-12902. 2024
[2] Li, Ling, Yu Ye, Bingchuan Jiang, and Wei Zeng. "GeoReasoner: Geo-localization with Reasoning in Street Views using a Large Vision-Language Model." In Forty-first International Conference on Machine Learning.
Technical Quality: 4
Clarity: 3
Questions for Authors: * There is only a marginal improvement in performance when including the geo-diversification step considering it is potentially the most expensive step during the inference.
* Currently, the text associated with each coordinate only includes the country and city labels. Will the performance of the framework improve by including fine-grained details such as region and/or street name?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations are included but failure cases are missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: No mention of how much compute time and memory (in number) is required to geo-localize a given input image.
**Response**:
Thanks for your comment. We gather the compute time and memory cost in Geo-diversification and Geo-verification, as Geo-alignment is not directly used in inference. We will add this information in our final version. The statistics are listed as follows:
|Phase|Time cost|Memory cost|
|:-:|:-:|:-:|
|Geo-diversification|LLaVA generating time: 4s/10 prompts; Loading Model: 10s|GPU Memory: 20.81GB; Memory: 6GB|
|Geo-verification|Evaluating time on IM2GPS3K: 56s|GPU Memory: 8.85GB; Memory: 10.37GB|
**W2**: Hard and failure cases analysis.
**Response**:
Thanks for your interesting question.
We identify an image similar to the situation you mentioned, depicting a man in Paris holding an American flag. **Detailed images can be found in the PDF file in global response**. Upon testing, G3 accurately predicts the location as Paris without being fooled.
Additionally, we use images from the failure analysis section of GeoReasoner concerning the Eiffel Tower for testing, which are also included in the PDF. We find that G3 can accurately identify the replica of the Eiffel Tower in the USA but fails to recognize the replica in China, which is better than GeoReasoner. This discrepancy may be due to the presence of more iconic buildings in the images of the USA replica, which aid in location determination, whereas the images of the Chinese replica lack clear geographic indicators, leading to incorrect predictions by the model. All these results and analyses will be included in our formal version.
**W3**: Limited evaluation considering the state-of-the-art. No mention of recent works such as Pigeon [1] or a GeoReasoner [2].
**Response**:
Thank you for your comment. We present a comparison of the performance of G3 and PIGEON on the commonly used datasets. There are two reasons we do not take GeoReasoner as a baseline:
1. GeoReasoner is specifically fine-tuned for country classification, which differs from our focus on worldwide geolocalization.
2. GeoReasoner conduct experiments on filtered IM2GPS3K dataset, which includes only highly locatable data. Therefore, the experimental results in the GeoReasoner paper are not comparable.
The results of PIGEON and G3 are shown below:
**IM2GPS3K**
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
|PIGEON|11.3|36.7|53.8|72.4|85.3|
|G3(GPT4V)|16.65|40.94|55.56|71.24|84.68|
**YFCC4K**
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
|PIGEON|10.4|23.7|40.6|62.2|77.7|
| G3(GPT4V)|23.99|35.89|46.98|64.26|78.15|
From the experimental results, we can find that G3 demonstrates superior performance across most metrics. Additionally, it is worth noting that PIGEON, during its training process, incorporates not only the MP16 dataset but also an additional 340k images from the Google Landmark v2 dataset, utilizing a larger training dataset compared to our work.
Thanks again for your advice, we will include these discussions in our final version.
**W4**: Why using CLIP Vision encoder.
**Response**:
Thanks for your questions. In this paper, we use CLIP's vision encoder to align our work with previous studies such as GeoCLIP and Img2Loc, ensuring the results are comparable.
**W5**: Overall, from discussion in L316-L333 and Figure 4, it looks like the number of references provided to LLM highly depends and varies based on the image content. The performance is highly sensitive to this hyperparameter and a single value cannot guarantee optimal performance. This can make the framework highly unreliable for practical use cases.
**Response**:
Thank you for your question.
As mentioned in lines L316-L333, the number of references provided to the LLM highly depends on and varies based on the image content. You are correct. This indicates that the heterogeneity in the spatial distribution of images can make LMM's predictions highly sensitive. To address this issue, we propose Geo-diversification. By combining RAG templates with different numbers of reference coordinates, we comprehensively consider all candidates to enhance the robustness of the model's predictions.
Figure 4 illustrates the trade-off between different levels of prediction accuracy. If the application scenario requires high accuracy for small-scale predictions, a smaller number of candidates for each RAG prompt should be chosen. Conversely, if the scenario requires high accuracy for large-scale predictions, a larger number of candidates for each RAG prompt is preferable. Overall, selecting 5 as the hyperparameter provides a balanced performance, achieving optimal results at the region, country, and continent levels, and near-optimal results at the street and city levels.
**Q1**: There is only a marginal improvement in performance when including the geo-diversification step considering it is potentially the most expensive step during the inference.
**Response**:
Thanks for your comment. Through the ablation study in Table 2, we observe that removing Geo-diversification results in a significant drop in performance. On the Im2GPS3K dataset, the average accuracy across five scales decreases by 2.14%, and on the YFCC4K dataset, the average accuracy across five metrics drops by 8.28%. These results demonstrate the necessity and effectiveness of Geo-diversification.
**Q2**: Including fine-grained details in Geo-alignment.
**Response**:
Thank you for your question. Exploring the impact of finer-grained text descriptions on performance requires rerunning the Geo-alignment, which takes approximately 70 hours of training time. We have not yet completed this experiment, but we will provide the results during the discussion phase next week.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for posting the clarifications and additional results which help strengthen the paper. I have updated my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your feedback and for updating the score. The experimental results regarding integrating more fine-grained geographical textual descriptions in the Geo-alignment will be provided tomorrow. Thanks for your patience.
---
Reply to Comment 1.1.2:
Comment: Thank you for your valuable reviews and patience. We complete experiments on incorporating more fine-grained textual descriptions in Geo-alignment. Specifically, in addition to including the city, county, and country information in the textual descriptions of coordinates, we also introduce neighborhood information, which is the most fine-grained data that can be obtained from Nominatim. We use G3-N to denote this variant and keep the other hyperparameters the same as G3. The experimental results on IM2GPS3K are presented below:
|Methods|Street 1km|City 25km|Region 200km|Country 750km|Continent 2500km|
|:-:|:-:|:-:|:-:|:-:|:-:|
|G3-N|16.44|40.64|54.35|70.57|83.98|
|G3|**16.65**|**40.94**|**55.56**|**71.24**|**84.68**|
From the results, we can see that G3 outperforms G3-N across all metrics. This may be because the text encoder's pre-training corpus contains very few instances of neighborhood-level information, resulting in weaker modeling capabilities for neighborhood names. Therefore, introducing neighborhood information into the textual descriptions of coordinates actually adds noise, which negatively impacts the effectiveness of Geo-alignment and subsequently reduces the model's prediction accuracy.
Once again, thank you for taking the time to review our paper. If you feel that our responses have adequately addressed your concerns, we kindly ask if you could consider raising the score. Thank you! If you have any further questions, please do not hesitate to let us know. | Summary: This paper introduces G3, a RAG framework for geo-localization. By introducing a three-step process, the G3 framework achieves superior performance against other SoTA methods. To improve the expressiveness of the image embeddings, the paper proposed a new dataset, MP16-Pro, which adds textual descriptions to the existing MP-16 dataset. By comprehensive experiments, the paper demonstrates the necessity and merits of the G3 framework.
Strengths: 1. The proposed method, G3, achieved competitive performance in geo-localization against existing classification-based, retrieval-based and RAG-based methods.
2. Compared to the original MP16 dataset, the proposed MP16-Pro dataset additionally provides textual geolocation-related data, which could be beneficial to the community if fully open-sourced.
3. Comprehensive experiments and explanations are provided to prove the effectiveness and necessity of each G3 component.
4. The authors have open-sourced their project in a clear and instructive manner, which is very positive for reproduction.
Weaknesses: 1. The necessity of the Geo-alignment module in the G3 framework has been indicated by experiments results in Table 2. However, the authors did not address the motivation for their particular design choice of the alignment module. Why do the image features have to align with both text features and gps features? Does aligning with simply one modality work just as well? The authors should clarify that by conducting the corresponding ablation study.
2. One of the main contributions claimed by the authors is the introduction of the MP16-Pro dataset. However, the description of the construction process for the MP16-Pro dataset is not sufficiently detailed.
3. The G3 framework was not the first work to incorporate RAG into geolocalization, nor was it the first work to use retrieval-based models. While the proposed method achieved superior performance, it lacks certain novelty to the field.
4. In the experiment setup in section 5(line 226), no retrieved coordinate is considered when evaluating G3 on IM2GPS3K, which is inconsistent to figure 2 of the paper. If no retrieved coordinate works better in certain cases, I wonder about the necessity and applicability of such design.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Since the proposed dataset, MP16, is relatively large, I wonder if the authors have run decomtamination procedures to ensure there is no overlapping data between training and evaluation.
2. The authors of Img2Loc provided experiment results with other LMMs (LLaVA), I wonder what is G3’s performance when switching the LMM to LLaVA compared to Img2Loc. I think by incorporating this result the authors can more robustly state their superiority over existing methods.
3. As shown in Figure 2, the text descriptions of the location in MP16-Pro are not used during inference, I wonder if adding text descriptions to the prompt would work better.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors of this paper have addressed the limitation of the G3 framework by pointing out its high computation cost. The introduction of alignment and diversification brings on more computation cost and latency compare to existing methods, which limits the retrieval and inference speed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The necessity of the Geo-alignment module in the G3 framework has been indicated by experiments results in Table 2. However, the authors did not address the motivation for their particular design choice of the alignment module. Why do the image features have to align with both text features and gps features? Does aligning with simply one modality work just as well? The authors should clarify that by conducting the corresponding ablation study.
**Response**:
Thank you for your suggestion.
Firstly, from an intuitive perspective, GPS and text data can model continuous and discrete geographic information. Specifically, two cities located on a national border belong to different countries and may have inconsistent driving directions. This is suitable for alignment using text information and image data. On the other hand, their terrain, landscape, and climate are generally similar, which can be expressed by aligning GPS data with image data.
Second, We also conduct experiments to clarify the necessity of aligning image features with both text features and gps features. Please refer to the Ablation study of Geo-alignment section in the global response. We will also add these results in our final version paper.
**W2**: One of the main contributions claimed by the authors is the introduction of the MP16-Pro dataset. However, the description of the construction process for the MP16-Pro dataset is not sufficiently detailed.
**Response**:
Thanks for your comment.
The original MP16 dataset provides image data along with the GPS information of the image, such as: image 4f/a0/3963216890.jpg, LAT: 47.217578, LON: 7.542092. The MP16-Pro dataset enhances MP16 by adding textual descriptions about the location for each sample.
The enhanced sample becomes: image 4f/a0/3963216890.jpg, LAT: 47.217578, LON: 7.542092, **neighbourhood: Wengistein, city: Solothurn, county: Amtei Solothurn-Lebern, state: Solothurn, region: NA, country: Switzerland, country_code: ch, continent: NA**.
It is also worth noting that we perform geographic reverse geocoding using Nominatim. This tool is error-free and is equivalent to directly inputting latitude and longitude coordinates into OpenStreetMap to obtain address information. We will add these details of MP16-Pro construction in our final version.
**W3**: The G3 framework was not the first work to incorporate RAG into geolocalization, nor was it the first work to use retrieval-based models. While the proposed method achieved superior performance, it lacks certain novelty to the field.
**Response**:
Thanks for the opportunity for us to clarify the motivation and contribution of G3. Our work builds upon img2loc, but img2loc and other existing approaches suffer from two serious issues: they may easily confuse distant images with similar visual contents, or cannot adapt to various locations worldwide with different amounts of relevant data. To address these issues, we propose G3 and achieve significant performance improvements. Additionally, to further advance the field, we introduce the MP16-Pro dataset, which supplements the original MP16 dataset with geographic text descriptions for each sample.
**W4**: Necessity of retrieved coordinate.
**Response**:
Thank you for your comment. The setting of retrieved coordinates is to ensure the lower bound of the pipeline's effectiveness, especially when the LMM model's capabilities are not strong enough. We conduct experiments on the LLaVA model on IM2GPS3K with the following variants:
* LLaVA S=0: LLaVA variant without retrieved coordinates.
* LLaVA S=1: LLaVA variant with one retrieve coordinate.
* LLaVA S=2: LLaVA variant with two retrieved coordinates.
|Model| Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
| LLaVA S=0|14.15|35.57|48.78|66.53|81.68|
| LLaVA S=1|14.41|35.64|48.88|66.40|81.55|
| LLaVA S=2|14.31|35.87|49.42|66.93|81.78|
From the experimental results, we can see the necessity of retrieved coordinates in the LLaVA experiments. As the number of retrieved coordinates increases from 0 to 2, all metrics generally show an improvement.
**Q1**: Since the proposed dataset, MP16, is relatively large, I wonder if the authors have run decomtamination procedures to ensure there is no overlapping data between training and evaluation.
**Response**:
Thank you for your insightful question. The dataset we use, MP16, is consistent with those used in prior works in this domain. This consistency ensures comparability and reproducibility of our results.
**Q2**: LMM Ablation Study.
**Response**:
Thanks for your advice. We conduct the experiments with LLaVA on IM2GPS3K, please refer to the Open-source LMM (LLaVA) experiments Section in the global response.
**Q3**: As shown in Figure 2, the text descriptions of the location in MP16-Pro are not used during inference, I wonder if adding text descriptions to the prompt would work better.
**Response**:
Thank you for your insightful suggestion. Based on your recommendation, we design the following experiments. The experimental variants and results are as follows:
* ZS: Zero-shot template, RAG template without any information from reference images.
* GPS: RAG template incorporating the GPS coordinates of reference images.
* GPS+Text: RAG template incorporating both the GPS information and textual descriptions of reference images.
|Variants|Street 1km|City 25km|Region 200km|Country 750km|Continent 2500km|
|:-:|:-:|:-:|:-:|:-:|:-:|
|ZS|12.41|35.87|50.88|67.60|80.75|
|GPS|16.65|40.94|55.56|71.24|84.68|
|GPS+Text|16.75|41.44|55.68|71.60|85.01|
The experimental results indicate that adding more various modality information from the reference images to the RAG template can effectively improve prediction accuracy. We will include these experimental results in our final version.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for providing clarifications and additional experimental results, which have addressed some of my concerns. I will raise my score to 6.
---
Reply to Comment 1.1.1:
Comment: We appreciate your constructive feedback and valuable reviews. Thank you for your time and effort in reviewing our work. | Summary: This paper proposes three steps, i.e., geo-alignment, geo-diversification, and geo-verification to optimize both retrieval and generation phases of word-wide geo-localization.
Strengths: 1. The motivation is clearly stated.
2. The experimental results show the effectiveness of the proposed method.
3. The proposed method achieves state-of-the-art performance.
4. Code is publicly available.
Weaknesses: The experiments are not insufficient.
The model seems too large, so the author should provide the number of parameters and gflops experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the purpose of image vectorization? Please provide a detailed explanation by the author.
2.Figure 6 is difficult to understand, what does the author want to express? What does it mean that the number of references has a significant impact on the model?
3. The model seems too large, the author should provide the number of parameters and gflops experiments.
4. The author proposed a new dataset MP16 Pro, but it seems that the results have not been published on this dataset.
5. The author obtain textual descriptions by geographical reverse encoding during the Database Construction. Are the descriptions generated for longitudes and latitudes with similar geographical locations consistent? Is the text description useful for images taken at similar locations? The author can add descriptive text experiments to the ablation experiment to prove that the text is helpful for feature representation.
6. The experiment of paper lacks the results of baseline
7. the author does not introduce enough details about the branch of Geo-diversification Module.
8. Fig.5/8 lacks a comparison with the visualization results of baseline.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitation in the paper, but not in enough depth.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: The experiments are not insufficient. The model seems too large, so the author should provide the number of parameters and gflops experiments.
**Response:**
Thanks for your question. We compile the data of model's parameters and computational load, as shown in the table below.
| Total params | Trainable params | GFLOPs |
|:--------------:|:--------------------:|:------:|
| 441,266,179 | 13,648,131 (3.09%) | 304.38 |
Since the training parameters of G3 are concentrated in the geo-alignment stage, and most of the module parameters are frozen, only 3.09% of the parameters need to be optimized.
Thank you again for your suggestion, we will include this data in the final version of the paper.
**Q1**: What is the purpose of image vectorization? Please provide a detailed explanation by the author.
**Response**:
Thank you for your question. Image vectorization enables us to convert images into vector representations. These vectors allow us to calculate the similarity between a query image and images stored in our database. By doing so, we can efficiently retrieve reference images in RAG process that aid in accurately generating the geographic location of the query image.
**Q2**: Figure 6 clarification.
**Response**:
Thank you for your question.
In Figure 6, the left side of each example shows the query image, with the ground truth displayed in red text below it. On the right side are the predictions made by LMM using the same RAG template combined with different amounts of reference image information, with the most accurate prediction highlighted in red text. The blue reference text below indicates the coordinates of the top 10 most similar reference images retrieved from the database based on the query image.
In Figure 6, we aim to illustrate that introducing different numbers of coordinates as references during the RAG process significantly impacts the accuracy of the generated results. This is due to the heterogeneity of images in geographic space, which causes the significantly various number of effective reference images retrieved from the database for different query images, further affecting the RAG outcomes. This case study validates the necessity of geo-diversification. Geo-diversification mitigates the prediction sensitivity caused by the heterogeneous spatial distribution of images by generating candidate coordinates using multiple prompts that contain varying numbers of reference coordinates simultaneously. We will add more detailed descriptions of Figure 6 in its Caption.
**Q3**: The author proposed a new dataset MP16 Pro, but it seems that the results have not been published on this dataset.
**Response**:
Thank you for the opportunity to clarify our perspective. In the paper, the MP16 Pro dataset was used solely for training and constructing the database, while all testing was conducted on the Img2GPS3K and YFCC4K datasets. This approach is consistent with existing works such as GeoCLIP and Img2Loc.
**Q4**: Questions about textual descriptions.
**Response**:
Thanks for your questions. (1) Generating consistency. We perform geographic reverse geocoding using Nominatim. This process is error-free and is equivalent to directly inputting latitude and longitude coordinates into OpenStreetMap to obtain address information. (2) The effectiveness of text descriptions.
please refer to the Ablation study of Geo-alignment section in the global response.
**Q5**: The experiment of paper lacks the results of baseline
**Response**:
Thanks for your advice. We will further add the results of PIGEON to the overall results. The comparison between G3(GPT4V) and PIGEON on IM2GPS3K and YFCC4K are shown below:
**IM2GPS3K**
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
|PIGEON|11.3|36.7|53.8|72.4|85.3|
|G3(GPT4V)|16.65|40.94|55.56|71.24|84.68|
**YFCC4K**
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
|PIGEON|10.4|23.7|40.6|62.2|77.7|
| G3(GPT4V)|23.99|35.89| 46.98 | 64.26 | 78.15 |
From the results, we can find that G3 is superior to PIGEON on eight out of ten metrics on IM2GPS3K and YFCC4K. We will add the performance of PIGEON to the overall results in our final version.
**Q6**: the author does not introduce enough details about the branch of Geo-diversification Module.
**Response**:
Thanks for your advice. The purpose of Geo-diversification is to generate diverse predictions based on RAG prompts combined with different numbers of references. This approach aims to address the issue of inconsistent numbers of effective references retrieved due to spatial heterogeneity in images. In Geo-diversification, we combine the top S retrieved candidates with $K \times N$ generated candidates. K represents the number of RAG prompts, and N represents the number of results generated per RAG prompt. Therefore, Geo-diversification produces a total of $K \times N + S$ candidate predictions.
**Q7**: Fig.5/8 lacks a comparison with the visualization results of baseline.
**Response**:
Thanks for your advice.
In Figure 5, the CLIP ViT on the left represents the baseline, while the right side shows the retrieval results of G3. It can be observed that CLIP ViT focuses only on the visual features of the image (such as the presence of two people), whereas G3 pays more attention to the location where the image was taken. As a result, G3 can retrieve images that are geographically closer to the query image.
In Figure 8, we aim to visually demonstrate the performance of G3 under different error bounds. We can observe that the model's predictions are more accurate when the image contains clear geographic indicators such as buildings and decorations. However, when the image is filled with elements like the ocean or sky, which lack clear geographic indicators, the prediction error is larger.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your valuable reviews and the time you've invested. As the author-reviewer discussion period is coming to an end, we sincerely want to confirm whether we have addressed your concerns. If there are any points that require further clarification or additional experimental results, please do not hesitate to let us know. If you believe our response has adequately resolved the issues you raised, we kindly ask you to consider the possibility of raising the score. | Summary: This work focuses on the task of "worldwide geolocalization" with an effective and adaptive framework based on large multi-Modality models. A novel framework, i.e., G3, is proposed, including Geo-alignment, Geo-diversification, and Geo-verification. This work also releases a new dataset MP16-Pro. The experiment results show that G3 has superior performance on two well-established datasets IM2GPS3k and YFCC4K.
Strengths: 1. The task of "worldwide geolocalization" is very important and quite interesting.
2. This paper is easy to follow, it is well-written.
3. The experiment results are solid. The G3 model achieve better results than GeoCLIP and Img2Loc on IM2GPS3k and YFCC4K.
Weaknesses: 1. In the Geo-diversification part, there is no ablation study on different LMMs and different RAG templates. I wonder whether it still works on open-source LLMs.
2. Minor:
a. Figure 1 is often set as a teaser figure, which could show the basic design of the whole work. It would be better to indicate the solution, instead of only showing the limitations.
b. Table 1 needs citations for each previous work. And GeoCLIP should be noted as NeurIPS 2023 instead of arXiv.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Please address my above concerns on Weaknesses.
2. The qualitative results show that most of the retrieved images are photos for tourists. What about using real world images from official company (e.g., Google Map)?
3. Please briefly describe how does the MP16-Pro Dataset help (or improve) the G3 model.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: In the Geo-diversification part, there is no ablation study on different LMMs and different RAG templates. I wonder whether it still works on open-source LLMs.
**Reponse:**
Thanks for your advice.
For the ablation study on different LMMs, please refer to the Open-source LMM (LLaVA) experiments section in the global response.
Regarding the RAG templates, our template follows the previous work Img2Loc to ensure a fair comparison. We agree that exploring RAG templates is necessary. So we conduct experiments to explore the impact of incorporating different reference images' information into the RAG template on prediction performance. The experimental variants and results are as follows:
* ZS: Zero-shot template, RAG template without any information from reference images.
* GPS: RAG template incorporating the GPS coordinates of reference images.
* GPS+Text: RAG template incorporating both the GPS information and textual descriptions of reference images.
| Variants | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:------------:|:--------------:|:-------------:|:--------------:|:-------------:|:----------------:|
| ZS | 12.41 | 35.87 | 50.88 | 67.60 | 80.75 |
| GPS | 16.65 | 40.94 | 55.56 | 71.24 | 84.68 |
| GPS+Text | 16.75 | 41.44 | 55.68 | 71.60 | 85.01 |
The experimental results indicate that adding more modality information from the reference images to the RAG template can effectively improve prediction accuracy. We will include these experimental results in our final version. Thank you again for your assistance.
**W2**: Figure 1 is often set as a teaser figure, which could show the basic design of the whole work. It would be better to indicate the solution, instead of only showing the limitations.
**Reponse:**
Thank you for your comment. We understand that teaser figures are typically used to present the basic design of the entire work. We would like to clarify that our intention with the current figure was to highlight the issues with existing methods and help readers understand our motivation. In response to your suggestion, we will revise Figure 1 to include an overview of our solution, making it easier to comprehend our approach. Thank you for your valuable feedback.
**W3**: Table 1 needs citations for each previous work. And GeoCLIP should be noted as NeurIPS 2023 instead of arXiv.
**Reponse:**
Thanks for your comment. We will add citations for each previous work in Table 1. Additionally, we will update the reference for GeoCLIP to NeurIPS 2023 instead of arXiv and check the other references. We appreciate your attention to these details.
**Q1**: The qualitative results show that most of the retrieved images are photos for tourists. What about using real world images from official company (e.g., Google Map)?
**Reponse:**
Thank you for your insightful question. Adding more images from official companies could indeed enhance the robustness of the database, thereby improving the overall effectiveness of the pipeline. However, we used MP16 as the image source for this work for three main reasons:
* Consistency: To maintain consistency with other works, such as GeoCLIP and Img2Loc, which also use MP16 as their data source.
* Dataset Distribution Rights: We extended the text descriptions based on MP16 to create the new dataset MP16-Pro, which includes distribution attributes. In contrast, Google's street view images have clear distribution restrictions, so we did not include them in this work.
Your insights are thought-provoking, and we greatly appreciate your constructive feedback.
**Q2**: Please briefly describe how does the MP16-Pro Dataset help (or improve) the G3 model.
**Reponse:**
Thanks for your comment.
The original MP16 dataset provides image data along with the GPS information of the image, such as: image 4f/a0/3963216890.jpg, LAT: 47.217578, LON: 7.542092. The MP16-Pro dataset enhances MP16 by adding textual descriptions about the location for each sample.
The enhanced sample becomes: image 4f/a0/3963216890.jpg, LAT: 47.217578, LON: 7.542092, **neighbourhood: Wengistein, city: Solothurn, county: Amtei Solothurn-Lebern, state: Solothurn, region: NA, country: Switzerland, country_code: ch, continent: NA**.
MP16-Pro primarily improves G3 through the following aspects:
* Adding Geographic Semantic Information: By incorporating additional geographic semantic information in geo-alignment, aligning it with GPS and image data, which makes the geographic information of samples in MP16 more accurate.
* Enhancing Image Retrieval Accuracy: More precise geographic information enhances the accuracy of images retrieved based on the query image in geo-diversification, providing more effective references for the LMM to predict coordinates. For specific details, please refer to Section 5.5 Case Study on reference image retrieval.
* Help Train the GPS Encoder: Introducing textual descriptions and aligning them with GPS information in geo-alignment helps train the GPS encoder more effectively, further increasing the reliability of the GPS encoder's judgments during geo-verification.
Thank you again for taking the time to review our work, and we hope our response addresses your concerns.
---
Rebuttal Comment 1.1:
Comment: As we approach the end of the author-reviewer discussion period, we respectfully wish to check in and ensure that our rebuttal has effectively addressed your concerns regarding our paper. Should there be any remaining questions or if further clarifications or additional experimental results are needed, please do not hesitate to let us know. We appreciate the thoughtful reviews and the time you’ve invested in providing us with valuable feedback to improve our work. If you believe that our responses have sufficiently addressed the issues raised, we kindly ask you to consider the possibility of raising the score. | Rebuttal 1:
Rebuttal: # Global Response
We would like to extend our sincere gratitude to all the reviewers for your valuable comments and constructive feedback on our manuscript. Your insights have been instrumental in improving the quality of our work.
We find two common concerns raised by multiple reviewers, which we response through this global response.
## Open-source LMM (LLaVA) experiments
We conduct the experiments of G3 with LLaVA (LLaVA-Next-LLaMA3-8b) on Im2GPS3K. Since Img2Loc did not specify the version of LLaVA they used, we also re-ran the experiments on Img2Loc. The results are as follows:
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
| GeoCLIP | 14.11 | 34.47 | 50.65 | 69.67 | 83.82 |
| Img2Loc (LLaVA) | 10.21 | 29.06 | 39.51 | 56.36 | 71.07 |
| Img2Loc (GPT4V) | 15.34 | 39.83 | 53.59 | 69.70 | 82.78 |
| G3(LLaVA) | 14.31 | 35.87 | 49.42 | 66.93 | 81.78 |
| G3(GPT4V) | 16.65 | 40.94 | 55.56 | 71.24 | 84.68 |
We can find that:
* After switching LMMs from GPT4V to LLaVA, the performance of G3 shows some decline across various metrics but remains competitive: compared to GeoCLIP, it performs better at the street level and city level.
* Additionally, compared to Img2Loc(LLaVA), G3(LLaVA) significantly outperforms Img2Loc(LLaVA), demonstrating the effectiveness of the proposed modules.
* Finally, by comparing the performance of G3 equipped with LLaVA and GPT4V to Img2Loc equipped with LLaVA and GPT4V, we can observe that G3 shows more stable performance across different LMMs.
## Ablation study of Geo-alignment
To verify the necessity of aligning the three modalities in geo-alignment, we conduct experiments on the following variants.
* IMG: Directly using pretrained CLIP vision encoder as encoder.
* IMG+GPS: Aligning Image representations with GPS representations in Geo-alignment, the textual descriptions are not used.
* IMG+GPS+Text (G3): Aligning three modalities simultaneously in Geo-alignment.
| Model | Street 1km | City 25km | Region 200km | Country 750km | Continent 2500km |
|:-:|:-:|:-:|:-:|:-:|:-:|
| IMG | 15.71 | 40.64 | 54.85 | 70.8 | 84.05 |
| IMG+GPS | 16.91 | 41.41 | 55.02 | 70.94 | 84.18 |
| IMG+GPS+Text (G3) | 16.65 | 40.94 | 55.56 | 71.24 | 84.68 |
From the experimental results, we can draw the following conclusions:
* By comparing IMG+GPS+Text, IMG+GPS, and IMG, we find that adding GPS and text information can both enhance the feature representation compared to using the original image information alone.
* By comparing IMG+GPS+Text with IMG+GPS, we find that IMG+GPS performs better at smaller scales, while IMG+GPS+Text performs better at larger scales. This might be because GPS is suitable for modeling variations at smaller scales, whereas text descriptions do not vary significantly at small scales and may even remain the same.
We hope these responses address your concerns. Thank you once again for your time and effort in reviewing our manuscript.
Pdf: /pdf/86f962002160f179d9b1fca7120399cdd3344dfb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a RAG-based framework for worldwide geo-localization. The first Geo-alignment stage projects input images to embedding spaces to align with GPS coordinates, and text description with contrastive learning. Given new input image, the system is able to retrieve similar GPS and text description. Then the retrieved candidates GPS and text prompts are fed to GPT4V with a pre-defined prompt template to generate GPS coordination. The final stage conducts a similarity-based verification based on multi-modal representations. The method is evaluated on two worldwide geo-localization datasets, i.e., IM2GPS3k and YFCC4K, with state-of-the-art performance.
Strengths: + The RAG-design for geo-localization is interesting and promising.
+ The writing is easy to follow.
+ The performance is much better than previous methods.
+ Ablation result is provided for the three stages.
+ The case study and failure cases are informative and interesting.
Weaknesses: - My major concern is that the current pipeline is highly dependent on the LMM which is the powerful closed-source model, GPT-4V. This model is expensive for large-scale applications and also hard to reproduce due to unannounced updates for API across time. It would be better to provide the results with open-source large multi-modal models, for example, LLaVA. I would expect a lower accuracy with open-source models.
- The proposed MP16-Pro dataset is also claimed as a contribution, but there is no guarantee that the data will be released. Hope this will be provided in final version.
Technical Quality: 3
Clarity: 3
Questions for Authors: See the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitation is included in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1**: My major concern is that the current pipeline is highly dependent on the LMM which is the powerful closed-source model, GPT-4V. This model is expensive for large-scale applications and also hard to reproduce due to unannounced updates for API across time. It would be better to provide the results with open-source large multi-modal models, for example, LLaVA. I would expect a lower accuracy with open-source models.
**Response:**
Thanks for your suggestions. Please refer to the Open-source LMM (LLaVA) experiments section in the global response to see our response.
**W2**: The proposed MP16-Pro dataset is also claimed as a contribution, but there is no guarantee that the data will be released. Hope this will be provided in final version.
**Response:**
Thank you very much for your attention to the MP16-Pro dataset presented in our paper. The MP16-Pro dataset includes image data (360GB) and metadata (700MB). Due to anonymous file-sharing platforms' file size limitations and supplementary file size limitations, we were unable to release this data during the review stage. To further enhance the credibility of MP16-Pro, we upload 100k rows of metadata in the anonymous repository's data folder. The URL of the anonymous repository is given in the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. My concerns have been addressed and I am raising the rating to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising the rating and for your constructive reviews. We're glad our response could address your concerns, and we appreciate your support. | null | null | null | null | null | null |
Drones Help Drones: A Collaborative Framework for Multi-Drone Object Trajectory Prediction and Beyond | Accept (poster) | Summary: This paper presents a collaborative framework for multi-drone object trajectory prediction, named DHD, which consists of two specifically designed components: the GBG module, aimed at generating more accurate BEV representations in aerial scenes, and the SISW module, which adaptively selects regions for collaborative interactions. Additionally, a simulated multi-drone collaborative observation dataset is created to demonstrate the effectiveness of the proposed DHD framework.
Strengths: - This paper successfully implements an end-to-end multi-drone object trajectory prediction based purely on visual inputs and extends it to the collaborative object detection task. This work fills in the blank of end-to-end multi-drone collaborative prediction, demonstrating notable originality.
- The proposed GBG module innovatively leverages the unique inclined observations from drone perspectives as geometric priors and replaces traditional depth estimation with height estimation. These enhancements significantly improve the accuracy of BEV representations for long-range aerial observations compared to the listed baselines, such as LSS and DVDET.
- This paper presents an efficient communication strategy called SISW, which considers both the limited inter-drone communication and the prediction task's dependency on foreground and contextual information. Experimental results indicate that this design outperforms previous sparse interaction strategies like where2comm.
- A simulated dataset, "Air-Co-Pred," is created to support multi-drone object trajectory prediction. The dataset enriches the available resources within the multi-drone field and can serve as a valuable benchmark for future research.
Weaknesses: - The authors need to carefully review the paper for grammar mistakes, typos, and formatting errors. Specific issues include:
- Line 82: BEV's full-term appears redundantly.
- Line 119: The subscript in$ Y_{k}^{o} $ is incorrect and should be$ Y_{k}^{t_o} $.
- Line 152: Figure reference is incorrect.
- Line 221: "2HZ" is improperly formatted and should be "2Hz."
- Equation 6: The subscript next to $ \Sigma $ should be $ xy $ instead of $ hw $.
Addressing these issues will improve the paper's readability.
- The transmission volumes for various collaboration strategies in Table 2 are unclear. It would be beneficial to supplement this information to better reflect the balance between the performance improvements in prediction and the cost of collaborative interactions. Additionally, it appears that the results of DHD in Table 2 are not the best in every column, so why are they bolded?
- Please consider exploring scenarios where the number of drones varies from 1 to 4. This would help investigate the impact of drone quantity on perception enhancement and also reflect the influence of potential drone failures on performance metrics.
- In reality, extrinsic parameters are derived through computation and approximation, leading to certain biases. Please explore how the extent of extrinsic bias affects the performance of collaborative prediction.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Could you provide more details about Air-Co-Pred, including the drone attitude settings, flight patterns, the speed of objects, etc.? Why were four drones chosen to collect data?
- In Section 3.3, the equation regarding the derivation of the depth upper-bound is presented directly without any intermediate steps in the main text or supplementary materials. Please provide the detailed derivation process to help readers understand it better.
- The reviewer found that the proposed GBG module is highly relevant to the applications of camera intrinsics and extrinsic in UAVs. The reviewer encourages the authors to acknowledge the existence of related works and clarify the differences.
For reference, see:
Shen H, Lin D, Yang X, et al. Vision-Based Multi-Object Tracking through UAV Swarm[J]. IEEE Geoscience and Remote Sensing Letters, 2023.
Pan T, Dong H, Deng B, et al. Robust Cross-Drone Multi-Target Association Using 3D Spatial Consistency[J]. IEEE Signal Processing Letters, 2023.
- Considering the complexity of the proposed framework, could you provide additional details or code for better understanding and reproducibility?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge that the current focus of this paper is on simulated environments to investigate multi-drone object trajectory prediction without considering latency and camera noise. These are reasonable limitations, and addressing them in future work would be beneficial
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Weakness 1]: Grammar mistakes
We have carefully reviewed and corrected the grammar mistakes, typos, and formatting errors.
### [Weakness 2]: Transmission volumes. Why the results of DHD in Table 2 bold?
Based on the V2V communication protocol [1], data broadcasting can achieve a bandwidth of 27 Mbps at a range of 300 meters. Considering the capture frequency of 2 Hz, the bandwidth requirement should be less than 13.5 Mbps to avoid severe delay. The original BEV feature map is `64 x 200 x 200` in size, corresponding to 9.77 MB. The result-level fusion utilizes the output segmentation and offset maps, which are `2 x 200 x 200` in size, corresponding to 0.305 MB. We calculate the transmission expenses for each method, as shown in Table VI of the attached PDF.
All partially connected methods of intermediate collaboration meet the above bandwidth requirements. Notably, although V2X-ViT and V2VNet achieve better performance, their high transmission expenses make them less feasible for practical applications. By the way, our proposed DHD has shown the best performance within the partially connected approaches for intermediate collaboration, so we highlight it in bold.
[1] Arena F, Pau G. An overview of vehicular communications[J]. Future internet, 2019, 11(2): 27.
### [Weakness 3]: Exploring the number of drones varies from 1 to 4?
To explore how the number of collaborating drones affects performance, we conduct relevant experiments, as shown in the overall rebuttal.
For the short-range area of 50mx50m, three drones are sufficient to predict trajectories, with performance comparable to that of four drones. However, for the long-range area of 100mx100m, predictive performance improves as the number of drones increases.
This difference arises because the drones in our dataset are positioned near the intersection to monitor traffic flow, making it easier to cover areas close to the intersection, which fall within the short-range area. In contrast, much of the information in the long-range area extends along specific road branches, which are only partially captured by the drones at the intersection. Therefore, having more drones results in more comprehensive coverage of the long-range area.
Conversely, the reduction in the number of drones does indeed reflect the influence of potential drone failures on performance.
### [Weakness 4]: Explore how the extent of extrinsics bias affects collaborative prediction?
We have conducted a series of experiments by introducing Gaussian noise to the extrinsic parameters of drone cameras, as shown in Fig. 2 of the attached PDF document and described in Answer 1 of the overall rebuttal.
### [Question 1]: More details about Air-Co-Pred? Why are totally four drones?
The dataset is gathered using four drones, each with a field of view set to 70 degrees by default. The drones are positioned at an altitude of 50 meters, monitoring traffic flow at urban intersections. The speed distribution of the observed objects ranges from 0 to 10 m/s, with low-speed states primarily concentrated around 1.5 m/s and high-speed states around 8 m/s. In the CARLA simulations, speed is influenced by several factors, including the default safe speed settings and the necessity of complying with traffic regulations and preventing collisions.
Furthermore, the four drones are arranged in a square formation to achieve a square-shaped coverage area for comprehensive aerial observations. This configuration allows for overlapping fields of view to mitigate occlusions, and the unique observations from each drone enrich the out-of-sight perception capabilities of the entire system.
### [Question 2]: The detailed derivation process.
According to the pinhole camera model, the transformation between the world coordinate system and the pixel coordinate system can be described by :
$$
\begin{bmatrix}
x \\\\
y \\\\
z
\end{bmatrix} = {R}^{-1} \\left( {K}^{-1} \begin{bmatrix}
u \\\\
v \\\\
1
\end{bmatrix} {d} - {T} \\right).
$$
Next, we consider \\({M} = {R}^{-1}{K}^{-1} \\) and \\( {N} = {R}^{-1}(-{T}) \\) for further simplification and derivation:
$$
\begin{bmatrix}
x \\\\
y \\\\
z
\end{bmatrix} = {M} \begin{bmatrix}
u \\\\
v \\\\
1
\end{bmatrix} d + {N} = \begin{bmatrix}
m_{11} & m_{12} & m_{13} \\\\
m_{21} & m_{22} & m_{23} \\\\
m_{31} & m_{32} & m_{33}
\end{bmatrix} \begin{bmatrix}
u \\\\
v \\\\
1
\end{bmatrix} d + \begin{bmatrix}
n_1 \\\\
n_2 \\\\
n_3
\end{bmatrix}.
$$
By setting \\( y = -H \\),
the depth upper-bound can be derived as:
$$
{D_{(u,v)}}=\frac{-H-n_2}{m_{21} u+m_{22} v+m_{23}}.
$$
### [Question 3]: Clarify the differences with related works.
Indeed, these two works leverage the camera parameters of UAVs to enhance cross-view association. Specifically, they project 2D detection results from each perspective into a unified 3D space with camera parameters and associate targets based on feature similarity and spatial distance.
In contrast, our proposed GBG module uses UAV camera parameters along with the available flight altitude to derive the depth upper bound of each pixel.
This geometric prior helps to constrain depth estimation and calculate the viewing angle of each pixel, facilitating the subsequent BEV generation.
By acknowledging these works and clarifying our distinct approach, we highlight the unique contributions and applications of the GBG module in multi-drone collaboration.
### [Question 4]: Additional details or codes.
We have already provided the core code in the initial supplementary materials. Additionally, to ensure better understanding and reproducibility, we plan to release the Air-Co-Pred dataset, the source codes of the complete project, and the model checkpoints, which include our methods and previous baselines. This will contribute to further reproduction and improvement of the system.
---
Rebuttal Comment 1.1:
Title: Comment after reading the response
Comment: Thanks for the authors’ time and effort in responding to the weaknesses and questions raised.
Overall, this revised version is more comprehensive, incorporating some previously missing details and effectively resolving all my concerns. Moreover, the innovations presented in the paper have been fully recognized by reviewers CzWU, nEkT, and rEnT, and partially acknowledged by reviewer nhVA. Consequently, I am convinced to maintain my initial judgment.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: Thank you very much for taking the time to read our responses and provide your valuable feedback. As mentioned, we are committed to sharing our resources with the community. If the submission is accepted, we will release the dataset, along with the relevant codes and model checkpoints, to facilitate further research and development. | Summary: This paper studied the problem of collaborative multi-drone object trajectory prediction. The authors proposed a framework named "Drones Help Drones" (DHD) that can improve accuracy compared to existing methods while reducing the required communication bandwidth. To be more specific, the authors leveraged a Ground-prior-based BEV Generation module that can generate more accurate BEV representations, and used a Sparse Interaction via Sliding Windows module that enables efficient information sharing between drones. The authors also constructed a new dataset named "Air-Co-Pred" for multi-drone collaborative prediction tasks.
Strengths: The paper is well-structured and clearly written, with helpful visualizations that aid in understanding the proposed methods and results. The DHD framework introduces a solution to key challenges in multi-drone collaborative perception, particularly the GBG module for more accurate BEV generation and the SISW module for efficient information sharing. The introduction of the "Air-Co-Pred" dataset addresses a gap in existing resources for multi-drone collaborative prediction tasks, potentially benefiting the broader research community.
Weaknesses: * The primary evaluation is conducted on simulated data. While the authors acknowledge this limitation, it raises questions about the framework's performance in more complex, real-world scenarios with sensor noise, communication delays, environmental variability, and other common difficulties in real-world experiments.
* It would be interesting to see how the performance scales with the number of collaborating drones, which could provide valuable insights into the method's scalability. Is there an optimal or maximum number of drones for effective collaboration?
Technical Quality: 3
Clarity: 3
Questions for Authors: see above
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [Weakness 1]: Questions about the framework's performance in more complex, real-world scenarios with sensor noise, communication delays, environmental variability, and other common difficulties in real-world experiments.
To address the reviewer’s concerns about the gap between real-world scenarios and simulated data, we have incorporated some of the mentioned challenges into our experimental setup to assess their impact on the performance of multi-drone collaborative prediction. Specifically, we examine the effects of sensor noise, which primarily affects the acquisition of camera extrinsics, as well as environmental factors such as drone vibrations and rough terrain, which can influence the drone's relative height to the ground plane. To evaluate the impact of these factors on performance, we conduct complementary experiments by introducing Gaussian noise into these aspects. The statistical results are shown in the attached PDF, and a detailed analysis is provided in Answer 1 of the overall rebuttal.
As to communication delays, we propose integrating a delay assessment module into the collaborative interaction stage of DHD. This module would compensate for lost or delayed frames by leveraging historical collaborative features. Moreover, the compensation process can be streamlined by prioritizing critical areas using the information volume assessment component in the SISW module. At the downstream decoder, missing predictions can also be filled by blending predictions from adjacent time frames and extrapolating trends accordingly.
### [Weakness 2]: How does the performance scale with the number of collaborative drones? Is there an optimal or maximum number of drones for effective collaboration?
To explore how the number of collaborating drones affects performance, we conducted relevant experiments, as shown in the table below:
| Num of Drones | IoU (Short) | IoU (Long) | VPQ (Short) | VPQ (Long) |
|------|-------------|------------|-------------|------------|
| 1 | 41.5 | 31.1 | 33.5 | 25.6 |
| 2 | 57.1 | 45.4 | 48.2 | 38.4 |
| 3 | 61.3 | 50.2 | 50.4 | 43.4 |
| 4 | 61.4 | 54.0 | 50.4 | 46.2 |
For the short-range area of 50m x 50m, three drones are sufficient to predict trajectories, with performance comparable to that of four drones. However, for the long-range area of 100m x 100m, predictive performance improves as the number of drones increases.
This difference can be attributed to the positional layout of the drones in the dataset, Air-Co-Pred. The drones are situated near intersections to monitor traffic flow, facilitating comprehensive coverage of the short-range areas close to these intersections. In contrast, much of the information in the long-range areas extends along specific road branches, which may only be captured by a single drone. Therefore, increasing the number of drones results in more comprehensive coverage of the long-range areas.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their detailed explanation. I've raised the score accordingly.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: Thank you for patiently pointing out the weaknesses in our initial manuscript and reviewing our response. If the submission is accepted, we will include a detailed analysis of our proposed DHD's real-world limitations and scalability in the revised version, ensuring a more comprehensive presentation of our paper. | Summary: The paper presents the "Drones Help Drones" (DHD) framework for improving collaborative trajectory prediction in multi-drone systems. It addresses challenges with aerial observations and communication bandwidth by: (i) Using ground priors from inclined drone observations to enhance Bird's Eye View (BEV) accuracy; (ii) Implementing a selective mechanism to prioritize essential information, reducing communication needs; (iii) Introducing the "Air-Co-Pred" dataset for multi-drone collaborative prediction.
Experiments show that DHD improves BEV accuracy, reduces required transmission bandwidth, and maintains high prediction performance.
Strengths: The paper presents an innovative approach to multi-drone collaborative trajectory prediction. Here are some observations:
S1. The module for estimating depth from multi-view drone perspectives is convincing and well-formulated. The use of object height contributes significantly to research in this field.
S2. The strategy requires only 25% of the transmission ratio without affecting performance. This might be due to the SISW module, which retains only essential information.
S3. The three strategies (depth estimation, interaction, and prediction) are trained end-to-end. This is somewhat surprising given the architecture design.
Weaknesses: W1. The strategy follows the classic collaborative framework composed of three modules: BEV estimation, construction of interaction, and prediction. The methodology utilizes well-known strategies from the literature for this task. Specifically, for BEV estimation, it uses part of the framework from [13], although it is well adapted with the inclusion of height in the depth estimation. The Selective Sparse Interaction used in the SISW module appears similar to the approach in [14], as well as the aggregation performed. Finally, the prediction mechanism is taken from [34], as mentioned by the authors in L206.
Although the construction of a unified framework is not trivial, the overall strategy deviates little from existing ones.
W2. The authors have contributed to the creation of a new dataset that could benefit future research, but this will require appropriate adjustments. There is no comparison with other datasets used in the same or related fields, making it impossible to evaluate the quality of the dataset.
W3. The lack of additional datasets also makes the effectiveness of the framework less convincing, as there is no evidence of generalization on other established benchmarks. I understand the uniqueness of the task and wonder if it is possible to test the framework on established benchmarks (e.g., DAIR-V2X, V2XSet, and OPV2V datasets).
W4. In Table 2, the reasoning for separating V2X-ViT and V2VNet from other methodologies is not clear. The distinction between fully/partially connected is not a convincing rationale. Could the authors clarify this point?
W5. [Minor] Some findings in the appendix, such as Figure 9 and Section 6.4, could have been included in the main paper to strengthen certain claims.
Technical Quality: 2
Clarity: 3
Questions for Authors: See Weaknesses section for questions, especially I would appreciate if the authors highlighted the differences with the other mentioned methodologies.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [W1 & Q1]: Differences with existing methodologies.
Our DHD builds on existing solutions, but we make significant improvements to address unique challenges in multi-drone collaborative prediction.
For instance, long-range aerial observations can lead to substantial depth estimation errors, affecting BEV representations. Additionally, objects in view are typically small and sparse, with critical information making up only a minor portion. However, previous methods, primarily focusing on collaborative detection, have not adequately considered the selection and utilization of sparse information for joint prediction. Our DHD framework is specifically designed to overcome these challenges, distinguishing our approach from others.
Regarding the drone-specific BEV generation module, GBG, while it is based on [13], we introduce innovations of geometric priors to refine depth estimation, as acknowledged by Reviewer CzWU rEnT vjyF. Moreover, the improvements we've made in the LSS method are also applicable to other depth-based BEV generation techniques.
As for the similarity with [14], our SISW module focuses on areas with significant feature variations for subsequent collaborative feature fusion, which are likely to contain objects or critical environmental information essential for prediction tasks. This solution is not considered by previous collaborative strategies, including [14]. In terms of implementation, the sliding window mechanism for information volume assessment is a major contribution of our SISW module, as noted by Reviewer CzWU nEkT. Additionally, for the selective sparse interaction component, we optimize the selection mechanism by replacing traditional threshold-based filtering with score ranking. This modification allows us to precisely control transmission volume based on predefined ratios, rather than relying on manually-tuned empirical thresholds.
For the feature fusion module, while not our primary innovation, our approach differs significantly from [14]. Instead of their attention-based fusion, we use a multi-layer convolutional method to calculate contribution weights for each collaborative feature guided by local features. Theoretically, attention mechanisms are better suited for global feature associations, while the convolutional paradigm is widely recognized for its strength in preserving local features. Since our instance-level prediction task benefits more from strong local feature representations, we opt for this fusion method and provide additional experiments to demonstrate performance differences.
### [Weakness2]: A comparison with other datasets.
Please refer to Answer 3 of the overall rebuttal. Additionally, we provide a detailed comparison of key attributes in the table below:
| **Attribute** | **VisDrone-MDOT** | **VisDrone-MDMT** | **Airsim-Map** | **CoPerception-UAVs** | **Air-Co-Pred** |
|------------------------------|-------------------|-------------------|---------------|-----------------------|-----------------------|
| Source | TCSVT-21 | TMM-23 | CVPR-20 | Neurips-22 | Submit to Neurips-24 |
| Real/Simulation | Real | Real | Simulation | Simulation | Simulation |
| Support Tasks | 2D SOT | 2D MOT | 2D Seg | 2D/3D Det, BEV Seg | 2D/3D Det, BEV Seg, Pred, MOT |
| Number of Drones | 2~3 | 2 | 5 | 5 | 4 |
| Number of Samples | 105,584 | 19,839 | 4,000 | 5,276 | 8,000 |
| Frequency | 10Hz | 10Hz | Unknown | 0.25Hz | 2Hz |
| Cam. Params. & Coord. Info. | x | x | x | √ | √ |
| Suitability & Justification | x; w/o Cam. Params. & Coord. Info. | x; w/o Cam. Params. & Coord. Info. | x; w/o Cam. Params. & Coord. Info. & Temporal Annotations.| Partially; Long time intervals render it unsuitable for temporal tasks. | √ |
Cam. Params. & Coord. Info. refers to camera parameters and coordinate information, which are critical for BEV generation.
### [W3]: Additional datasets. (e.g., DAIR-V2X, V2XSet and OPV2V datasets).
The benchmarks you mentioned are primarily designed for collaboration in autonomous driving scenarios, where the viewing angles of vehicles are horizontal. This contrasts sharply with the aerial oblique perspectives in our multi-drone collaboration research, making our drone-specific BEV generation module, GBG, less applicable to these benchmarks.
Besides, in the initial manuscript, we provided experimental results on the publicly available CoPerception-UAVs dataset in Table 4 of the initial manuscript, to demonstrate the generalization capabilities of our framework for collaborative detection.
We understand your concern about the reliance on simulated multi-drone datasets. In fact, we are currently developing a real-world multi-drone collaboration dataset to further validate and extend our work in future studies.
### [W4]: Reasons for separating V2X-ViT and V2VNet from others.
Benefiting from the fully connected paradigm, V2X-ViT and V2VNet achieve complete feature-level interaction among collaborators and employ more complex techniques for collaborative feature fusion. As a result, it is reasonable for them to demonstrate superior performance compared to the partially connected paradigm.
From the perspective of communication overhead, it is logical to distinguish V2X-ViT and V2VNet from other communication-efficient methods for a fair comparison.
Notably, our proposed SISW outperforms current partially connected methods and is comparable to the fully connected paradigm.
### [W5]:
Thank you for your suggestion regarding the content layout.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the thorough explanation of the differences with existing methodologies. I also appreciated the comparison of the dataset with others in the literature; it would be helpful if this could be included in the revised version of the paper.
In general, my concerns have been addressed, and I am inclined to raise the score.
---
Reply to Comment 1.1.1:
Title: Thank you for the feedback
Comment: Thank you very much for reading our responses! If the submission is accepted, we will incorporate more details about the differences between our DHD and existing methodologies, and highlight the comparison of our proposed Air-Co-Pred with other datasets in the final version. | Summary: The paper proposes a ground-prior-based BEV generation for drone trajectory prediction. The methods also propose an efficient information interaction via a Sliding Windows module. It accesses information volume across different areas through sliding windows. The authors provide experiments on the simulator-based Air-Co-Pred dataset and 3D object detection on CoPerception-UAVs.
Strengths: 1. The flow of the paper is thorough and clear.
2. The proposed ground-prior-based BEV generation and Sliding Windows module are innovative.
Weaknesses: The Ground-prior-based BEV Generation (GBG) module relies on certain geometric assumptions for depth estimation. In real-world scenarios, these assumptions might not hold due to variations in terrain, drone stability, and other environmental factors, potentially affecting the accuracy of BEV representations.
Although the paper mentions using EfficientNet-B4 for feature extraction due to its low latency and optimized memory usage, it does not provide a detailed analysis of the computational efficiency and scalability of the overall framework.
The Air-Co-Pred dataset, while a valuable contribution, is still limited to simulated data. The paper could benefit from additional datasets or validation for the main task.
The proposed Sparse Interaction via Sliding Windows (SISW) module, although innovative, might oversimplify the information selection process. The method assumes that feature discrepancies are the primary indicators of valuable information, which may not always be the case, especially in highly dynamic or cluttered environments.
There is a lack of baseline in the main experiment; the authors have compared their method with only two other methods for BEV generation.
There is a lack of trajectory prediction evaluation; the authors should provide the average displacement error as well.
Typo in Table 3 caption.
SISW has shown little improvement, and the performance declines when the transition rate is 0.25. Additionally, the authors do not provide the individual result of SISW on 1 with GBG still present.
Technical Quality: 2
Clarity: 3
Questions for Authors: Can the authors provide a component ablation of only the SISW component, like three rows on a 1:1 ratio? Furthermore, the results of missing GBG and SISW on a 0.25 transition rate are not provided.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper relies on simulations using the CARLA platform. Although the simulated environment provides controlled and varied scenarios, it may not fully capture the complexities and unpredictabilities of real-world environments.
It seems like the sliding window module is not that significant, and the authors have not provided a complete ablation study to show its efficacy.
The component ablation is not thorough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [W1 & L1]: Consider variations in terrain, drone stability, and other environmental factors.
Please refer to Answer 1 in the overall rebuttal.
### [W2]: Analysis of the computational efficiency and scalability of the overall framework
As seen in Table VI of the PDF, the differences in trainable parameters (which reflect model complexity) and model parameter sizes (which impact memory usage) among the compared methods are not significant, except for Who2com and When2com. Notably, the FLOPs for V2VNet and Late Collaboration are significantly higher than those of other baselines. This is due to the iterative processes involving multiple for-loops for information exchange, which generate multiple parallel paths during computation. Each branch performs independent calculations before merging the results, significantly increasing computational complexity. The same principle applies to the difference in FLOPs between our DHD framework and Where2comm.
For the investigation of stability, we explore the number of collaborative drones in Answer 2 of the overall rebuttal.
### [W3]: Additional datasets for validation.
Please refer to Answer 3 in the overall rebuttal.
### [W4]: SISW might oversimplify the information selection process. Can it handle highly dynamic or cluttered environments?
Our proposed SISW module is specifically designed with the characteristics of aerial perspectives, where targets are often small and sparse, and surrounding environmental textures can guide trend prediction. Specifically, SISW employs a sliding window mechanism to assess and select critical information. This design is simple yet efficient, outperforming previous sparse interaction baselines.
Although the SISW module is not primarily designed for highly dynamic or cluttered environments, our dataset, Air-Co-Pred, inherently includes such scenarios, thereby enhancing the module's adaptability and robustness.
Specifically, our dataset contains substantial moving vehicles, which aligns with the highly dynamic scenarios mentioned.
As illustrated in Figure 6 of the initial manuscript, the low proportion of targets within the field of view indicates a high presence of clutter.
Furthermore, our adopted BEV representations help filter out much of the clutter in 2D vision during spatial projection by truncating the z-axis.
Therefore, the SISW module proposed in this paper can be considered applicable to dynamic and cluttered environments to a certain extent.
### [W5]: A lack of BEV baselines.
Thank you for your feedback. In our work, we focus on optimizing depth estimation from aerial perspectives within the BEV generation process. The improvements we've made in the GBG module are indeed applicable to other depth-based BEV generation methods.
We recognize the importance of comparing our method against a broader range of baselines.
To provide additional comparisons, we include methods like **BEVDet4D**, which considers temporal BEV representations, and **BEVerse**, which applies multi-task learning to improve BEV generation. As shown in Table V of the PDF, while multi-task learning offers minimal performance gains, incorporating temporal information does enhance BEV generation. However, our DHD framework with the GBG module, which leverages geometric priors, still outperforms these methods.
### [W6]: Calculate the average displacement.
As illustrated in Table II of the PDF, a lower ADE does not necessarily indicate better prediction performance. For example, "No Collaboration", which relies on single-view observations, misses a significant number of objects. However, its calculated ADE is relatively low because this metric only considers matched trajectories. Similarly, methods that performed well in our initial experiments, such as Early Collaboration, V2VNet, V2XViT, and our proposed DHD, exhibit higher ADE compared to Where2comm and UMC. This is partly due to Where2comm and UMC having a higher missing ratio, which excludes some difficult trajectories from the ADE calculation. Therefore, when using ADE as a metric, it's important to also consider the ratio of missing trajectories.
This is why we adopt evaluation metrics such as IoU and VPQ, as suggested in the research of our adopted predictor ''PowerBEV''. These metrics provide a more comprehensive assessment by considering missed detections, false positives, and the deviation between predicted and ground truth positions.
### [W7]: Typo errors.
We have noted the incorrect usage of quotation marks and will correct it in the revised version.
### [W8 & Q1 & L2 & L3]: Additional component ablation study about SISW module.
Regarding your comment that "SISW has shown little improvement, and the performance declines when the transition rate is 0.25," the SISW module is specifically designed to preserve critical information for the prediction task during sparse collaborative interactions. This concept is also positively acknowledged in Strength 2 of Reviewer nhVA's feedback. Given that our method transmits only a subset of features, it is understandable that there may be a slight performance decline compared to the complete feature transmission.
We have enriched the ablation studies as you suggested in Table IV of the PDF. Notably, when the SISW module is absent, we resort to randomly selecting 25% of spatial features for transmission to achieve spatial sparsity.
The additional experiments show that when the transmission ratio is close to 1, our DHD framework, which includes both the GBG and SISW modules, provides a performance gain compared to the variant with only GBG, particularly in the VPQ metric. This improvement is attributed to the collaborative feature aggregation component in SISW, which enhances the weight of features that are beneficial for downstream prediction tasks.
The enhanced ablation experiments, which consider both module-level and transmission-level factors, clearly demonstrate the effectiveness of both the GBG and SISW modules.
---
Rebuttal 2:
Comment: Dear Reviewer nEkT. We hope that our rebuttals have clarified your concerns. if there are any specific analyses or complementary experiment that could clear your doubts, we would be happy to try and provide them. We sincerely thank you again for your time and feedback. | Rebuttal 1:
Rebuttal: ## Answer 1: Investigate the effects of sensor noise, flight turbulence and rough terrain on the performance of DHD.
### Flight Vibrations and Uneven Terrain.
We acknowledge that flight vibrations and uneven terrain can interfere with the drone's relative height to the ground, affecting the BEV generation from the GBG module. Therefore, we introduce perturbations to the drone's altitude to simulate these conditions.
Specifically, we introduce Gaussian noise to the drone's altitude, with noise levels ranging from 0.002 to 0.01. At the highest level, this results in a maximum altitude variation of 0.5 meters, which is significant for drone flight. As illustrated in Fig. 1 of the attached PDF, the results demonstrate that when the noise level exceeds 0.01, the depth estimation advantage conferred by the geometric prior in the GBG module diminishes.
However, this limitation of terrain variations is not unique to our GBG module. Mainstream BEV generation methods, such as LSS, BEVFormer, and PETR, also assume a flat ground. Complex terrain requires further study and can be considered a distinct research direction.
The primary goal of our GBG is to explore BEV generation specifically designed for drones. While this is an initial attempt, we recognize the need to account for more complex real-world conditions in future deployments. To address this, we propose a simple yet potentially effective solution: developing a ground flatness estimation module to assess variations in the ground plane, allowing the estimated object height to be adaptively adjusted and thereby mitigating the impact of uneven terrain on subsequent BEV generation.
### Sensor Noise, Particularly in Extrinsic Parameters.
As illustrated in Fig. 2 of the PDF, increasing noise results in a gradual decline in both IoU and VPQ. In short-range settings, noise ratios below 0.003 cause negligible performance drops. However, IoU decreases by about 25% when the noise ratio is between 0.003 and 0.013. Between 0.013 and 0.020, the decline slows, with an additional reduction of approximately 10%. VPQ exhibits a similar trend.
In long-range settings, noise ratios below 0.003 also result in acceptable performance declines. However, when the ratio reaches 0.005, noticeable performance degradation occurs, with IoU dropping by 21.3% and VPQ by 32.9%. Overall, noise has a more pronounced impact on VPQ than on IoU, indicating that camera extrinsic bias more severely affects the consistency of future trajectory predictions.
Furthermore, the greater impact of extrinsic noise in long-range observations can be attributed to objects at long distances often being observed from a single perspective, lacking the multi-view validation available in short-range scenarios.
These results demonstrate that our DHD can tolerate a small amount of sensor-based extrinsic noise. Besides, larger biases in extrinsic parameters can significantly impact collaborative prediction. Therefore, accurate estimation of these parameters is crucial for maintaining high performance in collaborative perception systems. This finding is equally applicable to real-world scenarios.
## Answer 2: Investigate the number of collaborative drones.
To explore how the number of collaborating drones affects performance, we conduct relevant experiments, as shown in TABLE I of the PDF.
For the short-range, three drones are sufficient to predict trajectories, with performance comparable to that of four drones. However, for the long-range, predictive performance improves as the number of drones increases.
This difference arises because the drones in our dataset are positioned near the intersection to monitor traffic flow, making it easier to cover areas close to the intersection, which fall within the short-range area. In contrast, much of the information in the long-range area extends along specific road branches, which are only partially captured by the drones at the intersection. Therefore, having more drones results in more comprehensive coverage of the long-range area.
## Answer 3: Lack of comparison with other existing datasets.
To demonstrate the contributions of our dataset, Air-Co-pred, we conduct a comparative analysis of existing datasets in the multi-drone collaboration domain. By the submission deadline, several datasets are available for multi-drone collaboration, including two real-world datasets (VisDrone-MDOT and VisDrone-MDMT) and two simulation datasets (Airsim-Map and CoPerception-UAVs).
Regarding the existing real-world datasets, the VisDrone series has collected a substantial amount of real-world video data. However, these datasets are constructed solely from a visual perspective and do not provide any information about the drones' poses or camera parameters. As a result, they are limited to supporting 2D visual algorithms such as ReID and object tracking, and cannot be used to evaluate our proposed collaborative prediction framework, which integrates both visual and spatial information.
The simulation dataset, Airsim-Map, is designed to demonstrate the effectiveness of who2com and when2com in mitigating image degradation. However, it only provides multi-view 2D semantic segmentation masks.
The dataset most similar to ours is CoPerception-UAVs, proposed by where2comm. While this dataset focuses exclusively on multi-drone collaborative 3D object detection and has been used to validate frameworks like Where2comm and CoCa3D, it falls short in addressing joint temporal tasks. Additionally, its large sampling intervals are inadequate for validating our DHD.
To bridge this gap, we propose a more comprehensive dataset that includes detailed annotations and an appropriate sampling frequency. This dataset supports a wide range of tasks, including 2D/3D detection, BEV segmentation, multi-object tracking, and trajectory prediction, and facilitates the preliminary validation of multi-drone collaboration across various scenarios within a simulation environment.
Pdf: /pdf/bd0a37d017c319971f93b0c10dd45934cce2a216.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes a framework, Drones Help Drones (DHD), which tackles trajectory prediction of objects in the scene. DHD consists of a Ground Prior Based Bird’s Eye View (BEV) Generation (GBG) module, which provides depth estimation from the drone to an object using ground priors to create an accurate BEV representation of the features. It also utilises Sparse Interaction via Sliding Window (SISW) to minimize the data transmission cost between drones. In addition, the authors develop a new dataset for multi-drone collaboration prediction, "Air-Co-Pred". The paper is interesting in the field of collaborative AI.
Strengths: The paper is well-structured, referencing a well-established research stream in computer vision and collaborative object trajectory prediction. The following are some strengths of the paper:
- The research problem is clearly described with good visuals and diagrams, which aid in the explanation.
- There are several contributions, including BEV generation, sliding windows for sparse interaction, and Air-Co-Pred simulated dataset
- The paper provides both quantitative and qualitative assessments of their framework, compared to baseline and state-of-the-art, such as Who2com and Where2com. It demonstrates good improvements.
- The provided appendices are useful for further details and ablation studies.
Weaknesses: The paper has several weaknesses:
- It investigates methods to overcome single-drone issues, such as occlusions and blurs; nevertheless, it is also important to discuss a bigger picture of their use cases, including accident prevention and path planning, in greater details with certain limitations. For example, in accident prevention, if multiple drones collaborate and predict an accident is about to happen, what can it do? Does it then communicate/inject commands over the air to the vehicle causing certain actions?
- The DHD framework consists of feature extraction, BEV, Sparse Interaction via Sliding Windows, and Trajectory Prediction. The idea of having sliding windows for sparse interaction is quite interesting; nevertheless, relying on a local coordinate system and pixel-level weight fusion can be a weak spot when it comes to real-world settings. It is also important to examine how each of the modules contributes to the overall performance in ablation studies.
- The development of Air-Co-Pred remains questionable, based on CARLA. There are indeed many CARLA-based datasets and it is important to compare your dataset with others to prevent overlappings/duplications.
- It is vital to discuss the following papers, in relation to the work:
1. Wei, S., Wei, Y., Hu, Y., Lu, Y., Zhong, Y., Chen, S., & Zhang, Y. (2024). Asynchrony-robust collaborative perception via bird's eye view flow. Advances in Neural Information Processing Systems, 36.
2. Lu, Y., Hu, Y., Zhong, Y., Wang, D., Chen, S., & Wang, Y. (2024). An extensible framework for open heterogeneous collaborative perception. arXiv preprint arXiv:2401.13964.
3. Liang, J., Jiang, L., & Hauptmann, A. (2020). Simaug: Learning robust representations from simulation for trajectory prediction. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIII 16 (pp. 275-292). Springer International Publishing.
Technical Quality: 3
Clarity: 3
Questions for Authors: There are many questions to address as the following:
- With the DHD framework, given objects to track/predict, what is the optimal number of drones that need to be “watching” the object in order to produce the best prediction? Is it always the case that the more drones that have the object in view, the better the trajectory prediction? Why four collaborative drones in Air-Co-Pred? Can we get away with just 2 drones?
- Can you explain more about why CARLA is used to simulate and produce the dataset? Has anyone attempted trajectory prediction with multiple drones in real-world settings? Is it feasible? What needs to happen before DHD can be deployed in production in the real-world setting? Maybe it can be included in a discussion section.
Is there an optimal drone height/altitude? Is 50 meters the best height value, so it’s used in the dataset?
- How often should the drones be communicating/transmitting data to each other? Is it the same as the aerial observation samples being collected (frequency of 2 Hz)? Can it be reduced to further lower transmission data?
- How does the DHD framework respond to noises, such as flight turbulence?
- What will be the performance if we turn on/off multiple components, such as BEV or Sparse Interaction? This should be included in ablation studies.
- How far are we from real-world experiments?
- What will be the impacts of your study to the field?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have raised concerns about the use of simulated settings, which limit the research's practicality. It is important to develop real-world scenarios for future validations of the research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### [W1]:
We envision a scenario where vehicles are centrally coordinated. In this setting, collaborative drones predict abnormal trajectories that may lead to accidents. Upon identifying the risks, the drones use wireless communication to alert nearby vehicles, prompting them to take evasive actions to prevent accidents. The system can also integrate with LLMs to provide solutions. Besides, it can be used to forecast traffic congestion, offering lane recommendations.
### [W2]:
As to the concern about obtaining UAV coordinate systems in real-world scenarios, these parameters are considered readily available data sources in [1].
Regarding the pixel-level weight fusion in SISW, the reviewer's concern may stem from the computational overhead. Our approach focuses on fine-grained feature fusion, which is indeed more complex than simple addition or concatenation. As shown in TABLE III of the PDF, we supplement the results with common fusion methods. As expected, they perform worse than ours.
[1] Zhao M, Chen J, Song S, et al. Proposition of UAV multi-angle nap-of-the-object image acquisition framework based on a quality evaluation system for a 3D real scene model of a high-steep rock slope[J].
### [W3]:
Below is a description of other CARLA-based datasets.
**For Autonomous Driving:**
- **OPV2V**: Focuses V2V collaborative perception to enhance safety and efficiency in autonomous driving.
- **OPV2V-Occupancy**: An extension of OPV2V, it introduces an occupancy grid map to represent the distribution of surrounding obstacles.
- **V2X-Sim**: Simulates both V2V and V2I modes, covering a range of complex scenarios. However, it does not simulate noise environments and includes only a single type of road.
- **V2XSet**: Simulates real-world noise in V2X collaboration.
- **V2XSet-w**: An extended version of V2XSet, it includes more adverse weather conditions.
- **DeepAccident**: Generates various traffic accident scenes and records relevant sensor data. This dataset is intended for traffic accident analysis and prevention.
**For UAV Research:**
- **CoPerception-UAVs**: Refer to Answer 3 in the overall rebuttal.
It is evident that the key difference between our Air-Co-Pred and other CARLA-based datasets is our aim to solve fundamentally different problems from the outset. Furthermore, in terms of dataset characteristics, unlike the horizontal perspective of autonomous driving scenes, our multi-drone dataset focuses on aerial observations with oblique views over wide ranges. Additionally, compared to CoPerceptionUAVs, our dataset supports additional temporal tasks such as prediction and tracking.
### [W4]:
- **CoBEVFlow**: It addresses the challenge of asynchrony in collaborative perception by focusing on robustness to temporal misalignments. Although our current work does not account for asynchrony, the proposed solutions in their study provide valuable insights.
- **HEAL**: Lu et al. propose a framework for heterogeneous collaborative perception, focusing on collaboration among diverse data sources and perception models. Our work could benefit from their approach when dealing with heterogeneous members in collaboration.
- **SimAug**: This research explores learning robust representations for trajectory prediction using rich simulated data from CARLA. Specifically, they identify hard examples from multi-view information and combine them with the original view through adversarial learning to better adapt to real-world environments.
### [Q1]:
Refer to Answer 2 of the overall rebuttal.
### [Q2]:
- **Why CARLA?**
CARLA offers the flexibility to create complex urban traffic scenarios, providing rich dynamic objects along with automated annotations, including the camera intrinsics, extrinsics, and coordinate information needed for BEV generation. It can generate extensive data for algorithm validation with minimal time and manpower investment.
- **Is it feasible for multi-drone trajectory prediction?**
To the best of our knowledge, real-world trajectory prediction with multiple drones is still in an early stage. The most relevant multi-drone collaborative datasets, such as VisDrone-MDOT and VisDrone-MDMT, are primarily collected for tracking. However, their temporal continuity and ID information make them adaptable for trajectory prediction, indicating the feasibility of using multiple drones for joint prediction.
- **Why 50m?**
At this altitude, four drones near intersections can collaboratively cover an area of around 100mx100m. This height also results in significant occlusions, such as vehicles being obscured by buildings or trees, which we aim to address through multi-drone collaboration.
### [Q3]:
We set the interaction frequency to match the sampling frequency of 2 Hz. However, we can explore reducing this by leveraging recent multi-drone collaboration predictions, such as instance segmentation and offsets, to extrapolate trends and strategically fuse them with current single-view predictions.
### [Q4]:
Refer to Answer 1 of the overall rebuttal.
### [W2.1 & Q5]:
Refer to TABLE IV of the PDF.
### [Q6 & Q2.4]:
Before DHD can be deployed in real-world applications, several steps are necessary:
- **Robustness**: Can handle sensor noise and communication delays, which are common in real-world scenarios.
- **Scalability and Flexibility**: Adapt to the varying number of drones and remain resilient to drone failures or disconnections.
- **Environmental Adaptability:**: Account for weather conditions, lighting variations, and complex terrains.
- **Real-Time Performance**: Optimize for lightweight deployment on edge devices with high FPS.
### [Q7]:
To the best of our knowledge, we are the first to employ an end-to-end approach that combines multi-drone joint observations for trajectory prediction. Additionally, we provide a simulated multi-drone dataset to support this work, which can be used for various tasks and serves as a benchmark for future research in multi-drone collaboration. | null | null | null | null | null | null |
Reparameterization invariance in approximate Bayesian inference | Accept (spotlight) | Summary: The paper studies the effects of the invariance of Bayesian neural networks under reparametrization on their approximate posterior inference. The primary effect is the ambiguity on the uncertainty of the inferred function as it can be represented by multiple reparameterizations, each of which may be assigned a different uncertainty estimate. The paper takes Laplace approximation as a case and studies its properties when used for Bayesian neural net inference using linear algebraic and differential geometric tools. The paper derives an algorithm from its key theoretical outcomes and shows it to outperform its counterparts in various downstream uncertainty quantification tasks.
Strengths: * This is a very high-calibre paper, a complete piece of work, that studies an original and significant problem with impressive theoretical rigor.
* The presentation is extremely clear. Figures 1 and 2 are truly helpful to the reader to grasp the visual intuition behind the studied problem and the proposed solution.
* The compilation of theoretical tools such as the a parameter manifold on the Generalized Gauss-Newton (GGN) approximation and a diffusion defined on it are original, advanced, and elegant.
* The reported numerical results are both comprehensive and strongly in favor of the central claims.
Weaknesses: * Overall this is excellent piece of work with a mature write-up. Just its presentation may be improved tiny bit. For instance, certain terms are used very early on in the paper such as the introduction and Figure 1 in their meanings a bit unusual for the broad ML research readership, such as a "kernel" and "diffusion". While their allocation in the particular context is appropriate, reading experience may be improved if their meanings are made more explicit, at least verbally, in their first use.
Technical Quality: 4
Clarity: 3
Questions for Authors: * How does the argumentation in the first paragraph of Section 3 come together with the main message of the paper? Is the problem not instead that the posterior assigns separate masses to different weights that represent the same functional form? How exactly does this produce a pathology in the posterior besides the need to detect multiple equivalent representations and add their masses? Why do we actually need to design the related degrees of freedom? Overall I buy the argument and see the problem, but my point is that probably this bridge paragraph may be misleading the reader.
* What does f_lin^w precisely mean in Eq 4? Is it the first-order Taylor approximation of f?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The paper briefly discusses its limitations in the final paragraph of the Conclusion section. The potential negative societal impacts of the studied work are not discussed probably because they are thought to be inapplicable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the excellent review and we are very grateful for a thorough engagement with our work and for pinpointing its highlights.
We acknowledge your point (under Weaknesses) about providing a bit more explanation of unconventional terms such as kernel (that in machine learning often means something entirely different). We will take this feedback to heart and update the manuscript accordingly.
Regarding the opening paragraph of Sec. 3, then we acknowledge that, in particular, the phrasing “pathological representations” is too vague and opens the door to misinterpretations. We suggest to rephrase the sentence
“However, as we have argued, traditional approximate Bayesian inference does not correctly capture this, leading to pathological representations”
into
“However, as we have argued, traditional approximate Bayesian inference does not correctly capture this and assigns different probability measures to identical functions.”
We believe such a change clarifies the text, but are happy to take further feedback.
Yes, $f_{lin}^w$ is exactly the mentioned Taylor expansion. The equation is given in line 107 of the submitted manuscript, but it would have been better to already introduce this notation around line 64. We will make this change.
We hope this answers your questions and concerns, and are happy to discuss further. Once again, we are grateful for the favorable review that exactly captures our intention, and we hope that you will continue to fight for the paper.
---
Rebuttal Comment 1.1:
Title: Keep score
Comment: Thanks for the clarification. Rephrasing the sentence in the suggested way will indeed do the trick. While I do agree with the other reviewers that the method has an computational overhead and experiments may be improved, I still think that the technical contribution is solid and valuable and the other issues are rather secondary. Hence, I tend to keep my score for now. | Summary: The paper addresses the problem of maintaining invariance under reparameterization of Bayesian Neural Networks (BNNs). This issue undermines the reliability of BNNs since different parameterizations of identical functions produce different posterior densities. Through theoretical analysis and empirical validation, the authors demonstrate the effectiveness of their approach by extending a geometric view (in Linearized Laplace) and using a Riemannian diffusion process to achieve reparameterization invariance.
Strengths: * Proposed Riemannian diffusion process incorporates the properties of the linearized Laplace approximation into the original neural network prediction, resulting in a better posterior fit.
* The method leverages advanced mathematical tools from differential geometry, offering a sophisticated approach to maintaining reparameterization invariance.
* In both in-distribution and out-of-distribution scenarios, the proposed method consistently outperforms other approaches.
* The paper is well-organized and well-written, with each section building logically on its predecessor.
Weaknesses: * Although the proposed method is theoretically sound, its computational complexity is higher than simpler approximations. This could limit its practicality for very large-scale neural networks.
* Similar to previous works in Linearized Laplace, the need for new computational pipelines to handle the GGN matrix and its induced structure is acknowledged but not fully addressed in the paper.
* Although the experiments are comprehensive, they demonstrate results on standard datasets and relatively small to medium-sized neural networks.Testing the method on more diverse datasets and larger-scale models would strengthen the empirical validation.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Although mentioned in related work, I feel that this paper needs comparative experiments with methods such as Riemannian Laplace and (Bergman et al., 2024) and Connectivity Laplace (Kim et al., 2023).
* As mentioned in Weakness section, I would recommend the author to evaluate the Riemannian Diffusion in other domain dataset & architectures (e.g., Transformers).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are mentioned in "Weakness" and "Questions".
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review.
- “Although the proposed method is theoretically sound, its computational complexity is higher than simpler approximations. [...]”
While this is a valid comment we believe it stems from a slight misunderstanding of the main point that is being made with our experiments.
The main goal of this paper is to understand the uncertainties obtained from Laplace approximations and more generally to understand some of the problems encountered in Bayesian deep learning. More specifically decomposing the uncertainty into its two fundamental components and how they interact with the choice of the predictive (neural network or the linearized neural network). Our purpose is not to advance the state-of-the-art of Bayesian methods.
One way to think about the experiments is as follows:
Observation: Linearized Laplace underfits less (in-distribution) than Sampled Laplace.
Hypothesis: This difference is due to improper accounting of the reparameterization properties in the Sampled Laplace.
Theory: The theory presented in the paper provides mathematical justification for this hypothesis.
Experiment: We build a method that is as close to Sampled Laplace as possible while also accounting for the reparameterization properties of the neural network. This is the Laplace diffusion which turns out to not underfit in-distribution, hence supporting the extensive theory developed in the paper.
For this reason, we voluntarily spend the extra compute to get as close as possible to the theoretical ideal. Specifically, for a $P$ parameters network, computing $S$ samples by approximating the diffusion with $T$ discrete time steps and a low-rank $K$ approximation of the GGN requires $O(KP + SP)$ in memory and $O(K^2PST)$ in time, i.e. it is $ST$ times as expensive as linearized Laplace.
We could have made the method more scalable by fixing low values for $K$ or $T$. However, the resulting experiments can no longer validate the theory due to the brutal approximations. This is also why we only include comparison with baselines in the appendix omitting them from the main paper, where we only compare the different predictive functions.
Of course, we do not disregard the importance of building practical, performant, and scalable Bayesian methods. In future work, we intend to build on the here-presented theoretical insights to achieve this exact goal. However, understanding why Bayesian deep learning struggles (this paper) is a prerequisite. The main takeaway of this paper is not a particular method but a general principle: Think about reparameterization issues when building Bayesian methods.
- “Similar to previous works in Linearized Laplace, the need for new computational pipelines to handle the GGN matrix and its induced structure is acknowledged but not fully addressed in the paper.”
Indeed, our paper concludes with a brief discussion of this computational challenge. We intended to acknowledge the inherent difficulty and to draw more attention to this problem. That being said, the paper does contribute advances in this direction as our implementation uses efficient matrix-free access to the GGN using automatic differentiation combined with iterative solvers to obtain samples from the Laplace approximation. This is a significant improvement over existing implementations. We deemphasize these computational aspects of our work and only discuss them implicitly because our main focus is on the theoretical explanation.
- “Although the experiments are comprehensive, they demonstrate results on standard datasets and relatively small to medium-sized neural networks. Testing the method on more diverse datasets and larger-scale models would strengthen the empirical validation.”
We have chosen to replicate the key experiments from the Laplace Redux paper [1] which is an important reference for Laplace approximations. Their largest experiments are also on CIFAR-10 with ResNets so we don’t believe the scale of our experiments is too unreasonable. Furthermore, scaling to larger datasets and models necessarily requires making some fairly brutal approximations, by doing subnetwork inference, KFAC, diagonal approximations, etc. We are not at all opposed to this but hopefully, it is clear from the previous response why such approximations are inappropriate in the context of theory-validation.
- “Although mentioned in related work, I feel that this paper needs comparative experiments with methods such as Riemannian Laplace and (Bergman et al., 2024) and Connectivity Laplace (Kim et al., 2023).”
As mentioned above, our main experimental goal is to validate the theory. Further note that the public code for the first reference is orders of magnitude slower than our code and it is infeasible to run on as large models as we do. The public repository for the second reference has no documentation and no code for reproducing the experiments in their paper. Consequently, it would be highly non-trivial to run them on our benchmarks.
- “As mentioned in Weakness section, I would recommend the author to evaluate the Riemannian Diffusion in other domain dataset & architectures (e.g., Transformers).”
To our knowledge, it's not standard practice to evaluate Laplace approximations on transformers. We are only aware of [2] which evaluates Laplace approximations on transformers and they perform Laplace approximations only on the low-rank adaptation weights. Hopefully from the previous response, it is clear why such approximations are not appropriate for our experiments given our purpose.
We thank the reviewer for their efforts and we urge them to reconsider their assessment of the experiments in light of the additional context we have provided above.
[1] Daxberger, Erik, et al. "Laplace redux-effortless Bayesian deep learning." Advances in Neural Information Processing Systems 34 (2021)
[2] Yang, Adam X., et al. "Bayesian low-rank adaptation for large language models." arXiv:2308.13111 (2023)
---
Rebuttal 2:
Comment: Thanks for the detailed response from the authors.
The authors claim I misunderstood the paper, but I don't believe that's true. Linearized Laplace outperforms (sampled) Laplace, and there are already other theoretical approaches to the problem. (See Theorem 4.1 in [1])
In addition, since the experiment I requested is applied to a very simple transformer, I don't think it is impossible.
As a result, I will maintain the current score.
[1] Kim, SungYub, Kyungsu Kim, and Eunho Yang. "GEX: A flexible method for approximating influence via Geometric Ensemble." Advances in Neural Information Processing Systems 36 (2023).
---
Rebuttal Comment 2.1:
Comment: > The authors claim I misunderstood the paper, but I don't believe that's true. Linearized Laplace outperforms (sampled) Laplace, and there are already other theoretical approaches to the problem. (See Theorem 4.1 in [1])
This paper (already referenced by us) does not compare Linearized and Sampled Laplace. Further, note that they make no theoretical links between linearization and reparametrization invariance and they study the Hessian instead of the GGN.
> In addition, since the experiment I requested is applied to a very simple transformer, I don't think it is impossible.
It is unclear to us what the exact experiment you suggest. Previously we interpreted your review to mean a standard transformer in an NLP task. We reemphasize that this is not a usual benchmark for evaluating Bayesian methods especially Laplace Approximations where standard regression and image classification tasks are far more popular. Hence it is not obvious what priors are appropriate, what dataset should be used, and what metrics should be reported. These are important questions and we do not aim to settle them here; Tristan et al. discuss some of the pitfalls and difficulties of such experiments.
If the reviewer means a Vision Transformer (smaller version with only a few million parameters) on the standard image classification task, this is very much within reach. In fact, we even have code to run such experiments. However, doing so would require us to make some large approximations (limited Lanczos iterations and diffusion steps). The resulting experiment would, thus, not serve for theory validation, but we are happy to add them nonetheless if you see value therein.
Cinquin, Tristan, et al. "Pathologies in priors and inference for Bayesian transformers." arXiv preprint arXiv:2110.04020 (2021). (edited) | Summary: The authors provide theoretical analysis of the underfitting problem of Laplace approximation. Specifically, they show that the underfitting of Laplace approximation is due to the approximate posterior covariance is not invariant under reparameterization. Moreover, they propose a reparametrization invariant diffusion posterior to address undefitting. The method are evaluated over standard image classification benchmarks. The quality of the uncertainty for both in-distribution and out-of-distribution tasks are considered.
Strengths: 1. The paper provides, to my knowledge, the first theoretical justification of why Laplace approximation sufferes from underfitting: the approximate posterior covariance is not reparameterization invariance.
2. Given the theoretical analysis, the idea of reparametrization invariant posterior is natural and proves to be effective empirically.
3. The paper is easy to follow and well organized.
Weaknesses: 1. The proposed method, as mentioned by the authors as well, sufferes from more expensive computation. It would be good to report the time metric for readers to properly assess the practicability of the method.
2. The datasets considered are a bit outdated and the networks considred seem to be quite small, such that overall the performance is on the lower end (e.g. <90% acc. for CIFAR10). Recent BDL papers typically consider larger datasets (such as ImageNet) and deeper networks (e.g. [1]). Furthermore, last-layer Laplace seems to be competitive with or even outperform the proposed method in some experiments.
3. The method is tailored to improving Laplace approximation, and there is no discussion related to any other approximate inference techiques (e.g. variational inference).
References
[1] Antoran et al. Sampling-based inference for large linear models with application to linearised Laplace. ICLR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do not address potential negative social impact since the paper is predominantly theoretical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the excellent review, which we will reply to in parts.
- “The paper provides, to my knowledge, the first theoretical justification of why Laplace approximation sufferes from underfitting: the approximate posterior covariance is not reparameterization invariance.”
This is correct. Additionally, we are the first to provide a complete characterization of continuous reparameterizations of a neural network. Certain reparameterizations such as scale reparameterizations for ReLUs are already known but we provide a complete characterization of these in terms of the kernel manifold. Furthermore, to the best of our knowledge, we are the first to consider data-dependent reparameterizations rather than only global reparameterization (a subset of the former).
- “The proposed method, as mentioned by the authors as well, suffers from more expensive computation. It would be good to report the time metric for readers to properly assess the practicability of the method.”
For a $P$ parameters network, computing $S$ samples by approximating the diffusion with $T$ discrete time steps and a low-rank $K$ approximation of the GGN requires $O(KP + SP)$ in memory and $O(K^2PST)$ in time, i.e. it is $ST$ times as expensive as linearized Laplace.
However, we want to emphasize that our main goal here is to understand the uncertainties obtained from Laplace approximations and more generally to understand some of the problems encountered in Bayesian deep learning. More specifically decomposing the uncertainty into its two fundamental components and how they interact with the choice of the predictive (neural network or the linearized neural network). Our purpose is not to advance the state-of-the-art of Bayesian methods.
One way to think about the experiments is as follows:
Observation: Linearized Laplace underfits less (in-distribution) than Sampled Laplace.
Hypothesis: This difference is due to improper accounting of the reparameterization properties in the Sampled Laplace.
Theory: The theory presented in the paper provides mathematical justification for this hypothesis.
Experiment: In addition to the theoretical justification, we build a method that is as close to Sampled Laplace as possible while also accounting for the reparameterization properties of the neural network. This is the Laplace diffusion which turns out to not underfit in-distribution, hence supporting the extensive theory developed in the paper.
For this reason, we voluntarily pay the extra computational factor $T$ to get as close as possible to the theoretical ideal.
For example, we could have made the method more scalable by fixing low values for $K$ or $T$. While such approximations might be more practical and scalable, the resulting experiments can no longer validate the theory due to the brutal approximations. This is also why we only include comparison with baselines in the appendix omitting them from the main paper, where we only compare the different predictive functions.
Of course, we do not disregard the importance of building practical, performant, and scalable Bayesian methods. In future work, we intend to build on the here-presented theoretical insights to achieve this exact goal. However, understanding why Bayesian deep learning struggles (this paper) is a prerequisite. The main takeaway of this paper is not a particular method but a general principle: Think about reparameterization issues when building Bayesian methods.
Further discussion of these issues and time complexity is available in Appendix E.1.
- “The datasets considered are a bit outdated and the networks considred seem to be quite small [...]”
For the experiments, we have chosen to replicate the key experiments from the Laplace Redux paper [1] which is an important reference for Laplace approximations. Their largest experiments are also on CIFAR-10 with ResNets so we don’t believe the scale of our experiments is too unreasonable.
Secondly, we are certainly aware of the recent works that do Laplace approximation on larger models (see references in the paper). But they all invariably rely on fairly brutal approximations such as KFAC, last-layer, or LoRa. We are not at all opposed to this but hopefully, it is clear from the previous response why such approximations are inappropriate in the context of our experiments: these aim for theory-validation rather than the usual focus on performance and scalability.
- “The method is tailored to improving Laplace approximation, and there is no discussion related to any other approximate inference techiques (e.g. variational inference).”
While it’s true that our primary focus is on Laplace approximations, we believe that the tools developed here are more generally applicable. The theory of reparameterizations can find applicability in various topics such as continual learning and in studying the loss landscape. The issues considered here are also relevant to other Bayesian methods. It's not possible to explicate this fully but we have a brief discussion of this direction in appendix E.4. Another possible direction is Variational Inference with the diffusion posterior as a variational family. Secondly, we believe that the underfitting of Sampled Laplace is an important open problem in the community, which should not be understated. For example, this has also been discussed in a recent position paper [2]
We thank the reviewer for their efforts and we urge them to reconsider their assessment of the experiments in light of the additional context we have provided above.
[1] Daxberger, Erik, et al. "Laplace redux-effortless Bayesian deep learning." Advances in Neural Information Processing Systems 34 (2021): 20089-20103.
[2] Papamarkou, Theodore, et al. "Position paper: Bayesian deep learning in the age of large-scale AI." arXiv preprint arXiv:2402.00809 (2024)
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal, which addresses most of my concerns, and I decide to raise my score to 7. | Summary: This work questions why the linearized Laplace approximation can be effective for uncertainty estimation, whereas the vanilla Laplace approximation can lead to poor performance, such as underfitting. This work argues that this degradation is attributed to the fact that the random function derived from each parameterization of a DNN may not form an invariant function space.
Specifically, since the null space $ \text{ker} (\text{GGN}_{w}) $ is not empty for over-parameterized NNs, sampling the random function containing elements of null space could result in an inconsistent random function. This can lead to under-fitting issues as observed in Laplace approximation.
Therefore, the authors believe that sampling the random function on $ \text{im} (\text{GGN}_{w}) $, i.e., the linearized Laplace, could be more effective than the Laplace method.
Furthermore, for a given weight $w$, they explore how to sample the same function with $ f(w) $, i.e., $ f(g(w)) = f(w) $, by sampling the weight on a specific manifold $ g(w) $. To this end, they employ the concept of quotient space to elaborately define the manifold and diffusion to sample $ w $ from the manifold $ g(w) $.
Strengths: * This work reasonably reveals why the linearized Laplace approximation can be more effective than the standard Laplace approximation. This explanation appears to be novel.
* This work attempts to justify the above reasoning using a mathematical elaboration framework.
Weaknesses: * The lack of explanation of background knowledge, such as quotient space and Riemannian manifold, makes the paper difficult to understand.
* Although it appears effective compared to Laplace and Linearized Laplace, the performance improvement seems marginal when compared to other baselines such as SWAG and Last-layer Laplace.
Technical Quality: 3
Clarity: 2
Questions for Authors: * To confirm my understanding, does the Laplace diffusion denote that for a given $w$, the sample functions $ { w_t }$ are obtained by the update rule using $\text{GGN}^{+}$ as described in the SDE on the manifold?
* Does $\text{GGN}^{+}$ mean the positive definite matrix of $\text{GGN}$, which is obtained by applying SVD on $\text{GGN}$ and then using the eigenvectors with positive eigenvalues?
* Is the Laplace diffusion the post-hoc method, meaning that $w$ is first obtained by MAP inference, i.e., $w_{\text{MAP}} = R^{f}_{\mathcal{X}}(w)$, and then $w_t$ is obtained by Laplace diffusion with $\text{GGN}^{+}$?
* Compared to the performance of SWAG described in attached appendix, the performance of the Laplace diffusion does not seem significantly improved.
In this context, for a given $w$, is it important to sample the weight parameter according to the invariant Riemannian manifold?
* Rather, as considering that SWAG focuses on finding good neighborhood of $w$ in training procedure and obtains comparable performance, isn't it more important to focus on how to find good $w$ (for example, $ w_{\text{swa}} $ ) and explore the subspace of $w$? I am just curious about your opinion on this.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See above weaknesses and limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable review and questions, which will further improve the paper.
- “The lack of explanation of background knowledge, such as quotient space and Riemannian manifold, makes the paper difficult to understand.”
We acknowledge this issue, which is due to the space constraints of a conference paper. Because of the additional allowed page, we will be happy to extend the background section to cover the necessary knowledge in Riemannian geometry to benefit future readers.
- “Although it appears effective compared to Laplace and Linearized Laplace, the performance improvement seems marginal when compared to other baselines such as SWAG and Last-layer Laplace.”
While this is a valid comment we believe it stems from a slight misunderstanding of the main point of our experiments.
The main goal of this paper is to understand the uncertainties obtained from Laplace approximations and more generally to shed light on some of the problems encountered in Bayesian deep learning. More specifically decomposing the uncertainty into its two fundamental components and how they interact with the choice of the predictive (neural network or the linearized neural network). Our purpose is not to advance the state-of-the-art of Bayesian methods.
We include the baselines in the appendix and omit them from the main paper for precisely this reason. We want to validate the theory in realistic settings while staying as close as possible to the theoretical ideal (these choices are also discussed in Sec. 7 and Appendix E.1).
One way to think about the experiments is as follows:
Observation: Linearized Laplace underfits less (in-distribution) than Sampled Laplace.
Hypothesis: This difference is due to improper accounting of the reparameterization properties in the Sampled Laplace.
Theory: The theory presented in the paper provides mathematical justification for this hypothesis.
Experiment: In addition to the theoretical justification, we build a method that is as close to Sampled Laplace as possible while also accounting for the reparameterization properties of the neural network. This is the Laplace diffusion which turns out to not underfit in-distribution, hence supporting the extensive theory developed in the paper.
Of course, we do not disregard the importance of building practical, performant, and scalable Bayesian methods. In future work, we intend to build on the here-presented theoretical insights to achieve this exact goal. However, understanding why Bayesian deep learning struggles (this paper) is a prerequisite. The main takeaway of this paper is not a particular method but a general principle: Think about reparameterization issues when building Bayesian methods.
To confirm my understanding, does the Laplace diffusion denote that for a given 𝑤, the sample functions 𝑤𝑡 are obtained by the update rule using GGN+ as described in the SDE on the manifold?
Diffusion can be performed by simulating the SDE in the kernel manifold and the non-kernel manifold or both (using alternating steps, for example). In Figs. 2 and 7 we perform all 3 diffusions. For the larger experiments (Sec. 7), your description is correct. We only perform the diffusion in the non-kernel manifold. The reasons for this are discussed in Section 5.
- "Does GGN+ mean the positive definite matrix of GGN, which is obtained by applying SVD on GGN [...]?"
Essentially yes. However, we use a Lanczos decomposition of the GGN instead of SVD.
Is the Laplace diffusion the post-hoc method, meaning that 𝑤 is first obtained by MAP inference, i.e., 𝑤 MAP=𝑅𝑋𝑓(𝑤), and then 𝑤𝑡 is obtained by Laplace diffusion with GGN+?
Yes, it is a post-hoc method. As stated above we want to build a method as close to Sampled Laplace as possible which additionally is reparameterization invariant. Since Sampled Laplace is typically a post-hoc method it makes sense that Laplace diffusion should be as well.
- "Compared to the performance of SWAG described in attached appendix, the performance of the Laplace diffusion does not seem significantly improved. In this context, for a given 𝑤, is it important to sample the weight parameter according to the invariant Riemannian manifold? Rather, as considering that SWAG focuses on finding good neighborhood of 𝑤 in training procedure and obtains comparable performance, isn't it more important to focus on how to find good 𝑤(for example, 𝑤 swa) and explore the subspace of 𝑤? I am just curious about your opinion on this."
This is an interesting point of discussion, which we will approach in steps.
First, note that while the mode (i.e. the MAP) of the Gaussian approximate posterior is very important, the reparametrization issue we discuss is mainly related to the covariance. SWAG restricts the covariance to a subspace.
SWAG involves several heuristics that are hard to characterize theoretically, so here we consider a simplified version:
Fix a mode parameter $w_m$, then perform one SGD step to find a new parameter $w_1$. Repeat this (starting from $w_m$) several times.
Compute the empirical covariance of $w_1,w_2,..$ and use this for the Gaussian posterior approximation.
It’s important to clarify that the kernel directions (i.e. 0-eigenvalue directions of the GGN) depend on the dataset. We can, thus, consider both the full-dataset-kernel directions and the batch-kernel directions. Notably, the full-dataset-kernel is always contained in the batch-kernel.
The covariance obtained by simplified-SWAG is then guaranteed to be 0 along the full-dataset-kernel direction.
This shows that the simplified-SWAG approximate posterior is locally reparametrization invariant, thereby demonstrating how the developed theory can help us understand existing methods and eventually develop new ones.
We thank the reviewer for their efforts and we encourage them to reconsider their assessment of the experiments in light of the additional context we have provided above.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses to my questions. During the rebuttal period, I was able to resolve my concerns and gain a deeper understanding of the importance of the reparametrization invariance that this work aims to highlight. I am inclined to accept this work and will therefore maintain my score.
---
Rebuttal 2:
Comment: > During the rebuttal period, I was able to resolve my concerns and gain a deeper understanding of the importance of the reparametrization invariance that this work aims to highlight.
Thanks for the support. We very much agree that reparametrization invariance is important. When models grow in size (e.g. increasing depth), we spend a greater proportion of the parameters on reparametrizations. Ten years ago, models were sufficiently small that reparametrizations could practically be ignored, but today they are causing sufficiently many problems that we are forced to better understand them. We believe that reparametrizations are the reason why Bayesian approximations do not work well with contemporary models, even if they work for small models. This is why it's essential that we form an understanding of reparametrizations. | Rebuttal 1:
Rebuttal: We are grateful to the four reviewers who all argue in favor of acceptance.
We observe a general agreement that the developed theory is both novel and sheds significant light on the difficulties of Bayesian deep learning induced by reparametrizations. Some concerns have been raised about the experimental part of the paper. Specifically, there were concerns about the scalability and performance of Laplace diffusion. Here we want to clarify that the stated intent of the experiments is to support the developed theory rather than to develop a state-of-the-art and scalable method (despite Laplace diffusion’s favorable performance). For this reason, we do not optimize the method for performance but keep it as close as possible to Sampled Laplace and we also avoid approximations that might make it more scalable. We argue that the work presented provides the theoretical foundation opening up new insights that will allow for the development of new and more efficient methods that will be both computationally efficient and accurate. The main takeaway of this paper is not a particular method but a general principle: “ Think about reparameterization issues when building Bayesian methods”
We reply to the individual reviews below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HOPE: Shape Matching Via Aligning Different K-hop Neighbourhoods | Accept (poster) | Summary: The paper presents a new method for shape correspondence that extracts descriptors that are both smooth over the manifold and distinctive enough to find high precision point matching. One can view the presented method as an improvement over the 2D-GEM method, that, as opposed to the latter, does not use the eigenfunction of the laplacian and interacts with more the 2-hop neighborhood. It is extensively tested on various benchmarks in the field, near-isometric and non-isometric ones, and presented promising results.
Strengths: It presented a descriptor that shows improvement over previous ones. The method is well explained. The related work section is written well, and it adds an important context to the paper. Additionally it extensively tested and presented superior results in comparison to previous methods.
Weaknesses: The new method to some extent resembles the 2D-GEM method, which limits the novelty. As presented in the limitation, since it uses the vertex neighborhood it is vulnerable to remeshing, and it counters the statements in the paper about the method usefulness in real data scenarios (overclaiming).
Small issues: I think [1] should be referred to because of its similarly to the ideas presented in the paper, they also use alignment processing of the initial descriptors by reducing geodesic distance error.
[1] Bracha, A., et al., 2020. Shape correspondence by aligning scale-invariant LBO eigenfunctions. 3DOR: Eurographics Workshop on 3D Object Retrieval
Technical Quality: 3
Clarity: 4
Questions for Authors: see weaknesses
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful and constructive comments. We address some of the Reviewer’s comments and suggestions below:
# Weaknesses:
- W1: The new method to some extent resembles the 2D-GEM method, which limits the novelty. As presented in the limitation, since it uses the vertex neighborhood it is vulnerable to remeshing, and it counters the statements in the paper about the method usefulness in real data scenarios (overclaiming). Small issues: I think [1] should be referred to because of its similarly to the ideas presented in the paper, they also use alignment processing of the initial descriptors by reducing geodesic distance error.
[1] Bracha, A., et al., 2020. Shape correspondence by aligning scale-invariant LBO eigenfunctions. 3DOR: Eurographics Workshop on 3D Object Retrieval We thank the reviewer for pointing us to this insightful work.
- RW1: We will add this work to our related work section, since indeed [1] also proposed to address some concerns with the LBO basis for shape matching. Specifically, [1] proposed to solve the problem caused by rotation, reflections, sign flips and other symmetries in the LBO basis by first aligning the eigenfunctions (vectors) of the LBO of two nearly isometric shapes and then using these aligned eigen-vectors as the new basis for the shape matching problem. We also agree with the Reviewer that the limitation incurred by HOPE from using the mesh of the shape is a real one since in the real world most data is meshed independently of each other and so the meshes may not be correlated. We are trying at the moment to study this hard problem and considering how to lift this limitation in a near future work, and will welcome any suggestions from the Reviewer. However, to appease any other concerns from the Reviewer with regards to this, we recall with the Reviewer that though HOPE suffers from this limitation, our experiments in Section 5.6 specifically lines 305-310 as well as figure~5 both help to show that though HOPE does not drastically outperform other baselines on re-meshed shapes, it is still competitive by either performing comparatively or better. Moreover, we attached some more experiments on the SMAL_r (challenging dataset, see PDF attached) as well as some qualitative results on different datasets, which both show again the benefit of choosing HOPE over competitors such as 2D-GEM, DIR, or ZoomOut.
---
Rebuttal Comment 1.1:
Title: Nice paper.
Comment: The authors answered our equations, and present a nice methodology to overcome difficulties in shape matching. | Summary: This paper introduces HOPE, a method that leverages k-hop neighborhoods as pairwise descriptors to achieve both accuracy and smoothness in shape matching. This approach aims to overcome the limitations of existing methods that often struggle to balance these two crucial aspects. The k-hop neighborhood concept involves considering the vertices within a certain distance (k hops) from a given vertex, providing a local context that can be used to compute pairwise descriptors. The authors validate their approach using several benchmark datasets, including SCAPE, TOSCA, and TOPKIDS.
Strengths: 1. The paper is well-structured, with a clear organization.
2. The method section provides a theoretical argument result. The theoretical results demonstrate the validity of the method.
3. The authors provide a comparative explanation with the main baseline, 2D-GEM.
4. The method achieves good results in partial shape matching.
Weaknesses: 1. Novelty: The methods in the paper lack overall novelty, as the proposed approach involves minor optimizations and modifications based on existing work.
2. Parameter Sensitivity: Although not explicitly discussed in the provided sections, methods that involve iterative refinement and multiple descriptors often require careful tuning of parameters. The performance of HOPE may depend on the selection of parameters such as the number of hops (k) and the weights used in the LMD optimization. A thorough analysis of the method's sensitivity to different parameters would be beneficial, possibly including guidelines or heuristics for parameter selection.
3. Experimental Results: In the compared datasets, this method does not show a significant improvement over the baseline. The advantages of the technique are not evident from the provided qualitative and quantitative results. The paper mentions other works that address smoothness and accuracy but could benefit from a more detailed comparative analysis, showing direct performance metrics against the most recent and relevant methods.
4. Benchmark: There are relatively few results on non-isometric datasets in the comparison. It is suggested to supplement results from other datasets, such as SMAL_r [a], DT4D-H [b], or other data in SHREC07 [c].
[a] Deep Orientation-Aware Functional Maps: Tackling Symmetry Issues in Shape Matching.
[b] Smooth Non-Rigid Shape Matching via Effective Dirichlet Energy Optimization.
[c] http://partial.ge.imati.cnr.it.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. What is the impact of the initialization map quality on this method, and how does it compare in tolerance level with other baselines?
2. How much does the method improve over the baseline in specific quantitative results on the SHREC16 dataset? There are no qualitative results or specific quantitative results, including geodesic error.
3. There are no quantitative metrics provided to demonstrate the smoothness of the map.
4. What is the value of kmax set in the experiments, and is it consistently optimal across different datasets?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please refer to the Weaknesses and Questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the constructive and insightful comments. We will strive to the best of our ability to address all concerns in hopes that the Reviewer can please reconcider the reviews .
# Weaknesses:
- W1: Novelty: The methods in the paper lack overall novelty, as the proposed approach involves minor optimizations and modifications based on existing work.
- RW1: To the best of our knowledge, we are the first and only work that use different k-hop neighborhoods as pairwise descriptors to refine the map in shape matching.
- W2: Parameter Sensitivity: Although not explicitly discussed in the provided sections, methods that involve iterative refinement and multiple descriptors often require careful tuning of parameters. The performance of HOPE may depend on the selection of parameters such as the number of hops (k) and the weights used in the LMD optimization. A thorough analysis of the method's sensitivity to different parameters would be beneficial, possibly including guidelines or heuristics for parameter selection.
- RW2: We please refer the Reviewer to lines 163-164 and lines 284-286 where we mention that all our parameters are fixed for all datasets and to appendix B were we conduct the parameter sensitivity which show that HOPE is not sensitive to parameters chosen.
- W3: Experimental Results: In the compared datasets, this method does not show a significant improvement over the baseline. The advantages of the technique are not evident from the provided qualitative and quantitative results. The paper mentions other works that address smoothness and accuracy but could benefit from a more detailed comparative analysis, showing direct performance metrics against the most recent and relevant methods.
- RW3: We please recall with the Reviewer that as shown in Section 5.6 and in appendix B, HOPE performs significantly better than all baselines on 8 different datasets while using the same set of parameters with no need of tweaking.
- W4: Benchmark: There are relatively few results on non-isometric datasets in the comparison. It is suggested to supplement results from other datasets, such as SMAL_r [a], DT4D-H [b], or other data in SHREC07 [c]. [a] Deep Orientation-Aware Functional Maps: Tackling Symmetry Issues in Shape Matching. [b] Smooth Non-Rigid Shape Matching via Effective Dirichlet Energy Optimization. [c] http://partial.ge.imati.cnr.it.
- RW4: We will add and briefly expound the suggested works in the related work section. We conducted interclass matching on SMAL_r as suggested givin a total of 298 pairs averaged on the curve. Please see the results in the attached PDF. Moreover, we also believe that we have provided a wide range of experiments on several datasets to illustrate our point. We plead with the reviewer to look at the variety of datasets we used (up to 8 different datasets, 6 in the main paper in Section 5, and 2 more in Appendix D). So we plead with the reviewer to Review the scores again as we believe our work can indeed be very beneficial to the research community. Finally, for any other tests, we please refer the reviewer to our open-source code provided to test any custom datasets in mind (we will appreciate feedback on how our model performs on those as well).
# Questions:
- Q1: What is the impact of the initialization map quality on this method, and how does it compare in tolerance level with other baselines? We please refer the Reviewer to Appendices B, C where we discuss in detail with more experiments the parameter sensitivity, ablation studies, and different initializations. We equally please refer the reviewer to Section 5 and Appendix D where on all 6 datasets (in Section 5) and 2 datasets (in Appendix D) with the same parameters HOPE outperforms all other baselines.
- Q2: How much does the method improve over the baseline in specific quantitative results on the SHREC16 dataset? There are no qualitative results or specific quantitative results, including geodesic error.
- R2: For the Shrec16 dataset in Appendix D we provided the %correspondences with geodesic error curves on page 17. We apologize that it is on this page. We will move this to a better location.
- Q3: There are no quantitative metrics provided to demonstrate the smoothness of the map.
- RQ3: Though we did not use an explicit metric for the smoothness, as noted in lines 50-53, one can infer this by observing the rapid increase of the %Correspondence as it moves away from geodesic error 0 on the plots. We will welcome any smoothness metric suggested by the reviewer and add it to our work, since the ones we are familiar with such as conformal distortion, bijectivity, chamfer distance and so on all do not measure smoothness explicitly. Hence we resorted to observing how fast 100% Correspondence accuracy is reached as one moves away from geodesic error 0 closest one to the best of our knowledge.
- Q5: What is the value of kmax set in the experiments, and is it consistently optimal across different datasets? Kmax is fixed to 8 for all experiments in the paper.
- RQ5: We apologize to the Reviewer for not making this explicit in lines 284-286 (where we mention that all our parameters are fixed for all datasets). We will add this in the final version of the paper.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their thorough response and the additional experiments on SMAL_r. I carefully reviewed the authors' replies and the corresponding sections of the paper. However, I believe that the response does not address my main concerns.
First, I think the paper lacks novelty. The authors emphasize using different k-hop neighborhoods, but I believe this is merely an extension of previous work without introducing new insights. Additionally, the theoretical proof provided regarding k-hop is confusing.
Regarding the experiments, specifically Fig. 4, I do not believe the results significantly improved over the baseline. While there seems to be some improvement for remeshed meshes, I suggest discussing the reasons behind these results. Moreover, I do not observe a significant improvement over GEM in most cases in the qualitative examples chosen in both the main text and supplementary materials.
Another suggestion is to provide the mean geodesic error values along with the curves so that the improvement of the method can be directly compared. I believe the entire work requires significant improvements in both writing and innovation, and I do not think this work is ready for publication in NeurIPS 2024.
---
Rebuttal 2:
Title: Response 2 to Reviewer NAeg
Comment: *NOVELTY*
Please find below the novelty of HOPE (our work), especially when compared to GEM:
- First, our proposed HOPE is the first work that uses different K-hops neighborhoods as witnessed for iterative map refinement for shape matching. We justified this in Section 4.2 (supported by theorem 4.1 )
- Second, we showed in section 4.4 (using theorems 3.1 and 4.1) that GEM does not generalize to different datasets without a significant change in the algorithm. We showed that changing the parameters as they did in their paper is really changing the algorithm, and as such GEM is really two different algorithms from which we choose one depending on the parameters, because choosing large thresholding parameters as they do for non-isometric shapes, re-meshed shapes, and partial shapes deactivates the LMD. This is a huge disadvantage because generally in real life one does not know whether the shapes one is dealing with are isometric or not.
*THEORETICAL PROOF*
We would like the Reviewer to highlight which part of the proof of the usefulness of the k-hop neighborhood was unclear so that we can address this. Since the Reviewer previously did not mention anything about this, else we would have done our best to address this.
*EXPERIMENTAL PERFORMANCE*
With regards to the experiments, the only baseline that looks competitive is indeed GEM, but again:
- HOPE outperforms GEM in most datasets. GEM does not generalize. The only reason GEM even performs well in our experiments in Section 5 and the Appendix is because we significantly changed their algorithm by using different set of parameters for different datasets as they did in their paper (since choosing large thresholding parameters as they do for non-isometric shapes, re-meshed shapes, and partial shapes deactivates the LMD). We showed this in Section 4.4 where we used a toy example from the TOPKIDS dataset to prove that if we do not change the parameters, GEM will perform extremely poorly since GEM is really two algorithms from which using different sets of parameters will select one.
- So really HOPE generalizes to all Datasets while outperforming GEM significantly in the re-meshed (Figure 5) and partial shapes (Figure 9) as well as the SCAPE (Figure 4). Moreover, in addition to outperforming GEM, HOPE also outperforms all other baselines in all experiments (by a large margin in some cases).
- We discussed in Section 4 that the Reason HOPE can generalize are: (a) first that the K-hops are noise robust (thus addressing the re-meshed shapes and noises to some extent), and (b) secondly are more unique than other shape descriptors.
*REPORTING MEAN GEODESIC ERROR*
With regards to reporting the mean geodesic errors:
- In case of acceptance, just like the SMAL_r experiments we added, we can also add these to our Appendix as suggested. | Summary: This submission proposes a descriptor utilizing k-hop neighborhoods for non-rigid 3D shape matching. The descriptor is used jointly with local map distortion for map refinement.
Overall, I am highly frustrated during reviewing this submission. While I have tried my best to parse and understand the technical details, the confusing formulations and derivations, together with missing definitions have prevented me from doing so. Overall, I would encourage the authors to revise carefully the submission, so that it is readable.
Strengths: To be honest, I have trouble reading out the paper, therefore I can hardly conclude any strength for real.
Weaknesses: The paper lacks readability, especially on the technical part. An incomplete list of frustrations I have encountered is as follows:
1. The presentation is poor. Starting from Eqn.(1), I have got lost. There has never been an explicit formula or at least a clear reference to the prior works for W_M(T^t, T^{t-1}). Similarly Q_M, Q_N are not defined. I have tried my best to parse the statements, but have to give up in the end.
2. Theorem 3.1 and its proof are also confusing. First of all, Q_M and Q_N in proof 3.1 are *not* mentioned in the theorem. Second, functional maps by its definition is defined regarding the basis, instead of some arbitrary vertex-wise descriptor. However, I did not anything around L95 saying Qs are eigenbasis (or any kind of basis regarding C).
3. In Eqn.(6), the definition of A_M, k(T, T^{k-1}) is again undefined.
4. The proof of Theorem 4.1 is pointless. Please use the standard mathematical derivation, rather than piling up a series of unverified statements.
Technical Quality: 1
Clarity: 1
Questions for Authors: 1. I would disagree with the argument in L27. Shape matching methods would rarely aim for solely smoothness without accuracy -- the latter is the most important, while the former can serve as good regularization.
2. Should niegh(i) in Eqn.(4) be neigh(i)?
3. In Fig.4, I can not see the curve regarding HOPE. It would be useful to report geodesic error in the legend.
4. If I understood correctly, the appendix should be behind checklist.
Confidence: 5
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: It looks fine to me.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the clarification seeking comments. We sincerely apologize to the reviewer for any confusion or misunderstanding. We will strive to the best of our ability to address all concerns. However, considering the good scores of other Reviewers, we strongly plead with the Reviewer to please reconsider the reviews and to please ask us any further questions that may help in this regard:
# Weaknesses:
- W1: The presentation is poor. Starting from Eqn.(1), I have got lost. There has never been an explicit formula or at least a clear reference to the prior works for W_M(T^t, T^{t-1}). Similarly Q_M, Q_N are not defined. I have tried my best to parse the statements, but have to give up in the end. W_M(T^t, T^{t-1}), Q_M, and Q_N are all defined in the paragraph below equations 1 and 2. See lines 64-76.
- RW1: Equations 1 and 2 are general formulations of the iterative refinement strategy of methods such as Kernel Matching[1], 2D-GEM[2] and GRAMPA[3], and others. For example the minimization in equation 1 is equivalent to the maximization argmax_{T^t} Tr(W_M(T^t, T^{t-1}) W_N), where T^t is the map we are trying to obtain at iteration t and the map T^{t-1} is the map that was previously obtained in iteration t-1 (which is done by all 3 papers namely; Kernel Matching , 2D-GEM and GRAMPA for example).
- W2: Theorem 3.1 and its proof are also confusing. First of all, Q_M and Q_N in proof 3.1 are not mentioned in the theorem. Second, functional maps by its definition is defined regarding the basis, instead of some arbitrary vertex-wise descriptor. However, I did not anything around L95 saying Qs are eigenbasis (or any kind of basis regarding C).
- RW2: First, we want to agree with the reviewer here that a functional map can be computed in any basis and not only in the eigen basis (see section~4 of the original functional map paper). As such, given any basis (e.g., first k eigen-vectors i.e., a truncated basis, or any other shape basis which may also be used as descriptors), a functional map can be computed. So theorem 3.1 will still hold. For example, as mentioned in this theorem, if Q_M and Q_N are singular vectors (eigenvectors) of some descriptor and act as a basis, then the first singular vectors will act as soft clusters for the points in that basis (figure 1), as such the iterative functional map refinement will be matching soft clusters in that basis which is what Theorem 3.1 says. However, we thank the Reviewer for mentioning this to us, it is a typo and we apologize. This should really be the Us mentioned in the introduction of Theorem 3.1 rather than the Qs used in the proof. We will fix this typo in the final version should our paper be accepted as this in no ways changes theorem.
- W3: In Eqn.(6), the definition of A_M, k(T, T^{k-1}) is again undefined.
- RW3: We please refer the reviewer to lines 167-172 and 173-178 where we defined A_M, A_N, A_{M,k} and A_{N,k }.The (T, T^{k-1}) in A_{M,k}(T, T^{k-1}) is trying to recover the new map T^t that aligns the columns given the previous map T^t-1 that aligned the rows. We apologize that since we already mentioned in lines 64-70 we did not see the need to repeat thjs. We will edit to make it explicit here in equation 6 too then.
- W4: The proof of Theorem 4.1 is pointless. Please use the standard mathematical derivation, rather than piling up a series of unverified statements.
- RW4: We are sorry to disagree with the Reviewer’s remarks that these statements are unverified. All these statements are mathematically sound and are verifiable to the best of our knowledge. Else, we would never make them for a conference such as Neurips. Statement 1 gives a sound interpretation of the rows of the neighborhood matrices A_{M,k} and A_{N,k} as descriptors of the nodes (neighborhoods as descriptors). Statement 2, states mathematically the role of T^t-1 in A_{M,k}( :, T^t-1 ), that is to align its columns (that is reorder the neighborhood/k-hop descriptor of each node). Statement 3 gives the mathematically verifiable statement that each entry K(i,j) in the product K=A_{M,k}(:,Tt−1)A_{N,k}, will be the number of vertices that are common in the k-hop neighborhoods of vertices i and j, after the alignment of the k-hop neighborhoods as mentioned in statement 2. Thus Statement 4 logically concludes that finding T via equation 6 is equivalent to seeking to match all the vertices i, and j whose k-hop neighborhoods have most vertices in common based on the alignment T^t−1.
# Questions:
- Q1: I would disagree with the argument in L27. Shape matching methods would rarely aim for solely smoothness without accuracy -- the latter is the most important, while the former can serve as good regularization.
- RQ1: We refer the Reviewer to our definition of accuracy as the %Correspondences as geodesic error 0. To the best of our knowledge, besides 2D-GEM, Dual Iterative Refinement and GRAMPA, most previous works do not aim for 100% Correspondences at Geodesic error 0 but rather to have this 100% Correspondences at a reasonable geodesic error as seen by even the low % correspondence curves at geodesic error 0 of most of these works. We please refer the Reviewer to 2D-GEM[1] for example who discussed this point extensively.
- Q2: Should niegh(i) in Eqn.(4) be neigh(i)?
- RQ2: We apologize for this typing error (typo). We will correct this.
- Q3: In Fig.4, I can not see the curve regarding HOPE. It would be useful to report geodesic error in the legend.
- RQ3: This is because HOPE is overlaid by 2D-GEM (zooming in one can see the pink curve of HOPE). We refer the reviewer to lines 297-302 which states “HOPE… achieves 92.54% accuracy at geodesic error 0 on the TOSCA dataset…”. Please see a modified figure in the attached PDF
- Q4: If I understood correctly, the appendix should be behind checklist.
- RQ4: We please refer the reviewer to the Neurips paper Checklist requirement at the link https://neurips.cc/public/guides/PaperChecklist.
---
Rebuttal Comment 1.1:
Title: Let us start with formulation
Comment: Still, I do not see a self-contained formulation/explanation regarding Eqn.(1). Could you please kindly refer to a specific formulation in **any** of the referred papers?
I got the idea that it is an iterative refinement, it is the undefined notations in Eqn.(1) that makes me confusing and frustrating...
---
Rebuttal Comment 1.2:
Title: On Theorem 4.1
Comment: I have to admit that I have no clue how the proof is going on, as I have trouble to understand the formulation from the beginning.
The obvious observation is that the objective of Eqn.(6) is entirely absent from your proof. Where is the Trace going?
---
Reply to Comment 1.2.1:
Title: Response to Reviewer E6Vo on "On Theorem 4.1"
Comment: Again, we thank the Reviewer for seeking more clarity from us. In hope to change the Reviewer’s comment we answer as follows: Equation 6 is the well-known Linear assignment problem where we try to find the permutation matrix T^t which maximizes the assignment of rows to columns (trace maximization). This assignment of rows to columns of a cost matrix where the rows are vetices of shape 1 and the columns are vertices of shape 2 is shape matching. And so our theorem 4.1 shows that trying to do shape matching where our cost matrix is the product of corresponding k-hop neighborhoods (given an initial neighborhood aligning map T^t-1) is the same as finding the map T^t that matches nodes with the most nodes in common in their k-hop neighborhood.
---
Rebuttal 2:
Title: On the definition of accuracy
Comment: A quick question: how does your definition of accuracy (i.e., %Correspondences as geodesic error 0) make sense at all? For instance, I have a map with 50% points at error 0 + 50% points at error 0.1, and a map with 50% points at error 0 + 50% points at error 1.0, the two are equally good under your definition, is that sensible?
Beyond that, how can you set the percentage of perfect matching as an optimization goal? It is discrete...
---
Rebuttal Comment 2.1:
Title: Response to Reviewer E6Vo on "On the definition of accuracy"
Comment: We once again thank the Reviewer for this question. We point out that the Reviewer’s scenario has to do with smoothness. Accuracy as from the definition we adopted from the Neurips paper 2D-GEM has to do with matching at geodesic error 0 strictly, while smoothness has to do with matching as we deviate slightly away from geodesic error 0 (i.e., conservation of neighborhoods). We showed that aiming for accuracy while using k-hop neighborhoods (our HOPE) will not only account for accuracy, but also smoothness in our experiments (hence addresses your scenario). For example, even when shapes do not have consistent triangulations, we showed that HOPE outperforms baselines at matching away from geodesic error 0. This can be seen in the experiments on the re-meshed shape datasets SCAPE_r, FAUST_r, TOSCA_r , and on partial shapes in the paper’s appendix (and the experiment on the non-isometric SMAL_r in our Rebuttal PDF). In all these our proposed HOPE performs comparatively better than other baselines. An explanation on why using k-hop neighborhoods to enforce accuracy also enforces smoothness has to do with the definition of smoothness itself (which in a simplified way can be defined as a preservation of neighborhoods, a fact known also in the field of graph neural networks).
---
Rebuttal 3:
Title: Response to Reviewer E6Vo on "Let us start with formulation"
Comment: We thank the Reviewer for being willing to ask us for more clarity. In hope of changing the Reviewer’s scores we respond as follows. For equation (1) of our paper we refer the Reviewer to equations 13, 14, 15 and 16 of the CVPR paper “Efficient Deformable Shape Correspondence via Kernel Matching”. Or the ICLR paper. " DEEP GRAPH MATCHING CONSENSUS" in equation 6. Moreover, equation (1) can still be obtained from papers such as the Neurips paper 2D-GEM (“Exact Shape Correspondence via 2D Graph Convolution”), by explicitly writing out their Cp in algorithm 1 using their definition of Cp in equation 10 and Zl in equation 9. This formulation is true for all iterative refinement algorithms that use pairwise descriptors.
---
Rebuttal 4:
Title: Could you please respond with mathematical formulations?
Comment: I have read all responses regarding formulation, accuracy, and Theorem 4.1. And I am still confused, could you please provide answers with self-contained, well-defined derivation to my questions, instead of a pile of sentences?
For instance, in kernel matching paper, everything is expressed as clear matrix operation, while in your case, you say something like this:
'W_M(T^t, T^{t−1}) \in R^{n×n} is the pair-wise descriptors of vertices for shape M with its rows and columns aligned using the map T^t and the previous iterations map T^{t−1}'
I have not doubted the correctness of the formulation. The problem is that the above definition is wordy and confusing. How is W_M computed/defined explicitly?
Similarly, I can imagine there is a link between the trace and assignment problem. However, your response again is descriptive. A much better answer would be to explicitly draw connection between such with **equations**.
On accuracy, the impression I got is that the accuracy you focused on is ultimately not different from the general accuracy (i.e., mean geodesic error), as shown in Fig. 4 and 5. Also an obvious fact is that defining accuracy as percentage of perfect matching is not optimizable in practice -- as the ground-truth annotation is not always available, in fact, most of the time.
Though I would be happy to discuss and open to be corrected, I would expect efficient, precise communications rather than reading *hand-wavy arguments* and deriving *claimed equivalent forms* myself.
---
Rebuttal Comment 4.1:
Title: Response to Reviewer E6Vo on "Could you please respond with mathematical formulations?"
Comment: We apologize to the Reviewer if our answers to the Reviewer’s concerns were not clear. With hope to change the Reviewer’s scores, please find attached responses in bullets below with as much formulations as possible. We hope they will help with more clarification:
- W_N \in T^{n * n} is a matrix of pairwise descriptor for vertices n of shape N. It can be geodesic distances or any other pairwise descriptors (In our case we use k-hop neighborhoods as seen in lines 166-202 of our paper)
- T^t and T^{t-1} are the current map and the previous map we want to recover and are \inR^{1*n}. For example, if we have two shapes M and N with 4 vertices each and the map T^{t-1} = [ 2, 1,0, 3], then this means vertices 0,1,2,3 in shape N correspond to vertices 2,1,0,3 (respectively) in shape M according to this map.
- The Trace in the Linear Assignment Problem (LAP): The LAP involves finding a permutation (P) of the columns of a cost matrix ( C ) such that the sum of the corresponding diagonal elements (the trace of the permuted matrix) is minimized or maximized. It is formulated as argmax_P Trace(PC). In our case our cost matrix is C = W_N(:, T^{t-1})*W_M = W_N*P’W_M (where P’ = mat(T^{t-1}) i.e., the matrix form of T^{t-1}). And our LAP is argmax_P Trace(PC) where P = mat(T^{t}) i.e., the matrix form of T^{t}).
- For accuracy, indeed directly enforcing this constraint in the optimization problem is impossible as the Reviewer rightly said because the ground truth map is not available. As such the accuracy constraint is usually enforced by trying to use accuracy enforcing pairwise or point-wise descriptors as we discussed in Section 3 of the paper.
---
Rebuttal 5:
Title: Some final remarks
Comment: For the sake of fairness, I have re-examined this submission with the clarification made by the authors. Below are my new questions:
1. On Theorem 3.1:
Let alone the confusing purpose of this theorem in the **related work** section, I am confused by the theorem description and the proof. Following the authors’ rebuttal, W in Eqn.(1) denotes pair-wise descriptor such as geodesics, Q in Eqn.(2) denotes vertex-wise descriptor such as SHOT, HKS.
The claim of theorem 3.1 is “Using **W** for the map refinement via Functional maps helps group nearby clusters together assuming the functional map is perfectly accurate. “. On the other hand, in the proof, **Q** is used throughout. To the best of my knowledge, these two should be assumed to be independent (except for cases where Q is derived from W), right?
2. On Theorem 4.1:
In general, a serious theoretical analysis for map refinement method should include at least two perspectives: 1) the proof of convergence; 2) the proof of certain property. For example, Zoomout proves that the (ideal) optimal solution is an isometric map between two shapes. Huang and Guibas [a] prove that their algorithm can recover exact maps under certain noise model (with respect to initial maps).
Regarding this submission, I am not sure what is the theoretical insight about Theorem 4.1: The claim is about agreement with T^{t-1}, which can be traced back to initialization. As I mentioned above, the expected theoretical analysis should be something like the algorithm can converge to good solution even if initialization is not perfect. To this end, arguing some property aligning with previous iterations seem point-less to me.
[a] Consistent Shape Maps via Semidefinite Programming, Q. Huang and L. Guibas, SGP, 2013.
3. On connectivity
The submission claims superior performance in TOPKIDS. In Fig.2, the error plot is nearly perfect, however the qualitative result seems to be worse than Zoomout. Also, if the method depends on adjacency, then how can it be robust with respect to topological noises, i.e., there are a lot of short cuts in the regarding meshes?
4. On experimental results
The performance gap between SCAPE and SCAPE_r (Fig.4(b) vs. Fig.5(b)) seems to suggest the proposed method to some extent depends on the unified triangulation prior, which is not promising as the shape matching community has long passed the point caring about matching shapes with uniform triangulation.
To be honest, I can go on with the question list, but my time could have been contributed to something more meaningful. I have tried my best to understand and evaluate this submission, but can hardly find any evidence for changing my initial rating.
---
Rebuttal 6:
Title: Responses to "Some final remarks" by Reviewer E6Vo
Comment: Dear Reviewer, we are grateful for the time spent to Review our work. We address the concerns and misunderstandings raised in the final remarks below:
**On Theorem 3.1:**
In our response to the Reviewer’s first Reviews we addressed this by pointing out to the Reviewer that we had made a typo here in the proof of the theorem by using Q instead of U as in the definition of the theorem. But this does not change the proof nor the conclusion of the theorem. This theorem proves that all iterative map refinement methods that use truncated basis (such as first k-eigen vectors or singular vectors of some shape descriptors whether pairwise or pointwise) will essentially try to refine the map by aligning clusters of points where with clusters being defined by this truncated basis (since it is well known that the truncated SVD or eigen decomposition are soft clusters). Thus, all these methods will fail at accuracy since it will be matching clusters rather than points. While to some extent in many cases maintaining smoothness depending on which descriptor the truncated basis was obtained from.
**On Theorem 4.1:**
Regarding the proof that the k-hop algorithm will recover the ground truth map under certain conditions, we refer the Reviewer again to Section 4.2, particularly lines 167-175 where we cited [26] and [48] which both give riguouruous and lengthy proofs for using 1 hop or k hop adjacencies for refinement. We then gave Theorem 4.1 that shows using k= 1, 2,3, …, hops for is same as using different hop neighbors as witnesses for the map refinement. As such, we propose to use different hop neighborhoods iteratively which will serve to enforce both local and global consistency in the map (See Section 4.2-4.4).
**On connectivity**
We do not only claim superior performance to ZoomOut but prove this experimentally on most if not all datasets by a hudge margin in some cases. The curves show this as pointed out by the Reviewer. For the Qualitative analysis. We refer the Reviewer to figure 4 in our attached PDF as response to all Reviewers. It clearly shows that ZoomOut fails to be accurate or even be continuous.
**On experimental results**
We refer the Reviewer to the variety of datasets used again and point out that on all these datasets including the ones where the triangulations are not consistent, HOPE (our work) outperforms all baselines in some cases by a significant margin.
We admitted that HOPE is limited because it relies on the triangulations (in our limitation Section 6). Nonetheless, we showed that even on non-correlated meshes HOPE still outperforms other baselines.
**Additional Remarks**
Given all these, we really believe our work brings a significant contribution to the shape matching community and welcome any further questions from the Reviewer. Moreover, we provided the Source code for HOPE as supplementary material which the Reviewer may use to test other custom meshes (and feel free to give us feedback). | Summary: The paper focuses on the shape-matching tasks and proposes a shape-matching refinement technique based on the K-hop neighborhood descriptor and local map distortions. The experiments show improved results compared to existing approaches in isometric and non-isometric shape registration.
Strengths: - The paper presents the background, theory, and methodology very well.
- The methodology is simple and sound and can positively impact shape-matching refinement.
- The performance, runtime, and generalizations are promising.
Weaknesses: - While there is existing work on deep and modern feature extraction, the experiments in this paper are limited to the SHOT descriptor. Incorporating other classical or learned signatures could demonstrate broader generalizable performance. Additionally, evaluating the impact of the quality of initial matches could provide further insights into the robustness of the proposed method.
- The paper is motivated by the observation that existing approaches often prioritize smoothness over accuracy. Although this paper emphasizes improving accuracy, it raises the question of whether smoothness might be compromised as a result. Detailed analysis and results are needed to assess the balance between these two aspects.
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper suggests using geodesic error as a loss function to improve smoothness and accuracy. Given that other methods also utilize geodesic error, what aspects of the HOPE approach allow it to leverage geodesic error more effectively and potentially yield superior results? How does this approach manage the balance between local and global geometric consistency?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The method might face challenges in handling topological variations between shapes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for the insightful and constructive comments. We address some of the Reviewer’s comments and suggestions below:
# Weaknesses:
- W1: While there is existing work on deep and modern feature extraction, the experiments in this paper are limited to the SHOT descriptor. Incorporating other classical or learned signatures could demonstrate broader generalizable performance. Additionally, evaluating the impact of the quality of initial matches could provide further insights into the robustness of the proposed method.
- RW1: We refer the reviewer to Appendix C where we equally use the HKS and WKS as initializations. We added more qualitative results in the attached PDF to the global response and will add them to the appendix as well.
- W2: The paper is motivated by the observation that existing approaches often prioritize smoothness over accuracy. Although this paper emphasizes improving accuracy, it raises the question of whether smoothness might be compromised as a result. Detailed analysis and results are needed to assess the balance between these two aspects.
- RW2: Mathematically, an accurate map will equally be smooth (in case the map recovered is same as the ground truth for example). But as the Reviewer points out, since most models may not perfectly recover this ground truth map, there will be a tradeoff between aiming for smoothness and accuracy as discussed in Section 3 which shows the difficulty to maintain smoothness while aiming for accuracy by methods such as such SHOT, since it does not strictly enforce smoothness. A more precise and general response will be that the level of smoothness of the final map obtained by an algorithm (aiming for accuracy) will depend on how it enforces this smoothness constraint as while aiming for accuracy. For example, in the case of HOPE, one can observe that enforcing local and global neighborhood consistency in order to aim for accuracy also enforces the smoothness constraint since points will be matched if their local and global neighborhoods are similar. We will add this discussion in the Appendix.
# Questions:
- Q1: The paper suggests using geodesic error as a loss function to improve smoothness and accuracy. Given that other methods also utilize geodesic error, what aspects of the HOPE approach allow it to leverage geodesic error more effectively and potentially yield superior results? How does this approach manage the balance between local and global geometric consistency?
- RQ1: We used the geodesic error as a measure of accuracy (via measuring the %Correspondences at geodesic error 0) and smoothness (by seeing how rapidly the %Correspondence increase as we move away from geodesic error 0). In order to achieve this accuracy we employed the concept of neighborhoods to refine an initialized map. We used both local (lower hops) neighborhoods and global (higher hops) neighborhoods to match points. When the local and global neighborhoods of points were similar (based on some initial map) we matched them together. The reviewer indeed raise an interesting future research question on how to balance the importance of local consistency with that of global consistency. We briefly discussed this in Section 3 where we highlighted the reasons on why GRAMPA failed in matching isometric shapes where the local neighborhoods of the meshes are similar. It has been studied in graph isomorphism (with models such as the Wesfeiler Lehman), and even in the field of Graph Neural Networks (in papers such as “How powerful are graph neural Networks” or “Topological graph Neural Networks”) that local neighborhoods being similar does not necessarily imply a match. For example, if in the neighborhood of node A, two of its neighbors are white and two are black, and in the neighborhood if node B the same class distribution is observed, even if the connectivity in these neighborhoods are the same, it does not necessarily mean that these two nodes are the same as their 2 hop neighbors may be significantly different. We will add discussion on this in the main paper or the appendix as well.
# Limitations:
- L1: The method might face challenges in handling topological variations between shapes.
- RL1: Indeed the main limitation of our work is that we rely on the mesh connectivity as mentioned in Section~6
---
Rebuttal Comment 1.1:
Title: Responce to rebuttal
Comment: Dear authors,
Thank you for submitting your rebuttal and addressing the concerns raised. I have reviewed the rebuttal and found that my questions have been addressed.
Best regards,
dry5 | Rebuttal 1:
Rebuttal: # Comment To All Reviewers
- First, we thank the Reviewer for their comments and please direct each Reviewer to specific responses to each as well as to the general responses here and the attached PDF.
- We added experiments on intra-class matching on the SMAL_r dataset as suggested by Reviewer NAeg. We hope it convinces the Reviewer about the quality of our work, as this experiment further demonstrates the choice of HOPE over existing baselines. We used the same parameters as on other datasets in the main paper since one Key benefit of HOPE is that we do not need any parameter tweaking. For this experiment, we match 298 intr-class shape pairs and report he average geodesic curves of HOPE, 2D-GEM, DIR, and ZoomOut.
- We added some Qualitative visuals on the maps from different baselines as suggested by Reviewer dry5. However, due to space we could not add on other datasets besides the three chosen. We will add more to the main paper in the appendix in case of acceptance
- We will equally add all related works suggested by all Reviewers to the main paper in case of acceptance.
- We apologize for any miss understanding with Reviewer E6Vo and plead for a look at our rebuttal as well as ask us any further questions as we really hope to address all concerns and convince the Reviewer about the quality of our work.
Pdf: /pdf/45f8ad11a3ced24d99299475706666dc4d7358ad.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Classification via Reference Distribution Learning: Theory and Practice | Accept (poster) | Summary: Instead of compressing node embedding matrix into a graph-level vector, this paper proposes a reference distribution learning method GRDL especially designed for graph classification task. GRDL achieves graph classification by measuring the dissimilarity between distributions of the input graph and the references. Theoretical analysis of the generalization error bound offers guidance of hyperparameter tuning.
Strengths: - This paper is well-motivated to avoid information loss caused by node embedding downsampling.
- The idea of considering graph classification as a distribution comparison problem is interesting.
- The authors have solid mathematical skills.
Weaknesses: - The idea is borrowed from domain transfer learning but lacks adaption to graph classification task. In graph classification task, graphs in each dataset “are drawn i.i.d” as the authors mentioned at line 205. This means that although graphs have different classes, they belong to an identical distribution, which is contradictory to the hypothesis of this paper that different classes belong to different reference distributions.
- In my opinion, this paper didn’t avoid information compression compared to mean pooling and max pooling. MMD unifies the source and target inputs as a vertex in the same RKHS and then calculate the distance between them. I think the mapping from embedding matrix into a vertex can be regarded as information compression, and the calculation of mean discrepancy is similar to mean pooling.
- The time complexity is too high, the datasets used for large-scale experiments are unreasonable, and the competitors used in efficiency experiment is unreasonable. The time complexity is $O(N^2)$, which is the same as the highly criticised time-consuming node clustering pooling methods (such as DiffPool[1]), while node dropping methods (such as SAGP[2]) only requires a time complexity of $O(E)$. The complexity restricts its scalability to larger-scale graphs. Three so-called large-scale datasets are not really large-scale. They are only large in the number of graphs but not in the number of nodes in each single graph. The authors should add experiments on synthetic large-scale datasets. Besides, in time cost comparison, the authors didn’t choose competitors with SOTA efficiency but used two time consuming methods, which is quite unreasonable.
- The theoretical contribution of generalization error bound is limited. Firstly, the result offers fuzzy insights into the choose of hyperparameters by giving “moderate-size message passing GIN”, and “moderate-size references”. How to define “moderate”? At line 240, the authors analysis that “a network with a smaller $L4 and $r4 may guarantee a tighter bound on the population risk compared to a larger one. Therefore, a promising strategy is to use a moderate-size message passing GNN”. What’s the causality between “smaller” and “moderate-size”? That’s quite confusing. Experimental results in Appendix show that hyperparameters are still choosed on trial and error. Besides, the generalization ability comparison with GIN with mean readout is meaningless because the competitor is not the SOTA.
- Important baselines are missing in the experiments. Please add comparison results with recent graph pooling methods such as ASAP[3], MinCutPool[4], StructPool[5], MuchPool[6], TAP[7], Wit-TopoPool[8], and MSGNN[9].
References
[1]Ying Z, You J, Morris C, et al. Hierarchical graph representation learning with differentiable pooling[J]. Advances in neural information processing systems, 2018, 31.
[2]Lee J, Lee I, Kang J. Self-attention graph pooling[C]//International conference on machine learning. PMLR, 2019: 3734-3743.
[3]Ranjan E, Sanyal S, Talukdar P. Asap: Adaptive structure aware pooling for learning hierarchical graph representations[C]//Proceedings of the AAAI conference on artificial intelligence. 2020, 34(04): 5470-5477.
[4]Bianchi F M, Grattarola D, Alippi C. Spectral clustering with graph neural networks for graph pooling[C]//International conference on machine learning. PMLR, 2020: 874-883.
[5]Yuan H, Ji S. Structpool: Structured graph pooling via conditional random fields[C]//Proceedings of the 8th International Conference on Learning Representations. 2020.
[6]Du J, Wang S, Miao H, et al. Multi-Channel Pooling Graph Neural Networks[C]//IJCAI. 2021: 1442-1448.
[7]Gao H, Liu Y, Ji S. Topology-aware graph pooling networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(12): 4512-4518.
[8]Chen Y, Gel Y R. Topological pooling on graphs[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2023, 37(6): 7096-7103.
[9]Lv Y, Tian Z, Xie Z, et al. Multi-scale Graph Pooling Approach with Adaptive Key Subgraph for Graph Representations[C]//Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. 2023: 1736-1745.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1:**
Thanks for your comment. This is a misunderstanding. Almost all papers studying the theory of classification use the i.i.d assumption. The identical distribution does **NOT** contradict the multiple classes scenario. Here the identical data distribution $\mathcal{D}$ is actually a composition of $K$ sub-distributions, where each sub-distribution (related to one reference distribution in our method) corresponds to a class. An intuitive example is the Gaussian mixture models: data distribution $\mathcal{D}$ is composed of $K$ Gaussians, i.e., $p_{\mathcal{D}}\left(\mathbf{x}\right)=\sum_{k=1}^K p\left(\mathbf{x} \mid \mathbf{z}\right) p(\mathbf{z})=\sum_{k=1}^K \pi_k \mathcal{N}\left(\mathbf{x} \mid \mu_k, \Sigma_k\right)$ (samples drawn i.i.d from $p_{\mathcal{D}}$ belongs to different classes). For instance, the following popular papers on classification used i.i.d assumption as us.
[1] VN Vapnik. The nature of statistical learning theory. Springer science \& business media, 2013. (68000+ citations)
[2] PL Bartlett et al. Spectrally-normalized margin bounds for neural networks. NeurIPS 2017. (1200+ citations)
[3] C. Zhang et al. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM 64, no. 3 (2021): 107-115. (7000+ citations)
**Response to Weakness 2:**
We treat the nodes' embeddings as discrete distributions and measure the distance between two distributions using MMD. MMD [1] with a Gaussian kernel actually compares all orders of statistics between two distributions rather than the means, though this is implicitly conducted in the RKHS using the mean difference. In other words, MMD compares also the higher-order statistics between distributions while mean pooling compares only the first-order statistics, i.e., mean. Therefore, the information loss of MMD is much less than the mean and max poolings.
We used Example 2.1 to show the limitation of mean and max pooling and the motivation for we regard each node embedding matrix as a discrete distribution. Two different distributions may have the same mean and max but their MMD is never zero. MMD has also been utilized in generative models (e.g. MMD-GAN [2]) to compare distributions, where comparing the means (like mean pooling) in the feature space given by a neural network does not work.
[1] Arthur Gretton et al. A kernel two-sample test. JMLR 2012.
[2] Li et al. MMD GAN: Towards deeper understanding of moment matching network. NeurIPS 2017.
**Response to Weakness 3:**
We acknowledge that our method is efficient in terms of the number of graphs rather than the number of nodes in each graph. However, there are many graph datasets with a large number of graphs but a relatively small number of nodes in each graph, for which our method can be used.
It is really very difficult to propose an algorithm with both SOTA accuracy and SOTA efficiency. Our method has a good trade-off between them. We follow your suggestion and compare the time cost of our method with two pooling methods published in 2023 (WitTopoPool and MSGNN). The results are in the **global rebuttal** due to the character limitation. In fact, these two methods are much more costly than ours. The GNN-based topological pooling layer requires the calculation of pairwise node similarity, which is $O(N^2)$. Besides, the time complexity of its witness complex-based topological layer is also quadratic. The subgraph sampling, selection, and evolution of MSGNN are also very costly.
**Response to Weakness 4:**
Thanks for your insightful comment. Similar to the previous work on learning theory such as [1][2][3], our theorem can only show the impact of the model architecture and parameters on the generalization ability of the model and there is always a trade-off between model complexity and training accuracy, that's why we used the words like "moderate" and "smaller". The upper bounds of the training error are data-dependent, which means we may never point out exactly what model is the best according to only the theoretical analysis.
The generalization bound is the upper bound of the difference between training error and testing error. We say "a network with a smaller $L$ and $r$ may guarantee a tighter bound on..." because in the bound, the term $\tilde{\mathcal{O}}\left(\frac{\mu \bar{b}\|\mathbf{X}\|_2 c^L(L r)^{\frac{3}{2}} \bar{\kappa}^{L r} \sqrt{\theta K / n}}{N}+\gamma \sqrt{\frac{K m d}{N}}\right)$ increases with $L$ and $r$. So here the "smaller" is an exact expression.
We say "moderate-size references" because the bound scales the reference size $m$ as $\tilde{O}(\sqrt{m})$, which means the bound is not very sensitive to $m$ because of the square root. If $m$ is too small, the expressive power of the model will be low; if it is very large, the complexity $\tilde{O}(\sqrt{m})$ is high. Therefore, we say "moderate-size", meaning a trade-off, should be used. This is supported by the experiments in Appendix D.3 (Figure 6) in our paper.
Let's use an intuitive example (may not be true in practice) to further explain why we have to use "moderate" rather than a concrete value (e.g. 0, 1, $\infty$) in a discussion of the theoretical result. Suppose there is a parameter or hyperparameter $s$ of the model and the bounds scales with it as $\tilde{O}((s-1)^2)$ and $s$ does not influence the training error, then the best $s$ is 1. However, in practice, the training error may decrease when $s$ increases. The training error is data-dependent, which means we cannot find the best $s$ using the theoretical result only. Instead, one may use cross-validation or AutoML to find a good $s$.
**Response to Weakness 5:**
The added results (MinCutPool, ASAP, Wit-TopoPool, MSGNN) are in **global rebuttal**. Our method has better classification accuracy on most of the datasets and has the highest average accuracy.
**Please do not hesitate to let us know if you need more explanation or have further questions.**
---
Rebuttal 2:
Comment: Dear Reviewer anD1,
We appreciate your comments and suggestions. Did our response address your concerns? We are keen to receive your feedback and provide further explanation if necessary.
Sincerely,
Authors | Summary: The paper introduces a novel algorithm called GRDL for graph classification. GRDL treats each graph’s latent node embeddings given by GNN layers as a discrete distribution and directly classify distributions without global pooling. The authors derived generalization error bounds for GRDL and verified them numerically. The experiments on many graph datasets show the superiority of GRDL. GRDL is 10 times faster than leading competitors.
Strengths: * Originality: The proposed algorithm GRDL and the theoretical results are novel.
* Quality: The paper is well-organized and contains rich theoretical results and numerical comparisons.
* Clarity: The motivation (e.g. Example 2.1), assumptions, optimization, implication of theorems, and the experimental setting have been clearly explained.
* Significance: Graph classification is a challenging problem due to the difficulty in converting nodes’ embeddings to a vector as the global representation of each graph. The proposed method gets rid of readout operations and directly classifies the discrete distributions formed by nodes’ embeddings. It can outperform graph kernels, GIN, and graph transformers. Moreover, the paper proved that the proposed model has better generalization ability than the baseline GIN, which is a big contribution.
Weaknesses: There is no major weakness found. Please refer to my questions in the next section.
Technical Quality: 4
Clarity: 3
Questions for Authors: * Besides GIN, there are other GNN models such as GCN and GAT. Why did the authors consider GIN only in the theoretical analysis and experiments?
* I think the parameter $\theta$ of the Gaussian kernel can be absorbed into the neural network parameter. I suggest the authors discuss the necessity of setting or optimizing $\theta$ separately.
* In Theorem 3.2, the bound is related to $K$, the number of classes. But, in many literature of generalization analysis, the bound is not related to the number of classes. Could the author explain the difference?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors discussed the limitations in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Question 1:**
Thank you for your question. We chose GIN because it is a highly representative model within the class of Graph Neural Networks (GNNs) that utilize neighbor aggregation schemes. GIN is probably the most expressive model in this category, and such schemes are widely employed in many classic GNN designs, including GCN and GraphSAGE.
In terms of theoretical analysis, our approach can be readily adapted to models like GCN and GraphSAGE with only minor modifications. GAT uses an attention mechanism for aggregation, and this approach differs significantly from the general aggregation scheme employed by GIN, GCN, and many other models. We believe the analysis of GIN provides broader and more valuable insights. Additionally, as we said in our paper, the analysis on GIN's generalization ability is currently limited, and our work aims to address this gap. It's also worth noting that our proposed reference layer can be combined with other models like GCN and GAT to classify graphs without any modification.
**Response to Question 2:**
Thank you for your insightful comment. While it is theoretically possible to absorb the parameter $\theta$ of the Gaussian kernel into the neural network parameters, we believe that setting or optimizing $\theta$ separately is crucial in practice.
The main reason for this is that the initial neural network parameters and the reference distributions do not necessarily ensure that nodes of graphs $\mathbf{x}$ are close to nodes of graphs $\mathbf{x}'$. If $\theta$ or the scale of the node embeddings $\mathbf{H}$ is too large, the Gaussian kernel becomes overly sharp, resulting in almost zero values. This can lead to a situation where the Maximum Mean Discrepancy (MMD) fails to effectively quantify the distance between the embeddings and the reference distributions, of which the complexity is related to $K$.
To illustrate the importance of setting or optimizing $\theta$ separately, we actually conducted experiments on the MUTAG dataset with different values of $\theta$ on Appendix D6, Table 11. The results, shown in the following table, demonstrate that the choice of $\theta$ significantly impacts classification accuracy.
$$
\begin{matrix}
\hline
\mathbf{\theta} & 1\times10^{-4} & 1\times10^{-3} & 1\times10^{-2} & 1\times10^{-1} & 1 & 1\times10^{1} & 1\times10^{2} & 1\times10^{3}\\\\
\hline
\textbf{Accuracy} & 0.9096 & 0.9149 & 0.9113 & 0.9113 & 0.8254 & 0.6822 & 0.5737 & 0.3345\\\\
\hline
\end{matrix}
$$
This shows the necessity of carefully selecting or optimizing $\theta$ to ensure good classification performance. We will elaborate on these findings in the revised manuscript.
**Response to Question 3:**
Thank you for your insightful question. The difference arises from our model's unique use of reference distributions. In previous literature on generalization analysis, the focus has often been on models that aggregate node embeddings into vectors and then classify these vectors using another neural network, without incorporating reference distributions. This conventional approach does not typically involve a dependency on the number of classes $K$. In contrast, our model introduces reference distributions.
We have also included an additional generalization analysis theorem in the appendix (Theorem A.1) for the GIN model with mean pooling, which does not involve reference distributions. This analysis, consistent with traditional approaches, does not include $K$ in the bound. We hope this clarification and the additional theorem help highlight the unique aspects of our model and the corresponding theoretical analysis.
**Please do not hesitate to let us know if you need more explanation or have further questions.**
---
Rebuttal Comment 1.1:
Comment: The rebuttal and the additional experimental results have adequately addressed my concerns. Therefore, I raise my confidence level to 5.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your recognition of our work. | Summary: This paper proposes to make graph-level predictions via distribution comparison between node-level representations and discrete reference distributions. The authors claim that their proposed method avoids the requirements of graph pooling for graph-level tasks. and reduce the risk of information loss. Theoretical and empirical justification results are provided.
Strengths: - The authors propose a novel and simple method for graph classification and provide theoretical analysis on the generalization bound. The discussion is clear and extensive.
- The method shows advantage in time cost compared to related works.
- The authors performed extensive ablation study on the proposed method.
Weaknesses: - Equation in L138-139 (a missing equation index?) also requires summation over nodes. The description of "avoid graph pooling operation" seems to be overclaimed. I suggest that the authors reconsider it.
- An ablation study on the discrimination loss and the usage of node-level representation is required. I think the main improvement in performance may be attributed to the discrimination loss. It helps the model learn distant representations for graphs of different labels in the feature space. The performance of the proposed method should be compared with and without the discrimination loss. Besides, the authors should also implement a baseline with discrimination loss where node representations are first sum-pooling and then compared with the reference distribution.
- Minor: Latest baseline models are required for a comprehensive empirical comparison, such as graph pooling including SEP[1], GMT[2], and CoCN[3] and graph transformers including Exphormer[4], GRIT[5], and MPNN-VN[6]. Considering the time limitation and the similarity of the datasets, a comparison on part of the datasets will be sufficient.
[1] Structural entropy guided graph hierarchical pooling. ICML'22
[2] Accurate Learning of Graph Representations with Graph Multiset Pooling. ICLR'22
[3] All in a row: Compressed convolution networks for graphs. ICML'23
[4] Exphormer: Sparse Transformers for Graphs. ICML'23
[5] Graph Inductive Biases in Transformers without Message Passing. ICML'23
[6] On the Connection Between MPNN and Graph Transformer. ICML'23
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes. The authors describe the limited performances of the proposed method on certain datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1:**
Thank you for your feedback. The equation in lines 138-139 stems from the definition of the Maximum Mean Discrepancy (MMD). MMD is a measure between two discrete distributions, where the inputs are two matrices and the output is a scalar. This means that in the computation, elementary operations like summation are compulsory: it involves summation over the kernel values rather than the original node features. Regarding the claim about avoiding graph pooling operations, we would like to clarify that traditional graph pooling operations typically compress a graph's node embedding matrix into a single vector before classification. In contrast, our method allows for the direct classification of the graph using its node embeddings without compressing them into a single vector. We treat the node embedding matrix as a discrete distribution and handle it by MMD, a theoretical-grounded measure between two distributions. We appreciate your suggestion and will ensure that this distinction is more clearly communicated in the revised manuscript.
**Response to Weakness 2:**
Thank you for your insightful suggestion. We appreciate your interest in the impact of the discrimination loss on our model's performance. We actually included an ablation study of our proposed method without the discrimination loss in Appendix D.5 Table 10 (where the discrimination loss coefficient $\lambda = 0$). Following your second advice, we also implement a baseline (denoted as GRDL:SumDis) with discrimination loss, where node representations are first aggregated using sum-pooling and then compared with the reference distribution. The results of the ablation study regarding these two baselines are in the following table.
$$
\begin{matrix}
\hline
Method & MUTAG & PROTEINS & NCI1 & IMDB-B & IMDB-M & PTC-MR & BZR & COLLAB & Average \\\\
\hline
GRDL & \textbf{92.1} \pm 5.9 & \textbf{82.6} \pm 1.2 & \textbf{80.4}\pm 0.8 & \textbf{74.8}\pm 2.0 & \textbf{52.9} \pm 1.8 & \textbf{68.3} \pm 5.4 & \textbf{92.0} \pm 1.1 & \textbf{79.8}\pm 0.9 & \textbf{77.9} \\\\
GRDL:\lambda = 0 & 89.9 \pm 4.9 & 81.8 \pm 1.3 & 80.0\pm 1.6 & 73.1 \pm 1.5 & 51.3 \pm 1.4 & 66.6 \pm 5.9 & 89.5 \pm 2.3 & 79.0 \pm 1.0 & 76.4\\\\
GRDL: SumDis & 89.9 \pm 6.0 & 78.4 \pm 0.6 & 77.2 \pm 1.7 & 71.6 \pm 5.2 & 49.8 \pm 5.4 & 62.5 \pm 6.3 & 85.3 \pm 1.5 & 77.1 \pm 0.9 & 74.0\\\\
\hline
\end{matrix}
$$
The results show that discrimination loss can increase the performance of our model. However, our method without discrimination loss still outperforms the baseline with sum-pooling and discrimination loss. We will include these results in the revised version of the manuscript.
**Response to Weakness 2 (minor):**
Thank you so much for pointing out these baselines. Due to the time limit, we implemented SEP, GMT, and MPNN-VN. The results are shown in the following table. Our method GRDL outperformed the competitors in most cases. We will include these additional results in our paper and discuss all the six references you mentioned.
$$
\begin{matrix}
\hline
\text{Method} & MUTAG & PROTEINS & NCI1 & IMDB-B & IMDB-M & PTC-MR & BZR & COLLAB & Average \\\\
\hline
\text{GRDL (ours)} & \textbf{92.1} \pm 5.9 & \textbf{82.6} \pm 1.2 & 80.4 \pm 0.8 & \textbf{74.8} \pm 2.0 & \textbf{52.9} \pm 1.8 & 68.3 \pm 5.4 & \textbf{92.0} \pm 1.1 & 79.8\pm 0.9 & \textbf{77.9} \\\\
\text{SEP [ICML 2022]} & 89.4 \pm 6.1 & 76.4 \pm 0.4 & 78.4 \pm 0.6 & 74.1 \pm 0.6 & 51.5 \pm 0.7 & 68.5 \pm 5.2 & 86.9 \pm 0.8 & \textbf{81.3} \pm 0.2 & 75.8\\\\
\text{GMT [ICLR2022]} & 89.9 \pm 4.2 & 75.1 \pm 0.6 & 79.9 \pm 0.4 & 73.5 \pm 0.8 & 50.7 \pm 0.8 & 70.2 \pm 6.2 & 85.6 \pm 0.8 & 80.7 \pm 0.5 & 75.7\\\\
\text{MPNN-VN [ICML2023]} & \textbf{92.1} \pm 5.2 & 78.3 \pm 1.0 & \textbf{80.9} \pm 0.8 & 72.4 \pm 1.2 & 50.9 \pm 1.9 & \textbf{71.4} \pm 5.2 & 90.2 \pm 1.1 & 80.1 \pm 0.8 & 77.0\\\\
\hline
\end{matrix}
$$
**Please do not hesitate to let us know if you need more explanation or have further questions.**
---
Rebuttal 2:
Comment: I appreciate the detailed response which has addressed my concerns. The new ablation results (W2) further validate the proposed method and the selected comparison results seem promising. Please make sure to clarify the difference between the pre-pooling methods and your discrepancy measuring and summation strategy, and update your manuscript based on the rebuttal.
---
Rebuttal Comment 2.1:
Comment: Thank you so much for your feedback and support. The suggestions from you and the other two reviewers have helped us improve the quality of our paper. We will update the manuscript according to the rebuttal. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers' comments. In this rebuttal, we added the following experiments.
1. **Training time per epoch** compared to two latest baselines (WitTopoPool and MSGNN both proposed in 2023) mentioned by reviewer anD1 on both experimental datasets in our paper and three synthetic datasets. The three synthetic datasets have 2000 graphs with 100(SYN-100), 300(SYN-300), 500(SYN-500), and 1000(SYN-1000) nodes per graph, respectively. The edge number is $0.1n^2$ where $n$ is the number of nodes. See the following table.
$$
\begin{matrix}
\hline
& MUTAG & PROTEINS & NCI1 & IMDB-B & IMDB-M & PTC-MR & BZR & COLLAB & SYN-100 & SYN-300 & SYN-500 & SYN-1000\\\\
\hline
\text{GRDL (ours)} & 0.4 & 3.4 & 12.6 & 2.4 & 3.5 & 0.8 & 1.2 & 16.3 & 26.6 & 45.8 & 88.7 & 220.8\\\\
\text{WitTopoPool (2023)} & 0.4 & 2.6 & 21.4 & 2.4 & 2.6 & 1.0 & 1.3 & 39.1 & 32.9 & 50.8 & 97.5 & 201.3\\\\
\text{MSGNN (2023)} & 45.2 & - & - & - & - & 75.5 & 135.3 & - & - & - & - & -\\\\
\hline
\end{matrix}
$$
Message to Reviewer anD1: As can be seen, our method GRDL is more efficient than these two latest pooling methods you mentioned when the number of nodes in each graph is less than 1000. In fact, graph dataset for graph-level learning (not node-level learning) with an average node number larger than 1000 is rare.
2. **Ablation study** for the two terms in the objective function of our method GRDL. See the following table.
$$
\begin{matrix}
\hline
Method & MUTAG & PROTEINS & NCI1 & IMDB-B & IMDB-M & PTC-MR & BZR & COLLAB & Average \\\\
\hline
GRDL & \textbf{92.1} \pm 5.9 & \textbf{82.6} \pm 1.2 & \textbf{80.4}\pm 0.8 & \textbf{74.8}\pm 2.0 & \textbf{52.9} \pm 1.8 & \textbf{68.3} \pm 5.4 & \textbf{92.0} \pm 1.1 & \textbf{79.8}\pm 0.9 & \textbf{77.9} \\\\
GRDL:\lambda = 0 & 89.9 \pm 4.9 & 81.8 \pm 1.3 & 80.0\pm 1.6 & 73.1 \pm 1.5 & 51.3 \pm 1.4 & 66.6 \pm 5.9 & 89.5 \pm 2.3 & 79.0 \pm 1.0 & 76.4\\\\
GRDL: SumDis & 89.9 \pm 6.0 & 78.4 \pm 0.6 & 77.2 \pm 1.7 & 71.6 \pm 5.2 & 49.8 \pm 5.4 & 62.5 \pm 6.3 & 85.3 \pm 1.5 & 77.1 \pm 0.9 & 74.0\\\\
\hline
\end{matrix}
$$
3. **In addition to the 12 baselines** compared in our original submission, we **added 6 more baselines**. See the following table.
$$
\begin{matrix}
\hline
Method & MUTAG & PROTEINS & NCI1 & IMDB-B & IMDB-M & PTC-MR & BZR & COLLAB & Average \\\\
\hline
GRDL (ours) & \textbf{92.1} \pm 5.9 & \textbf{82.6} \pm 1.2 & \textbf{80.4} \pm 0.8 & \textbf{74.8} \pm 2.0 & \textbf{52.9} \pm 1.8 & 68.3 \pm 5.4 & \textbf{92.0} \pm 1.1 & 79.8\pm 0.9 & \textbf{77.9} \\\\
SEP (2022) & 89.4 \pm 6.1 & 76.4 \pm 0.4 & 78.4 \pm 0.6 & 74.1 \pm 0.6 & 51.5 \pm 0.7 & 68.5 \pm 5.2 & 86.9 \pm 0.8 & \textbf{81.3} \pm 0.2 & 75.8\\\\
GMT (2022) & 89.9 \pm 4.2 & 75.1 \pm 0.6 & 79.9 \pm 0.4 & 73.5 \pm 0.8 & 50.7 \pm 0.8 & \textbf{70.2} \pm 6.2 & 85.6 \pm 0.8 & 80.7 \pm 0.5 & 75.7\\\\
MinCutPool (2020) & 90.6 \pm 4.6 & 74.7 \pm 0.5 & 74.3 \pm 0.9 & 72.7 \pm 0.8 & 51.0 \pm 0.7 & 68.3 \pm 4.4 & 87.2 \pm 1.0 & 80.9 \pm 0.3 & 75.0\\\\
ASAP (2020) & 87.4 \pm 5.7 & 73.9 \pm 0.6 & 71.5 \pm 0.4 & 72.8 \pm 0.5 & 50.8 \pm 0.8 & 64.6 \pm 6.8 & 85.3 \pm 1.3 & 78.6 \pm 0.5 & 73.1\\\\
WitTopoPool (2023) & 89.4 \pm 5.4 & 80.0 \pm 3.2 & 79.9 \pm 1.3 & 72.6 \pm 1.8 & 52.9 \pm 0.8 & 64.6 \pm 6.8 & 87.8 \pm 2.4 & 80.1 \pm 1.6 & 75.9\\\\
MSGNN (2023) & 78.4 \pm 7.1 & - & - & - & - & 56.4 \pm 6.6 & 78.1 \pm 2.5 & - & \\\\
\hline
\end{matrix}
$$
Message to Reviewer anD1: Our method outperformed the four pooling methods you mentioned as well as the other two pooling methods SEP and GMT proposed in 2022.
Besides supplementary experiments, we made several important rebuttals:
* We explained why our method avoids pooling: MMD takes summation over kernel values instead of summation over original nodes (see responses to reviewer 5HH3).
* We added an ablation study on the discrimination loss (see responses to reviewer 5HH3).
* We added experiments of six additional baselines and compared the classification accuracy (see responses to reviewer 5HH3, anD1 and also the PDF file in global rebuttal).
* We explained why optimizing/setting $\theta$ separately is necessary (see responses to reviewer aUnT).
* We argued the use of i.i.d assumption in our theoretical analysis is correct and has been commonly used in many famous papers of learning theory (see responses to reviewer anD1).
* We added experiments of two SOTA methods' training time per epoch on both datasets used in our paper and three synthetic datasets (see responses to reviewer anD1 and also the PDF file in global rebuttal).
**We are looking forward to your feedback on our rebuttal. Thanks.** | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Delving into the Reversal Curse: How Far Can Large Language Models Generalize? | Accept (poster) | Summary: The paper investigates the reversal curse a phenomena where LLM trained on documents on the format “A is B” are unable to improve their likelihood on statements of the format “B is A”. The authors extend the work to consider documents of the format “Name is Description” and “Description is Name”, whilst also considering two tangible downstream tasks - answer completion and multiple choice answering. They find:
1) In multiple-choice settings, models are able to reverse information from “Name is Description” documents to solve questions w.r.t Description (successful mitigation of the reversal curse)
2) however in the multiple-choice settings, models trained on “Description is Name” (“B is A”) documents fail to solve multiple-choice questions better than guessing (which they dub the “thinking bias”). In this setting, both the reversal curse and the thinking bias stifle model behaviour.
3) By inspecting both chain-of-thought reasoning and saliency maps over attention heads - models regularly attend to the name within questions much more than they attend to the description - providing an explanation of why models are so bad at answering questions which begin with a description.
4) Authors demonstrate that further training and synthetic augmentation do not address these issues
Strengths: This paper provides conceptual clarity and an explanation of the original reversal curse and finds a further bias within regressive language models.
Figure 1 is exceptionally useful for explaining bais and experiments are completed over a variety of open-weight models (llama, vicuna, mistral)
The book-story dataset results in the appendix were really compelling. I recommend that the authors move/highlight these earlier in the paper.
Weaknesses: For these results, my fundamental question is if this is just a symptom of the model sizes used and if, with a larger capacity model - these biases would be mitigated. The original reversal curse work is done on GPT-3.5-Turbo models, and a similar class of models is used here. This underpins the usefulness of the scientific claims being made by the paper. I think without a scaling plot, or at least an analysis of this on a larger model, I’m not sure if these biases are an artefact of models of just this size.
Note this does not need to be interpretability or COT work (which are costly), which explains why these biases exist within models, but proof that they exist.
Technical Quality: 3
Clarity: 4
Questions for Authors: Could the above weaknesses be addressed?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: N/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are greatly encouraged by the reviewer's acknowledgement of the extensiveness and compellingness of our experiments. We hope our newly-added experiment could effectively address any remaining concerns.
> Q: For these results, my fundamental question is if this is just a symptom of the model sizes used and if, with a larger capacity model - these biases would be mitigated. The original reversal curse work is done on GPT-3.5-Turbo models, and a similar class of models is used here. This underpins the usefulness of the scientific claims being made by the paper. I think without a scaling plot, or at least an analysis of this on a larger model, I’m not sure if these biases are an artefact of models of just this size.
**Ans**: Thank you for your thoughtful suggestion. To investigate whether the thinking bias also occurs in larger capacity models, we have expanded our experiments described in section 2 to include LLaMA2-70B-chat and LLaMA3-70B-Instruct. The performance of these two models after training on the same synthetic biography dataset applied in our main experiment is presented below:
| **Finetuned Model** | **Test Set** | **Open-QA N2D** | **Open-QA D2N** | **MCQ N2D** | **MCQ D2N** |
|----------------------|------------|-------------|-------------|---------|---------|
| LLaMA2-70B-chat | NameIsDesc | 80.4 | 0.0 | 61.8 | 66.1 |
| LLaMA2-70B-chat | DescIsName | 2.4 | 97.5 | **25.5** | **27.0** |
| LLaMA3-70B-instruct | NameIsDesc | 94.8 | 3.3 | 76.0 | 65.7 |
| LLaMA3-70B-instruct | DescIsName | 5.5 | 95.8 | **24.1** | **26.7** |
The results for the larger models generally align with those observed for the smaller models presented in Table 1, as the MCQ results from the NameIsDescription group still significantly outperform those from the DescriptionIsName group. Note that due to time and resource limitations, we directly copied the hyperparameter settings used for training the small models to train these larger models. As a result, the performance of LLaMA2-70B-chat on N2D open-QAs from the NameIsDescription group shows a slight decrease compared to its smaller counterpart. Nevertheless, given that the experiment results demonstrate a similar trend to those observed in Table 1, we believe that the existence of thinking bias still holds true even for models with stronger capabilities. | Summary: The paper investigates the reversal curse, which is the finding that LLMs fail to generalize from seeing “A=B” to “B=A”, in a broader range of settings. They reproduce the reversal curse findings from Berglund et al (2023). They also investigate a new setting, multiple choice Q&A (rather than free form question answering). In this context, they find that instead of a reversal curse, models can generally answer questions well after seeing demonstrations of the form "Name is Descriptions", regardless whether the question is about Name or Description. However, they fail to answer in general after being trained on “Description is Name”. They investigate this phenomenon further by looking at models’ chain of thought and by applying saliency methods to attention layers and attribute the failure to a “Thinking Bias” in LLMs. They also show that “training on the test” does not work to improve models’ abilities here, like in the reversal curse, indicating that it’s a fundamental bias of LLMs.
Strengths: - Originality: As far as I can tell, their results are novel. They reproduce the reversal curse, and then look at a related phenomenon in multiple choice QA which hasn’t been investigated before. They also apply various different analysis methods which haven’t been applied to this problem before.
- Significance: I found this interesting and important to better understand and get a more detailed picture of the generalization abilities of LLMs. They look at several open source models and do relevant additional analysis which shows that this result is significant and not just a fluke.
- Quality: Overall the experiments seem well thought out and rigorous. Though I haven’t inspected the data myself.
- Clarity: I like a lot of the plots and examples in the paper. I also think I was able to understand the paper well overall and their exploration of LLMs' thinking bias made sense to me.
Weaknesses: My main criticism is about the presentation of the results. I think this work isn’t really about the reversal curse. Rather, I think the work is about a related, but orthogonal issue. They say “this work remains orthogonal” in Line 297 when discussing related work, but I think the rest of the paper isn’t written like this. I think the setting where you put the answer options in the context is just a different setting from the one which is defined in the paper on the reversal curse, so one should study this separately and give it a different name. It’s good to highlight the relation to reversal curse, and I really like the replication of the reversal curse too. But again, the core contribution of the paper seems to be about multiple choice QA and “thinking bias” in LLMs, so I wouldn’t frame the whole paper as “delving into the reversal curse”. Of course, I might have misunderstood something here, so I would be happy to be corrected and would like to hear how the authors see their main contribution.
Technical Quality: 4
Clarity: 2
Questions for Authors: - I think you should state explicitly from the beginning (in abstract and introduction) that, when testing multiple choice QA, you found that training on “Name is Description” works, but “Description is Name” consistently fails. I feel like you kind of buried the lead. (As an aside, the abstract reads a bit like it was written by ChatGPT to me)
- Line 31: what does it mean to have a capability but to fail to elicit it? Does it mean that a different prompt would elicit the capability? But note that using a different prompt might turn it into a different problem
- I wonder why the authors look at saliencies instead of just directly the magnitude of the attention matrices. Is there a reason to do this? Does looking at attention scores not yield the same results? Note I’m not an expert on interpretability, so someone else might be in a better position to comment on this.
- Regarding Figure 2: do you have a hypothesis why there is more flow from description in early layers when evaluating name to description? Is this because the last thing in the context is the description in this case?
- I think you don’t need to introduce the reversal curse again in related work since you already discussed it in depth in section 2.1 (and you should discuss the relation of your work to the reversal curse throughout the paper anyway)
- Line 283: Can you say more specifically how this outcome diverges? What did [52] find?
- I think you should discuss Limitations and Future Work in the main text, not in the appendix
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: I think they address the limitations adequately (except that this section should be in the main text, not appendix). Maybe one thing to mention is that as far as I can tell, they only study one specific dataset. One could of course study all of this in many other settings as well. E.g., a dataset about things other than celebrities, or a dataset where descriptions are not in natural language, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are greatly encouraged by your acknowledgment of the novelty and solidness of our findings. We hope our rebuttals can adequately address your concerns.
> W1: My main criticism is about the presentation of the results.
**Ans**: Our main contribution is to provide new interpretations and insights into current LLMs' generalization abilities, including both a clearer understanding of the original RC problem and the thinking bias. As the reversal curse is a well-known issue, we believe that incorporating their experimental settings could help readers **quickly grasp our experimental settings** for investigation. Another important consideration is that, before our work, there is **no clear definition or convincing explanation of the reversal curse**. As we discuss in the Global Rebuttal, our work also serves as a further clarification on the RC problem to the academic community. Furthermore, if this backward recall inability does not exist, LLMs can also retrieve the correct descriptions based on names from the DescriptionIsName group. Thus this deep-rooted thinking bias would **remain undetected**. Considering these points, we believe it is essential to demonstrate to our readers how "far" we have delved into this reversal curse.
[1] The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
[2] Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse
---
> Q1: I think you should state explicitly from the beginning (in abstract and introduction) that, when testing multiple choice QA, you found that training on "Name is Description" works, but "Description is Name" consistently fails.
**Ans**: Thank you for your thoughtful suggestion. In response, we will make the following changes in our revised paper:
* Line 9-11: This generalization ability is highly correlated to the structure of the fact "A is B" in the training documents. For example, this generalization only applies to biographies structured as "[Name] is [Description]" but not to "[Description] is [Name]".
* Line 38-39: Intriguingly, this generalization ability appears to be closely linked with the structure of the fact "A is B" in the training documents. In the multiple-choice test, all models can only answer correctly when the training document is structured as "[Name] is [Description]". Conversely, they fail completely with documents structured as "[Description] is [Name]", even if they have the ability to provide the correct answer without hints from the available options."
---
> Q2: Line 31: what does it mean to have a capability but to fail to elicit it?
**Ans**: The capability refers to the ability to understand the identity relation in the training documents and apply to tests. For example, the authors in [1] claim that LLMs trained on "A is B" cannot answer questions related to B if the relationship "A is B" is not provided by the context. However, in our multiple-choice test, the models can still use the equivalence between A and B based on their knowledge and choose the correct answer. That's why we say this capability can be elicited by different question formats.
[1] Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse
> Q3: I wonder why the authors look at saliencies instead of just directly the magnitude of the attention matrices.
**Ans**: Our choice of saliencies comes from the criticisms regarding **whether attention values can serve as effective interpretation tools** [1,2,3]. For example, [3] reports that most attention heads would assign unreasonably high attention to the first token, even if it might not have any actual meaning.
[1] Attention is not Explanation
[2] The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
[3] Efficient Streaming Language Models with Attention Sinks
---
> Q4: Do you have a hypothesis why there is more flow from description in early layers when evaluating name to description?
**Ans**: The prevailing belief is that early layers serve as the locations to encode lower-level information and for local information aggregation [1,2]. We believe at these layers the models are still **gathering local information** at each token position, leading to a greater flow from descriptions since they are closer to the answer position.
[1] DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
[2] Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
[3] Locating and Editing Factual Associations in GPT
---
> Q5: I think you don’t need to introduce the reversal curse again in related work.
**Ans**: Thank you for your suggestion. We aim to emphasize the current research status on the reversal curse in this section. We will condense this part after revision.
---
> Q6: Line 283: Can you say more specifically how this outcome diverges? What did [52] find?
**Ans**: In [52], it was demonstrated that the inclusion of exemplary QAs during the training process improved models' performances on test QAs, and it was concluded that these QA examples enhanced the models' knowledge application abilities. However, we did not observe the same improvement in our experimental settings. We attribute this inconsistency to the impact of the thinking bias on the DescriptionIsName group.
---
> Q7: I think you should discuss Limitations and Future Work in the main text.
**Ans**: We will incorporate the Limitations and Future Work section back into the main text after revision.
---
> L1: Maybe one thing to mention is that as far as I can tell, they only study one specific dataset.
**Ans**: In fact, we have extended our experimental settings to **a new literature dataset, Book-Story**, to explore the potential broader implications of this bias across different types of data. We briefly mention this experiment in our main text (lines 133-135) and place the detailed description in Appendix D.
---
Rebuttal 2:
Comment: Thanks for your detailed responses and clarifications (including the new results in the other rebuttals)! I have read them and have found them interesting and convincing. I will increase my score by one point.
One comment regarding presentation: I understand that it's useful to refer to the reversal curse in this paper (including in title, abstract, introduction). I still think that your main contributions (thinking bias in multiple-choice Q&A) are separate from the reversal curse and study a different (but very related) phenomenon.
---
Rebuttal Comment 2.1:
Title: Appreciation to Reviewer's Response
Comment: Dear Reviewer p2dN,
Thank you for dedicating your time to read our rebuttals and for raising your score! We sincerely appreciate your constructive comments for enhancing the presentation of our paper. These comments will be carefully considered when introducing the contributions of our work in the final version. We hope that all readers of our work will find their interest and inspiration in the experimental phenomena and our interpretations of today's LLMs' abilities.
---
Best regards,
Authors | Summary: Building off prior work that studies the “reversal curse” in LLMs, the present paper provides additional analysis on 1) characterizing the limitations of LLMs on the reversal curse through more detailed experimentation (e.g., limitations with chain of thought prompting or providing multiple choice questions), and 2) interpreting the reasons as to why LLMs are biased towards correctly answering A is B when A is a name/proper noun. The main findings report that LLMs can improve generalization on B is A when 1) the prompt includes a multiple choice question; 2) LLMs are biased towards A (proper noun) is B (description), since this is how facts are typically represented in the training corpus; 3) This existing negative bias in LLMs cannot be mitigated by training/finetuning alone.
Strengths: 1) The paper addresses a timely question
2) The paper was, for the most part, straightforward and easy to understand.
3) The experiments are interesting, and the effects appear strong
Weaknesses: One of the main claims, that LLMs can disproportionately perform “NameIsDescription” correctly, is due to the fact pretraining datasets are biased towards having text in the form of NameIsDescription (i.e., A is B), but not the reverse. Despite the claim being mentioned several times, and though despite the claim being intuitive, the paper does not empirically quantify or demonstrate this that I could find. Are statistics reported of how often “A is B” is exhibited in the training documents relative to “B is A”? And is performance of the LLMs proportional to the ratio found in training documents? There seems to be some reference to Berglund et al., but this is a result of a prior paper, not the present paper. It would be helpful to quantify how biased LLMs are for A is B vs. B is A relative to the proportion they are exhibited in training data.
I found figure 3 to be confusing. What are the different colors supposed to indicate? (There’s no associated color bar.) Also, the incorrect answer of D makes it appear as if the incorrect selection was due to the tokenization of the name Graham Redwood.
There is also an issue of novelty – many of the reported results do not seem to particularly ‘novel’, perhaps because the results seem almost obvious. I think it would significantly help if the authors were to more clearly delineate their work from prior work in the Introduction, and to “signpost” exactly what the specific contributions of this work are (relative to prior work).
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Is the finding that names are easier to trigger recall potentially due the fact that names typically have fewer number of tokens than descriptions? Prior work has shown that the token-wise MLP layer of the transformers act as key-value memories (Geva et al. 2021). If there are fewer tokens associated with a name, wouldn’t it be easier to coordinate retrieval of memories (i.e., facts) across fewer token streams, rather than to coordinate memory retrieval across the many streams that comprise of “Description” tokens?
2. Related to a weakness mentioned above: What are the statistics/proportions that show a bias in the pretraining corpus of “A is B” over “B is A”? And how does that proportion match with actual LLM performance?
3. In multiple choice question prompting, are LLMs biased towards any particular response (e.g., ‘A’, ‘B’, ‘C’, or ‘D’)? I’m curious to know if the attention weights in a decoder-only model could potentially bias the model to retrieve more facts associated with ‘D’, since it is later in the prompt.
4. I’m skeptical of the interpretation (or over-interpretation) of “information flow” by computing the average attention weights to a given token. This concern is compounded by the fact that the models they used (Llama) are decoder-only models, which, by construction, have greater attention weights towards tokens presented later in the prompt. Might this metric be confounded by this (results in Fig. 2 & 3)?
Geva, Mor, Roei Schuster, Jonathan Berant, and Omer Levy. “Transformer Feed-Forward Layers Are Key-Value Memories.” In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 5484–95. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.emnlp-main.446.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors claim that the curse of reversal can be somewhat mitigated if multiple choice questions are used. However, this appears to be a major limitation, while also being a strange suggestion – incorporating multiple choice questions assumes that the prompter knows the correct answer. Thus, in what scenario would this be helpful, aside from evaluating and adjudicating performances of multiple models?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive suggestions and thoughtful comments. We hope that our response will effectively address your concerns.
> W1 & Q2: What are the statistics/proportions that show a bias in the pretraining corpus of "A is B" over "B is A"?
**Ans**: We conduct a statistical analysis on the English Wikipedia corpus [1]. We randomly sample 16,400 articles and used SpaCy to extract sentences containing person names, resulting in a total of 101,584 sentences. We then employ LLaMA3-70B-Instruct to judge whether the given sentence is: (1) a valid sentence and (2) uses a person's name as its subject. The results indicate that **76.9%** of valid sentences meet the criterion. Based on this and our original results, we believe there is a strong causal link between the data bias and the existence of the thinking bias. We leave a strict quantification of this bias to LLMs' performances for future works.
[1] https://huggingface.co/datasets/wikimedia/wikipedia
---
> W2: I found figure 3 to be confusing.
**Ans**: Apologies for the confusion. The colors in the figure represent the saliency scores, ranging from low (green) to high (red). And the incorrect answer 'D' and the tokenization of 'Graham' is simply a coincidence in our test examples. We will add a legend to explain the colors and include more case studies after revision.
---
> W3: There is also an issue of novelty – many of the reported results do not seem to particularly 'novel'.
**Ans**: Thank you for your suggestion. We have provided a detailed discussion of the novelty and contributions of our work in the Global Rebuttal. We hope this will address your concern.
---
> Q1: Is the finding that names are easier to trigger recall potentially due the fact that names typically have fewer number of tokens than descriptions?
**Ans**: To study whether the number of tokens affects the efficiency of LLMs' memory retrieval, we conduct a new experiment using data with extremely long names, such as "Archibald Wolfgang Montgomery Beauregard Fitzwilliam the Third". We replace each name in the original dataset with these names, resulting in 2 new sets: LongNameIsDesc and DescIsLongName. The average number of tokens of new names and descriptions is **21.8** and **19.2**, respectively. We reconduct our main experiment and report results in Table 2 in the PDF. Given the performance on MCQs for LongNameIsDesc still significantly exceeds that of DescIsLongName, we conjecture that the models are still biased towards these long names under the effect of thinking bias.
---
> Q3: In multiple choice question prompting, are LLMs biased towards any particular response (e.g., ‘A’, ‘B’, ‘C’, or ‘D’)?
**Ans**: To study whether LLMs are biased towards certain options in fact retrieval, we model MCQs as a 4-label classification problem and calculate the models' performances on the NameIsDescription group. The DescriptionIsName group is excluded because the models' random selection behaviors on this set make it uncertain to derive meaningful interpretations. The results are posted in Table 3 in PDF. The variations in F1-scores across options fall within $\pm 5\%$, indicating **no strong preference** towards certain options in fact retrieval. Some fluctuations have been observed in the distribution of precisions and recalls. We attribute this to previous observations [1,2] on the model's inclination towards specific options when they are uncertain about the answer (higher recall always accompanies lower precision).
[1] Beyond Performance: Quantifying and Mitigating Label Bias in LLMs
[2] Language Models (Mostly) Know What They Know
---
> Q4: I’m skeptical of the interpretation (or over-interpretation) of "information flow" by computing the average attention weights to a given token.
**Ans**: We respectfully disagree with this view. As indicated in our earlier response, we did not observe a strong correlation between the options and the behavior of models. Additionally, as demonstrated by a counterexample in Fig. 2, in N2D MCQs where the descriptions are presented later in the prompt, the descriptions only exhibit relatively high saliency scores in the early layers. At the middle and later layers which are considered crucial for fact retrieval and semantic processing [1,2], the models still allocate a disproportional amount of information flow towards the name, despite the greater text distance.
[1] Locating and Editing Factual Associations in GPT
[2] DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
---
> L1: The authors claim that the curse of reversal can be somewhat mitigated if multiple choice questions are used. However, this appears to be a major limitation.
**Ans**: The primary goal of our work is not to directly mitigate the reversal curse, but rather to **examine and challenge** the widespread belief about this curse and LLMs' generalization abilities in more evaluation tasks which include MCQs. We do observe that when provided with the appropriate form of training documents and the presence of both A and B, LLMs can overcome the reversal curse. However, we interpret this as strong evidence that underscores the importance of the appropriate structure of factual knowledge for knowledge injection and downstream performance.
Furthermore, although not the primary focus of our study, the observed mitigation effect in MCQs may have broader applications. For example, in RAG system, the retriever may retrieve multiple documents containing both correct and incorrect answers. Therefore, the ability to identify the correct answer would significantly impact the overall efficiency. In CoT settings, these contextual hints could also be produced by LLM itself, if the user could first help to narrow down the search range and then ask the LLM to list a few possible candidate answers before answering. Overall, we believe our new findings could contribute to both LLM interpretation and their application in future works.
---
Rebuttal Comment 1.1:
Title: Reviewer response to author rebuttal
Comment: I thank the authors for thoroughly engaging with my questions, and am impressed with the amount of work they have been able to perform during the rebuttal period. I think this paper would be a great contribution to NeurIPS, and commend the authors on their study. I will increase my score to 7.
---
Reply to Comment 1.1.1:
Title: Appreciation to Reviewer's Response
Comment: Dear Reviewer PVmx,
Thank you for your timely feedback. We sincerely appreciate your time in reassessing our work and rebuttals and are grateful for your recommendation of our work to NeurIPS. We have and will continue to improve our work based on your valuable comments as well as those from other reviewers.
---
Best regard,
Authors | Summary: The authors extend the original reversal curse dataset to two tasks: open ended QA and MCQ. The authors analyze the generalization capabilities of LLMs on reversal tasks and provide several experiments towards their claims. They show that LLMs can generalize from A is B training to B to A, when both B and A are present in the question. Some of the results are expected based on existing results. For me the most significant contribution was the result in the relative intensities across layers. I may be willing to increase my score based on responses to certain questions.
Strengths: - Strong applied work. The experiments are well posed (if only for small models), and do support the papers claims.
- In a welcome change from literature, the authors also provide negative results of their experiments in Section 4. While this is great, it is not clear from the paper how to mitigate the reversal curse on the said datasets.
- I liked the saliency score used in Section 3.2. Specifically, using saliency as an importance score for the l'th layer was insightful. I was wondering if this could be written as a dot product instead of a Hadamard product?
Weaknesses: - Is the paper analyzing reversal curse? Was that not already done by the original reversal curse paper?
- The authors claim that generalization occurs when both A and B are present in the question. Is this possibly due to the structure of attention?
- Figure 5 shows that the models have a decreased accuracy when CoT is used. Why so? Should CoT not improve the results (as per earlier descriptions in the paper)? Any discussion on this?
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 115 Was the model overfit to the fine-tuning data?
- What is the authors explanation for MCQs on NameIsDescription exhibiting generalization whereas DescriptionIsName not exhibiting the same?
- Does thinking step by step (CoT) put the name is description into context? Since in-context does not have the reversal curse problem, does CoT fix reversal curse on the given datasets?
- Is there an equivalent Table 4 identifying whether the Description is present in the CoT? Perhaps both Name as well as Description can exist in the CoT?
- Figure 5 goes against existing literature [52] as noted by the authors themselves. "We attribute this divergence to the structure of the training documents since their training samples mainly use names or personal pronouns as subjects, which generally mirrors the structure of the NameIsDescription subset." The description provided was not clear to me.
- Why not use the saliency scores of particular tokens, like the names, instead of a particular position?
- Snt and Sdt are great observations. Can these be used to actually modify the attention? i.e. since we have white-box access to the model, can one actually increase the attention manually to check if this (a) solves the RC problem, or (b) improves generalization in terms of accuracy?
- Can the authors clearly outline why their contribution is novel, and what are the major take-aways from their work?
### Suggestions:
- Based on the introduction, I could not understand the paper contributions. Can the paper provide a short section or paragraph on that in Section 1?
- The legend in Figure 5 is unclear. Does it mean that OOD are the smaller bars which are having the masks?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: - No major limitations. I feel some of these results are expected based on Berglund et al.
- My concern is on novelty. I am hoping the authors can address this question in the rebuttal.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Part 1
> W1: Is the paper analyzing reversal curse? Was that not already done?
**A**: Yes, we thoroughly analyze the reversal curse. As discussed in Global Rebuttal, we find previous discussions on this problem are **rather vague and arbitrary**. We provide conceptual clarity and an explanation of the original reversal curse (Reviewer e5Vg), which has not been done by previous works. A detailed disucssion is given in our Global Rebuttal.
---
> W2: Is the generalization that occurs when both A and B are present in the question due to the structure of attention?
**A**: Yes, but it is only a necessary condition. Another important factor is the structure of training documents, which must be organized to align with LLMs' thinking bias, such as NameIsDescription. The underlying reason we believe is that LLMs' pretraining corpus exhibits a bias towards these structures, as supported by our response to Reviewer PVmx's W1. This distribution bias shapes LLMs' recall process from Name to Description, but not vice versa.
---
> W3: Figure 5 shows that the models have a decreased accuracy when CoT is used. Why so?
**A**: The reason for the decreased accuracy when CoT is used in training is that the test is conducted to force models to output **without CoT steps**. This setting is intended to explore whether models can learn the reasoning paths from the CoT QA data and enhance the performance even when CoT is not utilized in the test [1,2]. The results when CoT is used in test are reported in Table 1 from the PDF, with no decrease observed.
[1] Large language models can self-improve
[2] Let’s think dot by dot: Hidden computation in transformer language models.
---
> Q1: Line 115 Was the model overfit to the fine-tuning data?
**A**: We disagree. To avoid overfitting, we use the hyperparameter shown in Table A5 and data augmentations for training. The performance of our models on MMLU in Table A6 also suggests that no obvious overfitting phenomenon occurs. Furthermore, we re-run the training of LLaMA2-7B and 13B chat and report the training and testing curves in Figure 1 of our PDF. Again, no overfitting was observed.
---
> Q2: What is the explanation for MCQs on NameIsDescription exhibiting generalization whereas DescriptionIsName not exhibiting the same?
**A**: The direct cause is that the models exhibit a bias towards recalling information related to names instead of descriptions when both are presented in the question. For DescriptionIsName, all models fail to recall the correct descriptions given the paired names, which has been observed by our Open-QA experiments. The root cause, we believe, is that the pretraining corpus of LLMs is more biased towards the expression of NameIsDescription, which potentially shapes the preference of the above thinking patterns in LLMs.
---
> Q3: Does thinking step by step (CoT) put the name is description into context? Since in-context does not have the reversal curse problem, does CoT fix reversal curse on the given datasets?
**A**: No. The CoT prompt we use is shown in section 3.1, line 152. Rather than directly providing the related training documents as context, we encourage LLMs to **generate this information themselves** based on their knowledge. This approach also gives us an understanding of their problem-solving process.
To your second question, we re-ran the MCQ tests and included ground-truth information related to all options in the context of the input query. The average MCQ accuracies across both NameIsDescription and DescriptionIsName on LLaMA2-7B and 13B are 96.4 and 98.2. The results suggest that it can be fixed if all ground-truth information is presented, but empirically can be hard to achieve in real applications.
---
> Q4: Is there an equivalent Table 4 identifying whether the Description is present in the CoT?
**A**: In our CoT experiment in sec. 3.1, we observe that **both names and descriptions** exist in the models' self-generated CoT steps. It is their recalling behavior (i.e., whether their recalling starts with names or descriptions) that we are more interested in. Therefore, in Table 3, we count the frequencies of the pattern "[Name] is [Description]" observed in their CoT steps, such as "Daphne Barrington is the director of 'A Journey Through Time'", but not "The director of 'A Journey Through Time' is Daphne Barrington".
---
> Q5: The description discussing the divergence between Figure 5 and [52] is not clear.
**A**: In [52], the authors show that the inclusion of exemplary QAs during training could enhance models' performances on test and conclude that these QA examples enhance models' knowledge application abilities. However, we did not observe the same improvement in our experiment. We attribute this inconsistency to the impact of the thinking bias on the knowledge application abilities within the DescriptionIsName group.
---
> Q6: Why not use the saliency scores of particular tokens, like the names, instead of a particular position?
**A**: The positions used for computing saliency scores are exactly those of the name tokens.
---
> Q7: Snt and Sdt are great observations. Can these be used to actually modify the attention?
**A**: Yes. Due to time limitations, we only experimented with heuristic methods on D2N tasks of MCQs from the DescriptionIsName group. To force models to utilize more information from descriptions, we amplify the attention score of descriptions by a factor of 2 and decrease that to the names by a factor of 0.2. This operation is applied to all attention heads from layer 10 to 30 of our trained LLaMA2-7B-chat model. The resulting correctness is boosted to 44.2%. But we also notice some strange behaviors after our brutal editing, including the output of mismatched option labels and textual content. We believe that a more intricate editing method, such as circuit finding [1], could address this issue. We leave this for future work.
[1] Localizing Model Behavior with Path Patching
---
Rebuttal 2:
Title: Rebuttal-Part 2
Comment: Dear Reviewer Mp9z:
We are sorry that we were unable to respond to all your questions in a single rebuttal section due to the character limit. We have included our responses to the remaining questions here for your reference, as well as for the benefit of other reviewers. We apologize for any inconvenience this may have caused.
# Part 2
> Q8: Can the authors clearly outline why their contribution is novel, and what are the major take-aways from their work?
**A**: We hope our response in Global Rebuttal will effectively address your concerns.
---
> Suggestion: In Figure 5, does OOD are the smaller bars which are having the masks?
**A**: Yes. And we will improve the clarity of this legend in the revised version of the paper.
---
> L1 & 2: I feel some of these results are expected based on Berglund et al. My concern is on novelty.
**A**: We respectfully disagree. We believe that the only experimental results that can be expected based on RC are the model's performance on Open-QA. The generalization difference between the NameIsDescription and DescriptionIsName groups, particularly the phenomenon that the models cannot identify the correct answer even though they can directly answer the original question without options, is **completely unexpected and even counter-intuitive**. It goes against the common belief that identifying the correct answer would be much easier than producing it from scratch. We hope that the novelty listed in the Global Rebuttal could adequately address your concerns.
---
> Additional Question: Can dot product replace Hadamard product in saliency score computation?
**Ans**: The incorporation of Hadamard product in Equation 1 is inspired by one of the first works [1] that introduced saliency analysis into language models. Suppose a new mask variable $Z\in \mathbb{R}^{L\times L}, Z\in [0, 1]$ is applied to the attention matrix $A \in \mathbb{R}^{L\times L}$ to control the interaction between token $i$ and token $j$, which gives us $A'=A \odot Z$. The sensitivity of the model to the mask variable $Z$, $I_{ij} = |\frac{\partial \mathcal{L}}{\partial Z_{ij}}|$ can be seen as the importance score of the interaction between token $i$ and $j$. After applying the chain rules, the final expression for $I$ as well as our definition of the saliency score can be written as $I_{ij} = |A \odot \frac{\partial \mathcal{L}}{\partial A}|_{ij}$. We are not sure whether dot product could serve the same purpose as well. But we remain open to any further discussion.
---
Rebuttal Comment 2.1:
Comment: Thanks to the authors. It addressed some questions I had during my initial reading of the work. I hope, if the paper gets accepted, the authors can move some of these points into the main body of the work for camera ready.
I will be increasing my score, as the rebuttal addresses some of my earlier concerns. I have no objections to the paper getting accepted. There are no technical concerns, or issues related to the soundness of the paper.
---
Reply to Comment 2.1.1:
Title: Appreciation to Reviewer's Response
Comment: Dear Reviewer Mp9z,
Thank you for your feedback! We are deeply grateful for your time in reading our rebuttals and your willingness to reassess our work. In response, we will further polish our final paper based on the valuable points raised by you and other reviewers.
---
Best regards,
Authors | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time, insightful suggestions, and valuable comments. We also notice that there may be concerns regarding the comparison of our work to previous studies on the reversal curse and the clarity of our contributions. Here, we provide a comprehensive summary of the reviewers' feedback and outline the novelty of our work, as well as our contributions and main takeaways.
# Merits of our work
The reviewers have acknowledged the strengths of our papers as follows:
* Our findings on LLMs' generalization abilities are timely, interesting, and novel. (Reviewer PVmx, Reviewer p2dN, Reviewer e5Vg)
* The experiments are extensive, insightful, and compelling in supporting the papers' claims. (Reviewer Mp9z, Reviewer PVmx, Reviewer p2dN, Reviewer e5Vg)
* Our writing is straightforward and easy to understand. (Reviewer PVmx, Reviewer p2dN)
# Novelty of our work comparing to previous studies
The reversal curse [1] is proposed based on the observation that models trained on "A is B" cannot complete the sentence "B is ...". Before this work, there is no convincing explanation or even a clear definition of this curse. For example, the original paper [1] suggests that it is *"a failure of basic logical deductions from the training documents"*, which is rather vague and acknowledged as impossible to verify. A later work [2] claims that *LLMs trained on "A is B" cannot answer questions related B as long as the relationship "A is B" is not provided by the context.* These claims raise concerns about the generalization ability of today's LLMs: *do LLMs understand their training documents, such as the equivalence between A and B? If they do, to what extent can they apply this knowledge to downstream tasks?*
Our work takes a **further step** towards answering the above question as well as the examination of these previous claims regarding the RC problem. To be more specific:
1. We **refute** the previous belief that LLMs can only realize or answer "B is A" when "A is B" is given in the context, as evidenced by the success of MCQs from the NameIsDescription group. This finding suggests that it is unfair to simply conclude RC as an inability to comprehend the training documents. It is more likely to be a backward recall deficiency (Reviewer e5Vg).
2. LLMs' abilities to apply their training knowledge can strongly correlate with the structure of the training documents. An interesting example is that, even when the models are able to **answer the question directly**, their ability to identify the correct answer from options could be **no better than random guessing** (i.e. Open-QA D2N V.S. MCQ D2N on the DescriptionIsName set). This gives us an unexpected, even counter-intuitive observation on LLMs' abilities.
3. We discover that LLMs display a bias toward using names to initiate their analysis of the query and the retrieval of related information, which explains the observed phenomenon and is **rigorously evidenced** by our comprehensive interpretation experiments (all reviewers). This finding underscores the importance of the structure of training documents on LLMs' downstream performances. We believe our work serves as the corner stone towards the mitigation of this generalization deficiency and provide varible insights for the development of more effective knowledge injection techniques.
# Contributions & Main Takeaways
* **The reversal curse should be more likely to be a backward recall deficiency in decoder-only models.** The success on the MCQs serves as a counterexample to the previous claim that LLMs cannot understand the equivalence between "A is B" and "B is A" in their training documents.
* **Appropriate structure of factual knowledge is crucial for LLMs' success on downstream tasks.** Training data adhering to specific structures, such as NameIsDescription or Book-Story, enables models to provide correct answers when sufficient leads (i.e., available options) are provided. However, when training documents deviate from the models' preferred forms, their knowledge application behaviors become unstable and even counter-intuitive (i.e., Open-QA D2N V.S. MCQ D2N).
* **LLMs display a bias toward using names to initiate their analysis of the query and the retrieval of related information.** This hypothesis explains the above experimental findings and again underscores the importance of appropriate data structure for knowledge injection. Furthermore, this finding also raises a series of new questions beyond the discussion of the original RC problems: when and what shapes such a thinking pattern? How to mitigate its negative effect? Could other LLMs' deficiencies such as hallucination [3] or social bias [4] be related to it? All these questions remain yet to be explored and would enhance our understanding of today's LLMs.
[1] The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
[2] Are We Falling in a Middle-Intelligence Trap? An Analysis and Mitigation of the Reversal Curse
[3] Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
[4] Bias and Fairness in Large Language Models: A Survey
Pdf: /pdf/946d7381e11e7176f2e940170733cfc5cc0c1338.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MotionBooth: Motion-Aware Customized Text-to-Video Generation | Accept (spotlight) | Summary: The authors propose MotionBooth, a method to fine-tune a pre-trained text-to-video model on a collection of images of a specific object to enable the model to generate controlled videos of that object. Fine-tuning incorporates three losses: diffusion loss on image data, restricted to the object's region; video preservation loss, i.e diffusion loss on sample video data, to prevent overfitting to static images; subject token cross-attention loss, to improve controllability at generation time. During inference the motion of the object is controlled via editing the cross-attention maps to amplify the attention to the object tokens within the object's bounding boxes. The camera is controlled via shifting the noised latent and filling the new regions with latents sampled from the background of the original noised latent.
Strengths: The problem of controllable video generation is of high significance nowadays. The method proposed in the paper is relatively simple and the demonstrated visual results confirm the improved controllability of MotionBooth over prior work. The contributions in the paper are supported with well-designed figures aiding the clarity of the presentation. The limitations of the proposed method are discussed in the appendix.
Weaknesses: 1) **Novelty**:
- Similar training-free object motion control was proposed in [1] and [2] for image and video models respectively, where cross-attention amplification within object's bounding box was used to control its location in the generated frame. Similar training-free camera motion control was used in [3], where a series of noised latent shifts were used to control the global scene and camera motion.
Discussion and comparisons with those works are missing in the paper.
- Some missing references: [4, 5].
2) **Method**:
- Based on the formula for subject motion control, the attention is suppressed (set to -infinity) for all query-key pairs other than the queries from the bounding box and the keys from the object tokens. This way it seems that the rest of the prompt is ignored. Moreover, it is stated in the paper that "the attention suppression outside the bounding box regions persists throughout the generation". Details clarifying this are missing in the paper.
3) **Evaluation**:
- The paper lacks quantitative evaluation. Comparing CLIP features between the frames doesn't measure temporal consistency. Metrics like FVD, or optical flow warpping error could be better choices for this purpose.
- Details on how exactly the metrics are calculated are missing. Formal definitions of newly introduced metrics are required for better understanding. E.g. how exactly is the flow error calculated in the case when not all the motion except for the object's motion corresponds to camera motion? Having those would help with understanding the significance of the improvements reported in the tables.
- The evaluation dataset consists of limited number of different object classes, mostly dogs, cats and toys. Evaluation on more general collection of object classes would better support the claims made in the paper.
- More ablation studies would better illustrate the contributions. E.g. what would happen if the missing region in the shifted noised latent was not filled with samples from the same noised latent? Or if camera control like in [3] was used instead?
4) **Results**:
- Figure 14 shows that bounding boxes often don't fully contain the object of interest. Given that the attention is suppressed outside of bounding boxes, this looks like a flaw in the proposed control and needs to be investigated.
[1] Ma et al., Directed diffusion: Direct control of object placement through attention guidance, AAAI 2024.
[2] Ma et al., TrailBlazer: Trajectory Control for Diffusion-Based Video Generation, arxiv 2024.
[3] Khachatryan et al., Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators, ICCV 2023.
[4] Wang et al., Boximator: Generating rich and controllable motions for video synthesis, arxiv 2024.
[5] Chen et al. Motion-Zero: Zero-Shot Moving Object Control Framework for Diffusion-Based
Video Generation, arxiv 2024.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1) Could the authors discuss the connection between the proposed MotionBooth and the prior work with similar training-free controls? Some ablation studies would be ideal.
2) Could the authors provide more details about their subject motion control attention suppression technique? Especially regarding the issue with attention to the prompt tokens other than the subject language tokens.
3) The authors claim that their fine-tuning is efficient (line 3 in the abstract). Could the authors provide more details on this? E.g. what is the training time to adapt the model to a new subject?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have adequately addressed the limitation and the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! Here are our replies.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# Novelty of our method
Thanks for the suggestion. We appreciate that most reviewers have acknowledged the novelty of our method and we will add the missing citations in the final version.
Due to the response word limit, please see the general response for comparison to related methods.
# Ablations of the latent shift filling method
To address the reviewer's question regarding additional ablation studies, we have conducted experiments to explore different methods for filling the missing region in the shifted noised latent. We present the qualitative results in `Figure R.1 (f)` and the quantitative results in `Table R.1 (c)`.
In our experiments, we tested several approaches:
1. Random Init: Filling the hole with random values. This method resulted in severe artifacts due to the disruption of the natural horizontal and vertical distribution of latent pixels.
2. Random Sample: Randomly sampling values in the latents. Similar to Random Init, this approach also produced significant artifacts and degraded the visual quality.
3. Loop: This method involves using the moved-out-of-scene values to fill the hole, creating a looping background effect. While this technique preserves video quality better than random initialization or sampling, it limits the flexibility of camera movements by producing only looping effects. Thus, it is not suitable for more varied camera controls.
4. Reflect: This is the method used in Text2Video-Zero, where the missing region is filled by reflecting the surrounding content. However, in our Text-to-Video (T2V) situation, this approach resulted in the method collapsing, failing to maintain the desired quality.
The quantitative results in `Table R.1 (c)` corroborate these observations, showing that Random Init and Random Sample lead to significant artifacts, while Loop provides better quality but lacks flexibility. The Reflect method, although effective in other contexts, did not perform well in our scenario.
In conclusion, our straight-sample method consistently maintained high visual quality without introducing significant artifacts or limiting camera movement flexibility.
# Details about attention suppresion
Thank you for your insightful question. In our approach, we manipulate the dot product between the query matrix $\mathbf{Q}$ and the key matrix $\mathbf{K}$. The final attention map is then calculated using a Softmax function across all tokens.
The amplification of the subject tokens is during the earlier denoising steps.
In the later steps, we suppress all the tokens, including the subject tokens all to $-\infty$. This does not completely ignore any tokens due to the nature of the Softmax function, which normalizes the scores across all tokens.
Our qualitative results, as presented in the paper and the accompanying one-page PDF, demonstrate that the generated videos do incorporate other aspects of the prompt, such as verbs or background elements. This shows that while the subject tokens receive amplified attention, the rest of the prompt tokens are still considered, influencing the overall output.
# Video metrics
Thanks for the feedback. Due to the word limit, please refer to the general response.
# Flow error metrics calculation
We exclude the subject region when calculating flow error, so the flow error only accounts to camera motion control.
The flow error $E_{flow}$ for each frame can be expressed as:
$$
E_{flow} = \frac{1}{|R|} \sum_{p \in R} \| \mathbf{F}_p - {\mathbf{C}} \|,
$$
where $\mathbf{F}_p$ is the predicted flow vector at pixel $p$. ${\mathbf{C}}$ is the ground truth camera motion condition. $R$ represents the region outside the subject bounding box. $|R|$ is the number of pixels in region $R$.
# Evaluation datasets
Thank you for your feedback. We acknowledge that current open-sourced, state-of-the-art T2V models, such as LaVie and Zeroscope, exhibit limitations in generating a wide variety of object types. For example, Zeroscope struggles to generate humans with its pre-trained weights. In our study, we have followed the precedent set by previous research [1,2,3] in selecting our evaluation dataset, which includes categories like pets, plush toys, cartoons, and vehicles. We believe this selection provides a sufficiently diverse set of objects to comprehensively evaluate our proposed techniques within the capabilities of the existing T2V models.
We emphasize that our approach, characterized by training-free intrinsic subject and camera motion control methods, offers strong potential for generalizability. We are confident that, with the advent of more advanced T2V models, our methods will demonstrate even broader applicability and effectiveness across a wider range of object classes.
# Out-of-bbox problem
Our approach involves controlling the approximate position of the subject primarily during the earlier denoising steps. This method can sometimes result in portions of the subject extending beyond the defined bounding box in the final output. This is a deliberate design choice, as overly restrictive bounding box conditions would lead to rigid, unnatural outputs that may appear as squared or boxed regions, which are not desired.
# Model efficiency
Training:
Fine-tuning a subject takes about 10 minutes (Line 263 in paper).
Inference:
Naive T2V pipeline: around 15.0s per video
ours:
\+ subject control: 15.3s per video
\+ camera control: 20.6s per video
\+ camera & subject control: 21.5s per video
[1] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." CVPR 2023.
[2] Wei, Yujie, et al. "Dreamvideo: Composing your dream videos with customized subject and motion." CVPR 2024.
[3] Wang, Zhao, et al. "Customvideo: Customizing text-to-video generation with multiple subjects." arXiv 2024.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Dear authors,
Thank you for your detailed rebuttal.
However, I still have some concerns and would keep the same rating for now.
1) Could you clarify how the comparisons in table R.1 (f) are done? E.g. are the backbones the same? Is the subject learning performed for all the variants?
2) Do you have any explanation why the reflect method collapsed in your case, while it seems to work reasonably well in the Text2Video-Zero paper?
3) Regarding the suppression formula. So is the equation 7 in the paper incorrect? In the rebuttal you are saying that all the tokens are suppressed to $-\infty$ for the later denoising steps, while in the equation 7 the attention edit for the subject tokens is 0. Note that if all the tokens are suppressed to $-\infty$ then the attention is indeed equivalent to taking the average of all the values, while in the case when there are any finite pre-softmax values in the attention matrix, only those will be weighted despite the normalization.
Best regards, Reviewer
---
Reply to Comment 1.1.1:
Title: Response to reviewer q2XJ
Comment: Dear reviewer:
Thanks for your comments. Here are our reply for your questions.
# Details about the expriments in `Table R.1 (f)`
Thank you for your question. In Table R.1 (f), all the baseline experiments are conducted using the Zeroscope base model for consistency across the experiments. Additionally, the subject learning process of MotionBooth is applied uniformly to all the baseline models to ensure a fair and direct comparison.
# The reason why the reflect method fails
Thank you for your question. The discrepancy between our results and those reported in the Text2Video-Zero paper can be attributed to the difference in how the initial latent $z_T$ is handled in the two approaches.
In the Text2Video-Zero paper, the authors utilize the same initial random latent $z_T$ for all frames of the video, which is then denoised with a few steps to $z_t$. The reflect latent shift is applied to $z_t$, and noise is added back to generate $\hat{z}_T,$ which serves as the starting latent for denoising the video frames. This method works well within their framework because they extend a pre-trained text-to-image (T2I) model for video generation, allowing them to maintain consistency across frames by using the same $z_T$ throughout the process.
However, our model is designed for text-to-video (T2V) generation, where the initial latent $z_T$ is randomly initialized for each frame. This approach is crucial for capturing the frame-by-frame variations necessary in video generation. When we applied the reflect method under these conditions—where $z_T$ is distinct for each frame—it resulted in collapsed outputs. This suggests that the reflect latent shift method struggles when applied to distinct latents for each frame, which is a key requirement in our T2V model.
Furthermore, we also tested using the same latent initialization ($z_T$) across all frames in our model, similar to the method used in Text2Video-Zero. Unfortunately, this approach also led to collapsed results, even when there were no subject or camera motion controls in place. This is because the pre-trained T2V model requires distinct initialized latent for each frame. This reinforces our conclusion that the reflect method is not suitable for models that require distinct latents for each frame, as is the case in our T2V framework.
We hope this explanation clarifies why the reflect method did not work in our scenario despite its effectiveness in the Text2Video-Zero paper.
# Re-clarification about the suppression formula
Thank you for your insightful question and for giving us the opportunity to clarify our approach. We apologize for any confusion caused by our previous explanation. After thoroughly reviewing our code and methodology, we confirm that Equation 7 in the paper is correct.
To clarify, our motion control module performs suppression across all denoising steps, but this suppression is only applied outside the subject's bounding box region. Within the subject region, amplification is applied during the earlier steps, while in the later steps, there is neither amplification nor suppression within this region.
Regarding the suppression of tokens, the language tokens, except for the subject tokens (e.g., verbs, background nouns), are indeed effectively ignored across the denoising process because they are assigned $-\infty$ values. This stark difference in token values is intentional, as it allows us to strongly amplify the motion control of the subject.
This raises the question of how the necessary information from other language tokens is incorporated into the generated results. We believe that this information is retrieved through the <start> and <end> tokens. As mentioned in lines 216-217 of the paper, we do not perform any attention score editing for these two tokens. This means they can share the softmax outputs from the subject tokens since they both have finite values. Transformer-based language encoders, such as CLIP or BERT, tend to encode the overall information of a sentence into these special tokens, given their training with classification tasks. As a result, even though other tokens are ignored in the softmax function, the model can still retrieve the necessary information about verbs, backgrounds, and other elements for the generated results.
To support this claim, we conducted a toy experiment where we printed out the softmax outputs in a vanilla text-to-video pipeline using the Zeroscope model. We observed that the softmax value of the <start> token is the highest, nearing 1, while the <end> token has the second-largest value. The remaining tokens, including nouns, adjectives, verbs, and conjunctions, share the remaining softmax values.
We hope this explanation addresses your concerns, and we are happy to provide further clarification if needed.
Best regards!
Authors of #9629. | Summary: The paper proposes a video generated method - for customizing a subject, along with its motion, provided in the form of bounding boxes, and camera movement provided in the form of camera pose. To do this, the training method has a subject region loss to prevent bias from background and other details in the image, and a video regularization loss function to ensure that video generation properties are not forgotten. At inference, in order to control camera movements, latents are shifted. Cross-attention maps are edited with the provided bounding boxes, to fix the subject motion.
Strengths: The paper is written well. I particularly like the introduction and related work, where the paper clearly motivates the problem, describes the challenges and goes on to describe how it solves the problem.
The method comprises of various components, and each of the components is designed very well - they are intuitive, make sense and well formulated. The method very systematically addresses various challenges involved in the problem by designing various relevant techniques. The paper is also written very well to motivate and describe the method.
Weaknesses: My main concerns are with the experiments, which are very non-convincing. While the paper does provide detailed quantitative results for comparisons and ablations on various metrics validating the solution, the qualitative results do not support that.
1. I watched the supplementary video, which has the videos for the results presented in the paper. First of all, the number of qualitative results provided is severely limited, raising questions about cherry picking. Second, the method seems to perform reasonably well on cases that either have camera motion or subject motion - the results look good. In cases involving both camera as well as subject motion, which is the theme of the paper, the results are highly suboptimal - for instance, at 1:48 of the video, the last video showing the dog does not show the subject motion appropriately.
2. I would like to see more detailed results that segregate the various kinds of motion (camera and subject), to see how effective the method is for just camera motion, just subject motion and a combination of both. This will give a clearer picture of where the method is failing.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am unable to judge the quality of this work with the limited number of qualitative results that have been provided. Moreover, the provided qualitative results are sub-optimal, esp in the case of subject + camera motion, questioning the working of the method. Given that the limit on the size of the supplementary material is considerably large, it would have been nice if the authors had provided more results showcasing their method.
Since the rebuttal does not allow for more qualitative results, I am rejecting the paper at this stage.
The method, while pretty good, probably needs a little more work to get it working for the subject + camera motion case. I recommend that the authors continue working on it to strengthen the paper.
Post rebuttal: it would have been nice to see more qualitative results and videos, but i understand that is not possible in the 1 page rebuttal pdf. Based on the submitted rebuttal, I am happy to increase the score.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! Here are our replies.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# Separate results for subject motion control, camera motion control, and both
Thank you for your feedback and suggestion. It's a valid point to consider the performance of our method across different types of motion scenarios.
In the main paper, we've already presented quantitative results for scenarios involving only camera motion, as shown in `Table 2`. Additionally, we provided results for cases where both subject and camera motion are controlled, as detailed in `Table 1`. Following your recommendation, we conducted an additional experiment to evaluate our method specifically for scenarios involving only subject motion. The results are presented in `Table R.1 (a)`.
For a comprehensive analysis, we have compiled the results for all three cases—camera motion, subject motion, and both—into a single table. This allows us to better understand how controlling these aspects influences the performance. Notably, we observed that when both subject and camera motions are controlled, there is a slight decrease in some metrics. For instance, with the Zeroscope model, there's a 0.1 drop in the R-CLIP score and a 0.01 decrease in the T-Cons metric when both motions are controlled, compared to controlling only subject motion. Similarly, the flow error increases slightly when subject motion control is added to camera motion control. However, there are also instances where the R-DINO and CLIP-T metrics show a slight improvement under combined control.
In conclusion, while there is a partial performance drop when incorporating both control aspects, we believe this trade-off is reasonable and acceptable within the context of the motion-aware customized video generation task. It's natural for some quality loss to occur when adding more control signals, especially with training-free methods. However, the overall quality remains above an acceptable threshold, ensuring that the generated results are still meaningful and effective. Finally, considering the novelty of the proposed subject and motion control methods and the performance gain compared to existing approaches, we believe our work makes a substantial contribution to the field.
# Examples of controlling both camera and subject motion
Thank you for your feedback and for taking the time to review the supplementary video. We appreciate your concerns regarding the number of qualitative results and potential cherry-picking. We would like to clarify and address your points.
We clarify that all the qualitative cases compared with baselines are sampled without cherry-picking. We use different subject and camera motions to better demonstrate the model capability. We will release all the qualitative results in the final version.
As indicated in the quantitative results, our method is designed to effectively control both subject and camera motion. For a more comprehensive understanding, we have provided additional qualitative examples in the revised manuscript.
In particular, `Figure R.1(g)` illustrates the "dog running on the beach" video with eight distinct camera movements. This example demonstrates the capability of our method to manage complex scenarios involving simultaneous subject and camera motion. The dog's movement from the top right to the bottom left of the frame, combined with various camera motions, showcases the robustness of our approach in maintaining both motions' integrity.
Additionally, `Figure R.1(d)` shows another challenging scenario where the camera pans to the left while zooming in on the subject by enlarging the bounding box. This example further emphasizes our method's ability to handle complex motion dynamics.
We believe these examples provide clear evidence of our method's effectiveness in different motion scenarios, including cases involving simultaneous subject and camera motion. We will consider your suggestion to include more detailed segregated results to further validate the method's performance across different types of motion.
Thank you again for your valuable input. We hope these examples address your concerns and demonstrate the robustness of our approach.
# More detailed ablations of our method and more comparisons
In the one-page PDF, we also conduct many other qualitative and quantitative experiments to thourghly analysis our method, and compare with more baselines.
To be specific, `Figure R.1 (a)`, shows we can achieve the zoom-in and zoom-out effect by gradually enlarging or shrinking the subject bounding box.
`Figure R.1 (c)` involves a scene dominated by a large candy house, nearly occupying the entire frame. Despite this, our technique effectively pans the camera to the right. This outcome suggests that our method can handle camera movements even with large foreground objects.
`Table R.1 (e)` and `(f)` shows quantitative experiments comparing with more baselines, including TrailBlazer, Directed Diffusion, Boximator, Motion-Zero, MotionCtrl, and Text2Video-Zero.. Our results generally outperforms the alternatives.
`Figure R.1 (f)` and `Table R.1 (c)` explores different methods for filling the missing region in the shifted noised latent.
We hope the extra results will provide a broader and deeper sight about our method.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Dear authors,
Thanks for your very detailed rebuttal.
I appreciate the experiments segregating the various types of motion and the additional qualitative results. They look good!
It would have been nice to see videos and more qualitative results, which are highly limited, but I understand that this is not possible at the post submission stage in a 1 page rebuttal.
I am happy to increase my score based on the rebuttal.
Thanks,
Reviewer
---
Reply to Comment 1.1.1:
Title: Response to Reviewer yPkE
Comment: Dear reviewer:
Thanks for your comments and increasing the score for our work.
We will update our draft following your comments with more qualitative results and comparison.
Moreover, we believe our method can also be applied with stronger text-to-video base models. We will add these later once the base models are open-sourced.
Best regards!
Authors of #9629. | Summary: This paper addresses the challenge of how to control the identity and the motion of the subject while generating video, in a text-to-video setup. Specifically, in addition to the standard text prompt, it allows the user to control: the subject's identity by providing a few images of it (e.g. 5 photos of my dog); the subject's position in the video, as provided by a bounding box sequence; and the camera motion, as provided by a sequence of delta-x & delta-y.
Identity preservation is achieved by two changes to the loss function, during fine-tuning on images of the subject. First a "subject region loss" zeros out gradients that lie outside the subject's bounding box during fine-tuning, and second a video preservation loss (similar to the class-specific preservation loss introduced by DreamBooth) to ensures that the ability to generate dynamic motion is not lost due to over-fitting on a few static images.
Subject and camera motion are controlled at inference time, by modifying the cross-attentional map between the prompt and the latent feature, by shifting the latent feature, respectively.
Finally, a third loss "subject token cross-attention" links the two, enabling the inference motion controls by ensuring that the cross-attention between the subject token (in the text prompt) and the diffusion model's U-Net features is strong where the object should be and weak elsewhere.
These proposed changes are validated quantitatively through several metrics: CLIP and DINO scores between the source images and the desired video locations of the subject, CLIP score between the text prompt and video frames, and a difference in optical flow between the generated video and ideal video (according to the subject and camera motion prompts).
Examples of generated videos are included in the supplemental.
Strengths: This paper addresses a very significant and timely problem -- how to effect significantly more fine-grained control in text to video models, and allow them to be more than a novelty.
The results, as shown in the supplemental video, are very impressive: the subject identity, subject motion, and camera motions very much follow the user's input.
The additional loss terms are intuitively sensible and effective. The inference modifications, although they seem a bit "hacky" (if you'll excuse me), do get the job done quite effectively.
I appreciate that the transferability of the method was demonstrated by implementing it on two different text-to-video models.
Weaknesses: These are relatively minor weaknesses in my opinion:
This region metrics measure whether the desired object is translating around the frame as per the bounding box guidance, but because they are image metrics, can't evaluate the actual "animation quality" of the subject. I believe the entire suite of metrics could be optimized by a static image crop of the subject sliding across the video frame according to bounding box motion and flow. Can you confirm whether this is correct?
The set of camera motions that can be models is limited to 2d translations in x and y, missing zooms, rotations, etc.
Although this paper presents its techniques as transferable across many latent diffusion models, they require that the "latent can be considered a "shrunk image," which maintains the same visual geographic distribution as the generated images." This should generally be true for most latent video diffusion models, but may not work with new latent encoders being developed.
Technical Quality: 3
Clarity: 4
Questions for Authors: Re "Subject Region Loss": How certain are you that the binary mask, when applied to the latent (rather than the pixels), actually does mask out the background? Given that the latent is produced by a stack of many convolutional or transformer layers, the receptive fields of each latent location almost certainly include information from the nearby background. Have you tried masking the training images directly in pixel space, rather than masking the diffusion loss? How does that compare?
Re "Ablation studies": w/o video and w/ class video give quite-nearly identical numbers. Is this a copy-paste error? If not, it's worth discussing in more detail.
In Figure 3, what is the difference between c, d, e, and f? Are they different samples from the model, or from different models?
Have you tried to achieve the desired camera motions via the text prompt, and considered that as a baseline? I have seen it work well for many basic types of motions in some recent t2v models that you cite.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The limitations described in the appendix are important enough that they should be included in the main paper.
Otherwise, yes, they have addressed limitations and impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! Here are our replies.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# Metrics for static image crop of the subject
We apologize for any confusion. We believe your suggestion involves cropping a video to focus on the subject, according to the bounding box, and then calculating a metric that reflects the "animation quality" within that cropped video.
This is a valuable suggestion. In response, we developed a new metric called "Region CLIP-text." This metric first crops the video to isolate the subject as guided by the bounding box. Then, it calculates the similarity between the cropped video and a text prompt describing the animation, such as running, jumping, etc. We believe this metric better reflects the quality of the subject's animation by focusing specifically on the subject's actions.
The results, presented in `Table R.1. (f)`, compare our method with other approaches in terms of subject motion control. The "Region CLIP-text" metric shows a trend similar to CLIP-T but with a stronger emphasis on the subject region. It more accurately captures the alignment of the subject's animation with the descriptive text. Our method outperforms the baselines, demonstrating the effectiveness of our subject motion control technique.
Regarding a desired video metrics, please refer to the general response due to the word limit.
# Capability of camera motion control
We acknowledge that the proposed latent shift module primarily facilitates control over 2D camera movements, such as up and down panning. However, we observe that we can achieve the zoom-in/out effect through gradually enlarging or reducting the bounding box for the subject, with subject motion control.
This behavior is demonstrated in `Figure R.1 (a)`, where examples illustrate the zoom-in and zoom-out phenomena corresponding to the bounding box adjustments. Additionally, Figure 14 in the main paper includes an example that showcases the enlargement of an object through the progressive increase of the bounding box.
By combining this technique with 2D camera motion control, we can provide a more versatile camera control experience.
As illustrated in `Figure R.1 (d)`, we demonstrate an example where the camera pans left while simultaneously zooming in. This showcases our system's ability to achieve a more complex, coordinated camera movement, highlighting the potential for more dynamic and flexible camera control in future developments.
# The assumption of "shrunk image"
Thank you for your insightful feedback. Models such as ModelScope, Zeroscope, LaVie, and VideoCrafter all adhere to the principle where the latent can be considered a "shrunk image," preserving the same visual geographic distribution as the generated output.
Regarding newer models, we anticipate that they will likely follow a similar architecture, especially given the examples of video editing capabilities demonstrated by Sora using SDEdit technology. The requirement for the latent to act as a "shrunk image" appears to be a foundational aspect of such technologies. Therefore, we believe our techniques will remain applicable and useful even as new latent encoders are developed.
# Mask the diffusion loss vs mask the training images
Thank you for your insightful suggestion. We conduct an ablation study to compare the effect of masking the background regions in the training images directly in pixel space, as opposed to masking the diffusion loss.
As demonstrated in `Figure R.1 (e)`, masking the training images significantly impairs the model's ability to learn the subject. The video example clearly shows that this approach results in a substantial degradation in the model's performance. Furthermore, the quantitative metrics, as presented in `Table R.1 (b)`, reflect a marked decrease in performance when the training images are masked.
We hypothesize that this decline is due to the masked images creating an unnatural distribution, which disrupts the learning process. Therefore, we conclude that masking the diffusion loss, rather than the training images, provides a more effective approach for maintaining the integrity of the learned representations.
# Numbers in Table 3
It is not a copy-paste error. This result demonstrates that using class-specific videos is comparable with not using any videos.
# Differences between c, d, e, and f in Figure 3
Figure 3 shows an ablation about learning loss techniques. They are all sampled from a customized Zeroscope model on the white dog. (c) means the model is trained without using subject region loss and without video loss. (e) means the model is trained with subject region loss but without video loss, etc.
# Control the camera motions through text prompt
Thank you for your insightful suggestion. We conduct an ablation study and presented the results in `Figure R.1 (h)`. We add simple text prompts such as "camera pan right" and "camera pan up right" to control the camera motion. The generated videos do follow these instructions to some extent, particularly for simpler motions like panning right. However, we find that these text prompts are often too vague, lacking the ability to specify important details such as camera motion speed or distance. Consequently, this approach yields sub-optimal results compared to our proposed method.
Quantitative results supporting this observation are shown in `Table R.1 (d)`. While text guidance results in slightly higher CLIP scores and temporal consistency, the flow error—which measures the precision of camera is significantly lower than what we achieved using our method. This demonstrates that our approach provides more precise and stable camera motion control than can be achieved with text prompts alone.
---
Rebuttal Comment 1.1:
Comment: Thank you for including the many clarifications, and the additional analyses. In particular, I appreciate the examples demonstrating camera zoom in/out, text-based camera control, and the exploration of masking in pixel space.
I have read through the other review and authors responses, and believe all of my questions and concerns have been addressed.
---
Rebuttal 2:
Comment: Dear reviewer:
Thanks for your comments.
We will update our draft following your comments with the analysis on camera zoom in/out, text-based camera control, and the exploration of masking in pixel space.
Best regards!
Authors of #9629.
Title: Response to Reviewer N7np | Summary: This paper primarily addresses the problem of motion-aware customized video generation. Traditional customized generation methods either tend to lose the motion information of the video or require additional training of motion modules. To enhance subject fidelity and video dynamics, this paper introduces subject region loss and video preservation loss. Additionally, a training-free subject- and camera-motion control method is proposed, enabling the model to flexibly control the motion information of the video. Extensive experiments demonstrate that the proposed method outperforms existing state-of-the-art methods.
Strengths: 1. This paper achieves better automized generation results than the state-of-the-method
2. The writing is clear and easy to follow
Weaknesses: 1. Some details need to be added to more clearly distinguish the differences. For example, in line 169, existing methods use class-specific data, whereas this paper uses common videos. Do the common videos need to include customized subjects? How are these common videos obtained?
2. The assumptions made by the method, such as sampling operations on the x- and y-axes when controlling camera motion, may limit its effectiveness. For instance, in scenarios with significant changes from left to right or top to bottom, or in close-up videos with large foreground objects, this operation may result in uncoordinated outcomes. However, the paper does not seem to discuss these limitations.
3. Methods that control subject and camera motion through object trajectories and camera parameters have also been proposed, such as motionCtrl, TrailBlazer, Boximator, and motion-zero. However, this paper does not provide an in-depth discussion or experimental comparison with these methods. Although some of these methods are training-based, it is still worthwhile to explore the existing gaps.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How many subject samples and/or common videos are used for training each customized model?
2. The proposed mainly explores the camera motion on panning left or right, is it possible to control other motion types, like zoom-in/out effects?
3. what is the difference between the proposed subject region loss, subject token cross attention loss, and those losses in Mix-of-show [1] for customized image generation
ref:
[1] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. This paper needs to discuss more methods addressing the same tasks, like motion control, and specify the differences between the proposed loss functions and those of existing methods.
2. More limitations or failure cases needed to be discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! Here are our replies.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# Details about video data used for training
Thanks for the question, during training, we **randomly** download 500 videos from the Panda-70M dataset. The Panda-70M dataset is a large-scale text-video dataset that contains 70M videos from YouTube. The videos are annotated with captions. We call them common videos because **we do not constrain them to have any specific class or the customized subjects**. The reason to use video data is to preserve the video generation ability of pre-trained T2V models.
The customization ability is learned through 4-6 images of the same subject.
For more details on the training data, please refer to `Sec. 4.1 Implementation details`, specifically lines 257-258.
# Camera motion control on extreme cases
Thank you for your valuable feedback. We conduct analysis to address the limitations you mentioned, focusing on extreme scenarios involving significant camera movements and large foreground objects.
## Large Camera Motion Speed
We test our camera motion control technique with varying levels of camera movement. The results are shown in `Figure R.1 (b)`. Initially, with camera movement [c_x, c_y] = [-0.5, 0.45], indicating a movement of half the video width to the left and nearly half downward, our technique successfully controls the camera motion.
Even when we double the camera motion speed, the model continues to output accurate results. However, at triple the speed, [c_x, c_y] = [-1.5, 1.35], the method fails, showing only a downward tiling effect.
These findings indicate that while our method can manage camera movements spanning the entire video width, it does have limitations under extremely high-speed conditions.
## Camera Motion Control with Large Foreground Objects
Results are shown in `Figure R.1 (c)`. The experiment involves a scene dominated by a large candy house, nearly occupying the entire frame. Despite this, our technique effectively pans the camera to the right.
This outcome suggests that our method can handle camera movements even with large foreground objects. We believe this effectiveness stems from the method's reliance on latent shifts, which moves both foreground and background elements simultaneously. Therefore, the presence of large foreground objects does not significantly impair the performance.
# Comparisons with more subject and camera motion control methods
Thanks for the suggestion. Due to the response word limit, please see the general response for comparison to related methods.
# Zoom-in/out
We acknowledge that the proposed latent shift module primarily facilitates control over 2D camera movements, such as up and down panning. However, we observe that we can achieve the zoom-in/out effect through gradually enlarging or reducting the bounding box for the subject, with subject motion control.
This behavior is demonstrated in `Figure R.1 (a)`, where examples illustrate the zoom-in and zoom-out phenomena corresponding to the bounding box adjustments. Additionally, Figure 14 in the main paper includes an example that showcases the enlargement of an object through the progressive increase of the bounding box.
By combining this technique with 2D camera motion control, we can provide a more versatile camera control experience.
As illustrated in `Figure R.1 (d)`, we demonstrate an example where the camera pans left while simultaneously zooming in. This showcases our system's ability to achieve a more complex, coordinated camera movement, highlighting the potential for more dynamic and flexible camera control in future developments.
# Differences of losses with Mix-of-Show
**Mix-of-Show:** This method focuses on training an ED-LoRA (Efficient and Decentralized Low-Rank Adaptation) model for individual clients. The primary aim is to merge multiple LoRA weights using a multi-concept fusion technique to handle multiple concepts effectively.
**Our Method:** We aim to eliminate the overfitting problem to the background while learning from a few subject images. Our approach includes specific loss functions designed to address this issue and improve the representation of subjects within videos.
1. Subject Region Loss:
**Mix-of-Show:** Does not explicitly focus on subject region loss. Instead, it leverages the general capabilities of the ED-LoRA model to handle multiple concepts.
**Our Method:** We introduce a subject region loss specifically designed to minimize overfitting to the background. This loss ensures that the model learns to focus on the subject itself, rather than the surrounding context, thereby improving subject-specific customization.
2. Subject Token Cross-Attention Loss:
**Mix-of-Show:** The approach does not include a specific loss function that guides the subject token with the subject's position in the video.
**Our Method:** We propose a subject token cross-attention loss to explicitly guide the subject token with the subject's position in the video. This helps in maintaining the spatial consistency of the subject across frames, ensuring that the subject is accurately represented in its correct position throughout the video.
To summarize, our method distinguishes itself by introducing specialized loss functions that address specific issues in customized image generation, such as overfitting to the background and maintaining subject position consistency. These targeted losses are not present in the Mix-of-Show approach, which focuses more broadly on decentralized training and merging of multiple LoRA weights.
We will cite Mix-of-Show and incorporate this discussion in the final version of our work. If further specific aspects of our loss functions need to be compared with those in Mix-of-Show, we would be happy to provide a more detailed comparison. Thank you for your attention to this matter.
---
Rebuttal 2:
Title: Please let us know whether we address all the issues
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and a PDF file. Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider the raising score after we address all the issues.
If you still have more questions and concerns, please comment here. We will reply it as soon as possible.
Thank you
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their thorough rebuttal to my and other reviewers' questions. Your answers helped me better understand your method and clarified several of my concerns. However, some of my main concerns remain.
I was pleasantly surprised to see that the zoom in/zoom out effect could be achieved by enlarging or reducing the bounding box. However, this effect is not implemented by the camera movement control module, which diminishes the contribution of the camera movement control. Other methods that do not rely on bounding boxes might not be able to achieve this effect. Additionally, I feel that this approach may also struggle to achieve camera motion control that involves rotating around an object.
Additionally, even though the Mix-of-Show paper does not mention it, its code uses an attention regularization loss that is quite similar to the subject token cross-attention loss in this paper. It would be helpful if the paper could also explore the effects and differences between these two implementations. Of course, since the Mix-of-Show paper does not explicitly state this, it will not be a factor in my scoring.
In summary, I appreciate the completeness of the paper, but the innovation seems somewhat limited to me. Nonetheless, I still hold a positive view toward the paper's acceptance, so I will maintain my score.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer ZPXz
Comment: Dear Reviewer,
Thank you for your valuable feedback and for taking the time to review our responses. We appreciate your insightful comments, which have helped us further refine our work.
We will update our manuscript to reflect the functional limitations of the camera motion control technique, particularly highlighting that the zoom in/zoom out effect can indeed be achieved by adjusting the size of the bounding box. We acknowledge that while this method is effective, it does not utilize the camera movement control module, which might reduce the perceived contribution of that component. Additionally, we understand your concern that other methods not reliant on bounding boxes might struggle to replicate this effect, and we also recognize the challenges in achieving camera motion control that involves rotating around an object using our current approach.
Regarding your comments on the comparison of losses with the Mix-of-Show code, we have explored this further and would like to provide additional clarification. The code url is at `https://github.com/TencentARC/Mix-of-Show/blob/main/mixofshow/pipelines/trainer_edlora.py`.
1. We have identified that the Mix-of-Show code includes a mask loss and subject token loss, which are used during the training of ED-LoRA. The mask loss, implemented in line 252 of the code, is indeed the same as ours. Both approaches focus the learning process on the subject region, minimizing the influence of background elements in the training data. This similarity is likely due to the shared goal of enhancing subject representation.
2. The subject token loss in Mix-of-Show is implemented between lines 254-259 and further detailed in lines 263-313. This loss is composed of two parts: subject token loss and adjective token loss. Both are MSE losses that penalize attention outside the subject mask while enhancing attention within the subject mask. While similar to our approach, there are notable differences:
- Mix-of-Show calculates both subject and adjective token losses, whereas our method focuses solely on the subject token loss, allowing for a more concentrated emphasis on the subject itself.
- Importantly, in Mix-of-Show, this token loss is disabled by default (as indicated in line 29 of the code). Additionally, the training configuration provided in their `README.md` also disables this loss. Given that this loss is not explicitly mentioned in their paper, it appears that it may have been an ablation experiment rather than a core component of their main experiments.
- Mix-of-Show includes a `reg_full_identity` argument in their training configuration, which is also disabled by default. When disabled, the loss formula is $\mathcal{L}_1 = - \mathbf{M} \log(\mathbf{A})$, while when enabled, it becomes $\mathcal{L}_2 = - \left[\mathbf{M} \log(\mathbf{A}) + (1 - \mathbf{M}) \log(1 - \mathbf{A}) \right]$. Only the latter formula aligns with the STCA loss in MotionBooth.
- Furthermore, the comment in line 30 of the Mix-of-Show code suggests that the choice of loss formula may vary depending on the subject (e.g., "Thanos" vs. a real person), while our approach consistently applies the loss across all our experiments.
We hope this detailed explanation addresses your concerns. We will incorporate these comparisons and clarifications into our paper to provide a comprehensive discussion of the similarities and differences between our approach and that of Mix-of-Show.
Thank you once again for your thoughtful review.
Best regards,
Authors of #9629 | Rebuttal 1:
Rebuttal: Thanks very much for the reviews! In the general response, we mainly reply to the most frequently asked questions.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# Novelty of our subject and camera motion control methods
We appreciate that most reviewers have acknowledged the novelty of our method. To further clarify, we are willing to elucidate the differences between our proposed techniques and some related mentioned, including TrailBlazer, Directed Diffusion, Boximator, Motion-Zero, MotionCtrl, and Text2Video-Zero.
**TrailBlazer and Directed Diffusion:** While these methods utilize a training-free approach to control object motion by manipulating cross-attention, they differ from our approach. TrailBlazer adjusts both spatial and temporal cross-attention by scaling the attention maps with a hyper-parameter less than 1, and Directed Diffusion, similarly, focuses on image generation. In contrast, our method exclusively targets spatial cross-attention and sets the attention maps outside the object's bounding box to $-\infty$. This approach not only simplifies the implementation but also enhances performance in generating motion-aware customized videos.
**Boximator:** Unlike our training-free method, Boximator is a training-based technique that requires box coordinates to be input into a newly trained self-attention layer. Since Boximator is not open-sourced, it presents a higher barrier to use. Our method's training-free nature provides a more accessible solution for controlling subject and camera motion.
**Motion-Zero:** This method also operates without additional training but employs a test-time tuning technique that adjusts the latent space using cross-attention map loss during denoising. However, this process increases generation time and memory usage significantly, from approximately 15 seconds to several minutes per video. Our experiments showed that Motion-Zero often produces collapsed videos with unrecognizable visual elements, likely due to the adverse effects of test-time tuning on the parameters and latent distributions in a customization scenario. In contrast, our approach directly manipulates the cross-attention map, adding only 0.3 seconds per video and yielding more reliable outcomes.
**MotionCtrl:** utilizes a training-based approach to control camera poses and object motion by inputting point trajectories. In contrast, our approach is training-free, enabling control over subject and camera motion without requiring additional training processes.
**Text2Video-Zero:** This method extends a pre-trained text-to-image (T2I) model to video generation by using consistent noise across frames, which is unsuitable for text-to-video (T2V) models that work with distinct noises for each frame. Additionally, while Text2Video-Zero employs a latent shifting method for overall scene movement and mirrors the latent to fill missing regions, our technique uses random sampling on the x and y-axis, providing more concise and coherent videos for camera motion control.
To substantiate our claims, we conducted quantitative experiments comparing the performance of these methods, as detailed in `Table R.1 (e)` and `(f)`. Unfortunately, some methods could not be directly compared due to the unavailability of their code. Nevertheless, our results demonstrate that our method generally outperforms the alternatives. Specifically, we observed significant limitations in Motion-Zero's test-time tuning approach for customized video generation, underscoring the strengths of our method.
We hope this response clarifies the distinctions and advantages of our approach. We appreciate your feedback and look forward to further discussions and evaluations.
# Video metrics
We acknowledge the difficulty in identifying an appropriate metric for customized video generation, as standard metrics like FVD and optical flow error typically require a ground truth video for comparison. In the context of customized subjects, such ground truth data is not available. Consequently, we have employed alternative metrics including CLIP text similarity, CLIP image similarity (between the generated image and the subject image), and DINO image similarity. These metrics have been validated in prior works [1,2,3,4] and provide a meaningful assessment of our model's performance.
However, we recognize the value of using FVD and optical flow error in situations where a ground truth or standard reference can be utilized, such as in camera motion control. As demonstrated in `Table 2` of the main paper, we have reported results using optical flow error. To further address the reviewers' concern, we have also calculated FVD by randomly selecting 1000 videos from the MSRVTT dataset. The results, presented in `Table R.1 (e)`, indicate that our method significantly outperforms existing camera control methods, including some that are training-based.
Additionally, we conducted human preference studies to capture the intuitive perception of the generated videos, as shown in `Figure 8` of the main paper. We believe these comprehensive evaluations—combining quantitative metrics and subjective assessments—provide a robust understanding of our method's quality and effectiveness.
[1] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." CVPR 2023.
[2] Wei, Yujie, et al. "Dreamvideo: Composing your dream videos with customized subject and motion." CVPR 2024.
[3] Jiang, Yuming, et al. "Videobooth: Diffusion-based video generation with image prompts." CVPR 2024.
[4] Wang, Zhao, et al. "Customvideo: Customizing text-to-video generation with multiple subjects." arXiv 2024.
Pdf: /pdf/451e41b2ae6b32dbf626c77e82319811712bb532.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper targets at animating customized subjects with both motion and camera control. To achieve that, they first customize t2v models with subject images finetuning without hurting video generation capability. Then, they propose a training-free approach to manage subject and camera motions. Extensive experiments demonstrate the superiority of their method.
Strengths: • The paper introduces the first unified video generation framework for subject customization with motion and camera movement control, which is also an interesting problem setup.
• The paper proposes a new subject region loss and video preservation loss for video customization, and a training-free method to control both subject motion and camera motion.
• The paper is well-written with clear motivations.
Weaknesses: • The subject motion control module is interesting by directly amplifying in the bounding box region and suppressing outside the box region. However, how to define the value of parameter alpha remains unclear. Is it only defined based on experimental performance? And does the authors keep this alpha parameter the same for all customization experiments?
• The proposed method only control the camera movement in 2D such as up, down. However, camera movement is quite important for 3D, such as moving into the scene, rotation, etc. The proposed camera control module is not able to handle such camera motions and can be a weakness for camera control.
Technical Quality: 3
Clarity: 3
Questions for Authors: • For subject motion control module, if the bounding box size is changing, e.g., gradually enlarging bounding box, will the method enlarge the subject accordingly?
• Subject motion is composed of both position translation and subject movements, such as walking, dancing, jumping, etc. The paper only control the subject position for subject motion. The reviewer wonders is it a bit overclaiming for controlling subject motions, since the subject movements cannot be customized or controlled?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks very much for the review! Here are our replies.
**Note: Please refer to the one-page PDF for the mentioned figures and tables with an "R." in their names.**
# The definition of hyper-parameters
We use different sets of hyper-parameters for the Zeroscope and LaVie models, as detailed in `Appendix A.3` of the paper. The selection of these hyper-parameters, including the value of parameter alpha, is primarily based on experimental performance. Once determined, we maintain these hyper-parameters consistently across all our experiments to ensure comparability of results.
Furthermore, we conduct a thorough ablation study of these hyper-parameters, including alpha, as presented in `Figure 12 (a)` and `(b)` of the main paper. This study analyze the impact of different settings, and the findings are discussed in detail in Appendix A.4. We hope this clarifies the basis and consistency of our parameter choices.
# Enlarging and shrinking subjects through subject motion control
Thank you for the question. Our subject motion control technique indeed accommodates changes in the bounding box size, such as gradual enlargement or reduction. When the bounding box is enlarged, the method appropriately scales the subject, resulting in a zoom-in effect in the generated video. Conversely, shrinking the bounding box causes the subject to appear smaller, producing a zoom-out effect.
This behavior is demonstrated in `Figure R.1 (a)`, where examples illustrate the zoom-in and zoom-out phenomena corresponding to the bounding box adjustments. Additionally, Figure 14 in the main paper includes an example that showcases the enlargement of an object through the progressive increase of the bounding box.
# The 3D controlling ability of camera motion control technique
We acknowledge that the proposed latent shift module primarily facilitates control over 2D camera movements, such as up and down panning. However, it is important to note that our work is pioneering in utilizing this approach for camera control. While our current method does not inherently support 3D camera motions like moving into the scene or full rotations, we have integrated subject motion control techniques to achieve zoom-in and zoom-out effects. By combining these techniques with 2D camera motion control, we can provide a more versatile camera control experience.
As illustrated in `Figure R.1 (d)`, we demonstrate an example where the camera pans left while simultaneously zooming in. This showcases our system's ability to achieve a complex, coordinated camera movement, highlighting the potential for more dynamic and flexible camera control in future developments.
# The expalanation to control the position of subjects as subject motion control
Thank you for your feedback. In our paper, when we refer to controlling subject motion, we are specifically addressing a limitation observed in recent text-to-video (T2V) models. Many of these models fail to generate realistic motion that involves actual positional changes of the subject within the scene. For example, given a prompt like "a dog running in the forest," some models, such as LaVie, produce animations where the dog appears to be running in place, without any change in its overall position in the video.
Our approach focuses on overcoming this limitation by explicitly controlling the subject's position, thereby enhancing the perceived realism of the motion. For instance, we ensure that the dog not only appears to be running but actually moves from one side of the scene to the other. This positional control is crucial for creating a more convincing representation of motion.
While it is true that our method does not allow for granular control over specific subject movements like walking, dancing, or jumping, we believe that our focus on controlling positional changes significantly contributes to the overall quality and realism of motion in generated videos. Additionally, specific subject movements are already guided by text prompts, which direct the model to generate appropriate actions. Therefore, we maintain that our claim of controlling subject motion is justified, as it addresses a critical aspect of motion that many existing models overlook.
---
Rebuttal 2:
Title: Please let us know whether we address all the issues
Comment: Dear reviewer,
Thank you for the comments on our paper.
We have submitted the response to your comments and also more results are shown in PDF file.
Please let us know if you have additional questions so that we can address them during the discussion period. We hope that you can consider raising the score after we address all the issues.
Thank you
---
Rebuttal 3:
Title: Response by Reviewer aAio
Comment: Dear Authors,
Thanks for your detailed responses. I have read through the rebuttal and the rebuttal has solved most of my concerns. Enlarging and shirking subjects performance is good. However, the reviewer thinks the method is a bit limited to coarse camera control and subject position control, and cannot control subject motions such as walking, jumping, etc. Therefore, the reviewer decides to keep the original score.
Best regards,
Reviewer aAio
---
Rebuttal Comment 3.1:
Title: Response to Reviewer aAio
Comment: Dear reviewer:
Thanks for your comments.
We will update our draft following your comments with the analysis on camera zoom in/out, the limitations of the camera, and the subject motion control ability.
Best regards!
Authors of #9629. | null | null | null | null | null | null |
Sketched Lanczos uncertainty score: a low-memory summary of the Fisher information | Accept (poster) | Summary: In this paper, the authors present a memory-efficient way of computing the uncertainty score of a model (the variance of the model's prediction w.r.t. distribution over the model's parameters). Specifically, they combined Lanczos method and sketching to obtain a low-rank approximation of the Generalized Gauss-Newton (GGN) matrix, which is critical in computing the score but has a square size of parameters.
The authors first introduce the Lanczos method since the GGN matrix is more tractable as an operator. However, Lanczos requires re-orthogonalizing the output vectors, leading to extra memory consumption; while the authors' key observation is that the score only depends on the norm of matrix-vector product (Eq.9) which is only slightly influenced by sketching, therefore they can sketch the output on the fly, and conduct orthogonalization and compute the norm in the sketched subspace to save memory. Consequently, the proposed method is called Sketched Lanczos Uncertainty (SLU).
For validation, the authors tested the performance of SLU on out of distribution (OoD) detection tasks with models including MLP, LeNet, and ResNet. Results showed that SLU outperforms existing metrics, achieving a higher AUROC mainly in the low-memory budget scheme (as shown in Fig.3, 4, and Tab.1).
Strengths: Originality: Good. The authors are the first to incorporate sketching with approximate matrix eigendecompositions to reduce the memory footprint.
Quality: Good. The authors provide a comprehensive analysis of their SLU method. For instance, in Fig.2 they illustrated the low-rankness of the GGN matrix and the efficacy of low-rank approximation, also with the core lemma 3.2 they showed the norm can be recovered by the sketched vectors and the sketching matrix.
Clarity: Good. The paper is well-organized for the readers to understand why each feature of SLU is proposed.
Significance: Fair. With SLU the authors compensate the sketching error with a higher-rank approximation and get better OoD detection results than baselines with a low-memory budget. SLU can also be applied to other tasks that require computing the norm of vector products with the U matrix.
Weaknesses: Concerns of this paper are mainly around the experiment part:
1. For the claim that "the disadvantage of introducing noise through sketching is outweighed by a higher-rank approximation" (lines 55~57), the authors should add a simple experiment with synthetic data to verify it.
2. Experiment settings in this paper are too simple; expect to see results with larger-scale models such as Vision Transformers.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Are there other ways of reducing the memory bottleneck in computing uncertainty scores? Since Eq.2 which authors leverage to compute the score is an approximation of the true variance, there may be better alternatives to it. For instance, is there a more computation/memory-efficient form of the covariance matrix $M$ that leads to better scores with less cost?
2. For Tab.1, why do the authors fix the memory budget as $3p$? Looking forward to justifications for this choice.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: 1. The authors only considered the $M_\text{noeig}$ case where singular values are 0/1, need to improve the flexibility of SLU to fit for other $M$s.
2. The proposed method only works well for the low-memory setting. As the number of NNs increases, Sketched Lanczos deteriorates while vanilla hi-memory Lanczos work better. (Fig.3, 4)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - "For the claim that "the disadvantage of introducing noise through sketching is outweighed by a higher-rank approximation" (lines 55~57), the authors should add a simple experiment with synthetic data to verify it."
We thank the reviewer for the valuable comment. We run an experiment on synthetic data and the results (Figure 1 in the main rebuttal pdf) confirms our claim. We now proceed to explain the setting.
For a given number of parameters $p$ and a given ground-truth rank $R$ we generate an artificial Fisher matrix $M$. To do so, we sample $R$ uniformly random orthonormal vectors $v_1 \dots v_R\in\mathbb{R}^p$ and define $M = \sum_i \lambda_i v_i v_i^\top$ for some $\lambda_i>0$. Doing so allows us to (1) implement $x \mapsto M x$ without instantiating $M$ and (2) have explicit access to the exact projection vector product so we can measure both the sketch-induded error and the lowrank-approximation-induced error.
For various values of $r$ and $s$, we run Skeched Lanczos on $x \mapsto M x$ for $r$ iteration with a sketch size $s$, and we obtain the sketched low rank approximation $U_S$.
We generate a set of test Jacobians as random unit vectors conditioned on their projection onto $Span(v_1\dots v_R)$ having norm $\frac{1}{\sqrt2}$. We compute their score both exacltly (as in Equation (9)) and sketched (as in Equation (10)). In the Figure we show the difference between these two values.
As expected, higher rank $r$ leads to lower error, and higher sketch size $s$ leads to lower error. Note that the memory requirement is proportional to the product $rs$, and the figure is in log-log scale.
To further clarify this results, on the x-axis we added the column "inf" which refers to the same experiments done without any sketching (thus essentially measuring Lanczos lowrank-approximation-error only) which coincides with the limit of $s\rightarrow\infty$. The memory requirement of this "inf" setting is $Pr$, that is the same as the second to last column where $s=P$.
- "Experiment settings in this paper are too simple; expect to see results with larger-scale models such as Vision Transformers."
We have extended the experiments with a 4M parameter Vision Transformer on CelebA dataset. Results are great (much better than the ResNet model included in the submission) and we provide figures and tables in the general rebuttal pdf.
In the $3p$ budget setting our method clearly outperforms all baselines. In the $10p$ budget Deep Ensemble slightly outperforms in detecting FOOOD101 OoD, but our method still outperform all baselines for the more challenging hold-out-classes OoD datasets.
- "Are there other ways of reducing the memory bottleneck in computing uncertainty scores? Since Eq.2 which authors leverage to compute the score is an approximation of the true variance, there may be better alternatives to it. For instance, is there a more computation/memory-efficient form of the covariance matrix $M$ that leads to better scores with less cost?"
This is a great question and the whole line of research that we reviewed in our related work section aims at answering exactly this question.
There are indeed several works that study memory efficient approximation of the matrix $M$, as referred to in the paper, the most notable are diagonal, block diagonal, and Kronecker factorization. Out of these three, we compare with the first one and outperform it, we didn't include the second one because it requires too much memory and, lastly, we didn't include the third one because it is an architecture-dependent approximation and current implementations are very restricted only to some specific types of network (differently from our model, which is architecture agnostic).
- "For Tab.1, why do the authors fix the memory budget as $3p$? Looking forward to justifications for this choice."
This is an interesting question. Our goal was to show the superiority of SLU in a low-memory regime. Given the number of baselines and ID-OoD dataset pairs, we chose to fix the memory budget in order to display our results more easily.
Then why $3p$ and not $2p$ or $5p$? This was indeed an arbitrary choice. We used both $3p$ and $10p$ (in the Appendix) as we thought this was enough to show the behavior of the method. If the reviewer thinks some different values would be interesting we are happy to run those experiments as well and include them in the paper.
Moreover, we are open to suggestions on experiments without a fixed memory budget and we would be happy to include them.
---
Rebuttal Comment 1.1:
Comment: Thanks for the great rebuttal. The two supplementary experiments in the attachment addressed my concerns (though the ViT used is still a small model with 4M params), so I increased my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Regarding scaling to bigger models:
the overhead cost of the sketching operations is neglectable, and we can scale as long as we can perform GGN vector products. This has essentially twice the cost of a gradient backpropagation, such that the time complexity of computing a rank $k$ approximation is upper bounded by the time complexity of running $2k$ training epochs.
As a proof of concept we computed the score on CelebA with a **43M** parameters vision transformer in the $3p$ budget setting:
| *score* | FOOD101 | Mustache | Bald | Eyeglasses |
| ----- | ------- | -------- | ----- | ---------- |
| SLU | 0.705 | 0.715 | 0.738 | 0.680 |
| LE | 0.672 | 0.693 | 0.697 | 0.665 |
We want to emphasise that the model here has not been fine-tune trained and consequently the performance are not ideal. Nonetheless we see the same trend as in the other experiments: sketching improves results without increasing memory use. | Summary: This paper provides a new algorithm for approximating the Fisher information matrix. Approximation of the Fisher information matrix is an important task whenever the amount of parameters is very large, such as the case for neural networks. This paper makes use of Lanczos algorithm to find an approximate spectrum of the Fisher information matrix, and particularly targets the low memory setting where such approximations are desirable. In particular, in this paper the authors show that:
-Orthogonalization approximately commutes with sketching, making it feasible to sketch Lanczos vectors on the fly and orthogonalize them afterward.
-Empirically, they demonstrate that in low-memory-budget scenarios, the disadvantage of introducing noise through sketching is outweighed by achieving a higher-rank approximation.
Strengths: This paper is well written and well motivated. Indeed, approximation of the Fisher information is an important task where the parameter count is very high. This paper introduces the regime where their approximation is necessary and provides assistance to the reader for these practical choices throughout the document. The authors detail when Lanczos algorithm is applicable and when it is limited. This naturally motives their algorithm of finding a spanning set of basis vectors for the Kyrlov subspace generated by Lanczos algorithm, and then performing the orthogonalisation. Clearly storing high-dimensional basis vectors is an issue. The authors use embedding techniques to minimise this storage requirement and provide compelling computational examples.
Weaknesses: I do not see significant weaknesses with this paper. I think it would be nice if the choice of embedding procedure was better described in the main text. I believe that this might be a problem dependent choice, and whilst the authors have provided a good default choice, it would be good to know when their choice might give poor results, albeit if this seldom occurs.
Technical Quality: 3
Clarity: 4
Questions for Authors: Can the authors provide examples of when the embedding method works/fails. I believe it would be greatly beneficial to the reader if some intuition is shed on when this compression step is a useful computational resource and when it may greatly hinder performance.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have not included a limitations section. I think this must be included in a future version and include the dependence of their methods performance on the sketching algorithm used.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - "I do not see significant weaknesses with this paper. I think it would be nice if the choice of embedding procedure was better described in the main text. I believe that this might be a problem dependent choice, and whilst the authors have provided a good default choice, it would be good to know when their choice might give poor results, albeit if this seldom occurs."
The embedding is done through multiplication with a sketch matrix. It is true that several choices can be made, an alternative to the \texttt{srft} (used in all the reported experiments) is the \texttt{dense} sketch (Theorem 4, [3]
). You can try this alternative with the code provided in the submission by simply changing the command ``--sketch srft" to ``--sketch dense".
The choice of the sketch comes from the theoretical guarantees they offer. Specifically, we require it to be $(1\pm \epsilon)$ oblivious subspace embedding (Definition 2.1) and we require it to be memory efficient. We considered three candidate sketch matrices: SRFT (also named Fast JL transform) [1], Sparse JL transform [2] and Dense JL transform (Theorem 4, [3]). These three sketch matrices give different trade-offs in terms of matrix-vector product evaluation time and memory footprint, which we summarize in the following table.
Here, $p$ is the original dimension (i.e., the number of parameters), $\varepsilon$ is the approximation parameter for subspace embedding (as in Definition 2.1) and $s$ is the reduced dimension. Let $\omega$ be such that the current best matrix-multiplication algorithm runs in time $n^\omega$.
| | Time | Memory |
| ------- | -------- | ------- |
| Dense JL | $O(p^\omega)$ | $p^2$ |
| Sparse JL | $O(p \cdot \varepsilon s)$ | $p \cdot \varepsilon s$ |
| SRFT | $O(p \log p)$ | $p + s$ |
From the table above, it is clear that SRFT is best if our goal is to minimize memory footprint. At the same time, evaluation time is still quasilinear.
[1] Ailon, Nir, and Bernard Chazelle. "The fast Johnson–Lindenstrauss transform and approximate nearest neighbors." SIAM Journal on computing 39.1 (2009): 302-322.
[2] Kane, Daniel M., and Jelani Nelson. "Sparser johnson-lindenstrauss transforms." Journal of the ACM (JACM) 61.1 (2014): 1-23.
[3] Woodruff, David P. "Sketching as a tool for numerical linear algebra." Foundations and Trends® in Theoretical Computer Science 10.1–2 (2014): 1-157.
- "Can the authors provide examples of when the embedding method works/fails. I believe it would be greatly beneficial to the reader if some intuition is shed on when this compression step is a useful computational resource and when it may greatly hinder performance."
The embedding is theoretically guaranteed to work (i.e. to have a small error $\epsilon$ with high probability) as long as the sketch size $s$ is big enough. This means that, as long as this condition is satisfied, there exist no examples where the embedding fails.
To be more explicit, for any choosen sketch size $s$, there exists some $\epsilon$ such that the sketching is guaranteed to not exceed $\epsilon$-error (with high probability). This of course means that for this technique to be of any use, such $\epsilon$ needs to be small compared to the ground truth, and consequently $s$ needs to be ``big enough". If $\epsilon$ is on the same scale of the ground truth, then you essentially measure noise, and the method fails.
For example, in the synthetic data experiment (Figure 1 in the general rebuttal pdf) the ground truth is $~0.7$ (i.e. $1/\sqrt2$). You can see that $s=200$ leads to an error of $0.3$ (so likely practically useless), while $s=5000$ leads to an error of $0.07$ (so likely practically useful).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and appreciate their additional comments.
I understand that the method is guaranteed to work with $s$ chosen sufficiently large. My question was relating to providing an example where the method may fail, for instance constructing an explicit example in the small $s$ regime. Usually understanding such examples gives insight into points on a methods efficiency. It is unclear to me when I would expect to need to select $s$ to be large, or when I may be okay with a smaller value. My question related to providing some intuition on this point.
---
Rebuttal 2:
Comment: Despite significant investigations, we have been unable to find failure modes beyond those predicted by the theory. Nonetheless, for a **fixed sketch matrix $S$**, one can construct an adversarial example on which the error is huge. One such example is to build a GGN matrix whose eigenvectors are **exactly** orthogonal to the span of the sketch in parameter space (which is an easier task with smaller $s$). This failure, however, depends on the specific $S$, so it will not fail when $S$ is randomized.
Finally it should be mentioned that the sketch size $s$ has to be larger than the rank $k$. If $s$ is smaller then we will attempt to orthogonalize more vectors than the dimensionality of the space they live in, which is guaranteed to fail. | Summary: ## Summary
- This paper presents an architecture agnostic technique to compute uncertainty scores for pre-trained neural networks. The space complexity (memory usage) of their proposed technique SLU (Sketched Lanczos Uncertainty) grows logarithmically in the size of model parameters. The experiments reported in the paper use models such as LeNet and ResNet in the range 40K - 300K parameters. This is typically still smaller than pre-trained LLMs which are in millions of parameters or perhaps can be applied to LLMs in LoRA setting.
- More concretely, this paper designs an algorithm to compute the "local ensemble uncertainty estimation" score introduced in Madras et al 2019. https://arxiv.org/pdf/1910.09573
- This earlier paper quantifies "unreliability" of a prediction on the test-example as "underspecification".
- They state, "trained model is underspecified at a test input if many different predictions at that input are all equally consistent with the constraints posed by the training data and the learning problem specification (i.e., the model architecture and the loss function)"
- As a solution they introduce, "local ensembles" , a post-hoc method to measure how much a pre-trained model's prediction is underspecified for a particular test input.
- This paper provides a more memory efficient algorithm to compute this "local ensemble".
- The key observation the authors make in this paper are the following
- Orthogonalization (used in vanilla Lanczos) approximately commutes with sketching (embedding high dim vectors to lower dim to save on memory). Using this key observation they can change the order of sketching the Lanczos vectors and orthogonalization. Doing orthogonalization post-hoc leads to significant memory-savings.
- The other results they demonstrate is that in this low-memory budget regime, the disadvantage of introducing noise through sketching is countered by using a higher-rank approximation, leading to better performance.
- Background:
- **Generalized Gauss Newton Matrix**
- The local information can be characterized commonly by using the Fisher Information Matrix which is the same as Generalized Gauss Newton Matrix (GGN) Matrix. This GGN matrix is a pxp matrix where p is the number of parameters of the model. Various approximations have been proposed in the literature such as block-diagonal, diagonal etc.
- In this work, the authors approximate the GGN matrix with a low-rank matrix. The rank-k approximation of the GGN matrix can be computed through two algorithms.
- the Lanczos algorithm or
- truncated singular value decomposition algorithm.
- This is also known as Empirical Fischer matrix (Kutner et al 2019)
- **Uncertainty Score**
- The uncertainty at a datapoint x is measured as the variance of the prediction $f(\theta)$ with respect to distribution over parameter $\theta$.
- For the approaches which only train a single network, various choices of covariance are proposed which are connected to the GGN matrix described above (Kunstner et al 2019)
- Essentially, the authors in this paper show that low-rank approximation of GGN matrix is a good idea because the eigen values decay exponentially.
- **Lanczos Algorithm**
- The lanczos algorithm is an algorithm is an iterative method for tridiagonalizing a symmetric matrix G (pxp).
- If you stop this algorithm at iteration k, it returns a column orthonormal matrix V (pxk) and a tridiagonal matrix T (kxk) such that $V^T.G.V = T$
- $V^T.T.V$ approximates the projection of G onto its top k eigenvectors. Projecting G onto its top k eigenspace yields the best rank k approximation of G under any unitarily invariant norm.
- Once the approximation $V^T.T.V$ of $G$ is available, we can retrieve an approximation of top-k eigenpairs of G by diagonalizing T into $T=W\Lambda W^T$
- The benefit of Lanczos algorithm is that it only takes 3p space where p is the number of neural network parameters
- However the downside of Lanczos is that its prone to numerical instability and the $V$ may not be orthogonal. Typically standard implementations re-orthogonalize $V$ after every iteration. This loses the memory benefit of Lanczos.
- The authors dub this version as hi-memory lanczos.
- Instead of re-orthogonalizing at every step, an alternative strategy is to store all vectors and orthogonalize at the end. This approach still requires to store all vectors hence is not memory efficient. To make it more memory efficient, the authors propose to use sketching to store these vectors efficiently.
- **Sketching**
- Sketching is a technique to store high dimensional vectors to low dimensions so that expected norm of vector dot products is bounded.
- The authors use Subsampled Random Fourier Transform (SRFT) to obtain the subspace embedding.
- Finally, the authors combine the Lanczos algorithm with Sketching to propose the Sketched Lanczos algorithm.
- The paper also proposes a slight modification of sketched lanczos which first runs hi-memory lanczos for few iterations followed by sketched lanczos iterations. The idea here is that running the high memory lanczos for few iterations results in a better conditioned matrix G'.
- Finally the low rank decomposition of G is used to compute the variance of a prediction, which also gives the uncertainty score of a datapoint.
Strengths: ### Strengths
- The paper provides a good overview of various other ways of computing uncertainty and their associated limitations. Background on previous papers on which this work builds upon is also greatly appreciated.
- Experimental results show that sketched lanczos is able to achieve AUROC similar to that of Vanilla lanczos with significantly lesser memory
- The paper is comprehensive, with clear and accessible writing. The experimental results justify the claims made. The literature review provides a solid contextual foundation for the work and the background is thoroughly covered.
Weaknesses: ### Weakness
- As I alluded to earlier, the models used in experiments are primarily vision models ranging from 40K params to 300K params, it's not clear whether this approach would work for NLP pre-trained models typically ranging in millions of parameters.
- The datasets used are also primarily small scale vision datasets like CIFAR and MNIST.
- However, I still value in this work and novel insights from authors in optimizing the Lanczos algorithm using Sketching.
Technical Quality: 4
Clarity: 3
Questions for Authors: ### Questions
- Given that memory consumption of the algorithm is $O(k^2.\epsilon^-2)$. (Line 43) Since this does not depend on the number of parameters. Does this mean this can theoretically be applied to larger models? Also, how does this grow logarithmically in the size of model parameters?
- What is the reason for dip in the AUROC beyond a certain memory cost in Figure 3 and Figure 4?
- What is meant by "number of NNs" ? Do you mean number of neural network parameters?
-
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - "As I alluded to earlier, the models used in experiments are primarily vision models ranging from 40K params to 300K params, it's not clear whether this approach would work for NLP pre-trained models typically ranging in millions of parameters."
We made additional experiments, specifically, we used a 4M parameter Visual Transformer trained on the CelebA dataset. We also included an extra (easier) OoD dataset: FOOD101. The results are very good and included in the general response pdf. In the $3p$ budget setting our method clearly outperforms all baselines. In the $10p$ budget Deep Ensemble slightly outperforms in detecting FOOOD101 OoD, but our method still outperform all baselines for the more challenging hold-out-classes OoD datasets.
- "As I alluded to
The datasets used are also primarily small scale vision datasets like CIFAR and MNIST."
The submission also worked on the CelebA dataset which is not a small-scale vision problem.
With the updated experiments, we show again favorable performance on CelebA OoD detection with another and more modern architecture (Visual Transformer), in addition to the previously shown favorable performance on ResNet architecture. This experiments suggests that our method works irrespective of the choosen architecture.
- "Given that memory consumption of the algorithm is $O(k^2 \epsilon^{-2})$. (Line 43) Since this does not depend on the number of parameters. Does this mean this can theoretically be applied to larger models? Also, how does this grow logarithmically in the size of model parameters?"
The actual space consumption is $O(k\varepsilon^{-2} \cdot \log p \cdot \log(k / \delta))$ as reported in Lemma 3.1 and 3.2.
In line 43, we say that the memory consumption of our algorithm scales \emph{essentially} as $O(k^2\varepsilon^{-2})$. In the introduction, we chose to ignore factors $\log p$ and $\log k/\delta$ to make our result more understandable. The word \emph{essentially} suggests that $O(k^2\epsilon^{-2})$ is not exact but conveys the most relevant dependence. Indeed yes, this means this can theoretically be applied to larger models since the $\log p$ term is neglectable even for a billion parameters model.
- "What is the reason for dip in the AUROC beyond a certain memory cost in Figure 3 and Figure 4?"
In a nutshell, a sketch of size $O(k^2 \epsilon^{-2})$ can only handle up to $k$ vectors and guarantee an absolute error $\epsilon$ on our uncertainty score. If more than $k$ vectors are considered, the sketch becomes more noisy, which makes our uncertainty score more noisy and thus performs worse on OoD task.
- "What is meant by "number of NNs" ? Do you mean number of neural network parameters?"
In our figures we measure the memory footprint of our algorithm as the number of trained models (NNs) one could fit in such space, exactly as the size of a Deep Ensemble would. We believe that such a unit of measurement is natural. Indeed, for large models, we expect to dispose of storage comparable with that used for the model itself but not orders of magnitude more than that. | null | null | Rebuttal 1:
Rebuttal: We are grateful for the positive and constructive reviews that we reply to individually below. Some of the individual replies refer to figures and tables that are provided in the PDF attached to this message. These figures provide the requested additional experiments, including demonstrating results on vision transformers and synthetic data experiments.
The additional large-scale experiments are presented in Table 1 and Figure 2. Here we use a $4M$ parameter vision trasformer trained on CelebA, this network is able to train much better compared to the previous ResNet and, consequently, it leads to much better OoD performance. Additionally, given the difficulty of the 3 class-out datasets, we included a new (more out-of-distribution) dataset: FOOD101.
The synthetic data experiment is shown in Figure 1 and it supports the claim ``the disadvantage of introducing noise through sketching [going left on the x-axis] is outweighed by a higher-rank approximation [going up on the y-axis]" that we made in line lines 55-57. Note, for example, the two points $(s,r)=(\textnormal{inf},2)$ and $(s,r)=(20000,100)$ have the same memory requirement. At the same time, they have very different errors: 0.495 and 0.061, respectively.
We hope to have alleviated all concerns and are looking forward to further discussion.
Pdf: /pdf/eef67e48144b83461a26f1e16d5944bc2e24a7c5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding Linear Probing then Fine-tuning Language Models from NTK Perspective | Accept (poster) | Summary: The authors analyze the training dynamics of LP-FT for classification models based on NTK theory. They decomposed NTK matrix and highlight the importance of linear head norm. Additionally, they highlight the increased linear head norms can negatively affect the model calibration and can be fixed by temperature scaling. Finally, they extend their analysis to the LoRA method.
Strengths: 1. The application of NTK theory to analyze LP-FT in complex models like Transformers is interesting.
2. The theoretical analysis is thorough, well-supported by mathematical proofs, and aligns with empirical observations.
Weaknesses: 1. The experiments primarily focus on the classification tasks. Since the authors also extend their analysis to LoRA, exploring additional domains and harder tasks (reasoning, math, code generation) combined with LLMs could siginificantly strengthen the generalizability of the findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time and effort to review our paper. We appreciate your insightful question.
> Q1: The experiments primarily focus on the classification tasks. Since the authors also extend their analysis to LoRA, exploring additional domains and harder tasks (reasoning, math, code generation) combined with LLMs could significantly strengthen the generalizability of the findings.
A: We conducted experiments on question-answering tasks with the RoBERTa model on the SQuAD1.1 [1] and SQuAD2.0 [2] datasets. The following results show the following:
1. Smaller feature changes in LP-FT.
1. A significant increase in classifier weight norms during LP.
1. Effectiveness of temperature scaling on LP-FT.
These points validate our analysis. Although the F1 score of LP-FT on SQuAD1.1 is lower than that of standard fine-tuning (FT), this result is influenced by the choice of hyperparameters and could improve with further optimization.
### F1 score on test set
| Dataset | LP | FT | LoRA | LP-FT | LP-LoRA |
|---------|----------------|----------------|----------------|----------------|----------------|
| SQuAD1.1 | 24.14 ± 0.15 | 91.72 ± 0.05 | 91.57 ± 0.07 | **91.80 ± 0.09** | 91.64 ± 0.06 |
| SQuAD2.0 | 22.69 ± 0.21 | **82.50 ± 0.44** | 80.74 ± 1.56 | 81.31 ± 0.50 | 80.72 ± 0.22 |
### Classifier weight norms
| Dataset | Pretrain | FT | LoRA | LP | LP-FT | LP-LoRA |
|---------|--------------|-------------|-------------|-------------|-------------|-------------|
| SQuAD1.1 | 7.82e-01 | 8.50e-01 | 8.72e-01 | 1.21e+01 | 1.21e+01 | 1.21e+01 |
| SQuAD2.0 | 7.82e-01 | 8.41e-01 | 8.71e-01 | 1.26e+01 | 1.26e+01 | 1.26e+01 |
### Norm of the feature difference from the pre-trained model
| Dataset | FT | LoRA | LP-FT | LP-LoRA |
|---------|-------------|-------------|-------------|-------------|
| SQuAD1.1 | 7.50e+00 | 7.49e+00 | 2.92e+00 | 2.94e+00 |
| SQuAD2.0 | 7.59e+00 | 7.49e+00 | 3.17e+00 | 3.19e+00 |
### Effect of temperature scaling on the SQuAD1.1 dataset
| Metric | Method | w/o TS | w/ TS | Imp(%) |
|--------|----------|--------|-------|--------|
| ECE(%) | FT | 15.94 | 8.09 | 49.25 |
| | LoRA | 14.85 | 8.82 | 40.63 |
| | LP-FT | 14.77 | 7.21 | **51.17** |
| | LP-LoRA | 14.97 | 7.33 | 51.05 |
| MCE(%) | FT | 25.64 | 14.06 | 45.17 |
| | LoRA | 21.76 | 15.01 | 31.03 |
| | LP-FT | 22.14 | 12.20 | 44.88 |
| | LP-LoRA | 23.40 | 12.59 | **46.20** |
### Effect of temperature scaling on the SQuAD2.0 dataset
| Metric | Method | w/o TS | w/ TS | Imp(%) |
|--------|----------|--------|-------|--------|
| ECE(%) | FT | 7.42 | 1.67 | 77.50 |
| | LoRA | 5.96 | 1.66 | 72.15 |
| | LP-FT | 5.56 | 1.10 | **80.12** |
| | LP-LoRA | 4.68 | 1.24 | 73.46 |
| MCE(%) | FT | 13.66 | 5.23 | 61.73 |
| | LoRA | 10.43 | 5.89 | 43.51 |
| | LP-FT | 11.73 | 4.04 | **65.55** |
| | LP-LoRA | 9.22 | 5.68 | 38.43 |
[1] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
[2] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer qfrt,
Thank you for your great efforts to review our paper.
Does our response address your concerns?
If there is anything else, any comments would be greatly appreciated. | Summary: The authors present a theoretical analysis of the linear probing and fine-tuning framework based on neural tangent theory, supported by experiments with transformer-based models on natural language processing benchmarks. Their analysis decomposes the NTK matrix into the FT-effective component and the pre-train-effective component, demonstrating an approximation between the fine-tuning matrix and the LoRA matrix.
Strengths: 1. The paper is very well written, with detailed technical content. The authors have provided enough mathematical details to support their theoretical analysis. The proofs for theorems and corollaries are detailed delineated in the appendix.
2. A timely topic that bridges established theory and recent advancements in natural language processing.
3. The authors have provided comprehensive benchmarks to support the claims of the paper.
Weaknesses: Although the experiments included an exhaustive list of benchmarks, I noticed that you only implemented one transformer model in your setup. I believe it would be more effective to include multiple types of transformer-based models to better illustrate the practical advantages of your approach.
Technical Quality: 4
Clarity: 3
Questions for Authors: Minors: In Figures 3 and 11, it would be helpful to explicitly indicate the purpose of the shaded area in the caption to enhance readability.
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have made a clear statement about the limitations of the proposed analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time and effort to review our paper. We appreciate your valuable suggestions for improvements.
> W1: Although the experiments included an exhaustive list of benchmarks, I noticed that you only implemented one transformer model in your setup. I believe it would be more effective to include multiple types of transformer-based models to better illustrate the practical advantages of your approach.
A: We show the results of the T5 model [1] below.
In the main text, we conducted experiments using the RoBERTa model, which is an encoder model. To provide a broader perspective, we included experiments with the T5 model, an encoder-decoder model.
The results from the T5 experiments demonstrate the following:
1. Smaller feature changes in LP-FT.
1. A significant increase in classifier weight norms during LP.
1. Effectiveness of LP-FT.
These findings further support the validity of our analysis presented in the main text.
### Norm of the feature difference from the pre-trained T5 model
| Dataset | FT | LoRA | LP-FT | LP-LoRA |
|---------|--------------|--------------|--------------|--------------|
| CB | 2.77×10¹ | 1.72×10¹ | 9.51×10⁰ | 6.36×10⁰ |
| RTE | 1.46×10¹ | 1.43×10¹ | 1.18×10¹ | 7.46×10⁰ |
### Classifier weight norms of T5 model
| Dataset | Pretrain | FT | LoRA | LP | LP-FT | LP-LoRA |
|---------|----------------|----------------|----------------|----------------|----------------|----------------|
| CB | 9.97×10⁻¹ | 1.36×10⁰ | 1.51×10⁰ | 1.21×10¹ | 1.21×10¹ | 1.23×10¹ |
| RTE | 8.24×10⁻¹ | 1.01×10⁰ | 1.83×10⁰ | 3.69×10⁰ | 1.08×10¹ | 1.14×10¹ |
### Test accuracy of T5 model
| Dataset | LP | FT | LoRA | LP-FT | LP-LoRA |
|---------|----------------|----------------|----------------|----------------|----------------|
| CB | 74.40 ± 2.73 | 82.14 ± 3.09 | **84.52 ± 7.22** | **84.52 ± 2.06** | 81.55 ± 1.03 |
| RTE | 58.00 ± 1.46 | 73.89 ± 3.28 | **76.17 ± 0.96** | 75.09 ± 1.30 | 74.97 ± 1.85 |
> Q1: Minors: In Figures 3 and 11, it would be helpful to explicitly indicate the purpose of the shaded area in the caption to enhance readability.
A: Thank you for pointing this out.
In Fig.3 and 11, we added the following sentence in the current version:
**"Shaded areas represent standard errors."**
[1] Cong Fang, Hangfeng He, Qi Long, and Weijie J. Su. Exploring deep neural networks via
layer-peeled model: Minority collapse in imbalanced training. Proceedings of the National
Academy of Sciences, 118(43), October 2021
---
Rebuttal Comment 1.1:
Comment: Thank the authors' response and the addition of another set of requirements. I am impressed that the paper's findings hold across various transformer architectures. I adjusted my rating and recommended a strong accept for the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and reevaluation. We appreciate you taking the time to review our paper! | Summary: This paper studies the dynamics of Linear probing and fine tuning by means of NTK theory. The authors provide a connection between the Frobenius norm of linear probing weights and FT-effective component of the NTK matrix.
Strengths: The NTK sees the model as an Gaussian process, making it a powerful tool for analysis of neural networks’ convergence and generalization. This paper derives an interesting connection between NTK theory and fine tuning language models. The NTK is decomposed into pre-trained and fine tuned terms. Feature distortion theory is employed to interpret performance of FT.
Weaknesses: -The organization of material can be improved. for instance, you used the linear model $f(x)$ in Proposition 1, however it is introduced later in Definition 1.
-Table 2 is very uninformative and the numbers are not in line, making it hard to read. please provide precise description of each table and figure in the text.
-Same for Fig.3, some numbers are floating on it and the message of the figure is not clear.
please further enhance the presentation of results, I found it hard to infer your achievements from the numerical results section.
Technical Quality: 3
Clarity: 2
Questions for Authors: In Fig.3, What does “number of feature difference”? please provide the definition.
-How did you calculate the NTK matrix for numerical simulations? There is a body of literature for only approximation of NTK for transformers.
-Did you use empirical NTK for simulations? if yes, then can you say if your results hold in the finite width regime too?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: NTK theory in general holds in the infinite width limit. Further discussions are needed to adapt the methods in this paper to finite regime.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time and effort to review our paper. We appreciate your valuable advice and suggestions for improvements.
> W1: The organization of material can be improved. for instance, you used the linear model in Proposition 1, however it is introduced later in Definition 1.
A: The model in Proposition 1 is defined on line 136 in Section 4.1. That is, the linear model in Defition 1 is different from one in Proposition 1. However, since this is confusing, we changed the definition of the linear model in Definition 1. We now define the linear model as $f_{\text{linear}}(x) = VBx + b$, i.e., $\phi(x)=Bx$, which is a special case of the general model form $f(x) = V\phi(x) + b$ defined on line 136 and used in Proposition 1.
> W2: Table 2 is very uninformative and the numbers are not in line, making it hard to read. please provide precise description of each table and figure in the text.
> W3: Same for Fig.3, some numbers are floating on it and the message of the figure is not clear.
A: Thank you for pointing out the formatting issues and the need for clearer descriptions. We recognize the importance of clear and precise presentation and will make appropriate revisions.
### Formatting Issues in Table 2 and Figure 3
I understand that "293" in Table 2 and "328" in Figure 3, which reference line numbers in the preprint version, have caused confusion. We will ensure these numbers are removed in the camera-ready version of the manuscript to prevent any misinterpretation.
### Interpretation of Table 2
- The table shows that the FT-effective component outperforms the Pre-train effective component in terms of rank and kernel regression accuracy. This suggests that the FT-effective component has greater expressiveness.
- The superior performance of LP-FT over the standard method is indicated by higher kernel regression accuracy. The Frobenius norm and the FT ratio suggest that the significant contribution from the FT-component is the reason for this.
We have identified a typo in the Frobenius norm of the LP-LoRA, which should be 15.1, not 1.51. We will correct this in the camera-ready version.
### Interpretation of Figure 3
- Figure 3 shows the inverse relationship between the norm of the classifier weight and the norm of the feature difference. This suggests that the large classifier weight norm in LP-FT reduces the feature changes.
- The lines in the figure represent the mean values, while the shaded areas indicate the standard errors. We will clarify this point in the caption in the camera-ready version of the paper.
### Conclusion from the numerical results section
Overall, we validate the following three points in the numerical results section:
1. The changes in features during training are smaller in LP-FT than in FT, and the norms of the classifier significantly increase during LP.
1. The FT-effective component of the NTK matrix more effectively captures the input data than the pre-train-effective component and is more pronounced in LP-FT than FT.
1. A large classifier weight norm reduces the feature change during training, and its negative effects on calibration can be improved by temperature scaling.
> Q1: In Fig.3, What does “number of feature difference”? please provide the definition.
A: That is not **"number"**, is **"norm"**. We understand this might be confusing, and we will change the font size of the figure caption to make it a little more understandable.
In Figure 3, we measure the L2 norm of the difference between features extracted from the trained model and those extracted from the pretrained model. Specifically, for a pre-trained feature extractor $\phi_0$, a trained feature extractor $\phi$, and training examples $x_i (i=1,\cdots N)$, we measure $\frac{1}{N}\sum_{i=1}^N \| \phi(x_i) - \phi_0(x_i)\|_2$.
> Q2: How did you calculate the NTK matrix for numerical simulations? There is a body of literature for only approximation of NTK for transformers.
A: We describe these points in section 7.3.3 in the Appendix.
We separately calculated the pre-train-effective and FT-effective components of the NTK matrix. Following the methodology by Malladi et al. [1], we used functorch and forward-mode auto-differentiation for these calculations. To reduce computational costs, we randomly selected 10% of the parameters from the word embedding matrix for derivative calculations. For datasets with more than 250 samples, we used a subset of 250 randomly selected samples to compute the NTK matrix.
> Q3: Did you use empirical NTK for simulations? if yes, then can you say if your results hold in the finite width regime too?
> L1: NTK theory in general holds in the infinite width limit. Further discussions are needed to adapt the methods in this paper to finite regime.
A: Yes, we used empirical NTK in our experiments.
We assume that the fine-tuning dynamics of large Transformer models is explained with NTK. Although Transformer models have a finite number of parameters, the number is extremely large, and changes in parameters during fine-tuning are smaller compared to standard training.
The empirical NTK has been successfully used to analyze fine-tuning in the finite width regime in previous studies [1, 2]. Therefore, we applied the NTK to analyze the LP-FT method with Transformer models.
[1] Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, and Sanjeev Arora. A kernel-based view of language model fine-tuning. In International Conference on Machine Learning, pages 23610–23641. PMLR, 2023.
[2] Alexander Wei, Wei Hu, and Jacob Steinhardt. More than a toy: Random matrix models predict how real-world neural representations generalize. In Proceedings of the 39th International Conference on Machine Learning, 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for responding to my concerns, I increased my rating to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for reconsidering our work. We appreciate your feedback and support! | Summary: The paper presents a novel application of neural tangent kernel (NTK) theory to analyze the training dynamics of the linear probing then fine-tuning (LP-FT) method for large language models, demonstrating its effectiveness and extending the analysis to include the low-rank adaptation (LoRA) method.
Strengths: The paper stands out for its originality by applying NTK theory to the LP-FT method and extending it to include an analysis of LoRA. It's strong in terms of quality because it combines solid theoretical work with thorough experiments to back up the claims. The writing is clear and well-organized, making it easy to follow the arguments and findings. The significance of the research is evident in its potential impact on fine-tuning practices, not just in NLP but across various fields that use large pre-trained models. Overall, this paper makes a valuable contribution to advancing our understanding and methods in transfer learning.
Weaknesses: One weakness of the paper is that while it presents a solid theoretical foundation and empirical evidence, it could benefit from more detailed explanations in certain sections, particularly the experimental setup and hyperparameter tuning. This would enhance transparency and reproducibility. Additionally, the paper could further strengthen its claims by including a broader range of experiments across different domains beyond NLP. Another point for improvement is the discussion of the impact and practical applications of the findings, which could be expanded to provide more actionable insights for practitioners. Lastly, while the paper briefly mentions temperature scaling to address calibration issues, a deeper exploration of this aspect could provide more comprehensive recommendations for improving model performance.
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you expand on the practical applications of your findings? More specific examples of how practitioners can apply your recommendations in real-world scenarios would enhance the practical value of your work.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: he paper does not discuss the computational resources required for implementing LP-FT and NTK analysis. Providing information on the computational cost and efficiency of the proposed methods would help readers understand the practical feasibility of adopting these techniques.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for taking the time and effort to review our paper. We appreciate your valuable advice and suggestions for improvements.
> W1: One weakness of the paper is that while it presents a solid theoretical foundation and empirical evidence, it could benefit from more detailed explanations in certain sections, particularly the experimental setup and hyperparameter tuning. This would enhance transparency and reproducibility.
A: We have included detailed descriptions of the experimental setup and hyperparameter tuning in Section 7.3 of the Appendix. Additionally, we have made our implementation available on GitHub, though we cannot provide the specific URL at this time due to the anonymity of the review process. Our implementation uses the HuggingFace Transformers library and AdapterHub. For hyperparameter tuning, we conducted a grid search on the validation set. These details are intended to enhance transparency and reproducibility. We agree that transparency and reproducibility are crucial for academic papers.
> W2: Additionally, the paper could further strengthen its claims by including a broader range of experiments across different domains beyond NLP.
A: The effectiveness of LP-FT in the vision domain has already been validated by Kumar et al. [1], the original authors of LP-FT. Our focus is on applying LP-FT to fine-tuning language models, addressing the current demand in the NLP domain. This is why our experiments are limited to the NLP domain.
> W4: Lastly, while the paper briefly mentions temperature scaling to address calibration issues, a deeper exploration of this aspect could provide more comprehensive recommendations for improving model performance.
A: We conducted additional experiments of temperature scaling using 8 datasets from the SuperGLUE and GLUE benchmarks, as shown in Tables 14 and 15 in the Appendix. These experiments demonstrate that temperature scaling significantly improves ECE and MCE, particularly with LP-FT and LP-LoRA. We agree that a deeper exploration of temperature scaling to enhance calibration is crucial for practical applications.
> W3: Another point for improvement is the discussion of the impact and practical applications of the findings, which could be expanded to provide more actionable insights for practitioners.
> Q1: Can you expand on the practical applications of your findings? More specific examples of how practitioners can apply your recommendations in real-world scenarios would enhance the practical value of your work.
A: Our findings have two key practical applications:
1. LP-FT for language models: LP-FT is an effective method for fine-tuning language models, as it minimizes changes to valuable pre-trained features.
1. Enhancing calibration: We recommend combining LP-FT with temperature scaling to improve calibration.
To demonstrate practical effectiveness, we conducted experiments using the PubMed 20k medical dataset [2], as detailed in Section 7.4.7 and Table 16 of the Appendix. This example provides concrete scenarios for practitioners to apply LP-FT in real-world settings.
[1] Ananya Kumar, Aditi Raghunathan, Robbie Matthew Jones, Tengyu Ma, and Percy Liang. Fine-tuning can distort pretrained features and underperform out-of-distribution. In International Conference on Learning Representations, 2022.
[2] Franck Dernoncourt and Ji Young Lee. Pubmed 200k rct: a dataset for sequential sentence classification in medical abstracts, 2017.
---
Rebuttal 2:
Title: LGTM
Comment: LGTM, I will keep 7 for now.
---
Rebuttal Comment 2.1:
Comment: Thank you for reviewing our rebuttal. We are glad that we were able to address your concerns. We appreciate your support. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalized Tensor Decomposition for Understanding Multi-Output Regression under Combinatorial Shifts | Accept (poster) | Summary: This paper investigates the problem of multi-output regression under combinatorial covariate shift, namely when the input domains in the testing set differ significantly from the training set. The authors view the functional mapping from the inputs to the vector-valued output, which is evaluated discretely in the input domain, as a tensor with missing tubes, and convert the problem of making test set predictions as a tensor completion problem. Utilizing the framework of low tubal-rank tensor decomposition, the paper generalizes this framework from the discrete case to the continuous case and proposes a functional t-SVD method that decomposes the functional mapping into a series of embedding functions with associated singular values. Then an empirical risk minimization (ERM) algorithm is proposed and evaluated with a toy example numerical experiment. The major contribution of the paper is a solid theoretical analysis of the approximability of the functional t-SVD as well as the excess risk of the ERM algorithm and the writing of the paper is clear.
Strengths: * The paper bridges the low-rank tensor decomposition in the discrete domain to a functional approximation problem in the continuous domain, which is a rather novel approach when dealing with missing data issues for functional data.
* The paper provides a series of solid theoretical analyses regarding the approximability of the vector-valued functions as well as the excess risk bound of the empirical risk minimizer.
Weaknesses: 1. The empirical analyses of the paper lack comparisons with some other benchmark approaches, making it hard to evaluate the effectiveness of the proposed methodology. It would be interesting and informative to consider other tensor decomposition approaches (CP, Tucker, Tensor-Train, Tubal) as benchmarks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The tubal rank framework introduced in this paper, to the best of my knowledge, works for 3-order tensors. Does that indicate that the functional t-SVD framework considered mainly works for inputs with dimension 2? Is it possible to generalize the current approach to higher-order tensors (and thus higher-dimensional inputs)?
2. The functional t-SVD framework in Theorem 1 resembles the Mercer decomposition of kernels. What is the main advantage of the current approach over some existing alternatives such as the multi-output Gaussian Process regression?
3. Maybe I missed it in the manuscript, but is there a systematic approach for choosing the rank $r$ when implementing the functional t-SVD? Is there any sense of optimal $r$ that might be derived by minimizing the excess risk bound w.r.t. $r$?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Additional numerical experiments that demonstrate the effectiveness of the proposed functional t-SVD over existing benchmark methods can be informative and helpful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weakness & Limitations: Lack comparisons with some other benchmark approaches**
**Re:** We appreciate the suggestion. However, we must emphasize that our work is the first, to our knowledge, to address MOR under CDS. Existing tensor decomposition methods are not designed for this specific problem.
Following your suggestion, we have provided preliminary empirical support for our theory in tensor completion under missing not at random (MNAR) settings in comparison with some other benchmark approaches. Experimental results are shown in **the PDF file**.
In our revision, we will:
- Clarify why direct comparisons may not be appropriate given the novelty of our problem setting.
- Provide discussions on how our method uniquely addresses MOR under CDS compared to existing approaches.
- Where possible, adapt existing methods to our setting and provide limited comparisons.
**Q1: Generalization to higher-order tensors**
**Re:** Excellent question! First, our Ft-SVD framework, while designed for 3-order tensors, is not strictly limited to two-dimensional input cases. It is versatile enough to handle multi-dimensional inputs that can be divided into two distinct sets.
Second, extending Ft-SVD to higher-order cases is indeed non-trivial and presents significant challenges. In response to your insightful query, we've developed a preliminary attempt at a higher-order extension of Ft-SVD.
> **Proposition (Functional higher-order t-SVD, informal).**
>
>
> Let $F: X_1 \times X_2 \times \cdots \times X_N \to \mathbb{R}^K$ be a square-integrable vector-valued function, where $X_i \subset \mathbb{R}^{D_i}$. Then there exist sets of functions $\\{U^n_i\\}_{i=1}^\infty \subset L^2(X_n; \mathbb{R}^K), n = 1,\ldots,N$, and a core function $S: X_1 \times X_2 \times \cdots \times X_N \to \mathbb{R}^K$, satisfying:
>
> $$F(x_1,\ldots,x_N) = \sum_{i_1=1}^\infty \cdots \sum_{i_N=1}^\infty S(i_1,\ldots,i_N) \ast_M U^1_{i_1}(x_1) \ast_M \cdots \ast_M U^N_{i_N}(x_N)$$
>
> where the convergence is in the $L^2$ sense, and $\ast_M$ denotes the t-product.
>- For each $n = 1,\ldots,N$, the set of functions $\\{U^n_i\\}$ satisfies the orthogonality condition:
> $$\int_{X_n} U^n_i(x) \ast_M (U^n_j(x))^{\top} dx = \delta_{ij} M^{-1}(\mathbf{1}_t)$$
> where $\mathbf{1}_t \in \mathbb{R}^{1 \times 1 \times K}$ is the t-scalar with all entries equal to 1, and $M$ is the linear transform defining the t-product.
>
>- The core function $S$ has the following properties:
>
> (i) all-orthogonality: for all $1 \leq n \leq N$, and $1 \leq \alpha \neq \beta < \infty$, we have
> $$\int_{X_1} \cdots \int_{X_{n-1}} \int_{X_{n+1}} \cdots \int_{X_N} S(x_1,\ldots,x_{n-1},\alpha,x_{n+1},\ldots,x_N)^{\top} \ast_M S(x_1,\ldots,x_{n-1},\beta,x_{n+1},\ldots,x_N) dx_1\cdots dx_{n-1}dx_{n+1}\cdots dx_N = \mathbf{0}_t.$$
>
> (ii) ordering: for all possible values of $n$,
> $$\\|S_{x_n=1}\\| \geq \\|S_{x_n=2}\\| \geq \cdots$$
> where $\\|S_{x_n=\alpha}\\|$ is the $L^2$ norm of $S$ with the $n$-th mode fixed at $\alpha$.
The main idea of this preliminary extension is to generalize the two-way decomposition to a multi-way decomposition in the functional setting. This involves decomposing a function of multiple variables into a core function and multiple sets of basis functions, one for each variable, with their interactions governed by a generalized t-product. We invite further discussion on this topic, as developing higher-order extensions of Ft-SVD presents many challenges and opportunities for exploration.
**Q2: Comparison with multi-output Gaussian Process regression**
**Re:** The main advantage of our approach over alternatives like multi-output Gaussian Process regression is its specific design for handling CDS in MOR, which is not directly addressed by existing methods. Our framework provides:
- A natural way to handle combinatorial shifts
- Interpretable low-rank representations of the function
- Potential computational advantages in high-dimensional spaces under CDS conditions
We will expand on these points in our revision.
**Q3: Rank selection for Ft-SVD**
**Re:** This paper proposes the novel concept and properties of Ft-SVD for the first time, focusing on its theoretical foundations. As such, we haven't yet developed a systematic approach for rank selection. This challenge is particularly complex because rank selection remains a difficult and actively researched problem even for existing (discrete) tensor decomposition frameworks, being considered one of the hot topics in the field. Moreover, the continuous nature of Ft-SVD adds an additional layer of complexity to this already challenging issue, making it an even more intricate problem in our context.
In our current work, determining an optimal rank presents significant challenges. The high complexity of MOR under CDS and our limited knowledge of the t-singular values of ground truth embeddings prevent us from providing a closed-form solution of the rank based on minimizing the excess risk bound. Nevertheless, your suggestion about deriving an optimal rank is excellent and opens up a promising avenue for future research.
For general Ft-SVD, we can draw inspiration from rank selection methods for classical matrix or tensor decompositions. We propose the following approaches as potential starting points:
1. Analyzing t-singular value decay and selecting r based on a threshold.
2. Employing cross-validation to choose r that minimizes prediction error on held-out data.
3. Adapting information criteria such as AIC or BIC to the Ft-SVD setting.
While these ideas show promise, they require rigorous development and validation within the Ft-SVD framework. We greatly appreciate your valuable input, as it aligns with our goal of developing a more comprehensive and theoretically grounded approach to Ft-SVD implementation in future work.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses and I appreciate the additional experiments. Please add some discussions (brief) about the generalizations to higher-order tensors to the main text. I will keep my score at 6 given that the current scope of the method covers a special case with 2-dimensional input.
---
Reply to Comment 1.1.1:
Comment: We truly appreciate your thoughtful review, particularly your recognition of our paper's solid theoretical analysis and novel approach to bridging low-rank tensor decomposition with functional approximation.
As suggested, we will concisely discuss higher-order tensor generalizations in the main text, demonstrating our method's theoretical extensibility and potential for multi-dimensional applications.
Our approach to MOR under CDS, which you noted as a rather novel approach, aims to contribute to an important area of research in the field. This extension further explores the theoretical foundations of our work.
We believe these enhancements will clarify our work's theoretical depth and its potential implications for future research. Your insights have been crucial in refining our presentation.
We appreciate your thorough review and look forward to contributing to the ongoing discussions in this important area. | Summary: This paper proposes function t-SVD for multi-output regression under combinatorial shifts. Excess-risk bounds have been derived. Using simulation experiments risk bounds with combinatorial shifts has been compared with regular risk bounds.
Strengths: 1) Proposal of functional t-SVD, which is a descent contribution.
2) Solid theoretical analysis
Weaknesses: 1) Lack of any real-data experiments. It is hard to understand the usefulness of the proposed method in real-world application though the theoretical contribution may be high.
2) Lack of comparison to other related function tensor decomposition methods either by theory or experiments.
Technical Quality: 3
Clarity: 2
Questions for Authors: Lack of any real-world application is limitation of the paper. Since the paper is proposing a functional tensor decomposition methods, it would be useful give at least one application to understand its learning capability. Can authors provide some experiments on real-data set?
Some references such as function tensor-train is missing [1]. Can authors provide more detailed related methods?
How does the proposed method compare with other functional tensor decomposition methods? How does the theoretical results (ERM bounds) compared with any existing results?
1 Alex Gorodetsky, Sertac Karaman, Youssef Marzouk, A continuous analogue of the tensor-train decomposition, Computer Methods in Applied Mechanics and Engineering, Volume 347, 2019, Pages 59-84,
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations listed but insufficient in my opinion due to any experiments with real-data.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough and constructive feedback. We address your concerns as follows:
**W1 & Q1: Lack of real-data experiments**
**Re:** We acknowledge the importance of real-world data experiments. However, the primary contribution of this paper is theoretical, as we provide the first formal definition and framework for the Combinatorial Distribution Shift (CDS) problem in multi-output regression (MOR). Our research introduces the Ft-SVD theorem and the ERM-DS algorithm, offering new theoretical insights, particularly addressing the unique challenges of MOR under CDS.
Nevertheless, we have provided preliminary empirical support through synthetic and real-world data in Missing Not at Random (MNAR) settings. These results offer initial insights into the application of our Ft-SVD framework for handling MOR tasks. The experimental results are included in **the PDF file**.
For future work, we plan to:
- Collaborate with domain experts to collect large-scale datasets reflecting CDS characteristics, especially in complex MOR scenarios.
- Design comprehensive experiments to evaluate our method's performance in real CDS scenarios, focusing on MOR.
- Explore the applicability of our theoretical framework across various practical domains, enhancing the robustness and performance of MOR models.
**Q2: Missing references and detailed related methods discussion**
**Re:** Thank you for pointing out the missing reference to the Function Tensor-Train (FTT) method [1]. This method presents a continuous analogue of the Tensor-Train decomposition, effectively representing multivariate functions using matrix-valued univariate functions. This approach efficiently captures local features or discontinuities without increasing computational cost.
We will also discuss other related methods, as recommended by Reviewer yp2H, including [Bigoni et al., 2016] (Tensor Train for functional data), [Luo et al., 2023] (Tucker decomposition with neural networks), and [Fang et al., 2024] (Bayesian CP/Tucker decomposition). These will be included in the related work section to provide a comprehensive overview of existing methodologies in functional tensor decomposition.
**W3 & Q3: Lack of comparison with other functional tensor decomposition methods, either by theory or experiments**
**Re:** We appreciate the suggestion to compare our work with existing functional tensor decomposition methods, including [1]. However, it is important to note that our study is the first to address the problem of MOR under CDS. Consequently, there are no directly comparable functional tensor decomposition methods available for experimental comparison in this specific context.
1. **Function Tensor-Train (FTT) Method [1]**: The FTT method efficiently represents multivariate functions, especially those with local features or discontinuities, using matrix-valued univariate functions. This allows for precise approximations without the need for fine discretization.
2. **Comparison with Our Proposed Method**: Our Ft-SVD method differs significantly from FTT and other existing methods. While FTT focuses on local feature approximation, Ft-SVD specifically addresses MOR under CDS. The Ft-SVD framework extends the t-SVD approach to continuous feature domains, providing novel solutions for the complexities inherent in MOR with CDS.
3. **Differences in Theoretical Results (ERM Bounds)**: We provide explicit error bounds within the ERM framework under CDS, distinguishing our work from existing literature, including [1]. These bounds demonstrate the robustness of the Ft-SVD method in MOR tasks, especially in scenarios involving distribution shifts between training and testing data. Unlike most existing methods that assume consistent distributions between training and testing, our approach specifically addresses the additional challenges posed by distribution inconsistencies in MOR tasks. We analyze the impact of distribution shifts on the generalization ability of MOR models, using a rigorous mathematical framework. We show how model performance is affected by these shifts and propose strategies for model adjustment to minimize errors. Such detailed analysis is rarely covered in existing literature.
**Innovation and Contribution**: Our study proposes a novel Ft-SVD framework that extends the classical t-SVD, particularly for addressing the challenges of CDS in MOR. This framework provides new theoretical insights and rigorous performance guarantees, establishing a foundational understanding of how to manage distribution shifts in complex MOR scenarios. The theoretical contributions include explicit error bounds and a detailed analysis of the impact of distribution shifts on model generalization, setting our work apart from existing methodologies.
We will include these discussions in the related work section of our manuscript to better highlight the distinctions and advancements offered by our method. We believe these additions will help clarify our research contributions.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for the update and I will keep my original score.
---
Reply to Comment 1.1.1:
Comment: We appreciate your detailed review and for considering our rebuttal. Thank you for recognizing the contributions of our work. We respect your decision to maintain the original score.
We have carefully addressed the concerns raised in your initial review, including adding more experimental results and clarifying our theoretical contributions. Your insights have been invaluable in improving our paper.
If you have any further questions or comments, we would be glad to address them. Thank you again for your time and expertise. | Summary: The paper introduces a novel approach to multi-output regression (MOR) under combinatorial distribution shifts (CDS) using a generalized tensor decomposition framework. The proposed Functional t-Singular Value Decomposition (Ft-SVD) extends classical tensor SVD to infinite and continuous feature domains, providing a new perspective on handling MOR tasks under CDS. The authors present a Double-Stage Empirical Risk Minimization (ERM-DS) algorithm designed to improve prediction accuracy in the presence of CDS. Through rigorous theoretical analysis, the paper establishes performance guarantees for the proposed algorithm and demonstrates its efficacy with synthetic data experiments.
Strengths: 1. Clear and Reasonable Motivation: The paper presents a clear and reasonable motivation for using tensor completion to address multi-output regression under combinatorial distribution shifts. By framing the problem as a tensor completion task, the method leverages the strengths of tensor decomposition techniques to manage the challenge of unseen feature combinations in MOR, effectively improving generalization capabilities.
2. Solid Theoretical Analysis: The detailed proofs and theoretical guarantees offered in the paper are impressive and contribute significantly to the understanding and validation of the proposed methods. This rigorous approach ensures that the findings are well-supported and credible
Weaknesses: 1. The paper's definition, introduction, and highlight of Combinatorial Distribution Shift (CDS) are insufficient. It is not clear why CDS poses a significant challenge in MOR and why it is important.
2. The relevant definitions, assumptions, and proofs are presented with excessive detail, which makes it difficult for readers to grasp the core issues and contributions. A more concise and focused explanation would help in conveying the importance and difficulty of CDS in MOR.
3. The paper does not adequately discuss the existing body of work on functional tensor completion, such as [Bigoni et al., 2016], [Luo et al., 2023], and [Fang et al., 2024]. Including a more comprehensive discussion on related works and how the proposed method compares to them would provide better context and highlight the contributions more effectively.
Ref:
- Bigoni, Daniele, Allan P. Engsig-Karup, and Youssef M. Marzouk. "Spectral tensor-train decomposition." SIAM Journal on Scientific Computing 38.4 (2016): A2405-A2439.
- Luo, Yisi, et al. "Low-rank tensor function representation for multi-dimensional data recovery." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023).
- Fang, Shikai, et al. "Functional Bayesian Tucker Decomposition for Continuous-indexed Tensor Data." arXiv preprint arXiv:2311.04829 (2023).
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Many thanks for the thoughtful and constructive feedback.
**W1: Insufficient introduction of CDS in MOR**
**Re:** We acknowledge the need to better define and highlight the importance of Combinatorial Distribution Shift (CDS) in the context of multi-output regression (MOR). CDS occurs when training data covers only a limited subset of possible attribute combinations, while real-world applications often present new, unseen combinations. In MOR, CDS directly challenges model generalization by altering the joint distribution of outputs for unfamiliar input combinations, which traditional methods struggle to handle effectively.
The importance of addressing CDS has not been widely recognized for several reasons. First, many practical applications have not frequently encountered the challenge of new input combinations, leading to a lack of perceived urgency. Second, traditional MOR models often perform well under consistent data distributions, masking their limitations when faced with distribution shifts. Finally, the complexity and high costs of collecting and annotating data for new combinations have limited related research and empirical analysis, resulting in less attention to CDS issues in both academia and industry.
However, as AI technology advances and its application areas expand, CDS issues will become increasingly frequent and critical (potential application scenarios are detailed in Appendix A.1). In this context, our study provides a foundational framework and introduces novel methodologies to meet these evolving demands.
In the revision, we will refine the definition and introduction of CDS, emphasizing its importance in multi-output regression. Specifically, we will clarify how CDS complicates the learning process by causing variations in the joint distribution of outputs, which traditional methods struggle to manage. This will underscore the necessity of developing new approaches to effectively address CDS in MOR.
**W2: Excessive detail in definitions and proofs**
**Re:** We will streamline the main text as follows:
- Summarize core contributions and results concisely, avoiding excessive technical details. Key definitions and assumptions will be directly tied to the challenges posed by CDS.
- Move extensive technical explanations to the appendix, keeping the main discussion focused on central issues and contributions.
- Utilize practical examples and visual representations to illustrate the significance of our contributions without overwhelming the reader with complex details.
**W3: Inadequate discussion of related work**
**Re:** Thank you for your detailed review and feedback on our paper. We address the points regarding the lack of discussion on [Bigoni et al., 2016], [Luo et al., 2023], and [Fang et al., 2024] as follows.
1. [Bigoni et al., 2016]: This work proposed a functional tensor decomposition method based on classical Tensor Train decomposition, focusing on approximation problems in multidimensional data, especially in high-dimensional continuous spaces. The motivation was to efficiently handle high-dimensional function evaluations by reducing computational complexity through tensor decompositions. Unlike this approach, our research tackles the challenge of multi-output regression under CDS by introducing the Functional t-VSD method. This method extends the classical t-SVD framework to continuous feature domains, offering a novel perspective for solving multi-output regression problems.
2. [Luo et al., 2023]: This study utilizes Tucker decomposition combined with neural networks to model tensor mode functions, aiming to improve data recovery in multidimensional settings. The motivation behind this approach was to leverage the representational power of neural networks alongside tensor decompositions to handle complex data structures. While effective in deterministic settings, this work does not explicitly account for data uncertainty. In contrast, our work extends the t-SVD decomposition into the Ft-SVD framework and introduces the ERM-DS algorithm specifically designed to address the challenges of multi-output regression under CDS, thereby ensuring robust generalization capabilities.
3. [Fang et al., 2024]: This paper developed methods within the CP/Tucker decomposition framework with a Bayesian approach to manage multimodal data. The motivation was to incorporate uncertainty quantification and probabilistic modeling, making it particularly valuable in scenarios requiring robust predictions. However, these methods primarily focus on Bayesian modeling without directly addressing the complexities introduced by CDS. Our work distinguishes itself by introducing the Ft-SVD framework, specifically targeting the challenges posed by CDS in multi-output regression. This approach offers a robust solution for handling continuous feature domains, providing a theoretical foundation for mitigating the impact of distribution shifts.
**Innovation and Contribution**: Our approach significantly differs from the works of [Bigoni et al., 2016], [Luo et al., 2023], and [Fang et al., 2024] in terms of problem focus, motivation, and methodology. We propose an innovative Ft-SVD theoretical framework to address multi-output regression problems under CDS. By analyzing spectral properties, we design the ERM-DS algorithm, which not only captures the spectral decay characteristics in different sub-domains but also provides theoretical performance guarantees. These contributions fill a gap in the existing literature concerning CDS issues and offer effective tools for representing and analyzing multi-output functions.
We will include these discussions in the related work section of our manuscript and clarify the distinctions and advancements our method provides. We believe these additions will help better understand the context and contributions of our research.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in rebuttal, and I decided to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough review and for reconsidering our work. We greatly appreciate your decision to increase the score and for acknowledging the value of our contributions. Your insightful feedback will be instrumental in refining the paper for the final version, ultimately enhancing its quality and impact. | Summary: The authors propose a new model for decomposing vector-valued functions of two vector arguments. The proposed model considers a SVD decomposition of these functions in Hilbert space, effectively extending t-SVD to Hilbert spaces. Here t-SVD, short for tensor-SVD, consists of applying SVD to each matrix slice of a tensor, after multiplication by a unitary (although authors restrict these to orthogonal) matrix $M$ in the third dimension. They propose using this approach for Multi-Output Regression, and claim that the proposed approach is especially suited for generalizing pairs of arguments that are not present in the training data, in a similar fashion as t-SVD may be used for tensor completion.
Strengths: The method proposed in the paper appears to be very promising. The overall approach seems to be highly generalizable and it may possibly be used together with several function approximation techniques.
Weaknesses: The biggest weakness is the lack of numerical experiments. The overall idea of the paper is interesting, but it is not ground-breaking. Without the multiplication by $M$, this is essentially a regression in each dimension. Here numerical experiments are needed as a way to investigate possible choices of $M$ in applications: should we choose well-known orthogonal basis given by DFT or DCT? Or should we do a data-driven approach?
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall, the statement of Theorem 4 is not very clear to me. On the other hand, it seems to be a generalization of reference [37] of the paper to multiple output regression. Can you explain what are the differences and similatities between both results? Which parts of the result in [37] generalize well to the new approach, and what were the novel analysis that needed to be developed?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I believe that the limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the constructive feedback.
**W1: Lack of numerical experiments**
**Re:** While our main contribution is theoretical, we've included proof-of-concept experiments on Page 9 in the submission. Additional results, including synthetic and real-world data, are in the **attached PDF**.
**W2: Interesting idea, but not ground-breaking**
**Re:** We appreciate your assessment that our paper presents an interesting idea. We'd like to highlight that all reviewers have recognized the novelty and significance:
- Rev. HXH7 characterized our approach as "rather novel" and praised our "series of solid theoretical analyses."
- Rev. yp2H noted our "clear and reasonable motivation" and "solid theoretical analysis," emphasizing that our method "effectively improves generalization capabilities."
- Rev. juWX described our functional t-SVD as "a decent contribution" with "solid theoretical analysis."
As NeurIPS values high quality, originality, clarity, and significance, we believe our work, while perhaps not "ground-breaking", aligns well with these criteria. Our paper makes fundamental contributions to an important and previously under-explored problem: MOR under CDS:
1. Functional t-SVD: Extending t-SVD to infinite and continuous domains, which Rev. HXH7 noted as "a novel approach when dealing with missing data issues for functional data."
2. First formal treatment of MOR under CDS: Providing a systematic definition and solution, addressing a crucial gap in current ML research.
3. ERM-DS algorithm: Derived from our Ft-SVD framework, offering new generalization bounds that Rev. yp2H described as "impressive" and "contributing significantly to the understanding and validation of the proposed methods."
Our work advances understanding of MOR under CDS and learning under complex distribution shifts. As Rev. yp2H noted, our approach "effectively improves generalization capabilities" for unseen feature combinations. While building on existing concepts, our application to MOR under CDS represents a foundational step forward in addressing this important ML problem. We believe our work lays a foundation for future research, potentially influencing the development of more robust ML models under combinatorial distribution shifts.
**W3: Choice of transformation matrix $M$**
**Re:** Thank you for this insightful observation. We would like to clarify our position:
1. Theoretical focus: Our paper primarily establishes a theoretical framework for Ft-SVD and its application to MOR under CDS. Following t-SVD literature conventions e.g. [21], we assume $M$ is given, allowing us to focus on core theoretical contributions.
2. Framework flexibility: While $M$ is assumed given, our framework accommodates various choices, enabling adaptation to different problem structures.
3. Empirical investigation: We've conducted preliminary numerical experiments comparing different $M$ options (DFT, DCT, data-specific). Results are in the **attached PDF**.
4. Future directions: Optimal $M$ selection opens new research avenues. We propose a potential approach:
> A preliminary proposal.
>
> - We seek the best $M$ in the feasible transformation set $\mathcal{M}$ (typically the orthogonal matrix group):
> $$\min_{M \in \mathcal{M}} \Phi(M)$$
> where $\Phi(M)$ measures the quality of the Ft-SVD under $M$. For example, the reconstruction error $\Phi(M) = \int_{\mathcal{X}} \int_{\mathcal{Y}} \\|F(x,y) - F_k(x,y;M)\\|^2_F dx dy,$ where $F_k(x,y;M)$ is the rank-$k$ Ft-SVD approximation under $M$.
> - To capture the interdependence between the transformation and the optimal functional representation, we employ a bilevel optimization approach:
>
> $\min_{M \in \mathcal{M}} \Phi(M) = \Phi(M, F(M))$ subject to $F(M) = \arg\min_{F \in \mathcal{F}} \Phi(M, F).$
>
> Here, $F$ represents the functional form of the data, and $\mathcal{F}$ is the space of square-integrable vector-valued functions. To solve this problem, we can use iterative methods such as gradient descent on $\mathcal{M}$.
>
The initial proposal may lack rigor and requires refinement and validation. It's beyond our current paper's scope, which focuses on establishing fundamental theory. Our work provides a foundation for future research, including optimal $M$ selection. By establishing theoretical groundwork, we enable future studies to investigate these aspects in depth.
**Q: Clarity of Thm 4 and its relation to [37]**
**Re:** We apologize for any lack of clarity. Let us address your questions:
*Similarity*: Both Thm 4 and [37] provide excess risk bounds for learning under CDS, with bounds structured to include approximation and statistical error terms.
*Difference*: Our results diverge from [37] in:
1. Framework: Introducing Ft-SVD for tensor completion in infinite-dimensional spaces.
2. Scope: Generalizing to multi-output MOR, addressing inter-output dependencies.
3. Characterization: Incorporating spectral decay patterns across frequency components.
*Generalizations from [37]*: The basic structure of the excess risk bound and the concept of analyzing learning under CDS.
*Novel analysis*: We introduce key theoretical innovations within our Ft-SVD framework to address MOR under CDS challenges:
1. Spectral analysis theory for Ft-SVD, extending t-SVD to infinite-dimensional spaces.
2. Tensor-based error decomposition for MOR based on Ft-SVD, which uncovers the intricate tensorial structure of MOR under CDS, revealing complexities not captured in matrix approaches [37].
3. Ft-SVD-specific measures of embedding quality, e.g.., generalizing the $\alpha$-conditioning conditions to tensor-structured infinite-dimensional spaces, enabling a more detailed understanding of model behavior under severe distribution shifts in MOR.
*Modification*: We will revise Thm 4 to highlight connections and differences with [37] more clearly. We'll also expand the related work section to better contextualize our contributions.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We hope this message finds you well. We have carefully addressed all the concerns raised in your initial review, including additional experimental results and theoretical clarifications.
If you have any further questions, we are more than happy to answer them. We would greatly appreciate your thoughts on our responses. Should our clarifications adequately address your concerns, we would be grateful for any reconsideration you might deem appropriate.
Please note that after August 13, 11:59pm AoE, we will no longer be able to respond to any new questions due to the NeurIPS deadline. We appreciate your understanding.
Thank you for your valuable input throughout this process.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: We appreciate the reviewers' constructive feedback and provide a summary of contributions, the feedback, and responses.
**Contributions.** Our work proposes the novel Functional t-Singular Value Decomposition (Ft-SVD) framework, addressing the challenges of multi-output regression (MOR) under combinatorial distribution shifts (CDS) for the first time. Key contributions include:
1. **Ft-SVD Framework:** Extends traditional t-SVD to infinite and continuous feature domains, allowing for handling complex, high-dimensional data through a functional approach.
2. **Double-Stage Empirical Risk Minimization (ERM-DS) Algorithm:** Developed within the Ft-SVD framework, this algorithm is tailored for MOR under CDS, leveraging spectral properties and domain-specific hypothesis classes to improve prediction accuracy.
3. **Theoretical Insights:** Comprehensive theoretical analysis within the Ft-SVD framework, detailing approximability and excess risk bounds, enhances understanding of MOR models' generalization capabilities under CDS.
**Positive Feedback:** The reviewers described the methodology as "promising and broadly generalizable" (Rev. 1ZmA) and appreciated the "clear motivation for using tensor completion in multi-output regression under combinatorial distribution shifts" (Rev. yp2H). The theoretical analysis was praised for its "rigor" and "detailed proofs and guarantees" (Rev. yp2H, HXH7). The innovation in combining low-rank tensor decomposition with functional approximation, as well as the extension of t-SVD to infinite and continuous feature domains, was highlighted (Rev. HXH7, yp2H). The clarity of the problem formulation was noted, and the potential impact of the work was seen as offering "new possibilities for a wider range of tasks" and "a new perspective on handling MOR tasks under CDS" (Rev. 1ZmA, yp2H).
**Common Concerns:** We provide responses to the common concerns as follows:
- **Numerical Experiments**: Need for more experiments (Rev. 1ZmA), especially with real-world data (Rev. juWX) and common benchmarks (Rev. HXH7).
**Re:** We acknowledge the importance of real-world data experiments. However, the core focus and primary contribution of our paper is theoretical. Our work provides the first formal definition and comprehensive theoretical framework for the CDS problem in multi-output regression. This includes the novel Ft-SVD theorem and the ERM-DS algorithm, which offer new theoretical foundations and methodologies for understanding and solving the CDS problem.
Nevertheless, we understand the need for real-world data validation and would like to address this concern. We have provided preliminary empirical support for our theory through:
- Synthetic data experiments: We conducted more synthetic experiments, validating our method in a controlled environment.
- Real tensor completion in missing not at random (MNAR) settings: We applied our method to real-world Velodyne LiDAR data.
The additional experimental results are shown **in the attached PDF**.
For future work, we propose:
- Collaborating with domain experts to collect large-scale datasets that reflect CDS characteristics in specific fields.
- Designing and implementing more comprehensive experiments to evaluate our method's performance in real CDS scenarios.
- Investigating the applicability of our theoretical framework and methods across a wider range of practical domains.
- **Clarification and Presentation** (Rev. yp2H, juWX): The need to clarify the significance of CDS in MOR and its unique challenges, as well as simplify the presentation of assumptions and technical proofs for better accessibility.
**Re:** We will elaborate on the role and importance of CDS, highlighting how our approach addresses these challenges. Additionally, we will summarize key contributions more clearly, focusing the main text on core findings and moving more details to the supplementary materials for improved clarity.
**Specific Responses:** For specific concerns raised by individual reviewers, such as the exploration of different transformation matrices (Rev. 1ZmA), clarification on Theorem 4 and related work (Rev. yp2H), and the selection of rank and potential extensions to higher-order tensors (Rev. HXH7), please see the specific rebuttal where these issues are addressed in detail.
Pdf: /pdf/a83a5fa3a485b2f3f3179efc9394c9b493e3c15e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Language-Driven Interactive Traffic Trajectory Generation | Accept (poster) | Summary: This paper proposes a large language model-based traffic trajectory generation method. Due to the designed interaction interpretation mechanism, the proposed method can generate a better corresponding trajectory. In addition, to improve the generation quality, the authors also proposed a well-designed prompt and a two-step feature aggregation. Finally, the authors also considered a variety of experiments to verify the effectiveness of the method.
Strengths: i) The LLM-based traffic trajectory generator can generate interactive traffic trajectories.
ii) A well-designed interaction-aware prompt and code-to-trajectory decoder.
iii) In addition to quantitative analysis, a visual case study is also given in the experiment.
Weaknesses: i) The author did not compare with the two related baseline methods.
ii) There are some collisions in the generated trajectory.
iii) The authors did not evaluate the diversity and scalability of the proposed generation method.
Minors:
Line 43: "a LLM-based" -> an
Line 631: “7 and 8 illustrate” -> “Figures 7 and 8”
Technical Quality: 3
Clarity: 3
Questions for Authors: i) Since the training of the proposed method also still relies on pre-collected ground truth data, I think the authors need to compare it with CTG and CTG++. Another reason is these two methods are also based on language conditions.
ii) How diverse are the trajectories generated by the proposed method? Will they be similar to the training data?
iii) In some visualization results, there are some collisions in the generated vehicle trajectories, such as the red vehicle in the overtaking in Figure 5, and the orange vehicles in Figures 7(b) and 8(b). What is the reason for this?
iv) What is the size of the trajectory data generated by the proposed method? The visualization results show that the number of vehicles is not large. Can a more complex trajectory be generated?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer awjY**
$\textbf{Question1:}$ Since the training of the proposed method also still relies on pre-collected ground truth data, I think the authors need to compare it with CTG and CTG++. Another reason is these two methods are also based on language conditions.
$\textbf{Answer1:}$ The setting of CTG and CTG++ is different from our method setting of language-conditioned trajectory generation. CTG and CTG++ require past trajectory observations to generate traffic trajectories, as outlined in their problem formulation (Sec.III-A in CTG and Sec 3.1 in CTG++). Without past trajectory observations, these models are unable to produce plausible trajectories. This is the primary reason why these two models were not selected when comparing our model to the baseline models.
$\textbf{Question2:}$ How diverse are the trajectories generated by the proposed method? Will they be similar to the training data?
$\textbf{Answer2:}$ Because of the generalization ability and multiple possibilities of the LLM's response, diversified scenarios can be generated from the same language input. Figure S4-1, S4-2 and S4-3 in the rebuttal pdf demonstrate one such case. For the same language input, the LLM will generate satisfying responses with variations, reflecting the realism and diversity of the generated scenarios by our model. Additionally, because of the LLM's generalization ability and multiple possibilities of LLM's response, the generated trajectories may differ significantly from the training data.
$\textbf{Question3:}$ In some visualization results, there are some collisions in the generated vehicle trajectories, such as the red vehicle in the overtaking in Figure 5, and the orange vehicles in Figures 7(b) and 8(b). What is the reason for this?
$\textbf{Answer3:}$ The vehicles involved in collisions in these figures mentioned are from scenarios generated by baseline models (LCTGen). This is due to two main reasons: i) the baseline method does not consider vehicle interaction and generates vehicle trajectory independently, which can lead to conflicts or collisions between different vehicle trajectories. ii) the intermediate code design in the baseline method does not account for the global movement trend, thus there may be conflicts between the vehicle end state and map boundary. In comparison, our method i) considers interaction-aware numerical codes to jointly generate interaction-aware vehicle trajectories to avoid unreasonable collisions; ii) introduces the global trend in the vehicle code V to provide a more reasonable overall trajectory moving states, thereby avoid the collision between the vehicle end state and map boundary.
$\textbf{Question4:}$ What is the size of the trajectory data generated by the proposed method? The visualization results show that the number of vehicles is not large. Can a more complex trajectory be generated?
$\textbf{Answer4:}$ We generate a 5-second trajectory at 10 fps for each vehicle. For the requirement of generating more complex traffic scenarios, LLM can smoothly assign appropriate vehicle codes to each vehicle and analyze inter-agent interactions to produce reasonable traffic scenarios. When the number of agents increases, our model is still able to generate scenarios that conform to linguistic descriptions. Figure S5 shows our model's ability to generate complex scenarios with more than 5 vehicles. Even as the number of vehicles in the scene increases, our model maintains its performance in scenario generation, handling cases with up to ten vehicles well, whereas the baseline method often results in significant vehicle collisions.
---
Rebuttal 2:
Comment: Thanks for your reply.
---
Rebuttal Comment 2.1:
Comment: Thank you for your intresent. We would like to know whether you have any further questions or requests about our work, and we would appreciate it if you could kindly reconsider your review score, taking our rebuttal into account if we have addressed the primary concerns you mentioned in your review.
---
Reply to Comment 2.1.1:
Comment: We would like to know if you have any additional questions or requests regarding our work. If we have adequately addressed the primary concerns mentioned in the review, we would appreciate it if you could consider revising your review score based on our rebuttal. | Summary: This work proposes a novel framework for traffic trajectory generation with natural language description of interactions, called InteractTraj. Specifically, the proposed framework consists of two main components: a language-to-code encoder and a code-to-trajectory decoder. The language-to-code encoder is designed to generate the code representation of the natural language description, and the code-to-trajectory decoder is used to generate the trajectory based on the code representation. The proposed framework is evaluated on two dataset and shows promising results compared to the state-of-the-art methods.
Strengths: 1. This paper use language description as the conditional information for controlling the generation of traffic trajectories, which is a interesting idea.
2. This paper is well-written and easy to follow. Each section and each component of the model is clearly described.
3. The proposed framework is evaluated on two real-world datasets and shows promising results compared to the state-of-the-art methods. The authors also do a series of experiments like ablation study and user study to further validate the effectiveness of the proposed framework.
Weaknesses: 1. It seems the proposed framework is computationally expensive; the authors should provide the details about the efficiency of the model.
2. Is the language description encoder used in this paper GPT-4? There is no comparison of the effects of different large language models as the encoder.
3. Controllability is an important contribution of this paper, which should be validated more intuitively (visual) or quantitatively.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could the proposed model handle the different length of the generated trajectories? or it only generates the fixed-length trajectories?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper addresses the problem of using language for controllable trajectory generation and discusses the limitations of the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer xSeP**
$\textbf{Question1:}$ It seems the proposed framework is computationally expensive; the authors should provide the details about the efficiency of the model.
$\textbf{Answer1:}$ The overall resources needed for model training and inference are not expensive. Only the code-to-trajectory decoder needs to be trained as the LLM in our model encoder does not need training. It takes about 12 hours for 100 epochs on 4 NVIDIA GeForce RTX040 GPUs for the decoder training process. During the inference phase, it takes about 30 seconds to generate a scenario, which includes about mainly 30 seconds to receive the response from the LLM and about 0.2 seconds to generate corresponding trajectories.
$\textbf{Question2:}$ Is the language description encoder used in this paper GPT-4? There is no comparison of the effects of different large language models as the encoder.
$\textbf{Answer2:}$ We use GPT-4 as the language description encoder in our paper, but many other LLMs also can be incorporated into our work. Here we further show the result of applying two other representative LLMs as encoders, including the Llama3-3.1-70b and Mistral-large. See Figure S2, which illustrates the visualization results provided by both Llama3 and Mistral-large for different types of interaction. We see that applying other LLMs also achieves compliant scenario generation.
$\textbf{Question3: }$ Controllability is an important contribution of this paper, which should be validated more intuitively (visual) or quantitatively.
$\textbf{Answer3:}$ Our method can control the specified vehicle trajectory generation in the scene. As controllability relies on the language input, here we validated more intuitively through visualization. Figure S3 shows the generation results under four different control descriptions applied to ego vehicles. We observe that our method can control specific vehicle trajectories through different linguistic descriptions.
$\textbf{Question4:}$ Could the proposed model handle the different lengths of the generated trajectories? or it only generates the fixed-length trajectories?
$\textbf{Answer4:}$ The length of the generated trajectories is pre-defined before the model is trained. In our paper, we generate 5-second trajectories at 10 fps in experiments. The setting of fixed-length trajectories is common in related tasks like trajectory generation [1,2] and prediction [3,4].
[1] Learning to generate diverse and realistic traffic scenarios, ICRA 2023
[2] Language-conditioned traffic generation, CoRL 2023
[3] Motiondiffuser: Controllable multi-agent motion prediction using diffusion, CVPR 2023
[4] MotionLM: Multi-Agent Motion Forecasting as Language Modeling, CVPR 2023
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's response and the extra work. This addresses some of my concerns. Currently, I think the scores given are reasonable and appropriate. | Summary: The paper introduces a novel method called InteractTraj, which is the first language-driven traffic trajectory generator capable of producing interactive traffic trajectories. This method is critical for advancing autonomous driving technology because it can generate realistic and controllable vehicle interaction trajectories based on natural language instructions, addressing the limitations of previous approaches that focused solely on generating trajectories for individual traffic participants without considering complex traffic dynamics.
InteractTraj features an innovative language-to-code encoder with an interaction-aware encoding strategy that translates abstract trajectory descriptions into concrete, interaction-aware numerical codes. Additionally, it includes a code-to-trajectory decoder that employs interaction-aware feature aggregation, integrating vehicle interactions, environmental map data, and vehicle movements to generate interactive traffic trajectories.
Experimental results show that InteractTraj outperforms SOTA methods in generating interactive traffic trajectories under diverse natural language commands, offering more realistic simulations of interactive traffic scenarios. This research is significant for driving simulation, particularly in reducing the costs of safety-critical scenarios by recreating real-world situations in virtual environments, thereby accelerating the development of autonomous driving technologies.
Strengths: **Originality**
The paper introduces a significant innovation in the field of autonomous vehicle technology and driving simulation by proposing InteractTraj, the first language-driven traffic trajectory generator capable of producing interactive traffic trajectories. This concept stands out due to its novelty in interpreting abstract trajectory descriptions into concrete, interaction-aware numerical codes, which then guide the generation of realistic traffic scenarios. The method's originality is further highlighted by its ability to encapsulate complex interactive dynamics that were previously unaddressed by focusing solely on individual traffic participant trajectories.
**Quality**
The research demonstrates high-quality methodology and rigorous experimental validation. The authors propose a language-to-code encoder with an interaction-aware encoding strategy, followed by a code-to-trajectory decoder that synergizes vehicle interactions with the environment. The comprehensive experiments conducted show superior performance over state-of-the-art (SoTA) methods, substantiated by quantitative metrics and qualitative assessments through user studies. The results indicate that InteractTraj not only generates more realistic and controllable interactive traffic trajectories but also effectively captures the essence of diverse natural language commands.
**Clarity**
The paper is well-structured and clearly articulated, making it accessible to reseachers in the field. The abstract succinctly summarizes the contributions and the significance of the work. The introduction sets the context by highlighting the importance of driving simulations in autonomous vehicle development and identifies the gap addressed by the proposed solution. The methodology is explained with sufficient detail to allow for replication, supported by figures and tables that aid in understanding the findings.
**Significance**
The impact of this work is substantial, particularly in the realm of autonomous driving and traffic simulation. By enabling the generation of interactive traffic scenarios through natural language commands, InteractTraj opens up new possibilities for creating diverse and realistic driving conditions, which is critical for testing and validating autonomous vehicle algorithms. This could potentially lead to safer and more reliable autonomous systems by exposing them to a wide range of driving situations that might be difficult or dangerous to replicate in real life.
In summary, the paper excels in its original contribution to the field, the high quality of its research, the clarity of its presentation, and the significant implications of its findings for autonomous vehicle technology and driving simulation.
Weaknesses: Despite the significant contributions and innovative approach outlined in the paper, there is a weaknesses where the work could be improved to better achieve its stated goals:
1.**Robustness to Ambiguous or Complex Language Descriptions**: The paper does not extensively explore how the system handles ambiguous or complex linguistic instructions. Since natural language can be nuanced and subject to interpretation, evaluating the system's ability to generate accurate trajectories when faced with unclear or highly detailed descriptions would provide a clearer picture of its robustness and flexibility.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. **Scope of Interaction Types**: The study focuses on a set of predefined interaction types. Could the authors clarify how the model performs when dealing with emergent or less common interaction types that might not have been explicitly trained on, like parallel driving or platooning? Would the model generalize well to such scenarios, or would it require retraining?
2. **Language Ambiguity Handling**: How does InteractTraj handle ambiguity in natural language descriptions? For instance, phrases like "the car behind" can be ambiguous in dense traffic scenarios. Does the model prioritize proximity or other factors when resolving such ambiguities?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors acknowledge some limitations of their work and outline future directions for addressing these constraints. However, there is room for improvement in terms of comprehensively addressing the limitations and discussing potential negative societal impacts. Here are some areas that could benefit from further exploration and discussion:
1. **Limited Scope of Traffic Participants**: Currently, the focus is on generating trajectories for vehicles only. The system does not account for other traffic participants such as pedestrians, cyclists, or other non-motorized vehicles. This limitation could be addressed by extending the model to encompass a broader range of entities in the traffic ecosystem.
2. **Map Generation Constraints**: The authors admit that map generation is restricted by the available map library. To enhance flexibility and realism, the system could be expanded to generate more diverse and flexible maps, possibly through generative models that create new environments based on learned patterns.
3. **Complex Interaction Handling**: While the model performs well with common interaction types, its ability to handle more complex or rare interactions has not been thoroughly evaluated. The authors could consider testing the model's performance in scenarios involving intricate multi-agent interactions to understand its limitations better.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer YJzW**
$\textbf{Question1:}$ Clarify how the model performs when dealing with emergent or less common interaction types, like parallel driving or platooning? Would the model generalize well to such scenarios, or would it require retraining?
$\textbf{Answer1:}$ Our model can generate compliant scenarios when dealing with less common interaction types. The left part of Figure S1 in the rebuttal pdf shows the experimental results. We present the results for rare interaction types including mentioned uncommon cases of parallel driving (Figure S1-1), platooning (Figure S1-2), and another uncommon case involving pulling over (Figure S1-3). We see that for less common interaction types, our method effectively translates the relevant behaviors to generate compliant scenarios.
Our model is able to generalize scenarios with emergent or less common interactions without requiring retraining. The main reason is that the LLM used in our encoding process possesses strong generalization abilities. The large language model can understand linguistic descriptions of uncommon interaction scenarios and convert them into appropriate numerical codes for decoding. Then the decoding process, which also has generalization ability due to training with massive numerical codes, would translate these codes into trajectories and generate compliant scenarios.
$\textbf{Question2:}$ How does InteractTraj handle ambiguity in language descriptions, like phrases like "the car behind" in dense traffic scenarios? Does the model prioritize proximity or other factors when resolving such ambiguities?
$\textbf{Answer2:}$ We leverage the reasoning ability of the large language model to manipulate linguistic inputs to solve possible ambiguity problems. We do not introduce additional prioritized proximity or other factors. Ambiguous situations mainly fall into two categories: those where the reference to the object is unclear, and those where there is a contradiction within the language instruction itself.
- For the first type, LLM would understand language descriptions and convert them into numerical codes to satisfy one of the meanings of phrases. We present the result of the mentioned phrase "the car behind", see the right part of Figure S1 in the rebuttal pdf. We see that when the reference to "the car behind" is ambiguous (Figure S1-4), the LLM would randomly designate one vehicle as the "behind car". However, with clearer descriptions like "the car behind ego car" (Figure S1-5), the LLM would accurately handle the descriptions.
- For the second type involving self-contradictory requirements, the scenarios generated would partially meet the instructions' criteria. Figure S1-6 illustrates this with an example of language instruction "A car is turning left and moving in a straight line." LLM may opt to generate an intersection in the scene to fulfill the left-turning requirement while disregarding the requirement for straight-line movement.
To better tackle language ambiguity, an optimal solution involves introducing LLM-human interaction to iteratively verify language descriptions. This approach will be explored in our future work.
$\textbf{Question3:}$ Currently, the focus is on generating trajectories for vehicles only. This limitation could be addressed by extending the model to encompass a broader range of entities.
$\textbf{Answer3:}$ At present, our work concentrates on trajectory generation for vehicles since vehicles are the most common subject, specifically creating interactive scenarios by modeling vehicle interactions. To incorporate various types of traffic participants, such as pedestrians and cyclists, we could utilize specialized heterogeneous networks as the backbone for the decoder network. Additionally, we plan to design specialized dynamic models, such as the bicycle kinematic model, tailored for different subjects. This comprehensive systematic design will be pursued in future work.
$\textbf{Question4:}$ The authors admit that map generation is restricted by the available map library. The system could be expanded to generate more diverse and flexible maps, possibly through generative models.
$\textbf{Answer4:}$ We use the current map library method since i) it ensures the obtained map is totally realistic and ii) the built map library could already support plenty of trajectory generation cases. Although a map generation method could generate more flexible maps, it may encounter the problem of being unrealistic. Meanwhile, in this work, our main solving problem is how to generate traffic trajectories with interactions. Thus, we hope the map is ensured to be realistic to better evaluate our trajectory generation design. In future work, we will switch the focus to map generation. To expand our method with more efficient map generation methods to obtain maps that conform more closely to linguistic inputs, we plan to use masked prediction for map expansion given partial map observations or use diffusion models for map generation.
$\textbf{Question5:}$ While the model performs well with common interaction types, its ability to handle more complex or rare interactions has not been thoroughly evaluated. The authors could consider testing the model's performance in scenarios involving intricate multi-agent interactions.
$\textbf{Answer5:}$ We have analyzed and shown some qualitative visualizations of complex or rare interaction cases in the responses A1 and A2. We will include more results of complex and rare interactions in the revised version. The quantitative evaluation of intricate multi-agent interactions is restricted by the existing benchmarks since there are no direct datasets full of intricate interactions labeled with language descriptions. In the future, when direct language-to-scenario datasets or specific datasets focusing on intricate interactions are available, we will conduct a more thorough evaluation of intricate multi-agent interaction cases.
---
Rebuttal 2:
Comment: Thank you for taking the time to address my comments | null | null | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback!
We have responded to questions based on the opinions of each reviewer. The attached PDF contains various figures. Please review it.
Pdf: /pdf/1ce1b0950689f8692221b286f92723dbefb5c698.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can Models Learn Skill Composition from Examples? | Accept (poster) | Summary: The paper studies the extent to which LLMs can learn compositional generalization over skills by finetuning with suitable training examples. Based on abundant experimental results, the paper concludes that finetuning using a dataset with skill composition examples helps the model build a “meta-skill” that allows them to generalize to more complex (i.e., composition of more skills) tasks. The paper also finds that the training samples containing higher-order skill compositions are more efficient in eliciting such a meta-skill. The paper is quite easy to follow and the conclusions might inspire more interesting applications on LLM’s finetuning.
Strengths: - The paper proposes a dataset that consists of examples with different selected skills for the analysis of skill composition.
- The results in this paper show that finetuning not only provides new knowledge to the LLM but also the skill of composing different skills, which potentially inspire novel applications using finetuning.
- The proposed method (although looks simple) efficiently improves the model’s performance on skill composition. The observation that the “skill-richer” data can induce the ability to compose skills faster would be very useful for practical LLM finetuning applications.
Weaknesses: - It might be a bit hard to read the trends and compare results in the tables, visualizing some results might be helpful.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Line 157 mentions that the data has a form of [prompt 1, answer 1, prompt 2, answer 2]. I do not quite understand why we need prompt 2 and answer 2 here. Will the analysis still hold for simple [prompt 1, answer 1] data samples?
- In line 260, the paper mentions a counter-intuitive phenomenon that the held-out performance is better than the train performance. The hypothetical explanation is that the model knows how to compose the held-out skills better than training skills. I think an experiment of switching the skill categories of the training and held-out sets can further support (or against) this hypothesis.
- The problem setting studied in this paper reminds me of the “least-to-most” prompting design [1]. This paper shows using some in-context examples of $k=1,2$ can make the model generalize to problems with large $k$. Hence I’m curious about whether combining these more complex in-context prompt designs could further improve the finetuning performance. (I think the analysis and experiment in the current submission is enough for a good paper, just point out this paper and a potential idea.)
[1] Zhou, Denny, et al. "Least-to-most prompting enables complex reasoning in large language models." ICLR 2023
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Please refer to the question and weakness part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vwLV,
Thanks for your time and effort in reviewing our paper. Below we address your questions and suggestions.
> Q: Why use [prompt 1, answer 1, prompt 2, answer 2] instead of [prompt 1, answer 1]?
**A:** This prompt template aims to improve the Skill-Mix performance of the model being tested. “answer 1” may correctly include the skills, but it might conflict with the length constraint in Skill-Mix evaluation, i.e., it might use too many sentences to compose all skills in a short paragraph. “prompt 2” asks the model to revise and improve “answer 1”, and then the model generates “answer 2”. Since “answer 2” is usually better than “answer 1”, we include it in training. If we train on [prompt 1, answer 1], generating full-mark Skill-Mix data becomes much less efficient, and so we might not obtain enough training samples.
> Q: Training on the data generated on held-out skills and topics to verify (or against) the hypothesis the held-out skills are easier than the training skills.
**A:** Thanks for your suggestion. We generate Skill-Mix k=2 and k=3 data on held-out skills and topics, and the original Skill-Mix k=1 full-mark data (since k=1 contains both training and held-out skills), and fine-tune this data.
We summarize the results in the following table. In each entry, we present “(the performance of llama-2-7b-chat)/(the performance of model fine-tuned on training skills and topics)/(the performance of model fine-tuned on held-out skills and topics)”. The first row in the Table denotes the Skill-Mix evaluation on training skills and topics, and the second row denotes the Skill-Mix evaluation on held-out skills and topics. We directly copy “the performance of model fine-tuned on training skills and topics” from Table 2.
| | k=3 | k=4 |
|----------------------------------------|-----------|-----------|
| SkillMix on training skills and topics | 0.24/0.16 | 0.08/0.04 |
| SkillMix on held-out skills and topics | 0.37/0.40 | 0.09/0.13 |
From the result, we observe that: when fine-tuning on SkillMix data on held-out skills and topics, the SkillMix performance on held-out skills improves (compared with fine-tuning on training skills and topics), and the SkillMix performance on training skills decreases. This verifies our hypothesis that the held-out skills (and their corresponding categories) might be easier for the model to compose; this offers an explanation for our previous observation that even if we fine-tuning on training skills and topics, the SkillMix performance on held-out skills and topics is even better than on training skills and topics.
> Q: In-context examples improve fine-tuning performance?
**A:** Thanks for this suggestion! We believe that properly adding in-context examples can improve the fine-tuning performance, either with a better final Skill-Mix evaluation performance or with a lower sample complexity to reach the reported performance in the paper (thus more efficient). The primary goal of the current paper was to study the effect of vanilla SFT.
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the author's response. Most of my concerns are well resolved. After rebuttal, I confirm my original evaluation, but with more confidence (3->4). | Summary: This article studies whether or not LLMs can be fine-tuned to compose skills for text generation. The authors generated training data from GPt-4 in the style of the Skill-Mix benchmark, asking the model to generate text about a topic while using a set of k skills (e.g., sympathy, temporal reasoning, syllogism, etc.). LlaMa-2 and Mistral-7B were then fine-tuned on this synthetic data and evaluated on their ability to generalize to combinations of novel skills, and to more combinations of skills than seen during training. Using GPT-4 as a grader, the authors find that fine-tuning substantially improves compositional generalization.
Strengths: This article has a number of strengths
- addresses the important topic of compositional generalization using complex tasks
- well-written article
- technically sound
- evaluation for combining novel skills and combining novel numbers of skills
- Two graders were evaluated (GPT-4 and Claude)
- Data efficiency was also studied
This is solid work that is consistent with other recent work on "learning to compose" through training [1,2]. It addresses these ideas on a larger scale than in past studies and shows compelling results, especially the generalization to more complex combinations.
[1] Conklin, H., Wang, B., Smith, K., & Titov, I. (2021). Meta-learning to compositionally generalize. arXiv preprint arXiv:2106.04252.
[2] Lake, B. M., & Baroni, M. (2023). Human-like systematic generalization through a meta-learning neural network. Nature, 623(7985), 115-121.
Weaknesses: The tasks in the Skill-Mix benchmark seem quite artificial, and difficult for people to both produce and evaluate. Only two example generations are provide in the appendix, and the first one has odd syntax in the prompt: "The example should be a short tweet up to a few lines in the context of produce review..."
Another weakness is the use of automatic grading by GPT-4, although it seems unavoidable and they also tested another grader from Claude. Additional discussion justifying this choice and the accuracy of the grader seems warranted.
Technical Quality: 4
Clarity: 4
Questions for Authors: I don't have additional questions at this point.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: This section is fine.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 4iaf,
Thanks for your review and suggestions. We will add more visualization about the results in the next version.
> Q: Skill-Mix seems very artificial, and there are very few examples in the paper. One example in the paper (among 2) has a weird syntax in the prompt.
**A:** Thanks for your suggestion. We will put more examples in the next version, ranging not only from Skill-Mix evaluation itself but also some other tasks related to languages. As for the odd syntax in the prompt, the aim for that example is to show that this skill composition ability might lead to potential danger for safety and alignment, and in order to make the model “unsafe”, we reuse the prompt template for the Skill-Mix evaluation and only switch specific words and skills definitions.
> Q: Automatic grading by GPT-4 is a weakness
**A:** Thanks for pointing out this weakness. We agree that automatic grading may not be perfect and hard to avoid since human grading has higher variance according to Yu et al. We have newly added a consistency check between the GPT-4 and Claude-3 graders, which shows that these two graders also have high overlap when giving full marks.
In the following table, we show the Skill-Mix results, each entry shows (full mark ratio graded by GPT4)/(full mark ratio graded by Claude-3)/(full mark ratio given by both GPT-4 and Claude-3) on Skill-Mix evaluation on all skills and topics. As seen in the table, our findings are consistent between GPT-4 and Claude-3 as graders, and the high overlap between the 2 graders also implies the consistency between these 2 graders.
| | k=2 | k=3 | k=4 | k=5 |
|--------------------------------------|----------------|----------------|----------------|----------------|
| Llama-13b-chat on SkillMix_train | 0.24/0.31/0.19 | 0.02/0.07/0.01 | 0.01/0.06/0.01 | 0.00/0.00/0.00 |
| -ft’ed on D(1,2,3) on Skillmix_train | 0.65/0.69/0.58 | 0.33/0.57/0.29 | 0.15/0.26/0.12 | 0.06/0.10/0.05 |
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply to my comments. I reaffirm my positive score, and I hope the article is accepted. | Summary: The paper explores the capacity of smaller language models to learn compositional generalization from finetuning. Utilizing the Skill-Mix set-up, the study delivers comprehensive experiments to assesse how small language models can improve their performance on both in-distribution and out-of-distribution compositional tasks after finetuning on systhetic datasets regarding in-distribution easier tasks generated by GPT-4.
Strengths: - The paper is well-written.
- The paper focuses on a very important topic (compositional generalization) of current LLM research.
- It's easy to understand the intuition behind the comprehensive experiments.
Weaknesses: - The experiment pipeline is not novel and highly overlaps with the previous work [1] .
- The paper is limited on one specific method on evaluating if language models can generalize compositionally on 'harder' tasks after finetuning on 'easier' compositional examples. There exists many other evaluation methods/metrics covered in related works that are not mentioned, including [2] and [3].
- The paper does not properly explain and support the claim made in line 64-65 and line 243-244, i.e.,
> Instead, they are acquiring a higher-order meta-skill that allows them to generalize and apply to combine unseen skills.
> The results suggest that its ability to compose multiple skills does not come from overfitting training data but should be perceived as learning a *meta-skill* instead.
This claim is very strong and needs more experimental results from other compositional tasks or theoretical justifications.
[1] Yu, Dingli, et al. "Skill-Mix: A flexible and expandable family of evaluations for AI models." arXiv preprint arXiv:2310.17567 (2023).
[2] Dziri, Nouha, et al. "Faith and fate: Limits of transformers on compositionality." Advances in Neural Information Processing Systems 36 (2024).
[3] Chen, Jiaao, et al. "Skills-in-context prompting: Unlocking compositionality in large language models." arXiv preprint arXiv:2308.00304 (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: - The skill-mix paper only released small part of topics and skills to prevent people from chasing the leaderboard, so I'm wondering if the full list of skills and topics used in this paper is the same as the original paper.
- The second-last line in Table 3 shows that Mistral-7B-Instruct-v0.2 get 0 point for Skill Fraction on $\text { SKILL-MIX }(k)$ after finetuning on $\mathcal{D}_{\text {SKILL-MIX }}(1,2)$. This seems to be a typo.
- The experimental results showing performance improvements after finetuning are unsurprised unless the author can justify if the model is indeed learning how to combine skills compositionally during finetuning.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are mentioned in the Appendix section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ZBU1,
Thanks for your time and effort in reviewing our paper. Below we address your questions.
> Q: The pipeline has already been introduced by Yu et al. 2024 and there is limited novelty.
**A:** Thanks for asking this question. We want to point out some differences between the Skill-Mix paper (Yu et al. 2024) and this paper.
1. We consider fine-tuning in the paper, whereas Skill-Mix paper (Yu et al. 2024) presents a new evaluation. In order to successfully fine-tune the Skill-Mix data, we need to force the model into text mode (i.e., there is no [INST] and [/INST] around the prompt), while the original Skill-Mix evaluation is purely in the chat mode (i.e., there is always [INST] and [/INST] around the prompts). If we fine-tune in chat mode, the model’s chat ability will degenerate, and even affect some outputs during Skill-Mix evaluation.
2. Skill-Mix (Yu et al. 2024) does not separate skills into different sets, whereas our work partitions the skills based on categories (and leads to the “skill composition” results on held-out skills and their corresponding categories).
Yu et al. show that while large (stronger) models like GPT-4 can combine 4 or 5 skills well, small (weaker) models struggle. The purpose and novelty of our work come from answering the following question (inspired by that finding in Yu et al): can we teach small models to compose skills? Our experiments provide evidence to answer this question in the affirmative.
> Q: The skill composition claim made in the paper is too strong, and there are potential related works not added.
**A:** Thanks for pointing out the claims and other papers. We intend to restrict the claims in our paper under our setup (language skills) and we will make it clear in our next version. We agree our setup based on Skill-Mix is not universal. For example in [2], the authors argue that compositional generalization cannot be induced during fine-tuning (very inefficient, not generalizable to OOD), while our findings show that models can learn how to compose language skills through fine-tuning, even to some OOD tasks (higher k and held-out skills). We already had discussed many past works under different settings in our related works section, and will include more (including the papers you mentioned) in our next version. However, as we mentioned in the paper, prior evaluations/setups for compositionality often rely on rule-based languages or underlying multi-step execution, which arguably deviates from natural languages. Skill-Mix is a well-established benchmark for LLMs that focuses on the capability of language models for combining language skills in natural text settings. The results from the Skill-Mix paper align with previous theory work (Arora & Goyal 2023) and has implications of LLMs going beyond stochastic parrots. Thus, it fits perfectly for our goal of studying whether language models can learn skill composition. Our results also potentially suggested that the language models can indeed go beyond stochastic parrot with a small number of fine-tuning examples, which we believe is significant in the future development of LLMs and is of interest of AI safety and alignment.
> Q: Do you use the same skills and topic set as in Yu et al 2024?
**A:** Yes, Yu et al. provided us with all the skills and topic sets, and permission to include the skills in the Appendix. Given our findings that fine-tuning can improve the Skill-Mix performance a lot even on heldout skills, they realized that keeping some skills secret does not make the evaluation any harder.
> Q: SkillMix performance improvement does not necessarily mean skill composition
**A:** We are not sure exactly what the reviewer meant. Possibly they meant that skill composition is a broader phenomenon than what Skill-Mix covers. We agree. No single evaluation can fully test skill composition capability.
---
Rebuttal Comment 1.1:
Comment: I really appreciate author's response to my questions. I have the following question.
- From the results, it seems like finetuning on $\mathcal{D}(1,2,3)$ provides almost no improvement from finetuning on $\mathcal{D}(1,2)$ regarding the $\text{SKILL-MIX}(2)$ performance. I feel this is weird because learning from examples that are composed by 3 skills should help the model better solve a task that requires 2 skills.
---
Reply to Comment 1.1.1:
Title: Answer to the follow-up question
Comment: Thank you for the follow-up question. We apologize for not replying directly to your comment. We copy our answer in this thread.
We believe there are at least 2 reasons behind this phenomenon.
1. Our experiments show that $k=1$ is "low quality" data for Skill-Mix. It is known in various contexts (not just Skill-Mix) that mixing low-quality data into SFT can significantly lower the effectiveness of high-quality data. Indeed, using $k=3$ data alone (i.e., training on $\mathcal D(3)$) would be more powerful (achieving higher Skill-Mix performance on $k=2$ using same number of samples or even same number of tokens) than training on $\mathcal D(2)$ or $\mathcal D(1,2)$.
2. Training and evaluating on both $k=2$ is nearly "in-distribution": if there exists a technique/format/sentence structure to combine 2 language skills, the model should be able to learn the technique/format/sentence structure with enough $k=2$ data. Thus, training only on $\mathcal D(1,2)$ should already get high performance on $k=2$ evaluations and the performance might get somewhat saturated (i.e., it will be relatively hard to further improve the performance on Skill-Mix $k=2$ if already trained on enough $k=2$ examples).
---
Rebuttal 2:
Title: Answer to the follow up question
Comment: Thank you for the follow-up question. We believe there are at least 2 reasons behind this phenomenon.
1. Our experiments show that $k=1$ is "low quality" data for Skill-Mix. It is known in various contexts (not just Skill-Mix) that mixing low-quality data into SFT can significantly lower the effectiveness of high-quality data. Indeed, using $k=3$ data alone (i.e., training on $\mathcal D(3)$) would be more powerful (achieving higher Skill-Mix performance on $k=2$ using same number of samples or even same number of tokens) than training on $\mathcal D(2)$ or $\mathcal D(1,2)$.
2. Training and evaluating on both $k=2$ is nearly "in-distribution": if there exists a technique/format/sentence structure to combine 2 language skills, the model should be able to learn the technique/format/sentence structure with enough $k=2$ data. Thus, training only on $\mathcal D(1,2)$ should already get high performance on $k=2$ evaluations and the performance might get somewhat saturated (i.e., it will be relatively hard to further improve the performance on Skill-Mix $k=2$ if already trained on enough $k=2$ examples). | Summary: The authors present a (family of) fine-tuned LLMs, trained using the SKILL-MIX dataset. They show that models trained on composite tasks where each instance involves a sequence of different skills improves the model in similar composite tasks. They demonstrate generalization by fine-tuning the models on subsets of task combinations and evaluating them on held-out sets. They also evaluate out-of-distribution generalization on the number of skills sequenced together and show that training on sequences of 2 or 3 skills can help generalize to sequences of 4 and 5 skills.
Strengths: 1. The authors have done extensive experiments using the SkillMix dataset to support their arguments.
2. Using LLM to judge LLMs can introduce a lot of variability. I understand that the authors are following the procedure from Yu et al. Reading Appendix B of Yu et al. did increase my confidence and I appreciate adding another evaluator model which supports the analysis even further.
Weaknesses: 1. My main concerns with this paper have less to do with the methods of this particulaar submission, but of the dataset and methodology from Yu et al. that the authors extensively draw upon. Looking over the dataset and scoring design, it seems that the scores could be highly inflated simply by writing three disconnected sentences that each individually illustrates the skill. The only criterion in the rubric that is relevant for cohesion is ‘Makes sense’, which still can be evaluated independently for each sentence. This seems to be a rather weak form of skill compositionality that is the central point of this paper. Moreover, the authors imply that the skills and topics would not be highly related (Line 138), but given the skills presented in Appendix B, I’m not entirely convinced that this is true. I would be surprised if there weren’t at least a few hyperlinks between them.
2. The baselines presented in Section 4.1 seem a bit weak. Fine-tuning on SM_train(1) is a useful starting baseline, but it is still severely restricted in its ability to span the dataset. If I understand the authors’ intent correctly that the goal is to demonstrate that fine-tuning to combine different skills, then what I think would be a more informative comparison would be to include within the training set samples involving the same skill/topic category but use higher than 1 k. Otherwise, this confounds with length-generalization, which we know sequence models struggle with.
3. A similar concern exists for Sections 4.2 and 4.3, where training with higher k is confounded with training on longer sequences. I would be interested in understanding the effect size after controlling for sequence length how and what the distribution of sequence lengths are between the different k’s.
4. The three findings are not particularly novel nor surprising (in fact, it would’ve been more surprising if any of the three findings had been shown to be false). The authors appeal to novelty with respect to Arora and Goyal (Line 73), and if that is indeed true, then it would be helpful to provide more substantial analyses with respect to the specific claims made in the first paper (e.g. show which relations or equations from Arora and Goyal fail to predict the results of this paper).
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Perhaps I missed this in the main text, but how many seeds and dataset recombinations were used in the experiments? I’m particularly interested in the observation that the model performed better on the held-out set than the training set (Line 260) and whether that is a consistent phenomenon.
2. Could you provide a metric for how consistently the two models from Table 4 rated the responses? It would provide higher confidence if the 31% and 24% (top-left entries) actually had high overlap.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors have acknowledged the limitations and impact in the supplements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer bbsi,
Thanks for your time and effort in reviewing our paper. Below we address your questions and concerns.
> Q: Concerns about Skill-Mix
**A:** The reviewer is correct that there could be “cheating” ways to pass the Skill-Mix eval that fool the GPT4 grader, since “Make sense” is quite vague.
However, in this paper (as opposed to the original Skill-Mix evaluation paper) the models were produced by our fine-tuning, where they saw only good-quality answers to Skill-Mix questions. Human examination of the answers from our fine-tuned models has not detected such cheating behavior. Thus the conclusions in our paper about the ease of inducing compositional behavior are valid.
> Q: Confounding factor about length
**A:** Thank you for raising this important point. We agree that the sequence lengths in the training set could be a confounding factor, since longer sentences may be required to illustrate more complex language skills. To address this concern, we have now designed an experiment that takes out this confounding factor. It will be in the final version.
Experiment: In this experiment, we show that even if two SFT datasets have similar average lengths, the dataset with richer skills induces better performance on Skill-Mix.
Skill-Mix k=1,2,3 data have average lengths of 144, 205, and 273, respectively If we put all Skill-Mix k=1,2,3 data together (D_skilmix(1,2,3) in paper), the average length is 204, which is similar to the average length of D_skilmix(2). We finetune Llama 13b chat on D_skilmix(2), and evaluate its performance on the training skills and topics. The full mark ratio under different settings is presented below:
| | k=3 | k=4 |
|---------------------------------------------|------|------|
| ft’ed on D_skillmix(2) | 0.11 | 0.01 |
| ft’ed on D_skillmix(1,2,3) (6000 subsample) | 0.22| 0.08 |
For a fair comparison, we subsample D_skillmix(1,2,3) so its number of data points is smaller than D_skillmix(2). This experiment shows that, even if two sets have the same average length, the one with richer skills indeed performs better.
> Q: Findings are not surprising and it lacks illustrations of why the empirical results in this paper cannot be predicted from Arora and Goyal 2023.
**A:** Thanks for asking this question. We would like to clarify that the theory in Arora and Goyal et. al does not predict our empirical findings. (1) That theory relied on pre-training (specifically on scaling laws for pretraining) whereas we are doing fine-tuning at fairly modest scales. (2) More importantly, the theory does not predict that training on examples that combine k-tuples of skills can lead the model to improve on (k+1)-tuples of skills. We explain the technical reason below.
The theory makes two key assumptions (1) all individual skills (“ground truth” set of skills) are demonstrated during training (but most k-tuples of skills were not), and (2) for a random combination of k skills, competence on that particular k-tuple of skills corresponds to the ability to answer questions about random text pieces that involve/compose that k-tuple of skills. The theory predicts that if the model excels at composing random k-tuples of skills, it will also be good at composing random k’-tuple of skills (where k’ <= k) from the ground truth skills set. However, the theory does not predict our empirical results that training on k=3,2,1 leads to (1) performance improvement on k=4 (i.e., where k’ > k), and (2) improved performance on Skill-Mix evaluation for the held-out skills.
> Q: How many seeds are used for the experiment? Do the results depend on the choice of random seed? How stable is the effect (especially the performance on the held-out set is better than the training set)?
**A:** Due to budget constraints (cost of large number of OpenAI API queries for data generation and grading), we were unable to use multiple random seeds for all experiments in the paper. As a result, all the results presented in this paper are based on a single random seed (42).
In the following table, we show results for the following experiment with multiple random seeds: Skill-Mix evaluation performance on k=3 and k=4 for model fine-tuned on D_skillmix(1,2,3). Each row represents the seed used during fine-tuning and Skill-Mix evaluation. In each entry, we show the (Skill-Mix performance on training skills)/(Skill-Mix performance on held-out skills).
| Skill-Mix | seed 42 (in paper) | seed 13 | seed 87 |
|-----------|--------------------|-----------|-----------|
| k=3 | 0.24/0.37 | 0.22/0.35 | 0.24/0.36 |
| k=4 | 0.08/0.09 | 0.09/0.11 | 0.11/0.12 |
As shown in the table, Skill-Mix performance over different seeds is quite stable, and the finding that the Skill-Mix performance on held-out skills is better than that on training skills also holds over different random seeds.
> Q: Consistence between GPT-4 and Claude-3 as graders?
**A:** Thanks for your perceptive question. In the following table, we show the Skill-Mix results, each entry shows (full mark ratio graded by GPT4)/(full mark ratio graded by Claude-3)/(full mark ratio given by both GPT-4 and Claude-3) on Skill-Mix evaluation on all skills and topics. As seen in the table, our findings are consistent between GPT-4 and Claude-3 as graders (they have a high overlap), and the results corroborate our claim that GPT-4 is stricter than Claude-3.
| | k=2 | k=3 | k=4 | k=5 |
|--------------------------------------|----------------|----------------|----------------|----------------|
| Llama-13b-chat on SkillMix_train | 0.24/0.31/0.19 | 0.02/0.07/0.01 | 0.01/0.06/0.01 | 0.00/0.00/0.00 |
| -ft’ed on D(1,2,3) on Skillmix_train | 0.65/0.69/0.58 | 0.33/0.57/0.29 | 0.15/0.26/0.12 | 0.06/0.10/0.05 |
---
Rebuttal 2:
Title: Did we address your questions or concerns?
Comment: Dear reviewer,
Thank you again for your time and effort in reviewing our paper. As the discussion period deadline is approaching, we would like to know if our rebuttal addresses your questions or concerns and if there are any follow-up questions.
Thank you | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Wasserstein Distance Rivals Kullback-Leibler Divergence for Knowledge Distillation | Accept (poster) | Summary: This paper proposes a new algorithm for knowledge distillation by replacing KL-divergence loss with Wasserstein distance loss.
The proposed algorithm contains two parts:
(1) Logits distillation with Wasserstein distance loss, implementing with an entropy regularized linear programming;
(2) Assuming the features obey Gaussian distribution, the Wasserstein distance loss can be solved with parametric tricks.
Experiments on ImageNet and CIFAR100 show obvious improvements compared to baselines like KD, DKD, and NKD.
Strengths: (1) The paper is written clearly and is easy to follow.
(2) Sound ablations for combinations of the proposed method and previous ones.
Weaknesses: (1) Eq. (3) and Eq. (5) both have hyperparameter $\lambda$. Does the two lambda have the same value in implementation?
The ablation for $\lambda$ in Eq. (3) is missing.
(2) There are too many hyper-parameters to be tuned.
lambda in Eq. (3),
lambda in Eq. (5),
k for calculating IR,
gamma for a trade-off between D_mean and D_cov,
weight between WKD-L and WKD-F,
temperature.
Do all experiments in the paper adopt consistent hyper-parameters?
(3) What's the layer features are used for WKD-F?
Feature dimensions are usually higher than logits. Moreover, multiple-layer features are often used for feature-based methods.
Thus, it seems to be wired that WKD-F (207ms) is even faster than the original KD (215ms).
(4) The paper claims that Wasserstein distance rivals Kullback-Leibler divergence for knowledge distillation as indicated by their title.
However, as shown in Table 2(a) and Table 2(b), the KL-div method achieves 71.96 in the setting of separating target and non-target.
The WD-based method just achieves comparable results with the Polynomial kernel, class centroid, or classifier weight for calculating IR.
It seems that the IR is more important than the form of WD formulation.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Ablation for $\lambda$ in Eq. (3).
(2) Explanation and analysis of sensitivity of all hyper-parameters.
(3) How to calculate $\mu^T$ and $\mu^S$ in your implementation for Eq. (9)?
(4) Does the proposed method work well in the setting of self-KD where the student and teacher share the same architecture?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer F2zQ,
Thank you for reviewing our paper and for providing constructive and insightful comments. We appreciate your acknowledgement of good soundness, presentation \& contribution of our paper. We hope our detailed responses can address your concerns, and could you please consider increasing your rating accordingly.
> ### Eq. (3) and Eq. (5) both have hyperparameter $\lambda$. Does the two lambda have the same value in implementation? The ablation for the hyperparameter in Eq. (3) is missing.
We apologize for our typo that leads to confusion.
The **hyper-parameter (HP) in Eq. (3) is a typo and will be denoted by $\eta$**, indicating the regularizing constant of entropic term, while $\lambda$ in Eq. (5) indicates the weight of WD loss; they have different values. We ablate $\eta$ in Setting (a) and results (Acc, \%) are shown below along with Figure R1 in PDF for Global Rebuttal. We see across a decently large range the performance changes smoothly, particularly for Top-5 Acc. Notably, **$\eta$ in Eq. (3) is always set to 0.05 across the paper (Line 231)** in either classification on ImageNet \& CIFAR100 or object detection on MS-COCO.
Method|$\mid$|$\quad \eta$ |$\mid$|Top-1|$\mid$|Top-5|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|KD|$\mid$|NA|$\mid$|71.03|$\mid$|90.05|
||
|WKD-L|$\mid$|1.0e-2|$\mid$|71.65|$\mid$|90.21|
|--|$\mid$|2.0e-2|$\mid$|72.15|$\mid$|90.70|
|--|$\mid$|5.0e-2|$\mid$|**72.49**|$\mid$|**90.75**|
|--|$\mid$|1.0e-1|$\mid$|72.25|$\mid$|90.74|
|--|$\mid$|2.0e-1|$\mid$|72.09|$\mid$|90.53|
||
> ### Explanation and analysis of sensitivity of all hyper-parameters.
Above all, kindly note **as logit-based WKD-L and feature-based WKD-F are two distinct methods, their hyper-parameters (HPs) should be analyzed separately.**
WKD-L is to address the downside of lack of cross-category comparison in classical KL-Div. **We ablate its HPs containing temperature, sharpening parameter and weight in Section B.1 (Lines 636--649 and Figure 5),** where we see that they are insensitive in decently large ranges. $\eta$ in Eq. (3) is always fixed across the paper.
WKD-F aims to address the problem of KL-Div that cannot exploit geometry of data manifold. **We ablate its HPs (i.e., mean-cov ratio and the weight) in Section B.1 (Lines 650--659 and Figure 6),** and observe that they are invulnerable to large variation of values.
Further, combining WKD-L and WKD-F leads to further improvement; in this case, we simply add their losses **where all HPs are unchanged.**
Finally, for classification on ImageNet a summary of HPs is given in Section B.2; on CIFAR100, for a fair comparison, we follow DKD [3], CAT [51], ReviewKD [29], FCFD [8] and WTTM [5] *for KD within CNNs,* and follow OFA [62] *for KD across CNNs and Transformers,* tuning HPs separately for different settings (Section C). For object detection, the framework of Faster-RCNN is quite different from that of classification, and how HPs are turned are given in Section E.
> ### How to calculate $\mu^S$ and $\mu^T$ in your implementation for Eq. (9)?
On Lines 602--610 we explain concretely how to compute the means. For teacher, we calculate directly $\mu^T$ using features output by **single stage of Conv5_x with 1x1 spatial grid**; for student we use the corresponding features after the projection layer for calculating $\mu^S$.
> ### Does the proposed method work well in the setting of self-KD where the student and teacher share the same architecture?
Thanks for the suggestion.
We implement self-KD in the framework of Born-again network (BAN) (Furlanello et al., ICML’18). Specifically, we first train a network model $S_{0}$ with ground truths. With $S_{0}$ as the teacher, we use WKD-L and distill a new model $S_{1}$ with same architecture. For simplicity's sake, we do not perform the 2nd-generation distillation, i.e., distilling a new model $S_{2}$ with $S_{1}$ as the teacher, etc.
We experiment with ResNet18 and the HPs are same as those in Setting (a). For logit-distillation, state-of-the-art USKD [4] is slightly better than BAN (KL-Div), both improves the regular training without self-KD; our BAN (WKD-L) performs much better than USKD, suggesting superiority of WKD-L in self-KD setting.
|$\ \ $Method|\||Self-KD |\||Dis-similarity|\||Top-1|
|:-:|-|:-:|-|:-:|-|:-:|
|Regular|\||$\times$|\||NA|\||69.75|
||
|USKD|\||$\checkmark$|\||KL-Div|\||70.75|
|BAN|\||$\checkmark$|\||KL-Div|\||70.50|
|BAN (WKD-L)|\||$\checkmark$|\||WD|\||**71.35**|
||
> ### What's the layer features are used for WKD-F? It seems to be wired that WKD-F (207ms) is even faster than the original KD (215ms).
For WKD-F, we use the **features of single stage of Conv5\_x,** i.e., 49 features of 512-dimension that is lower than logits (i.e., 1000). Our ablation in Table 3(c) shows that single stage with 1x1 grid is the best option. Kindly note that for WKD-F, we only need to compute the mean and variances for Gaussian (Diag) that are very efficient. To resolve your concern, we also measure latencies (ms) in Setting (b), where we have 49 features of 2048-dimension for distribution matching. Compared to Setting (a), the latencies of both methods increase while that of WKD-F is slightly higher.
|Method|$\mid$|Strategy|$\mid$|Dimension|$\mid$|Latency|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Setting (a)
|KD|$\mid$|Logit|$\mid$|1000|$\mid$|215|
|WKD-F|$\mid$|Feature|$\mid$|512|$\mid$|207|
||
|Setting (b)
|KD|$\mid$|Logit|$\mid$|1000|$\mid$|268|
|WKD-F|$\mid$|Feature|$\mid$|2048|$\mid$|276|
||
> ### It seems that the IR is more important than the form of WD formulation.
Thanks for the concern.
Kindly note **IR is a key ingredient of our WD formulation,** without which IRs cannot play the roles, e.g., IRs cannot be used in KL-Div based knowledge distillation.
It is worth noting that the results in Table 2(b) show that WD formulation with each IR performs better than strongest KL-Div variant [4] (71.96), suggesting it is superior to KL-Div formulation.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses from the authors.
My concerns have been addressed.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer F2zQ,
Thank you for your positive feedback. We are delighted to hear that our rebuttal has effectively addressed your concerns. The responses to your constructive comments will be incorporated into the revised paper. We would greatly appreciate it if you could consider reflecting this in your updated score. | Summary: This paper proposes a Wasserstein distance based knowledge distillation method for both the logit distillation and feature distillation settings. The logit based version uses discrete WD to model the discrepancy beteween the prediction probabilites of student and teacher networks. It further uses the separation of target probability to improve performance. The feature based version minimizes the continuous WD between the patch features of an image from the student and teacher networks, under the assumption that they form Gaussian distributions. The covariance term is simplifed to its diagnoals to further improve performance. Comapred to recent baslines, the proposed method shows superior performance on ImageNet classification and COCO object detection.
Strengths: 1. Strong performance compared to recent work.
2. The presentation is easy to follow and well structured. Related works are introduced to give good contexts.
3. Extensive comparison between baselines, detailed abalation study, and has extra experiments on distillation across CNNs and transformers in appendix.
Weaknesses: One of the motivation for the WKD-L is the cross-category comparison, however, it is not clear to me why the "cross-category" comparison is attrubted as the source of improvement. Firstly, there is implicit cross-category comparision for KL based methods (line 117), so it is not a differentiator. Secondly, without the story of cross-category comparison, the WDK-F also shows improvement.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you explain the statement at line 120 "this implicit effect is insignificant"?
2. Line 38. "... features of an image are ... small size". What does the "small size" of an image referes to?
3. Are the student networks randomly initialized or initialized from a trained weights for baseline methods (line 239)?
4. Line 172 "... partition the feature maps into a kxk spatial grid", and line 289 "use ... 1x1 grid for classification". There is no grid setting for detection in the main text and can mislead readers to think grid is not used at all.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 5GY4,
Thank you for reviewing our paper and for providing constructive and insightful comments. Particularly, we appreciate your acknowledgement that our paper has excellent contribution, strong performance and good presentation. We hope our detailed responses can address your concerns, and could you please consider increasing your rating accordingly.
> ### Could you explain the statement at line 120 "this implicit effect is insignificant"? it is not clear to me why the "cross-category" comparison is attributed as the source of improvement.
Thanks for your concerns.
First, please allow us to explain what "implicit effect" exactly means in classical KD that is a logit-based distillation method. While comparing WKD-L (Figure 1b (left)) to classical KD (Figure 2), we can see that KL-Div only performs category-to-category comparison, **but lack of mechanism to *explicitly* exploit cross-category comparison via pair-wise IRs in Eq. (1),** which contain rich knowledge among categories. Nevertheless, for KL-Div based KD, through backpropagation the probability of one category affects probabilities of the other categories due to softmax function; in this sense, we say that the cross-category effect in KL-Div is implicit.
By stating "this implicit effect is insignificant", we mean that **gains brought by explicit cross-category comparison in WD is more significant than that brought by the implicit influence in KL-Div, as experimentally shown in Table 2(a).** Specifically, WD vs. KL-Div in Top-1 Acc (%) is 72.04 vs. 71.03 without (w/o) target separation and is 72.49 vs. 71.96 with (w/) target separation. As such, we attribute the improvement to cross-category comparison inherent in WD.
Additionally, kindly note that WKD-F belongs to the type of feature-based distillation methods, quite different from the type of logit-based ones, such as WKD-L and classical KD & its variants. Therefore, the effectiveness of WKD-F cannot suggest that the improvement in WKD-F is not attributed to cross-category comparison.
We are sorry for the possible ambiguity and we’ll make further clarification in the modified paper.
> ### What does the "small size" of features of an image refers to (Line 38)?
Thanks for the concern.
**Here "small size" indicates that for an input image the number of convolutional features output by a CNN is small.** For example, for ResNet50 with a standard 224x224 input image, the commonly used feature maps of stage 5 (Conv\_5x) contain only 49 (7$\times$7) features, which however are of high-dimension (i.e. 2048).
The high-dimensional features of small size not only makes non-parametric density estimation (e.g., histogram) infeasible, but also leads to non-overlapping discrete distributions, both bringing difficulty for KL-Div. One may turn to parametric distributions (e.g., Gaussian); however, KL-Div is limited as it is not a true metric and is unaware of geometry of the underlying manifold. **This downside of KL-Div motivates our 2nd contribution, i.e., WKD-F that uses Wasserstein distance as dis-similarity between Gaussians.**
> ### Are the student networks randomly initialized or initialized from a trained weights for baseline methods (line 239)?
Thanks for the concern.
For fair comparison with previous arts, we use the framework of Faster-RCNN for object detection. **For the student models, the backbones are initialized with pre-trained weights on ImageNet, while Region Proposal Network (RPN), Feature Pyramid Network (FPN), and detection head consisting of a classification branch and a localization branch are all initialized randomly.** Note that this is a standard practice widely adopted by knowledge distillation methods for object detection, as well as by all general object detection methods.
> ### There is no grid setting for detection in the main text and can mislead readers to think grid is not used at all.
Thanks for your suggestion.
We are sorry for the possible ambiguity. As described in Section E.1 in Appendix, for object detection, we use a 4$\times$4 spatial grid for computing Gaussian. We’ll make this clear in the main text.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal. It clarifies my questions and I will maintain the score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer 5GY4,
We are pleased to learn that we have effectively addressed your concerns. We appreciate your very positive comments on the soundness, presentation and contribution of our paper. | Summary: The paper introduces a novel methodology for knowledge distillation using Wasserstein Distance (WD) instead of the traditional Kullback-Leibler Divergence (KL-Div). The proposed methods include a logit distillation approach (WKD-L) that leverages cross-category comparisons and a feature distillation method (WKD-F) that models feature distributions parametrically. The authors demonstrate that their methods outperform strong KL-Div variants on image classification and object detection tasks.
Strengths: * The introduction of WD in knowledge distillation provides a fresh perspective and addresses the limitations of KL-Div, particularly in terms of cross-category comparisons and handling non-overlapping distributions.
* The use of parametric methods for feature distribution modeling, specifically Gaussian distributions, is innovative and effectively leverages the geometric structure of the data.
Weaknesses: * The paper suffers from several writing issues, including grammatical errors and unclear explanations, making it difficult to follow the arguments and methodology at times.
* Some of the assumptions made for the application of WD, particularly the choice of Gaussian distributions for feature modeling, may not hold universally. Further justification or exploration of alternative parametric methods would strengthen the work.
* The computational complexity of implementing WD-based methods, especially in large-scale scenarios, is not adequately addressed. A comparison of computational costs between WD and KL-Div would be beneficial.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Why did you choose Gaussian distributions for feature modeling in the context of WD? Are there other parametric methods that you considered, and how would they compare in terms of performance and feasibility?
* How does the computational complexity of your proposed WD-based methods compare to traditional KL-Div based methods? Can you provide a detailed analysis or empirical comparison of the computational costs involved?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: * please check the questions and weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Zit9,
Thank you for reviewing our paper and for providing constructive and insightful comments. We are grateful for your positive comments on novelty and contribution of WD based methods, such as "a fresh perspective and addresses the limitations of KL-Div" and "is innovative and effectively leverages the geometric structure of data". We hope our detailed responses can address your concerns, and could you please consider increasing your rating accordingly.
> ### Why did you choose Gaussian distributions for feature modeling in the context of WD? Are there other parametric methods that you considered, and how would they compare in terms of performance and feasibility?
Thanks for your concerns.
Kindly note that, **we have discussed why we choose Gaussians on Lines 63--68 in Section 1 and Lines 154--157 in Section 2.2.** Specifically, Gaussian is of maximal entropy for given 1st-moment (mean) and 2nd-moments (covariance) estimated from random samples (features) among the family of parametric distributions. Namely, it has maximum uncertainty with the least prior and thus is more general than other candidates. In addition, WD between Gaussians is a Riemannian metric with closed-form solution; in contrast, WDs between other parametric distributions are open problems, as far as we know.
**We have compared to other parametric methods in Table 3 (a) and Lines 267--276**, including *separate spatial 1st- or 2nd-moments [35]* as well as *separate channel 1st- or 2nd-moments [6]*. We note that both [35] and [6] fail to consider the geometry of the underlying manifold; kindly refer to Lines 205--218 and Table 1 for our discussion. The experiments show that our method outperform them by large margins with insignificant increase of computation (See Table 5).
**Following your suggestion, we consider additional parametric distributions** including Laplace and exponential distributions. We use univariate Laplace or exponential distribution for modeling each component of features. For them KL-Div can be computed in closed-form [Ref1] but WD is an unsolved problem. For Gaussian (Diag) we use univariate Gaussian for each feature component. We experiment with the same setting as in Table 3 (a) and the comparison is shown in the table below. We firstly note that, *with KL-Div*, Gaussian is better than both Laplace and exponential functions, which suggests Gaussian is a better option among the competing parametric distributions. Furthermore, Gaussian with WD outperforms Gaussian with KL-Div, which we attribute to the fact that WD can effectively exploit geometry of the underlying manifold but KL-Div cannot.
|Distribution |$\mid$|Dis-similarity|$\mid$|Top-1|
|:-:|:-:|:-:|:-:|:-:|
|Laplace|$\mid$|KL-Div|$\mid$|71.38|
|Exponential|$\mid$|KL-Div|$\mid$|70.14|
|Gaussian (Diag)|$\mid$|KL-Div|$\mid$|71.75|
||
|Gaussian (Diag)|$\mid$|WD|$\mid$|**72.50**|
||
[Ref1] M. Gil. On Rényi divergence measures for continuous alphabet sources. Master's thesis
, 2011.
> ### How does the computational complexity of your proposed WD-based methods compare to traditional KL-Div based methods? Can you provide a detailed analysis or empirical comparison of the computational costs involved?
Thanks for the comments.
Kindly note that, **we have provided an empirical comparison of the computational cost in Table 5 and Lines 306--317.** Compared to traditional KL-Div, our WKD-L, the logit-based distillation method, has somewhat higher cost due to the regularized linear programming; this limitation is discussed in Section F. Meanwhile, our WKD-F, the feature-based distillation method, has comparable cost with traditional KL-Div.
**Following your suggestion, we further conduct theoretical analysis.** The logit-based WKD-L is formulated as an entropy regularized linear programming, which can be solved by fast Sinkhorn algorithm [23]. Let $n$ be the dimension of the predicted logits, the computational complexity of WKD-L can be written as is $O(Dn^2 \log n) $ [Ref2]. Here $D=\|\|C\|\|\_{\infty}^{3}\epsilon$ is a constant, where $\|\|C\|\|\_{\infty}$ indicates the infinity norm of the transportation cost matrix $C = [c_{ij}]$, and $\epsilon>0$ indicates a prescribed error. In contrast, the computational complexity of KL-Div is $O(n)$. Despite its high complexity, WKD-L can be computed efficiently as Sinkhorn algorithm is highly suitable for parallel computation on GPU [23]. For feature-based WKD-F, the dominant cost is due to computation of means and variances. Give a set of $m$ features $\mathbf{f}_{i}$ of $l$-dimension, the means can be computed by global average pooling (GAP) that takes $O(ml)$ time; the complexity of variances is also $O(ml)$ which can be implemented by element-wise square operations followed by a GAP.
[Ref2] J. Altschuler, et al. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. In NeurIPS, 2017.
> ### Some of the assumptions made for the application of WD, particularly the choice of Gaussian distributions for feature modeling, may not hold universally.
Thanks for the comment.
We agree that the Gaussian distributions may not hold universally. **Kindly note that this limitation has been discussed in Section F.** Specifically, what distributions deep features may exactly follow is an open problem, which is rarely studied in deep learning to our best knowledge. As such, Gaussian may be sub-optimal for modeling distributions of DNN features, and it is interesting to explore other parametric distributions. In addition, the Wasserstein distances between parametric distributions other than Gaussian are open questions in probability and statistics and worth future research.
> ### The paper suffers from several writing issues, including grammatical errors and unclear explanations, making it difficult to follow the arguments and methodology at times.
Thanks for your comment. We’ll further polish our manuscript, carefully addressing writing issues.
---
Rebuttal Comment 1.1:
Title: Response by Reviewer Zit9
Comment: I have carefully reviewed the feedback from other reviewers, considered the author’s rebuttal, and global responses, followed the ensuing discussion, and read the paper again . I appreciate the authors' thorough responses, particularly their clarification on W2 and new experimental results during the rebuttal period as well as Q2. Itherefore I will raise my score slightly from 5 to 6. Good luck!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Zit9,
Thank you for your positive feedback on our rebuttal and for raising your score. We are pleased to hear that your concerns have been satisfactorily addressed. The responses to your constructive comments will be incorporated into the revised paper. | Summary: The paper proposes the utilization of Wasserstein Distance based distillation as opposed to KLD, as is common in practice. This is because the latter does not facilitate cross-category comparisons. Both logit and feature based variants have been proposed. The comparisons have been shown for both classification as well as detection.
Strengths: Though straightforward look at the empirical gains are not massive, the method is theoretically sound, and the paper is easy to understand. The experiments are decently presented in my opinion. The paper provides a fresh perspective of WD in distillation.
Weaknesses: I would have been happier to see if changing the divergence led to more significant boosts, if at all possible (basis the premise of providing cross category comparisons).
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The cost expensiveness has been mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ScyW,
Thank you for reviewing our paper and for providing constructive and insightful comments. Particularly, we appreciate your positive comments on novelty ("a fresh perspective of WD in distillation"), on soundness ("theoretically sound"), and presentation \& contribution ("good"). We hope our detailed responses can address your concern, and could you please consider increasing your rating accordingly.
> ### I would have been happier to see if changing the divergence led to more significant boosts, if at all possible (basis the premise of providing cross category comparisons).
Thanks for your concern.
Our WKD provides a novel viewpoint of WD in knowledge distillation (KD), which have shown very competitive performance, compared to pre-dominant KL-Div based knowledge distillation methods. Considering that WD is overlooked and rarely studied in KD, we believe that **the promising results of our work can inspire great interest and the follow-up works have potentials to yield larger performance boosts over the KL-Div based ones.**
It is worth mentioning that **the performance improvement of WKD is non-trivial in the *commonly used settings*** which has long been studied. For example, in the setting (a) on ImageNet, for logit distillation WTTM (SOTA of 2024) improves NKD (SOTA of 2023) by 0.23 percentage points (PPs) while the latter surpasses DKD (SOTA of 2022) by 0.26 PPs; in contrast, WKD-L outperforms WTTM by 0.30 PPs; for feature distillation our WKD-F surpasses the SOTA method of ReviewKD by 0.89 PPs. Finally, we note that **our WKD considerably outperforms the competitors *in the new setting of across CNN and vision transformers***, as shown in Table 9(a) in Section C.2 of Appendix. | Rebuttal 1:
Rebuttal: Dear Reviewers, Area Chairs and Program Chairs,
We thank all reviewers again for their thoughtful and constructive comments, which are helpful in improving the quality of our paper. After carefully reading all comments and questions, we conducted additional experiments and discussions to address the reviewers' concerns. In the following, we provide a summary of the reviewers' comments and our responses.
**Firstly**, we are pleased to observe that majority of reviewers have highly recognized the innovativeness, clarity of presentation, experiments and performance of our work, initially recommending **1 accept** and **3 borderline accept**. Specifically, we are grateful for the following positive comments about our work:
* The paper provides **a fresh perspective** of WD in distillation (Reviewers ScyW and Zit9) and **a new algorithm** for knowledge distillation (Reviewer F2zQ); the use of parametric methods for feature distribution modeling, specifically Gaussian distributions, **is innovative.** (Reviewer Zit9).
* The proposed methods exhibit **strong performance** compared to recent work (Reviewer 5GY4), and show **obvious improvements** compared to baselines (Reviewer F2zQ). The experiments conduct **extensive comparison** (Reviewer 5GY4), **are decently presented** (Reviewer ScyW) and provide **sound ablations** (Reviewer F2zQ). The paper provides **extra experiments on distillation *across CNNs and transformers*** in appendix (Reviewer 5GY4).
* The paper is **easy to understand** (Reviewer ScyW), **easy to follow, written clearly** (Reviewer F2zQ) and **well structured** (Reviewer 5GY4). Additionally, the related works are introduced to **give good contexts** (Reviewer 5GY4).
---
**Secondly**, we provide point-by-point responses to all four reviewers, striving to address their concerns and questions. We will carefully revise our manuscript according to these discussions. Here, we list major experiments and discussions that have been added in responses.
> ### 1. Compared to other parametric methods for feature distribution modeling.
**Following the suggestion of Reviewer Zit9**, we compare to additional parametric distributions including Laplace and exponential distributions. **With KL-Div**, Gaussian distribution is a better option in capturing the feature distribution and knowledge transferring. **By leveraging the Wasserstein distance (WD)**, Gaussian distribution achieved further improvements in performance. For detailed results and discussions, please refer to our response to Reviewer Zit9 and Table R1 in the attached PDF. It is worth mentioning that **we have already compared to parametric methods including 1st- or 2nd-moments (spatial and channel)** in the paper (Table 3 (a) and Lines 267--276).
> ### 2. Additional experiments on self-KD settings.
**Following the suggestion of Reviewer F2zQ**, we implement self-KD in the framework of Born-again network (BAN) (Furlanello et al., ICML’18). Our **WKD-L performs much better than classical KD and the state-of-the-art method USKD**, suggesting the effectiveness of WKD in self-KD settings. The detailed results can be found in our response to Reviewer F2zQ and Table R2 in the attached PDF.
> ### 3. Ablation study on the regularizing constant of entropic term in Eq. (3).
**Following the suggestion of reviewer F2zQ**, we have conducted an ablation study on the regularizing constant of the entropic term $\eta$ in Eq. (3). Across a decently large range, the performance changes smoothly, indicating that our method is not sensitive to $\eta$. **Note that $\eta$ is always fixed across the paper.** The detailed results are provided in our response to Reviewer F2zQ and Figure R1 in the attached PDF.
> ### 4. Evaluation of training latency for Setting (b).
We have measured training latencies (ms) for Setting(a) in Table 5, where WKD-F is slightly faster than classical KD. **To resolve the concern of reviewer F2zQ**, we further measure the latencies for Setting (b), where the latencies of both methods increased while that of WKD-F is slightly higher. The detailed results are shown in response to Reviewer F2zQ and Table R3 in the attached PDF.
---
**Finally**, based on the constructive comments and suggestions from all reviewers, we will carefully revise our manuscript. We believe we can effectively address these concerns, and we kindly ask you to consider increasing your scores. We are looking forward to your positive feedback on our rebuttals, and please feel free to reach out if you have any additional questions.
Pdf: /pdf/9eaa0074d0c65ae914e251ef3dfb302d7d949709.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation | Accept (poster) | Summary: This paper introduces a novel framework that incorporates closed-loop feedback into vision-based control systems for robot manipulation tasks. The author proposes a text-conditioned video diffusion model for high-level reference path planning. The error measurement and feedback policy are obtained through an encoder-decoder architecture. This framework improves task performance over open-loop policies with a higher success rate and good instruction-following without the need for giant LLM models. The method is tested on proper benchmarks for language-based manipulation and has a thorough discussion of the method and results.
Strengths: 1. Novelty: This paper presents a novel feedback policy for visual input. The proposed measurable embedded space can help identify the error between the reference plan and the executed plan, and the encoder-decoder bridges the input-feedback-output modules. This design helps in long-horizon planning where open-loop control easily fails.
2. Presentation: This paper is well-written and provides a thorough discussion of the methods.
3. Potential impact: A feedback system for visual-based control has a potential impact in other robotics systems other than manipulators.
Weaknesses: 1. Generalization: how this framework can be generalized across different task/robot platforms/datasets? Especially for the sub-goal replan/transition, which seems to be very specific for each task. The replanning threshold is also designed by hand. Can it be determined during training?
2. The paper seems not to report the computational time for the model during testing, which is crucial for a close-loop policy in real-time. Especially with a diffusion-based policy, the sampling time could be long. Could provide more discussion on addressing this issue.
3. Do you expect your proposed architecture to scale well when pre-trained on a large dataset; what additional design might be needed?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How the dataset in CLOVER is obtained? What data helps train the CLOVER? More details will be welcomed.
2. How does CLOVER perform in environments with significant variations from the training scenarios? Have you tested the framework on completely new tasks or objects not seen during training?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author discussed the potential societal impact in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable review. We address your concerns below.
>*${\color{BrickRed}Question 1:}$* Generalization: how this framework can be generalized across different task/robot platforms/datasets? Especially for the sub-goal replan/transition, which seems to be very specific for each task. Can the threshold be determined during training?
**Generalize across platforms and datasets:** The idea of CLOVER (the closed-loop visuomotor control framework) itself is embodiment-agnostic and can be generalized across different platforms. For instance, the robot in CALVIN simulation is Franka Emika Panda, whereas ALOHA is employed for conducting real-world experiments. Therefore, the generalization is more related to data. In our work, we train CLOVER on the respective data from the two platforms and successfully conduct verification. It is also expected that training on large-scale cross-embodiment datasets may lead to zero-shot generalization.
**Sub-goal replan/transition generalization across tasks and models:** The thresholds for sub-goal replan/transition empirically generalize well across tasks. For the CALVIN benchmark, there are 34 different tasks in total and the evaluation suite involves 1,000 distinct instruction chains. We use the same set of hyperparameters for sub-goal replan and transition, which works well for all tasks and different visual encoders (Fig. 6(c)). Please also refer to *Fig. R3* in the rebuttal PDF.
**Threshold determination during training:** In our current implementation, the visual planner and the subsequent inverse dynamics-based policy are trained separately. Consequently, the required threshold can only be measured during the inference stage, when the two components are integrated. Future research will focus on training these models jointly in an end-to-end manner, potentially facilitating the threshold determination process during training.
>*${\color{BrickRed}Question 2:}$* Discussions on computational overhead
Thanks for the question. Please refer detailed discussion in the Q2 of *Global Rebuttal* session.
>*${\color{BrickRed}Question 3:}$* Do you expect your proposed architecture to scale well when pre-trained on a large dataset; what additional design might be needed?
Thanks for the insightful question.
- **Diffusion-based video generation models** have been proven to be scalable with the size of the dataset and network. All we need to do to scale up CLOVER is to collect more videos and extend the model with more channels and layers. Notably, the videos can be free of action labels, and even human videos would help as well. Works like UniPi and UniSim [16] have made very successful attempts towards building world simulators by scaling up the pre-training dataset, and it would be similar for CLOVER to perform such extensions as well.
- **Feedback-driven policy**: Its training is grounded in an inverse dynamics objective. While it necessitates action labels, it does not require high-level, task-specific knowledge for policy training. This characteristic facilitates the potential for training the policy on extensive, cross-embodiment datasets [38], thereby enabling few-shot cross-embodiment generalization.
>*${\color{BrickRed}Question 4:}$* How the dataset in CLOVER is obtained? What data helps train the CLOVER? More details will be welcomed.
We train our models exclusively on in-domain datasets. For simulation experiments, we utilize the ABC split of the CALVIN training dataset, which includes language instruction labels, to train the video diffusion model. Consecutive frames and their action labels are randomly sampled to train the low-level policy. Similar procedures are applied in real-world experiments. Additional training details are available in Appendix B2.
By decoupling the training of the visual planner from the feedback-driven policy, we allow the visual planner to be trained on diverse video datasets without action labels. As discussed in Question 3, this approach enables us to leverage internet-scale videos without action labels for training a robust and generalizable visual planner and to utilize large-scale cross-embodiment robot datasets (Open X-Embodiment [38]) for training a more effective low-level policy.
>*${\color{BrickRed}Question 5:}$* How does CLOVER perform in environments with significant variations from the training scenarios? Have you tested the framework on completely new tasks or objects not seen during training?
Thanks for the question. Zero-shot generalization to completely new tasks or objects stands as a significant challenge for current robotic research. Following the common experimental design, we mainly adopt certain variations during testing to demonstrate the generalization ability of our method.
As requested, we conduct *additional* real-world experiments in the rebuttal, by introducing entirely new objects — a yellow clay and a doll — alongside the primary interaction object. Please refer to the *Global Rebuttal* session for detailed results.
**Additional Analysis:**
- Our simulation experiment setup necessitates the policy's generalizability. In the CALVIN benchmark, the textures of the table and the positions of buttons, drawers, and sliders in the test environment differ from those in the training set, posing a substantial challenge to the generalizability of various policies. CLOVER demonstrates exceptional generalizability in this context.
- CLOVER relies on a video generation model that must reliably follow task instructions to produce corresponding visual plans. Currently, video generation models struggle to generalize to completely novel tasks (instructions) outside the training set. This challenge could potentially be mitigated by extensive pre-training on large-scale internet datasets, which we hope future open-sourced video foundation models will provide to the community. Our inverse dynamics-based policy is inherently task-agnostic and capable of generalizing to new tasks.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and the additional experiment results presented. My concern is addressed and I will raise my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thanks for considering our responses and recommending acceptance. We will update our paper according to our insightful discussions. | Summary: This paper proposes CLOVER, a closed loop visuomotor control framework that incorporates feedback mechanisms to improve adaptive robotic control. The framework consists of a text-conditioned video diffusion model for generating visual plans as reference inputs, an error measurement module to model the discrepancy between the current and goal states in the embedding space, and a feedback-driven controller that refines actions from feedback and initiates replans as needed.
Strengths: 1. The paper is well written and organized, clear to read.
2. Integrating closed loop feedback into robotic visuomotor control is interesting and useful, which can help deal with deviation and improve control accuracy.
3. Combines visual planning, error measurement, and feedback-driven control in a cohesive system.
Weaknesses: 1. Limited evaluation to real robot environment. The authors only show one experiment comprising three consecutive sub-tasks, which is not enough to test the generalizability and robustness of the approach across different environments and tasks.
2. The measurable embedding space for error quantification seems effective not be sufficiently validated. This component’s effectiveness could vary significantly in different scenarios, and the paper does not provide extensive evidence of its robustness.
3. The framework seems need a lot of computation, each step in the loop the diffusion model needs to generate a video, then get embeddings through two ViT-based encoders for RGB and depth respectively. I guess the high computation required could limit its real-time execution and practical applicability.
4. Further validation on a wider range of benchmarks and task environments is necessary to confirm the framework's generalizability.
5. Minor, in Figure 5, the authors show generated videos of four tasks conditioned on the same initial frame, but the first figure in each task looks different, better to make them the same.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Figure 3, the authors mention is obtained during a roll-out, what task and what environment are you using? How many tasks and environments did you test?
2. How long does it take to execute one task in simulation and real robot?
3. Can you show video demos of your experiments not just screenshots in the paper?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. High computation required for both training and real-time execution could limit practical applicability. The proposed framework's complexity might pose challenges in implementation.
2. The scalability of the framework to diverse and complex tasks remains uncertain.
3. The real-time execution of the feedback-driven controller might face latency issues, especially in dynamic and unpredictable environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful review and valuable comments. We address each question below.
>*${\color{BrickRed}Question 1:}$* Limited evaluation to real robot environment.
We conduct *new* real-world experiments to further validate the effectiveness and generalizability of CLOVER. Please refer to the *Global Rebuttal* session Q1.(1) and (2) above, and the qualitative analysis in *Fig. R2* of our rebuttal PDF.
>*${\color{BrickRed}Question 2:}$* The measurable embedding space for error quantification seems effective not be sufficiently validated. This component’s effectiveness could vary significantly in different scenarios, and the paper does not provide extensive evidence of its robustness.
We address the concern on robustness in the following perspectives and will update them in our revision:
- **Empirically robust to various tasks and scenes:** The CALVIN benchmark comprises 1,000 unique instruction chains across 34 different tasks. We select the most challenging scenario where the texture and position of objects to interact during testing is distinct from the training samples (ABC -> D). As outlined in Algorithm 1, our sub-goal transitions and replanning depend entirely on measurable embeddings. Thus, the performance improvement on CALVIN can be a reflection of the robustness of error qualification. For instance, using embeddings with poor robustness can cause the policy to get stuck on certain goals, as the measured distance keeps failing to meet the threshold required to trigger a sub-goal transition.
- **Robustness comparison with different embeddings for error quantification:** In the table below, We provide quantitative results of LIV which introduces tailored training objectives for dense reward learning (error measurement). CLOVER yields notable robustness compared with previous works. Further qualitative illustrations are provided in Appendix A.
\indent
|Methods|Task 1|Task 2|Task 3|Task 4|Task 5|Avg. Len.|
|-|-|-|-|-|-|-|
|LIV [61] (**newly added**)|70.8|48.2|29.2|18.2|10.2|1.77|
|CLIP Feature (Fig. 3 (a))|72.4|46.8|25.0|13.7|5.1|1.63|
|State Embedding (Ours, Fig. 3 (c))|**96.0**|**83.5**|**70.8**|**57.5**|**45.4**|**3.53**|
- **Model and parameter insensitive:** Figure 6(c) shows that we do not need to carefully cherry-pick hyperparameters about the measurement even for models with different architectures.
>*${\color{BrickRed}Question 3:}$* Computational overhead.
Thanks for the question. We provide a detailed discussion in the Q2 of *Global Rebuttal* session above.
>*${\color{BrickRed}Question 4:}$* Further validation on a wider range of benchmarks and task environments is necessary to confirm the framework's generalizability.
Thanks for the comment. On the one hand, the tested simulation benchmark CALVIN ABC-> D considers certain scene generalizations. The texture of the table and the position of buttons, drawers, and sliders in the tested environment is different from what the model has seen in the training set, which is quite challenging to the generalizability of different policies.
To further validate the generalizability of our framework, we conduct more real-world experiments under certain perturbations. *Please refer to the *submitted PDF* for detailed experiment setting and in the *Global Rebuttal* session.*
>*${\color{BrickRed}Question 5:}$* Minor, in Figure 5, the authors show generated videos of four tasks conditioned on the same initial frame, but the first figure in each task looks different, better to make them the same.
Thanks for the suggestion. In Fig. 5, we show the 2nd, 4th, 6th, and 8th frames of the generated video for simplicity. We have revised our manuscript where the initial frame has been positioned at the forefront.
>*${\color{BrickRed}Question 6:}$* In Figure 3, the authors mention is obtained during a roll-out, what task and what environment are you using? How many tasks and environments did you test?
The visualization in Fig. 3 is based on *a randomly sampled task* ("open the drawer") within the CALVIN D environment. We only show one task in Fig. 3 for simplicity and clarity. However, we note that *similar patterns are observed across all tasks*, where our measurable embedding space consistently shows monotonicity when approaching each sub-goal, compared to the other two counterparts. **We also provide a similar plot to Fig. 3, but encompassing all possible tasks within over 300+ independent rollouts, in Fig. R3 of our rebuttal PDF.** As shown in the figure, the measurability remains and generalizes well under all scenarios.
>*${\color{BrickRed}Question 7:}$* How long does it take to execute one task in simulation and real robot?
- **Simulation:** Our experiments are conducted using NVIDIA RTX 3090 GPUs. The low-level policy, with an input size of 196, operates at a frequency exceeding 70 Hz. The video diffusion model requires approximately 5 seconds to generate 8 video frames with a resolution of 128x128. It only takes around 11 seconds on average to complete a task in CALVIN simulation.
- **Real-world:** We perform experiments on an NVIDIA RTX 5000 Ada GPU with the input size of 128*96, where the low-level policy runs at more than 25Hz and the visual plan generation takes around 4.3 seconds. Overall, it takes around 38 seconds to complete the 3 consecutive tasks with our real-world robot.
We have added the detailed execution time in the revised version. Please also refer to our reply in *Global Rebuttal* for further information and comparison with other methods.
>*${\color{BrickRed}Question 8:}$* Video demos?
Thanks for the advice. Regrettably, we are not allowed to provide an external link to present the demonstration videos according to the NeurIPS rebuttal guidelines. We will release a project page with video demos when publicly releasing the work.
---
Rebuttal Comment 1.1:
Title: Comment
Comment: Thanks authors for the response to address the concerns, I increased my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for reviewing our responses and raising the score. We will update our paper based on your insightful comments. | Summary: The paper presents a novel framework named CLOVER. The proposed system aims to enhance the adaptability and robustness of robotic manipulation in long-horizon tasks by incorporating closed-loop control principles. CLOVER consists of three main components: a text-conditioned video diffusion model for generating visual plans, a measurable embedding space for accurate error quantification, and a feedback-driven controller that refines actions based on real-time feedback. The framework shows significant improvement in real-world robotic tasks and sets a new state-of-the-art performance on the CALVIN benchmark.
Strengths: 1. Originality: CLOVER introduces a unique combination of closed-loop control principles with generative models for robotic manipulation. The use of a text conditioned video diffusion model to generate visual plans is a creative approach that leverages recent advances in generative AI.
2. Quality: The methodology is well-articulated, detailing the design of each component in the framework. The inclusion of depth map generation and optical flow regularization to enhance the reliability of visual plans demonstrates the challenges in robotic manipulation.
3. Clarity: The paper is clearly written, with a logical structure that guides the reader through the motivation, methodology, and experimental validation. Figures and diagrams are effectively used to illustrate complex concepts and the overall system architecture.
4. Significance: The proposed framework addresses a critical challenge in robotics—improving the robustness and adaptability of robotic systems for longhorizon tasks. The notable performance improvements on the CALVIN benchmark and real-world tasks underscore the potential impact of CLOVER on the field of robotic manipulation.
Weaknesses: 1. Novelty of the error measurement approach: While the paper introduces a measurable embedding space for error quantification, it does not provide a detailed comparison with existing methods in terms of novelty and performance. Additional analysis or experiments comparing this approach with other state-of-the-art error measurement techniques would strengthen the contribution.
2. Clarification of the effect of closed-loop system: The article seems to tackle the long horizon manipulation tasks, but in the article there are no obvious evidences to prove it.
3. Scalability of the feedback-driven controller: The paper presents the feedback-driven controller as an effective solution for adaptive control. However, it lacks a discussion on the scalability of this approach for more complex and diverse task environments. Evaluating the controller's performance across a wider range of scenarios, particularly dynamic scenarios would provide a better understanding of its generalizability.
4. Lacked methodology clarification: Some technical aspects, particularly the algorithms and specific implementation details, are not as thoroughly explained as they could be. This makes it harder to fully grasp the intricacies of the proposed system.
5. Relatively limited experiments: The experiments conducted on the CALVIN dataset and real-world scenarios may not be sufficient to demonstrate the effectiveness of the proposed method. It is recommended to perform additional experiments in more long-horizon simulation environments such as RLBench and robosuite. Additionally, the real-world experiments are relatively limited, so further
experiments should be conducted to provide a more comprehensive evaluation.
6. Computational complexity: The incorporation of a video diffusion model and feedback mechanisms might introduce significant computational overhead. The paper does not provide an in-depth analysis of the computational requirements and how they impact real-time performance, which is crucial for practical deployment.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors provide more detailed training process used for the text-conditioned video diffusion model and the error measurement strategy, including training datas, network architecture?
2. How does CLOVER handle highly dynamic environments where changes occur rapidly and unpredictably?
3. How is the training data organized during the diffusion model generation phase of the visual plans? How many intermediate points are selected for each trajectory? I believe this needs to be explained, as it affects the performance of the final trajectory.
4. What are the computational requirements for running CLOVER in real-time, and how does it scale with more complex tasks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed some limitations, such as they validate CLOVER for simulation and real-world experiments by training the models heavily on the corresponding data. However, further discussion on the scalability of the framework and its adaptability to a wider range of robotic morphologies and environments would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your detailed review. We address your questions below.
> *${\color{BrickRed}Question 1:}$* Novelty of the error measurement approach. Comparison with other state-of-the-art error measurement methods.
As discussed in Sec. 2, existing works rely on additional detection models with manually set rules or high-cost VLMs to understand tasks' completion state. These impose restrictions on their applicability and efficiency.
Compared to works like LIV [61] which also tends to learn measurable embeddings, we do not incorporate additional contrastive objectives but investigate the inherent property of inverse dynamics-based policy. In the *new* results below, our method shows more robust performance for consecutive tasks. We will update the analysis in our revision.
|Methods|Task 1|Task 2|Task 3|Task 4|Task 5|Avg. Len.|
|-|-|-|-|-|-|-|
|LIV [61] (newly added)|70.8|48.2|29.2|18.2|10.2|1.77|
|State Embedding (Ours, Fig. 3(c))|**96.0**|**83.5**|**70.8**|**57.5**|**45.4**|**3.53**|
> *${\color{BrickRed}Question 2:}$* Clarification of the effect of closed-loop system (long-horizon manipulation).
Each test rollout in CALVIN consists of five distinct instructions (sub-tasks), which requires the policy to succeed in **chained** sub-tasks. Our framework's improved performance on CALVIN and long-horizon real-world tasks demonstrates its effectiveness in such settings. We acknowledge that current settings are limited compared to tasks like "making a cuisine". We will work to develop more complex long-horizon manipulation tasks in the future.
> *${\color{BrickRed}Question 3:}$* Scalability of the feedback-driven controller. Evaluating the controller's performance across a wider range of scenarios.
We provide *additional* real-world generalization experiments, including background distraction, object variation, and dynamic scenarios in *Global Rebuttal*.
**Scalability**: As discussed in Secs. 3.1 and 5, by decoupling planning and low-level control into a two-level hierarchy, the visual planner can learn from massive human videos to be robust world simulators [16]; our inverse dynamics-based policy offers greater robustness for multitask IL compared to BC-based policies [59]. Fig. R4 shows that the BC-based method RT-1 struggles with scene variation.
> *${\color{BrickRed}Question 4:}$* Lacked methodology clarification. Detailed training process used for the text-conditioned video diffusion model and the error measurement strategy.
We have detailed our model architecture and training protocol in Appendix B1 & B2, and we will publicly release all materials. In the rebuttal, we provide details below to address the question.
- **Video diffusion model:** Extending Imagen [32], we add input/output channels and separate noise injection for RGB and depth, enabling depth generation. Optical flow-based regularization is introduced using diffusion latent embeddings to create cost volumes, with a lightweight CNN serving as ContextNet, inspired by RAFT [39]. Training data consists of 8 randomly sampled frames with fixed intervals (5 in CALVIN, 20 in real-world experiments).
- **Feedback-driven policy:** We use MLPs for the action decoder, but Transformers or Diffusion Policy could be alternatives. The policy is trained with the inverse dynamics objective, using frames from random intervals in demonstrations to enhance its robustness.
> *${\color{BrickRed}Question 5:}$* Relatively limited experiments.
Thanks for the advice. RLBench and Robosuite focus on few-shot learning with tasks' horizons comparable to a single subtask in CALVIN. Thus, CALVIN is the most suitable benchmark for validating long-horizon capabilities. We will consider involving additional simulation experiments.
Please also refer to *Global Rebuttal* Q1(3) for additional real-world experiments.
> *${\color{BrickRed}Question 6:}$* Computational complexity.
We provide detailed discussion in *Global Rebuttal* Q2.
> *${\color{BrickRed}Question 7:}$* How does CLOVER handle highly dynamic environments?
Most existing works test in static environments where the interaction object remains stationary. Our work also focuses on similar settings and does not address environmental dynamics. We envision this as an important future direction and have incorporated the discussion below into the revision.
- **How to adapt to dynamic environments:** CLOVER could be adapted for dynamic settings by using multi-frame conditioning to capture velocity and acceleration information. Besides, incorporating the replanning mechanism (Appendix A) at each inference step could help handle new scenarios.
- **Additional experiments with unpredictable changes:** In the added tests, we randomly place and pick up a doll to create unpredictable visual changes. Thanks to the robustness of visual misalignment of our inverse dynamics-based policy, CLOVER surpasses RT-1 by a large margin.
> *${\color{BrickRed}Question 8:}$* How is the training data organized during the diffusion model generation phase of the visual plans?
During training, CLOVER extracts 8 frames at five-frame intervals, covering key task segments. This ensures better task alignment and fewer generation rounds during a test (1-2 rounds per task). As shown in Figs. 6(a) and 9, CLOVER effectively achieves sub-goals with adaptive steps and generalizes to different visual planners without specific adjustments. Detailed protocol is in Appendix B2 and will be further clarified.
> *${\color{BrickRed}Question 9:}$* Computational requirements. How does it scale with more complex tasks?
Please refer to *Global Rebuttal* for the discussion on computational requirements.
For more complex tasks, we may introduce LLMs to decompose the tasks into more manageable subtasks similar to HiP [29]. CLOVER can then generate visual plans for simplified tasks and perform real-time low-level control. The computational cost of this system will potentially scale linearly with the number of subtasks derived.
---
Rebuttal 2:
Comment: Thank you for your response. I've read the other reviews and the rebuttal. I’m keeping my initial score.
---
Rebuttal Comment 2.1:
Title: Official Comment by Authors
Comment: Thank you for the kind feedback. We appreciate the time and effort you have put into reviewing our work! We will update the manuscript based on your helpful review. | Summary: The authorsThe authors introduce CLOVER, a generalizable closed-loop visuomotor control framework that incorporates a feedback mechanism to improve adaptive robotic control. The method uses a text-conditioned video diffusion model to generate reference visual subgoals, an error measurement to quantify the difference between the current state and the planned sub-goal, and a feedback-driven controller via inverse dynamics model to ourput actions. Experiments on CAlVIN benchmark and ALOHA real robot shows the method outperforms all previous methods by a notable margin.
Strengths: 1. The problem of using closed-loop visuomotor control to solve long-horizon tasks is very meaningful.
2. The motivation and storyline is stated clearly. The paper is well-written and easy to follow.
3. The experiments on both the simulation and the real robot are convincing.
4. The ablation study of visual embedding, optical flow regularization, error measuring, multi modal fusion and sampling rate are very sufficient.
Weaknesses: See questions
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How is your work different from AVDC?
2. Could the authors elaborate more on the superiority of their design which use diffusion model generate reference frames and then use inverse dynamics model to get the actions, over the design like diffusion policy[60] which directly generate actions from diffusion model with a closed-loop style?
3. What is the orange dash line meaning in Figure 6(a)?
4. It seems author defines sub-goals as unreachable when the cosine distance between the state embeddings of consecutive are too far.
5. Would there be circumstance when the embedding is close but the sub-goal is actually not reachable due to singularity or other robot limit? Especially when testing the framework on real robot?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Did not test on generalization ability.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your careful review and we really appreciate your comments. We address your questions below.
>*${\color{BrickRed}Question 1:}$* How are you different from AVDC?
Our work differentiates from AVDC in the following aspects:
- **Model structure:** For the visual planning part, we introduce a novel constraint term to enhance its temporal consistency and endow the model with additional depth generation ability.
- **Action output mechanisms:** As discussed in the related work, AVDC infers actions from predicted video content with dense correspondence. It combines an off-the-shelf optical flow estimator with depth information to compute SE(3) transformations of the end-effector and transfer it into action control signals. However, CLOVER learns an inverse dynamic model to output actions and adaptively reach given goals. Paired with error measurement and feedback mechanism, CLOVER is more robust to the inherent instability of video diffusion models and can be adapted to broader scenarios.
- **Feedback policy:** The AVDC framework lacks a feedback mechanism within its visuomotor control system. In contrast, our work aims to explicitly quantify errors and incorporate them into a unified framework for long-horizon manipulation.
We have added the above discussion to the revision.
>*${\color{BrickRed}Question 2:}$* Could the authors elaborate more on the superiority of their design which use diffusion model generate reference frames and then use inverse dynamics model to get the actions, over the design like diffusion policy which directly generate actions from diffusion model with a closed-loop style?
Thanks for the question. Our decoupled paradigm has two main benefits:
- As mentioned in Section 4.2, our diffusion-based video model can effectively understand high-level instructions and generate corresponding plans. This alleviates the learning complexity of policy by decoupling the planning and control into a two-level hierarchy, where the low-level policy can be trained in a task-agnostic manner (no need for high-level instruction labels).
- The inverse dynamic model (IDM) can be established based on the generated sub-goals, which maps the state transitions to actions. In contrast, diffusion policy uses the current observation only following the paradigm of behavior cloning (BC). Previous work [59] proves that IDM could be more performant and robust than BC, and scales best with the pretraining dataset.
With the above benefits, we show the stronger performance of our method in Table 1, compared to the diffusion policy-based model "3D diffusion actor" [52], even though it utilizes more sensor input.
>*${\color{BrickRed}Question 3:}$* What is the orange dash line meaning in Figure 6(a)?
We intend to highlight the highest performance achieved by open-loop methods for clearer comparison to closed-loop counterparts. We have revised the subfigure.
>*${\color{BrickRed}Question 4:}$* It seems author defines sub-goals as unreachable when the cosine distance between the state embeddings of consecutive are too far. Would there be circumstances when the embedding is close but the sub-goal is actually not reachable due to singularity or other robot limits? Especially when testing the framework on real robots?
Thanks for the insightful question. Injecting physical constraints or considering potential robot limitations in video generation models (world models) remains a longstanding challenge in visual plan generation. Even models like Sora, which are trained with extensive data and computational resources, have been found to fail in fully adhering to physical laws. However, the reliability of visual planners can be significantly improved by training on large-scale, real-world robot demonstrations that are inherently physically plausible. The generalizability of such models to unseen embodiments during the training phase remains an area that requires further exploration.
In practice, the lack of physical feasibility in visual planning can result in scenarios where a robotic arm stops at a certain position for an extended period without the error reaching the predefined threshold. To empirically enhance the reliability of visual planning, we can implement human-defined rules to detect these conditions and trigger replanning across multiple rounds of generation, thereby addressing this limitation.
>*${\color{BrickRed}Question 5:}$* Did not test on generalization ability.
Thank you for the comment. In the current draft, the benchmark CALVIN ABC-> D itself certifies the scene generalization ability of the learned policy [45]. Specifically, the texture of the table and the position of buttons, drawers, and sliders in the tested environment are different from those in the training set, which challenges the model's generalization in unseen scenarios. In real-world robot experiments, we further validate the position generalization by putting the fish to be grasped at different positions for each rollout.
During the rebuttal, we provide *additional* real-world results to verify CLOVER's robustness against background distraction and object variation. Please refer to Fig. R1 and R2 in *submitted PDF* for detailed experiment settings and the results given in the *Global Rebuttal* session Q1(1) and (2). | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We thank all the Reviewers for their detailed and helpful comments on our work. We appreciate the Reviewers for acknowledging our strengths and contributions, such as a **creative and novel feedback policy and cohesive system design** (DjFL, hKyy, eUsr), a useful and **meaningful research problem** of integrating closed-loop feedback into visuomotor control (hfKW, DjFL, hKyy, eUsr), **clear motivation and well-articulated methodology** (hfKW, DjFL), **sufficient ablations and notable improvements** (hfKW, DjFL), and **well-written** (hfKW, DjFL, hKyy, eUsr).
During the rebuttal phase, we have made diligent efforts to address the concerns raised by the Reviewers, add new ablation studies and real-world experiments, provide discussions on computational complexity, and add clarity and depth to address any ambiguities. Our responses to specific concerns are detailed below. We thank you all for the opportunity to improve our work with your constructive feedback.
Best regards,
The Authors
***
**Here we refer to two general questions:**
>***${\color{BrickRed}Question 1}$ (Reviewer hfKW, Reviewer DjFL, Reviewer hKyy):* More real-world experiments, or experiments on generalization.**
We conduct additional generalization evaluation with our original long-horizon task and perform two more tasks to testify CLOVER against our baselines. From the results below, it can be observed that CLOVER greatly outperforms existing works under distractions and in more challenging tasks. Please refer to the submitted rebuttal PDF for the illustrative experiment settings. We will add the results to our revision.
- **(1)** (hfKW Q.5, DjFL Q.3, hKyy Q.1, eUsr Q.5) **Robustness evaluation with visual distractions.** (See Fig. R1(a) in the rebuttal PDF for a detailed setting)
|Methods|Task 1|Task 2|Task 3|Avg. Len.|
|-|-|-|-|-|
|ACT [53]|13.3|0|0|0.13|
|R3M [54]|20.0|0|0|0.20|
|RT-1 [49]|40.0|6.7|0|0.47|
|CLOVER (Ours)|**73.3**|**66.7**|**6.7**|**1.47**|
- **(2)** (DjFL Q.3, hKyy Q.4) **Robustness evaluation with dynamic scene variation.** (See Fig. R1(a) in the rebuttal PDF for a detailed setting; Due to limited time, we choose the competitive RT-1 for comparison only in this experiment.)
|Methods|Task 1|Task 2|Task 3|Avg. Len.|
|-|-|-|-|-|
|RT-1 [49]| 33.3|0|0|0.33|
|CLOVER (Ours)|**80.0**|**53.3**|**20.0**|**1.54**|
- **(3)** (DjFL Q.5, hKyy Q.1) Experiments with **two new tasks**: Pour shrimp into the plate & Stack bowl. (See Fig. R1(b) in the rebuttal PDF for a detailed setting)
|Methods|Pour Shrimp|Stack bowl|Avg.|
|-|-|-|-|
|ACT [53]|33.3|46.7|40.0|
|R3M [54]|46.7|53.3|50.0|
|RT-1 [49]|**80.0**|66.7|73.4|
|CLOVER (Ours)|**80.0**|**86.7**|**83.4**|
>***${\color{BrickRed}Question 2}$ (Reviewer DjFL, Reviewer hKyy, Reviewer eUsr):* Computational complexity**
**Statistics:** By default, a ViT-Base (86M) backbone is employed to encode RGB data, and a ViT-Small (22M) backbone is utilized for the extraction of depth features. As indicated by Table 4, substituting the ViT-Base encoder in the RGB branch with a ViT-Small encoder also results in comparably exceptional performance. When utilizing the aforementioned configuration and running on a Nvidia RTX 5000 Ada GPU, the inference time of the proposed policy model is **less than 0.04 seconds**, which means the policy operates in real-time at a frame rate **greater than 25Hz**. We conduct simulation experiments with a server equipped with an RTX 3090 GPU, where our policy can run at **over 70Hz**. In the rebuttal, we provide the following statistics to enable a full grasp of the computational complexity of CLOVER and other competitive methods.
|Methods|Performance|Video Generation (s)|Policy (s)|Avg. Time to complete a task (s)|# Params. in total (M)|
|-|-|-|-|-|-|
|RoboFlamingo [50]|2.48|/|0.072|**8**|3000|
|SuSIE [15]|2.69|9|0.15|49|400|
|CLOVER (Ours)|**3.53**|**5**|**0.013**|11|**200**|
Note that our diffusion model serves as a high-level planner, i.e., *activated only when subtask starts and replans*. Therefore, it does not require real-time inference. Unlike image editing models like SuSIE, our video generation model can cover a larger span of task rollouts, thus fewer inferences are needed. As shown in the table above, CLOVER achieves better performance with much lighter computational requirements.
**Analysis:** We agree that the main bottleneck is in video generation. Therefore, we made multiple designs to enhance efficiency:
- Illustrated in Sec. 4.4, we use a 20-step DDIM sampler for balanced efficiency and performance. Yet, a 10-step process halves video generation time and remains competitive performance, achieving Avg. Len. 3.21 on CALVIN. In contrast, SuSIE uses a larger model with 50 denoising steps, while it yields inferior performance (Avg. Len. 2.69). Integrating advanced samplers like DPMSolver could further reduce the steps needed.
- Based on Imagen, our diffusion model downscales the channel dimension and limits the attention blocks to only 1/8 and 1/16 downsampled feature maps, cutting the attention module's quadratic computational load. We build a compact diffusion network with merely 72M parameters to minimize latency.
We will add the above results and discussions to the revision.
***
*Please refer to the rebuttal modules below for our point-to-point responses to each reviewer.*
Pdf: /pdf/edd02cdb24842ac6e13575fbd60d9517a3301b9c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sample-Efficient Constrained Reinforcement Learning with General Parameterization | Accept (poster) | Summary: The paper studies constrained Markov decision processes. Using momentum acceleration, the paper develops a primal-dual natural policy gradient-based algorithm and shows that it achieves an \epsilon global optimality with O(\epsilon^-3) samples. The policy class considered in the paper is the generic Fisher non-degenerate policies.
Strengths: * The paper is the first in the literature that derives the O(\epsilon^-3) sample complexity for constrained MDP under more general policy parameterizations than the softmax.
* The presentation of theoretical results are clear and easy to follow. I appreciate that the author presents a complete flow of proof in the main body of the paper.
Weaknesses: I mainly have the following concern, and if it can be addressed, then I am willing to adjust my rating. In literature [25], a sample complexity of O(\epsilon^-2) is derived for unconstrained MDP. To me, it seems it is not impossible to translate this result to the constrained setting by adding a primal-dual component, and if this can be done, I think the sample complexity should be O(\epsilon^-3) or so. I wonder could the author discuss the feasibility of this generalization (and perhaps the underlying challenges if it is not generalizable).
Technical Quality: 2
Clarity: 3
Questions for Authors: * I hope the author could discuss the concern I raised in the weakness section.
* Since the paper actually studies Fisher non-degenerate policies, I think a more common (and precise) way is to directly use the name "Fisher non-degenerate policies". Using the name "general parameterization" is a little bit misleading.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The author has discussed the limitation of the paper briefly in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **On the Generalizability of [25]:** The approach explored in [25] does not generalize to CMDPs. To understand the reasoning, note that one of the preliminary steps in [25] is showing a descent-like inequality. For example, Lemma 6 in [25] reads as follows.
\begin{align}
\delta\_{t+1} \leq (1-\kappa \gamma_t)\delta_t +R_t, ~~R_t=\mathcal{O}\left(\gamma_t\mathbf{E}\left[\Vert e_t\Vert\right]+ \gamma_t^2 + \sqrt{\epsilon\_{\mathrm{bias}}}\gamma_t\right)
\end{align}
where $\delta_t = J^*- J(\theta_t)$ is the optimality gap, $e_t$ is the difference between the gradient estimate and its actual value, $\gamma_t$ is a time-dependent learning rate, and $\kappa<1$ is some problem-dependent constant. Next, using recursion, and an appropriate choice of $\gamma_t$, the following result is derived (Lemma 12 in [25]).
\begin{align}
\delta_T \leq \mathcal{O}\left(\frac{\delta_0}{(T+1)^2}\right)+ \dfrac{\sum_{t=0}^{T-1}R_t(t+2)^2}{(T+1)^2}
\end{align}
The authors of [25] could prove the convergence of $\delta_T$ up to a factor of $\sqrt{\epsilon_{\mathrm{bias}}}$ precisely because each term in $R_t$ is accompanied by $\gamma_t$ which is chosen to decrease sufficiently fast with $t$. Following Lemma 6 of [25], we can prove the following bound for a primal-dual algorithm.
\begin{align}
\bar{\delta}^{L}\_t \leq (1-k\gamma_t)\delta_t^L + R_t^L
\end{align}
where $\gamma_t$ is the time-dependent primal rate, $\delta_t^L = J_r^* - J_{\mathrm{L}}(\theta_t, \lambda_t)$ is the Lagrange optimality gap, $R_t^L$ has a dependence on $\gamma_t$ that is similar to that of $R_t$, and $\bar{\delta}\_t^L = J_r^* - J_{\mathrm{L}}(\theta_{t+1}, \lambda_t)$. To obtain a recursion on $\delta_t^L$, one can use the following.
\begin{align}
\delta_{t+1}^L - \bar{\delta}\_t^L = J_{\mathrm{L}}(\theta_{t+1}, \lambda_t) - J_{\mathrm{L}}(\theta_{t+1}, \lambda_{t+1}) = (\lambda_t - \lambda\_{t+1})J_c(\theta\_{t+1}) = \mathcal{O}(\zeta_t)
\end{align}
where $\zeta_t$ is the time-dependent dual learning rate. Combining the above two results, we have the following recursion.
\begin{align}
\delta_{t+1}^{L} \leq (1-k\gamma_t)\delta_t^{L} + R_t^L + \mathcal{O}(\zeta_t)
\end{align}
Note that, unlike $R_t^L$, $\zeta_t$ has no dependence on $\gamma_t$. Thus, to establish convergence via recursion, one must choose $\zeta_t$ to decrease faster than $\mathcal{O}(t^{-1})$. On the other hand, the process of decomposing the optimality gap and constraint violation rates from the Lagrange convergence introduces a term of the form $\mathcal{O}(1/(t\zeta_t))$ (see the proof of Theorem 1 in our paper) which requires $\zeta_t$ to decrease slower than $\mathcal{O}(t^{-1})$ to obtain meaningful results. Therefore, any choice of $\zeta_t$ will either disrupt the convergence of the Lagrange function or blow up the constraint violation rates. This negates the possibility of generalizing the approach of [25] to CMDPs via primal-dual algorithms.
**On Fisher Degenerate Policies:** Thank you for this comment. We will modify the terminology in the revised paper accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the author for the explanations. I am happy to increase my score to 6 : )
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are pleased to hear that our response addressed your key comments. Please feel free to let us know if there are any further comments or concerns that we can address. | Summary: The paper considers the constrained MDP setting. Given a target error of $\epsilon > 0$, the paper provides a primal-dual algorithm on that will return an epsilon optimal policy making at most epsilon constraint violations. The claim is that the algorithm solves such problem with O($\epsilon^{-3}$) sample complexity.
Strengths: The idea of incorporating momentum acceleration to a primal-dual algorithm seems interesting.
Weaknesses: The Slater's constant $c_{slater}$ is missing from the big O-notations in your $H$ and $K$ terms. The slater's constant is an unique quantity in CMDP and should be clearly stated in your sample complexity. In the relaxed-feasibility problem, where the algorithm returns a policy that is allowed to have some small constraint violation (such as epsilon constraint violation), the Slater's constant can be smaller than the target error epsilon. In this case, the Slater's constant can be set to be the $\epsilon$. Your paper is in this relaxed-feasibility setting.
By writing the Slater's constant $c_{slater}$ explicitly in the big O-notations, we see that $H = \mathcal{O}((1-\gamma)^{-3} \epsilon^{-1} c_{slater}^{-1})$ and $K = \mathcal{O}((1-\gamma)^{-4} \epsilon^{-2} c_{slater}^{-2})$. If the Slater's constant = epsilon, then we see that $H \propto \epsilon^{-2}$ and $K \propto \epsilon^{-4}$ making the total sample complexity to be proportional to $\epsilon^{-6}$. Then, the result is independent of the Slater's constant.
Technical Quality: 3
Clarity: 3
Questions for Authors: Your result stated in Lemma 6 is based on a previous result provided by [6]. In [6], the result relies on an oracle that, when given an input, provides a noisy unbiased gradient of a function. However, your algorithm uses a sampling procedure to obtain a gradient estimate, which appears to rely on access to a generative model (Kakade (2003) On the sample complexity of reinforcement learning). Are you considering the setting of having access to a generative model, or are you assuming access to an oracle as in [6], for which your Lemma 6 is based? If you are considering the setting of having access to a generative model, the paper should address the statistical challenges associated with sampling, and the sample complexity should include the number of queries to a simulator, incorporating $T$ into the sample complexity. Therefore, the result from [6] in Lemma 6 does not immediately apply. If you are considering the setting of having access to an oracle, it is not clear why Algorithm 1 is necessary.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: By stating the Slater's constant in the sample complexity, the sample complexity is in the order of epsilon to the minus 6 instead of minus 3 so to be independent of the Slater's constant. See comments in the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **On Slater's Constant:** We would like to clarify that, for a given problem instance, Slater's constant is fixed and outside of the control of the learner. On the other hand, the target optimality error $\epsilon$ can be chosen by the learner to be arbitrarily small. Therefore, it would be inappropriate to assign $c_{\mathrm{slater}}=\mathcal{O}(\epsilon)$. Note that, in all previous works on CMDPs, $c_{\mathrm{slater}}$ is treated as a constant, rather than a function of $\epsilon$, e.g., see [1], [2].
**On Query Complexity of the Simulator:** Our algorithm does not assume access to a gradient oracle. Rather, we assume access to a generative model that yields a sample of the next state $s'$ given the current state-action pair $(s, a)$ following $s'\sim P(s, a)$. We use this generator to obtain an unbiased sample of the gradient via Algorithm 1. It is easy to check that the expected number of queries to the generator to generate one gradient sample is $T=\mathcal{O}((1-\gamma)^{-1})$. We do account for this factor in computing the overall sample complexity of our algorithm. In particular, note that the lengths of inner and outer loops in Algorithm 2 are $H = \mathcal{O}((1-\gamma)^{-3}\epsilon^{-1})$ and $K=\mathcal{O}((1-\gamma)^{-4}\epsilon^{-2})$ respectively, which leads to an overall sample complexity of $\mathcal{O}(THK) = \mathcal{O}((1-\gamma)^{-8}\epsilon^{-3})$. We shall elaborate on the impact of $T$ on the sample complexity in the revised manuscript.
**References:**
[1] Bai, Qinbo, Amrit Singh Bedi, and Vaneet Aggarwal. "Achieving zero constraint violation for constrained reinforcement learning via conservative natural policy gradient primal-dual algorithm." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
[2] Ding, D., Zhang, K., Basar, T. and Jovanovic, M., 2020. Natural policy gradient primal-dual method for constrained markov decision processes. Advances in Neural Information Processing Systems, 33, pp.8378-8390.
---
Rebuttal 2:
Comment: The problem that you are solving is relaxed feasibility, where the returned policy parameterized by $\theta_0,...,\theta_{K-1}$ is allowed to make some constraint violation, i.e. $E [ 1/K \sum_{k=0}^{K-1} J^{\pi^*}\_{c,rho} - J\_{c,\rho} (\theta_k) ] \leq O(c_{slater}) + \epsilon $ (this is stated in your Theorem 1). In this case, the target error $\epsilon$ can be chosen to be large and because you are allowing for some constraint violation, it is appropriate to assign $c_{slater}$ to $\epsilon$. In doing so, then the bound will be independent of the slater's constant, but it will inflate the bound to $\epsilon^{-6}$. The effect of the slater's constant should be reflected in the bound, and not be treated as a constant. If other papers treat the slater's constant as a constant, it doesn't mean it is appropriate to do so. Your H and K are missing the slater's constant and ignoring its effect. Having slater's constant also differentiates CMDP from MDP, for more details, please see Vaswani, S., Yang, L., and Szepesvári, C. (2022). Near-optimal sample complexity bounds for constrained mdps. I ask the authors to state these distinctions more clearly.
---
Rebuttal 3:
Title: Response to the Comment
Comment: Thank you for the comment. We would like to clarify the following points.
**P1:** Theorem 1 says that the expected objective optimality gap and the constraint violation are of the following form.
$$ \mathrm{Optimality Gap}\leq \sqrt{\epsilon_{\mathrm{bias}}} + \epsilon,$$ $$\mathrm{ConstraintViolation} \leq (1-\gamma)c_{\mathrm{slater}} \sqrt{\epsilon_{\mathrm{bias}}} + \epsilon \overset{(a)}{\leq} \sqrt{\epsilon_{\mathrm{bias}}} + \epsilon $$
where inequality (a) follows from the fact that $c_{\mathrm{slater}}\in (0, \frac{1}{1-\gamma}]$ (see Assumption 1). Note that both $\epsilon_{\mathrm{bias}}$ and $c_{\mathrm{slater}}$ are problem-dependent constants and have no functional dependence on $\epsilon$. We believe that our current phrasing "our algorithm achieves $\epsilon$ optimality gap and $\epsilon$ constraint violation" is the source of this confusion. Instead, if we modify it to "our algorithm achieves $\epsilon$ optimality gap and $\epsilon$ constraint violation up to a factor of $\sqrt{\epsilon_{\mathrm{bias}}}$", the impact of both the problem-dependent constants and $\epsilon$ are separately acknowledged, and one need not force them to be of the same order. It also emphasizes the fact that due to incompleteness of the parameterized policy class i.e., $\epsilon_{\mathrm{bias}}>0$, it is impossible to reach arbitrarily close to the optimal point, no matter how small $\epsilon$ is chosen. Unlike complete policy classes such as direct tabular parameterization (where $\epsilon_{\mathrm{bias}}=0$), it is one of the well-known disadvantages of general parameterization.
**P2:** We also acknowledge the fact that the choice of $H$, $K$, and the overall sample complexity is dependent on $c_{\mathrm{slater}}$. Currently, such dependence is shown only in the appendix. We can explicitly state this dependence in the main text (Theorem 1) and make it visible to the readers.
**P3:** To make a fair comparison, we will modify Table 1 to show the impact of $c_{\mathrm{slater}}$ on the sample complexities provided by other existing works (if such explicit dependence is available in the paper).
---
Rebuttal Comment 3.1:
Comment: To clarify, my intended point is as follows. As you mentioned, ConstraintViolation $\leq \sqrt{\epsilon_{bias}} + \epsilon$. The Slater's constant can be much smaller than this suboptimality bound of $\sqrt{\epsilon_{bias}} + \epsilon$, then it is reasonable to set $c_{slater} = \epsilon$. In this case, the sample complexity becomes independent of Slater's constant, but the sample complexity would then be $\propto \epsilon^{-6}$ since both $H$ and $K$ also depend on this Slater's constant. It's this distinction that I would like to emphasize.
---
Reply to Comment 3.1.1:
Comment: Thanks a lot for this. We will provide explicit dependence of the result on Slater's constant in the final version as suggested. That way, the result can be seen in different asymptotes, including when epsilon is much smaller than Slater's constant or vice versa. | Summary: This paper studied the sample complexity of learning a discounted-reward MDP with a discounted cost constraint under a general policy parameterization.
It proposes a policy-gradient-type of algorithm that combines natural policy gradient and an accelerating stochastic gradient descent algorithm proposed in [6].
Then the paper proves that with $O(1/\epsilon^3)$ samples, the policy learned by this algorithm violates the constraint by at most $\epsilon$, and is optimal up to $\epsilon_{bias} + \epsilon$ error, where $\epsilon_{bias}$ is a constant coming from policy parameterization.
The sample complexity of this policy improves upon the prior SOTA, which was $O(1/\epsilon^4)$.
Strengths: The paper is well-written. The main contribution, the algorithm, the key idea of the proof and the proof structure are clearly presented.
The improvement of sample complexity from $O(1/\epsilon^4)$ to $O(1/\epsilon^3)$ is non-trivial, following from a clever observation given in lemma 3.
The analysis looks solid, though I do have questions about a few minor details, see the questions 1 and 2 below.
Weaknesses: 1
The “sample efficient” in the title seems to overclaim the contribution of the paper, since there is still an order gap between the sample complexity of the algorithm in this paper, $O(\epsilon^{-3})$, and the lower bound, $O(\epsilon^{-2})$.
2
It is not entirely accurate to say that the algorithm achieves “$\epsilon$ global optimality gap” and “$\epsilon$ constraint violation”, as the actual bounds in Theorem 1 contain constant terms involving $\epsilon_{bias}$.
Although $\epsilon_{bias}$ is common in the analysis of RL algorithms with general parameterizations, it is crucial to discuss whether they are tight and optimal, how they compare to the prior work, and whether they are truly negligible when the policy is parameterized by a neural net of a reasonable scale.
At the very least, the sense in which your policy is epsilon-optimal should be clarified in a prominent part of the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1 I believe the definition of L in (9) has a typo: the square should be inside the expectation, rather than outside the expectation, so that its gradient with respect to w is given by (10).
Does this typo affect the proof?
2 The logic from (75) to (76) is not very clear. How did you get rid of the first term on LHS of (76), and where does $\lambda^*$ comes from?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There is no negative social impact. A discussion of limitation is included in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Weakness 1:** The term "sample-efficient" simply acknowledges the fact that the sample complexity of our algorithm is better than that of all existing algorithms in the general parameterized CMDP setup. Note that we refrain from using the term "sample optimal" since there is still a gap between our bound $\mathcal{O}(\epsilon^{-3})$ and the theoretical lower bound of $\Omega(\epsilon^{-2})$.
**Response to Weakness 2:** Thank you for mentioning the inappropriate use of the phrases "$\epsilon$ optimality gap" and "$\epsilon$ constraint violation". In the revised paper, we will change these phrases to emphasize the presence of the $\epsilon_{\mathrm{bias}}$ term. Moreover, we will also discuss the precise meaning of optimality in the introduction of our paper.
**Response to Question 1:** We apologize for the confusion. The notation $\mathbf{E}[\cdot]^2$ in (9) was intended to denote $\mathbf{E}[(\cdot)^2]$. We will add additional parentheses in the revised manuscript to make the notation unambiguous.
**Response to Question 2:** The equations $(75)-(76)$ use the fact that the average of the occupancy measures is still an occupancy measure. Therefore, the average of the occupancy measures corresponding to the policies $\\{\pi_{\theta_k}\\}\_{k=1}^K$ would be an occupancy measure corresponding to some policy, say $\bar{\pi}$. Since the value function $J_g^{\pi}$ can be written as a linear function of the occupancy measure corresponding to the policy $\pi$, the transition from $(75)$ to $(76)$ is immediate. Moreover, the optimal dual variable $\lambda^*$ comes into the picture due to the use of Lemma 10. We shall expand these arguments in the revised manuscript.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the answering all of my questions. | Summary: The paper studies the sample complexity of the constrained Markov Decision Process (CMDP), and derives an algorithm that attains the $O(\epsilon^{-3})$ that improves the SOTA sample complexity bound for CMDP by a factor of $O(\epsilon^{-1})$.
Strengths: CMDP with general parameterization is a very challenging problem, and thus there have not been many papers making progress in this challenging setting compared to the tabular CMDP setting. The authors modify the classic primal-dual natural policy gradient (NPG) framework by leveraging an accelerated stochastic gradient descent (ASGD) method to improve the SOTA sample complexity bound by a factor of $O(\epsilon^{-1})$.
Weaknesses: Even though the authors claim they have improved the SOTA sample complexity bound for CMDP with general parameterization, I still think it is hard to say the authors have "beat" the previous results. $\epsilon$ is not the only factor that matters for the sample complexity bound for discounted infinite-horizon MDPs, since $\gamma$ is also another significant factor. Unfortunately, it seems that the authors' bound has a dependency of $O(1/(1-\gamma)^{-8})$, so that makes the sample complexity bound read $O(1/(1-\gamma)^8\epsilon^3)$. Compared to a prior work [1] which has a sample complexity bound of $O(1/(1-\gamma)^6\epsilon^4)$, it is clear the two bounds are not necessarily comparable due to one bound is superior in terms of the order of $\epsilon$ and the other superior in terms of the order of $\gamma$.
Also, [1]'s algorithm has zero constraint violation while the authors' algorithm permits $O(\epsilon)$ constraint violation, which is a worse result.
Hence, unless the authors can clarify their result has improvement in $\epsilon$ and $\gamma$ and constraint violations (or at least provide stronger justifications that their algorithm requires a smaller number of samples to achieve near-optimality), it is hard to evaluate the value of the authors' result.
[1] Bai, Qinbo, Amrit Singh Bedi, and Vaneet Aggarwal. "Achieving zero constraint violation for constrained reinforcement learning via conservative natural policy gradient primal-dual algorithm." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: As mentioned, the major weakness of the paper is that you have overlooked the sample complexity bound's dependency in $\gamma$ and it seems you have a worse constraint violation result. Can you update Table 1 with sample complexity bounds having dependency in $\gamma$ and with constraint violations? More importantly, can you justify why your result has seemingly worse dependency in $\gamma$ and constraint violations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The major limitation of the paper is the lack of proper comparison of the sample complexity bounds with prior works.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Fairer Comparison of Sample Complexities:** We would like to point out that the sample complexity of $\tilde{\mathcal{O}}((1-\gamma)^{-6}\epsilon^{-4})$ reported by [1] in their AAAI version is erroneous. The authors have subsequently corrected their result and the sample complexity has been modified to $\tilde{\mathcal{O}}((1-\gamma)^{-8}\epsilon^{-4})$ in the arXiv version (updated May 2024). Please see Table 1 in [2] to verify our claim. Therefore, in comparison to the state-of-the-art (SOTA), we improve the sample complexity in terms of the optimality gap $\epsilon$ without sacrificing the powers of $(1-\gamma)$. We also note from Table 1 in [2] that other existing results on general parameterized CMDPs are worse than our result both in terms of $\epsilon$ and $(1-\gamma)$. Following the reviewer's suggestion, we will revise the comparison table in our manuscript to exhibit the dependence of the sample complexity on both $\epsilon$ and $(1-\gamma)$. The updated table is provided below for a quick reference.
| Algorithm | Sample Complexity | Parameterization |
| :------------: | :-----------------------: | :-------------------: |
| PMD-PD [6] | $\mathcal{O}(\epsilon^{-3})$ | Softmax |
| PD-NAC [7] | $\mathcal{O}(\epsilon^{-6})$ | softmax |
| NPG-PD [5] | $\mathcal{O}((1-\gamma)^{-5}\epsilon^{-2})$ | Softmax |
| CRPO [4] | $\mathcal{O}((1-\gamma)^{-7}\epsilon^{-4})$ | Softmax |
| NPD-PG [5] | $\mathcal{O}((1-\gamma)^{-8}\epsilon^{-6})$ | General |
| CRPO [4] | $\mathcal{O}((1-\gamma)^{-13}\epsilon^{-6})$ | General |
| C-NPG-PDA [2] | $\tilde{\mathcal{O}}((1-\gamma)^{-8}\epsilon^{-4})$ | General|
| **PD-ANPG (This Work)** | $\tilde{\mathcal{O}}((1-\gamma)^{-8}\epsilon^{-3})$ | General |
| Lower Bound [8] | $\Omega((1-\gamma)^{-5}\epsilon^{-2})$ | $-$ |
The dependence of the sample complexities of PMD-PD and PD-NAC on $\gamma$ is not depicted in the papers.
**Zero Constraint Violation:** Transforming an $\epsilon$ constraint violation (CV) result to a zero CV result is a relatively straightforward exercise that is routinely employed in the literature. The main idea is to choose a slightly modified cost function $c'=c-(1-\gamma)\delta$ and demonstrate an $\epsilon$ CV result for $c'$. This can be done by following the same procedure stated in our paper that is used for producing a similar result for $c$. Observe that $J^{\pi}_{c'}=J^{\pi}_c-\delta$ for any policy $\pi$. Therefore, choosing an appropriate value of $\delta$, one can prove that the obtained $\epsilon$ CV result for $c'$ implies a zero CV result for $c$. Application of this "conservative constraint" technique is abundant in the CMDP literature, e.g., see [1], [3]. In the revised manuscript, we will provide an outline explaining how to derive the zero CV result.
**References:**
[1] Bai, Qinbo, Amrit Singh Bedi, and Vaneet Aggarwal. "Achieving zero constraint violation for constrained reinforcement learning via conservative natural policy gradient primal-dual algorithm." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.
[2] https://arxiv.org/pdf/2206.05850
[3] Ding, D., Wei, C.Y., Zhang, K. and Ribeiro, A., 2024. Last-iterate convergent policy gradient primal-dual methods for constrained MDPs. Advances in Neural Information Processing Systems, 36.
[4] Xu, T., Liang, Y. and Lan, G., 2021, July. CRPO: A new approach for safe reinforcement learning with convergence guarantee. In International Conference on Machine Learning (pp. 11480-11491). PMLR.
[5] Ding, D., Zhang, K., Basar, T. and Jovanovic, M., 2020. Natural policy gradient primal-dual method for constrained Markov decision processes. Advances in Neural Information Processing Systems, 33, pp.8378-8390.
[6] Liu, T., Zhou, R., Kalathil, D., Kumar, P.R. and Tian, C., 2021. Policy optimization for constrained MDPs with provable fast global convergence. arXiv preprint arXiv:2111.00552.
[7] Zeng, S., Doan, T.T. and Romberg, J., 2022, December. Finite-time complexity of online primal-dual natural actor-critic algorithm for constrained Markov decision processes. In 2022 IEEE 61st Conference on Decision and Control (CDC) (pp. 4028-4033). IEEE.
[8] Vaswani, S., Yang, L. and Szepesvári, C., 2022. Near-optimal sample complexity bounds for constrained MDPs. Advances in Neural Information Processing Systems, 35, pp.3110-3122.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanations for addressing my concerns completely. I have raised my score to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We are pleased to hear that our response addressed your comments. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ANT: Adaptive Noise Schedule for Time Series Diffusion Models | Accept (poster) | Summary: Paper introduces the new method, adaptive noise schedule for time series diffusion model(ANT) . This is a new methodology as per my knowledge and hence extends the boundaries of the field. Diffusion models are highly effective in generating data but their inclusion in Time Series tasks overlooks the significance of properties of time series and don't focus on dedicated noise scheduling.
The paper introduces an algorithm to choose an adaptive noise scheduling which has not been done in the past for time series. Authors clearly define the statistics used to quantify the non-stationarity and ANT score. The proposed schedule noise aims to corrupt the TS data gradually instead of abruptly as done with Linear scheduling. It also demonstrates the need for different scheduling needed for different datasets. One size fit all solution does not serve the purpose.
Another important aspect of the paper discusses and proves why diffusion embedding is not needed for linear schedule. In addition, there was several analysis which shows the importance and stability of ANT.
Strengths: Originality: The paper introduce new method ANT for adaptive noise scheduling for time series. This is an original idea in TS domain. In addition the a new metric is introduced (IIAT) to quantify the importance of ANT.
Quality: Overall the quality of the paper is good, all the statements/claims are backed with experimental results. Authors conducted extensive experiments and analysis across several datasets and tasks which However, the details about the experiments/analyses and evaluation is limited and can be explained further in details.
Clarity: The paper is well structured, clearly written making is accessible for reading to different level of background knowledge with diffusion model for TS. The new algorithm, metric and concepts are introduced clearly in the draft under review. The terms are defined and experimented are backed with tables and figures which make it clear to understand the new algorithm better.
Significance: ANT significantly improve the accuracy of the TS forecasting models by used adaptive noise scheduling and have wide applications. The paper describes how non-stationarity can be used to enhance the performane for generative tasks.
Weaknesses: Although paper is well structures and provide tables and figured for all the experiments conducted, details on the evaluation of experiments would help the reader further. E.g. How is metric calculated for the time series forecasting task; what fraction of data is used for inference and which evaluation method is used.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. There are several places in the text where language can be improved to avoid ambiguity and enhance the clarity of paper. E.g. L27 text: "Several works have been proposed to design appropriate schedules [24, 21], but they are not tailored to the TS domain, considering the characteristics of TS data.". L31: it should be M4 dataset and a reference added to it.
2. Conclusion/Summary does not discuss/report the main results quantitatively.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors does mention about the limitation at the end of the draft that ANT could take long time if to find the noise schedule if there are too many candidate schedules and this motivated future work in this direction. It is also noted that the comparison for forecasting task is done using very limited models to show the effectiveness of the ANT but more advanced models (DLinear/FEDFormer etc) could be used in comparison for completeness and also give a hint how big is the difference in metric.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1. Details on the evaluation of experiments**
We agree that some details on the evaluation of experiments were omitted due to space limitations. Here, we provide the additional details and will incorporate them into Appendix.
> 1. **Metric for TS forecasting task**
Following prior works on TS diffusion models [6,18,19,26,34,38] including TSDiff [15], we utilize the **Continuous Ranked Probability Score (CRPS)** [9] to assess the quality of *probabilistic* forecasts, where we describe the details in **Appendix B**. Note that TS diffusion models predict the *probabilistic* distribution rather than the *deterministic* value, so CRPS accounts for the uncertainty of the predicted results, unlike MSE or MAE.
> 2. **Fraction of (test) data used for inference**
All datasets used in our experiments are sourced from GluonTS [2], where training and test splits are provided. We follow the setting of TSDiff [15], where the validation set is created by splitting a portion of the training dataset, with the split ratio determined by the size of the train and test datasets.
Further details about the training and test sets are provided in **Table A.1**.
> 3. **Evaluation method**
For evaluation, we calculate the CRPS using only the **test dataset**. Specifically, we follow the approach used in TSDiff [15] by utilizing the `gluonts.transform.sampler.TestSplitSampler` from GluonTS [2], which leverages the final time steps of the test TS instances to predict future values.
### **Q1. Clarifying sentences and adding references**
Thank you for addressing this. We are planning to improve overall writing in the revision for better clarity including your suggestions. For sure, what we meant in **L27** is that noise schedules used in [24, 21] are not tailored to the TS domain (but images). Regarding M4 in **L31**, we have already mentioned **the M4 dataset** with a reference in the previous sentence (**L28**), but we agree that the term **M4** might be confusing when used standalone.
### **Q2. Main quantitative results missing in the conclusion.**
Thank you for pointing this out. As we have emphasized our results quantitatively in the introduction, we will also summarize and reiterate the main findings quantitatively in the conclusion section.
### **L1-2. Comparison with more advanced models (DLinear/FEDFormer etc).**
> **1) Application to CSDI**
We appreciate the reviewer xQJK's concern about the **limited range of models** used to demonstrate the effectiveness of ANT. We completely agree and, as discussed in **Appendix M**, we have *already* applied our method to another well-known TS diffusion model, CSDI [34]. We believe this further validates the applicability and effectiveness of our method across different TS diffusion models (in both univariate and multivariate settings).
> **2) Comparing with other TS forecasting models [A,B]**
In short, DLinear [A] and Fedformer [B] are not compared because they are for *deterministic* forecasting, while our task is *probabilistic* forecasting; TS forecasting models for these two different tasks have been developed independently.
We provide more detailed reasons as below:
- **Model**) TS diffusion models, including TSDiff, are probabilistic forecasting models, while models like DLinear and Fedformer are deterministic forecasting models, which have developed independently in prior works.
- **Metric**) For the above reason, previous works on TS diffusions [6,18,19,26,34,38] do not use them as baselines and instead employ a different metric, CRPS, to account for the uncertainty of the predicted results, rather than the MSE/MAE metrics commonly used in deterministic models.
- **Dataset**) TS diffusion models are mostly designed for short-term TS forecasting (STSF) [C], whereas models like DLinear and Fedformer are intended for long-term TS forecasting (LTSF). In LTSF tasks, datasets generally consist of long TS with fewer instances, and training and testing splits are usually based on different time periods of the same instances. In contrast, STSF tasks involve shorter time series with more instances, and the split is based on different instances of the TS.
Nonetheless, to address the reviewer xQJK's concern, we conducted an additional experiment with DLinear [A] using the Solar dataset [17]. Since DLinear is a deterministic model and cannot be directly compared with our task which uses CRPS as a metric, we employ a common technique for probabilistic forecasting [28] by predicting the mean and standard deviation of a normal distribution rather than exact values. The results show that DLinear yields a CRPS of **0.660**, while TSDiff and {TSDiff with ANT} achieve CRPS scores of **0.399** and **0.326**, respectively.
[A] Zeng, Ailing, et al. "Are transformers effective for time series forecasting?" AAAI 2023
[B] Zhou, Tian, et al. "Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting." ICML 2022
[C] Meijer, Caspar, and Lydia Y. Chen. "The Rise of Diffusion Models in Time-Series Forecasting." arXiv 2024 | Summary: This paper proposes ANT, which adaptively selects the optimal noise schedule for time series diffusion models. The ANT score is computed for each schedule offline based on the datasets' statistics, which is the basis for selection. Extensive experiments demonstrate the method's superior performance in multiple time series tasks.
Strengths: - This method is innovative. Different from previous work that directly applied the existing general framework, it takes into account the characteristics of time series data, that is, non-stationarity, to select the scheduler.
- The experiments cover a variety of tasks and data sets, and the experimental results are convincing.
Weaknesses: - These candidate schedules are manually set in advance and are not guaranteed to include the optimal schedules.
- The ANT score is a simple multiplication of three factors, which is relatively intuitive. Their high or low values cannot guarantee the quality of the schedules.
Technical Quality: 2
Clarity: 3
Questions for Authors: - What is the basis for the design of your candidate schedule? What if they do not include the optimal schedule? Will it have a big impact on the results of the experiment?
- Although a lower score indicates a better schedule, are there many cases where schedules with higher scores have better results?
- Are the optimal schedules for the same data set significantly different between different diffusion methods? Can the proposed method consider the characteristics of specific methods when selecting a schedule?
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1, Q1. Candidate schedules do not guarantee to include the optimal schedule.**
As the reviewer eJYD mentioned, the candidate schedule may not include the optimal schedule. However, we note that we do **not aim to find the optimal schedule**; our proposed ANT is a **criterion for choosing a better noise schedule from candidates based on the characteristics of the dataset**.
For experiments, we selected three widely used schedules (linear, cosine, sigmoid) [4] as our candidates, incorporating five different diffusion steps and three different temperatures for non-linear schedules. This resulted in a total of **35 schedules**, which we believe are sufficient, as they encompass most of the schedules used in previous TS diffusion studies [1,26,29,30].
As shown in **Figure 11** and **Table 12**, ANT is applicable to non-trivial schedules, i.e., it gives a higher score to sensibly designed noise schedules if they perform better than trivial ones. However, as mentioned in the conclusion, finding the optimal schedule for a given dataset using our ANT score (via optimization) remains a future work.
### **W2. Relationship between ANT score and the quality of schedule**
The reviewer raised concerns about the relationship between the ANT score and the quality of the schedule. However, the proposed ANT score aligns with the three desiderata of noise schedules, which can be considered indicators of schedule quality. These desiderata have been discussed in prior works and our paper:
- **1) $\lambda_{\text{linear}}$: Reducing non-stationarity on a linear scale**
Previous work of DDPM [24] proposed a cosine schedule (instead of linear) and argued that maintaining a consistent noise level at each step leads to better quality, as mentioned in **L38--39**.
- **2) $\lambda_{\text{noise}}$: Corruption to random noise**
Previous work [21] suggested that the schedule must be capable of corrupting the TS into random noise at the final step to ensure the generation of high-quality samples, as the reverse process (sampling) of the diffusion model begins with random noise, as mentioned in **L40--41** and **L133--135**.
- **3) $\lambda_{\text{step}}$: Sufficient steps**
Previous work of consistency models [31] emphasized the need for a sufficient number of steps to generate high-quality samples, as mentioned in **L41--42** and **L135--136**.
Furthermore, as shown in **Table 7**, using all the three components of ANT mostly results in the oracle, backing up our argument. For example, with the M4 dataset [23], using all three components results in a value of 0.026 with a Cosine(100,1.0) schedule, whereas the ANT score without considering $\lambda_\text{noise}$ yields a CRPS of 0.094 with a Cosine(10,0.5) schedule. Notably, the base schedule of TSDiff (Linear(100)) yields a CRPS of 0.036.
### **Q2. Case for higher ANT score but with better result.**
A lower ANT score generally indicates a better schedule, as shown in **Figure 8(a)**, and this tendency becomes stronger when using all three components of the ANT score. However, as the reviewer has noted, there can be exceptions where better performance is achieved with a higher ANT score, maybe because of the stochasticity of the forward diffusion process when computing ANT. Nonetheless, the schedule with the lowest ANT score should return a reasonably good performance.
In our extensive analysis, we found only one such case with the Traffic dataset [3], as shown in **Table 5**. Among the 35 candidate schedules, Linear(50) had the lowest ANT score, yet the best performance was obtained with Cosine(75, 2.0), with a marginal difference in CRPS(0.101 vs. 0.099). It is worth noting that the base schedule Linear(100) yielded a CRPS of 0.105, which is noticeably higher than both Linear(50) and Cosine(75, 2.0).
### **Q3. Optimal noise schedule depending on diffusion models**
Note that the ANT score is computed **without access to the diffusion model**, such that the schedule is selected based on **the characteristics of the dataset**, rather than the **specific design choice on the model architecture** being used.
Nonetheless, as the reviewer eJYD concerned, some specific design choices for diffusion models could affect the optimality of the schedule. As we had a similar concern, in **Appendix M**, we have applied our method to another well-known TS diffusion model, CSDI [34], where we found that ANT is effective for this model as well. We believe this further validates the applicability and effectiveness of our method across different TS diffusion models.
---
Rebuttal Comment 1.1:
Comment: Thank you for answering my questions in detail. My concerns have been addressed, so I decide to improve my rating to 6.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score!
If you have any further questions or suggestions, please feel free to share them with us. | Summary: This paper addresses the non-stationarity in time series data by proposing an ANT score to enable an adaptive noise schedule for diffusion models. It provides extensive experimental results on time series forecasting and generation tasks.
Strengths: 1. The idea of adaptively select noise schedule in diffusion models is original and interesting, which can effectively addresses the number of steps issue.
2. The paper introduces an ANT score to enable an adaptive noise schedule, which measures the discrepancy between a linear line and the non-stationary curve.
3. Experiments have been conducted on several different time series tasks including time series forecasting, refinement, and generation, which is generally solid.
Weaknesses: 1. Though the ANT score seems effective in empirical evaluations, the paper does not provide theoretical analysis on how the score plays a role in addressing the proposed limitations of traditional diffusion models.
2. The proposed ANT noise schedule can be applied to different types of diffusion models, however in the experiments the ANT is only adapted to TSDiff, which lacks enough evidence to demonstrate its generalizability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I wonder whether the authors can provide more rigorous theoretical analysis on ANT score.
2. It would be nice to see whether the ANT can be adapted to other diffusion models.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper limits the noise functions to linear, cosine, and sigmoid, and it may be more imperative to be extended to other cases.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### **W1, Q1. Theoretical analysis on the ANT score**
Previous works on noise schedules have demonstrated their effectiveness primarily through **empirical justification** rather than theoretical analysis [4,21,24]. In line with this approach, we have conducted extensive experiments to support our proposal, as highlighted by reviewers xQJK and eJYD.
Furthermore, following the previous works regarding the schedules, we focus on the intuitive aspects that schedules should meet, where the proposed ANT score aligns with the three desiderata of noise schedules:
- **1) $\lambda_{\text{linear}}$: Reducing non-stationarity on a linear scale**
Previous work of DDPM [24] proposed a cosine schedule (instead of linear) and argued that maintaining a consistent noise level at each step leads to better quality, as mentioned in **L38--39**.
- **2) $\lambda_{\text{noise}}$: Corruption to random noise**
Previous work [21] suggested that the schedule must be capable of corrupting the TS into random noise at the final step to ensure the generation of high-quality samples, as the reverse process (sampling) of the diffusion model begins with random noise, as mentioned in **L40--41** and **L133--135**.
- **3) $\lambda_{\text{step}}$: Sufficient steps**
Previous work of consistency models [31] emphasized the need for a sufficient number of steps to generate high-quality samples, as mentioned in **L41--42** and **L135--136**.
While the advances in noise scheduling for diffusion models has mostly been made empirically, we hope our work motivates future works on the theoretical understanding of a relationship between optimal noise scheduling and the characteristics of the datasets.
### **W2, Q2. Application to other diffusion models.**
We appreciate the reviewer YUJG's concern about the **limited range of models** used to demonstrate the effectiveness of ANT. We completely agree and, as discussed in **Appendix M**, we have *already* applied our method to another well-known TS diffusion model, CSDI [34]. We believe this further validates the applicability and effectiveness of our method across different TS diffusion models (in both univariate and multivariate settings).
### **L1. Functions used for candidate schedules.**
We acknowledge the reviewer YUJG's point on expanding the range of noise functions. However, our focus is **not on identifying the optimal schedule across various noise functions**, but rather on proposing the criterion of choosing effective noise schedules tailored to the dataset's characteristics.
However, we emphasize that the linear, cosine, and sigmoid functions are well-established and widely used in literature [4], and recent TS diffusion models [1, 26, 29, 30] primarily employ linear or cosine schedules. Although developing the best noise function is not our primary goal, we have explored extending our approach with a more flexible noise (and non-trivial) function, such as an ensemble of cosine functions, as demonstrated in **Figure 11** and **Table 12**, to illustrate the potential applicability of ANT for future advancements in noise functions.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my concerns and I decide to raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score!
If you have any further questions or suggestions, please feel free to share them with us. | null | null | Rebuttal 1:
Rebuttal: # General Comments
First of all, we deeply appreciate your time and effort in reviewing our paper.
Our work introduces **ANT** (**A**daptive **N**oise Schedule for **T**ime Series), a method for automatically predetermining proper noise schedules based on the characteristics of the dataset.
As agreed by all reviewers, our proposed ANT is original/novel and its effectiveness is supported by extensive experiments. In our responses, we addressed the concerns raised by the reviewers and supplemented our claims with additional analyses. Here are some key highlights to assist with your post-rebuttal discussion:
### **1. Application to other TS diffusion models (All)**
We found that reviewers are interested in whether ANT is applicable to other TS diffusion models, and we have *already* applied our method to another well-known TS diffusion model, CSDI [34], as detailed in **Appendix M** and guided in **L83--84**.
### **2. Comparison with other baselines (Reviewer xQJK)**
Reviewer xQJK asked us to compare other baseline methods like DLinear and FEDformer with ours. However, they are proposed for *deterministic* forecasting, while our task with diffusion models is *probabilistic* forecasting, such that they have been developed independently and are not directly comparable. Nonetheless, to address reviewer xQJK's concern, we conducted an additional experiment with DLinear on the Solar dataset and confirmed the superiority of ours (with TS diffusion models) over DLinear.
### **3. Finding the optimal schedule from finite number of candidates (Reviewer eJYD, Reviewer YUJG)**
We emphasize that our contribution is *not on finding the optimal schedule*, but on proposing a ***criterion for efficiently selecting a better noise schedule from candidates based on the dataset's characteristics***, where we used candidate functions that are well-established and widely used in literature [4]. We also confirmed that our ANT score is able to assign a high score to non-trivial schedules if they result in good performance in **Figure 11** and **Table 12**. We leave the way to find the optimal schedule for a given dataset using our ANT score (via optimization) as a future work.
### **4. Theoretical analysis on the ANT score (Reviewer YUJG)**
We note that previous works on noise schedules primarily relied on empirical justification rather than theoretical analysis [4, 21, 24], and we have also supported our proposal through extensive experiments following these works. While the advances in noise scheduling for diffusion models has mostly been made empirically, we hope our work inspires future research into the theoretical understanding of the relationship between optimal noise schedule and dataset characteristics.
We sincerely appreciate the reviewers' valuable feedback and insights. We believe these will significantly enhance the contribution of our paper, and we will integrate them into the final version. Should there be any additional points we may have overlooked or if you have further questions or suggestions, please let us know; we are eager to address them and refine our work.
Thank you very much.
Authors. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Theoretical and Empirical Insights into the Origins of Degree Bias in Graph Neural Networks | Accept (poster) | Summary: The authors consider the problem of degree bias for node classification using graph neural networks (GNNs). Much prior work has shown that GNNs tend to be much more accurate for nodes with higher degree than for lower degree. The authors survey 38 papers that pose hypotheses, sometimes contradictory, for why this degree bias exists. In this paper, the authors provide theoretical analyses on linearized versions of message-passing GNNs using random walk (RW) and symmetric (SYM) graph filters. Their bounds yield some insights on the origins of degree bias at both test time and training time and provide evidence to support some hypotheses from prior work but not others. Finally, they run experiments using the RW, SYM, and graph attention (ATT) filters on a variety of data sets, confirming their theoretical results. They conclude by providing some criteria that mitigation strategies for degree bias should target.
Strengths: - Provides a very general theoretical analysis on the misclassification probabilities of nodes using message-passing GNNs with RW and SYM graph filters.
- Empirical results support predictions from theoretical results.
- Theoretical and empirical results yield insights as to what types of mitigation strategies for degree bias may be most effective.
- Thoroughly explores hypotheses for degree bias in the existing literature and connects the theoretical results in this paper to provide evidence for some hypotheses and against others.
- Excellent use of hyperlinks to guide reader back and forth between portions of the supplement, figures, and body text. I really enjoyed reading this paper and would likely have found it much more frustrating if I had to navigate back and forth manually.
- Addresses a topic of great interest to the NeurIPS community, as illustrated by the large body of recent work on the topic.
Weaknesses: - Theorem 2 bounds the squared inverse coefficient of variation $R_{i,c^′}$ based on the inverse collision probability $1/\sum_{l=0} \alpha_i^{(l)}$. There is no relationship or bound on the inverse collision probability as a function of node degree beyond a single layer, with the authors providing only empirical results. Thus, the theoretical results are not "end to end", with this one link between misclassification probability and node degree missing.
- Analysis is for linearized versions of GNNs and not the GNNs themselves.
Minor presentation issues:
- Error bars in Figure 1 and similar figures in Appendix E are not explained in the figure caption. I see that they are explained later in the beginning of Section 3. I suggest adding (or moving) the description to the figure caption.
- Figure 2: Caption lists FAIR ATT, but those plots are not shown.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Of the four target criteria you present in Section 6, is there any ordering or predicted importance that you would recommend future work to target?
2. I noted the lack of a relationship between inverse collision probability and node degree as a weakness. Do you have any potential leads on whether it would be possible to bound this based on other quantities in the graph? This would allow your theoretical results to potentially be "end to end".
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Limitations are discussed in the supplementary material. The paper would be strengthened if a short summary of the main limitations (maybe 2 or 3 lines) could be added to the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments and positive reception! Regarding the weaknesses:
- This is a great and interesting question! We agree that it would be beneficial to close this missing link. Via some preliminary analysis, we find that it is possible to express the inverse collision probability (i.e., express the sum of $l$-hop collision probabilities) in terms of $D_{i i}$. In particular, we can show: $\sum_{l = 0}^L \alpha_i^{(l)} = \alpha_i^{(0)} + \sum_{l = 1}^L \sum_{j \in {\cal V}} [(P^l_{rw})\_{i j}]^2 = 1 + \frac{1}{D^2\_{i i}} \sum_{l = 1}^L \sum_{j \in {\cal V}} [\sum_{k \in {\cal N(i)}} (P^{l - 1}\_{rw})\_{k j}]^2$. As before, we can see that the inverse collision probability is larger (and thus $R_{i, c’}$ is larger) when $D_{i i}$ is larger, and $\sum_{l = 1}^L \sum_{j \in {\cal V}} [\sum_{k \in {\cal N(i)}} (P^{l - 1}\_{rw})\_{k j}]^2 \in o(D_{i i}^2)$. However, because $\sum_{k \in {\cal N(i)}}$ depends on $D_{i i}$, this expression does not completely isolate the impact of $D_{i i}$ on the inverse collision probability. A similar expression can be derived for SYM by expressing: $P_{sym} = D^{\frac{1}{2}} P_{rw} D^{-\frac{1}{2}}$.
Another proof route that was considered was via the following bound: $\sum_{l = 0}^L \alpha_i^{(l)} \leq \sum_{l = 0}^\infty \alpha_i^{(l)}$, as then we can plug in the steady-state probabilities of uniform random walks on graphs. However, as $l \to \infty$, $(P^l_{rw})\_{i j} = D_{j j} / 2 |{\cal E}|$ only depends on $D_{j j}$ (not $D_{i i}$ as desired). It is also possible to upper bound the inverse collision probability as: $1 / \sum_{l = 0} \alpha_i^{(l)} \leq 1 / (\alpha_i^{(0)} + \alpha_i^{(1)}) = 1 / (1 / D_{i i} + 1)$, which is in terms of $D_{i i}$; however, we cannot use this upper bound to lower bound $R_{i, c’}$.
We will continue ideating about: (1) better bounds for the inverse collision probability in terms of $D_{i i}$, or (2) an impossibility result for a bound that isolates the effect of $D_{i i}$ on $R_{i, c’}$. This being said, our paper argues that the inverse collision probability is a more fundamental quantity that influences what the community has termed “degree bias” than degree alone, which is positively associated with the inverse collision probability.
- It would be interesting as future work to investigate the effects of nonlinearities on degree bias. Towards this, a possible option is to assume that node features are drawn from a Gaussian distribution and derive high-dimensional asymptotics for degree bias in _non-linear_ GNNs using the Gaussian equivalence theorem, as in [1].
[1] Ben Adlam, Jeffrey Pennington Proceedings of the 37th International Conference on Machine Learning, PMLR 119:74-84, 2020.
- Thanks for catching the presentation issues. We will move the explanation of the error bars to the caption of Figure 1 and the captions of the figures in Appendix E. We will likewise remove “FAIR ATT” from the caption of Figure 2.
Regarding your questions:
1. We recommend that future work target criteria 2 and 3 (lines 291-294) first. These criteria are important because they reflect (to a large extent) inherent fairness issues with the graph filters that are popular in the graph learning community. For instance, the random walk and symmetric filters disadvantage low-degree nodes by yielding representations with high variance and low magnitude, respectively. It is valuable for the community to investigate filters that are adaptive or not restricted to the graph topology in a way that ensures that low-degree nodes are not marginalized through disparate representational distributions or poor neighborhood diversity.
2. We discuss potential leads above to bound the inverse collision probability in terms of node degree. The collision probabilities themselves are intimately related to (more global) properties of the graph; for instance, via eigendecomposition, $[(P^l_{rw})\_{i j}]^2 \leq \lambda^{2 l}$, where $\lambda < 1$ is the eigenvalue of $P_{rw}$ with the largest magnitude [2]. Then, the inverse collision probability is strictly greater than $\frac{1}{\sum_{l = 0}^L |{\cal V}| \lambda^{2 l}}$.
[2] Lovasz, L. M. Random walks on graphs: A survey. 2001.
3. We will definitely detail the main limitations of our paper in the conclusion of the camera-ready version.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I continue to strongly support this paper and do not view the current gap on bounding the inverse collision probability as a major issue that should prevent this paper from being published. I also support the authors plans to address the weaknesses in future work. I see no reason to change my score. | Summary: The paper provides a comprehensive analysis of the origins of degree bias in message-passing GNNs, proving that high-degree test nodes have a lower probability of misclassification, regardless of how GNNs are trained. They surveyed 38 papers on degree bias and found that existing hypotheses are often not rigorously validated and can be contradictory. They also show that some GNNs may adjust their loss on low-degree nodes more slowly during training, but with sufficient training epochs, these models can achieve their maximum possible training accuracy. The theoretical findings are validated on eight real-world networks, and a roadmap to alleviate degree bias is proposed.
Strengths: +This paper explores the important issue of the origins of degree bias in Graph Neural Networks (GNNs).
+The authors conduct a comprehensive survey on degree bias, carefully identifying and validating existing hypotheses while pointing out those that are contradictory or lack validation.
+The study examines the effects of different graph filters and analyzes degree bias separately during both test time and training time.
+The paper provides a thorough theoretical analysis and supports its claims with empirical studies on eight datasets.
Weaknesses: -It appears that the findings in this paper are limited to homophilous graphs, as the results do not extend to heterophilic graphs, according to the paper's analysis.
-The current analysis is based on graphs with a single edge type. Can these findings be expanded to heterogeneous graphs, where the edges can have different types? It could be worth studying to explore this possibility.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please address the questions regarding weaknesses.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your thoughtful feedback and questions! Regarding the weaknesses and your questions:
- Our findings do extend to heterophilic graphs. For example, in Lines 220-222, we explain that high-degree nodes in heterophilic networks do not have more negative L-hop prediction homogeneity levels due to higher local heterophily, hence we do not necessarily observe better performance for them; we empirically validate that there is a lack of degree bias for the heterophilic datasets Chameleon and Squirrel in Figure 5. Furthermore, in lines 693-696, we explain that models do not learn more rapidly for high-degree nodes under heterophily (Figures 11-12) because the “features of nodes in the neighborhoods of high-degree nodes and training nodes are dissimilar.” None of our theoretical analysis assumes homophilic networks. We will clarify any parts of the paper that suggest that our theoretical analysis does not encompass heterophilic networks.
- We agree that it is valuable to study the possibility of expanding our findings to heterogeneous graphs. For example, in our literature survey, we cover works that establish the issue of degree bias for knowledge graph predictions and embeddings (e.g., [4, 38]). Furthermore, we state in Section K, “it remains to identify the shortcomings of our theoretical findings for heterogeneous and directed networks.” Our theoretical analysis is general and covers diverse message passing GNNs, and can be extended to heterogeneous networks if messages aggregated from different edge types are subsequently linearly combined. In this setting, $R_{i, c’}$ can be computed as the sum of the prediction homogeneity quantities $(\sum_{l = 0}^L \beta_{i, c’}^{(l)})^2$ for each edge type divided by the sum of the collision probability quantities $\sum_{j \in {\cal V}} [(P_{rw}^l)_{i j}]^2$ for each edge type.
[4] Stephen Bonner, Ufuk Kirik, Ola Engkvist, Jian Tang, and Ian P Barrett. Implications of topological imbalance for representation learning on biomedical knowledge graphs. Briefings in Bioinformatics, 23(5):bbac279, 07 2022.
[38] Harry Shomer, Wei Jin, Wentao Wang, and Jiliang Tang. Toward degree bias in embedding based knowledge graph completion. In Proceedings of the ACM Web Conference 2023, WWW ’23, page 705–715, New York, NY, USA, 2023. Association for Computing Machinery.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I will maintain my score. | Summary: The paper explores the causes and characteristics of degree bias in graph neural networks (GNNs), particularly in node classification tasks. It contributes to the field by providing a rigorous theoretical analysis supported by empirical evidence across multiple datasets, revealing that degree bias is influenced by factors such as homophily and diversity of neighbors. The study also proposes a structured approach to mitigate this bias, enhancing the fairness and effectiveness of GNN applications in social and recommendation systems.
Strengths: originality: medium
quality: good
clarity: good
significance: medium
Weaknesses: Some degree-related papers should be included and discussed in this paper, for example:
[1] takes a unified view to explain the over-smoothing and heterophily problems simultaneously by profiling nodes with two metrics: the relative degree of a node (compared to its neighbors) and the node-level heterophily.
[2] investigates how does node degree, homophily and class variance influence the node distinguishability.
[3] finds that the effectiveness of graph convolution operations in enhancing separability is determined by the Euclidean distance of the neighborhood distributions and the square root of the average node degree. Furthermore, they find that topological noise negatively affects separability by effectively lowering the average node degree.
[4] observes that the prediction accuracy of high-degree nodes is usually significantly lower under heterophily and the hypothesis that "nodes with higher degrees are naturally favored by GNNs" no longer holds. In fact, the degree-wise bias is more sensitive, but not necessarily beneficial, to high-degree nodes with regard to varying graph conditions.
[1] Two sides of the same coin: Heterophily and oversmoothing in graph convolutional neural networks. In2022 IEEE International Conference on Data Mining (ICDM) 2022 Nov 28 (pp. 1287-1292). IEEE.
[2] When Do Graph Neural Networks Help with Node Classification? Investigating the Homophily Principle on Node Distinguishability. Advances in Neural Information Processing Systems. 2024 Feb 13;36.
[3] Understanding Heterophily for Graph Neural Networks. In Forty-first International Conference on Machine Learning.
[4] Liao N, Liu H, Zhu Z, Luo S, Lakshmanan LV. Benchmarking Spectral Graph Neural Networks: A Comprehensive Study on Effectiveness and Efficiency. arXiv preprint arXiv:2406.09675. 2024 Jun 14.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does homophily relate to degree bias? I expect some conclusions in this paper but cannot find them.
2. The "expected similarity of the neighborhoods" in section 5 is very similar to the similarity matrix defined in [5]
3. The criteria proposed in section 6 should be verified in real-world datasets.
[5] Revisiting heterophily for graph neural networks. Advances in neural information processing systems. 2022 Dec 6;35:1362-75.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: the authors adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback and the helpful resources! Regarding the weaknesses, we will definitely include and discuss the recommended references on homophily in our paper:
[1] provides a complementary perspective on the possible performance issues of GNNs that arise from degree disparities in graphs (e.g., low-degree nodes induce oversmoothing in homophilic graphs, while high-degree nodes induce oversmoothing in heterophilic networks). Oversmoothing is related to prediction homogeneity $(\sum_{l = 0}^L \beta_{i, c’}^{(l)})^2$ (line 149); for homophilic networks, as the number of layers in a GNN increases (i.e., as oversmoothing occurs), $(\sum_{l = 0}^L \beta_{i, c’}^{(l)})^2$ gets closer to 0 (i.e., does not increase $R_{i, c’}$), thereby not inducing as much degree bias. In contrast, our theoretical analysis demonstrates that degree bias occurs without oversmoothing and is amplified by high local homophily (lines 170-171).
[2] connects node distinguishability to node degree and homophily by analyzing the intra-class vs. inter-class embedding distance. We discuss similar quantities in lines 175-189 and lines 208-226, and will integrate connections to [2] in these discussions. However, with the exception of Section 3.5, [2] considers the CSBM-H model in its theoretical analysis, which has pitfalls (as we discuss in lines 79-83). Moreover, unlike our work, [2] does not explicitly link the misclassification error of a node to its degree in a more general data and model setting.
[3] analyzes the effect of heterophily on GNNs via class separability, which it characterizes through neighborhood distributions and average node degree. Like with [1] and [2], we will discuss connections between prediction homogeneity, homophily, and separability. Notably, similar to [2], [3] only considers the HSBM model in its theoretical analysis.
[4] observes that GNN performance is lower for high-degree nodes under heterophily. We likewise observe this in Figure 5 for Chameleon and Squirrel (which are heterophilic networks), and we will connect this observation to [4]. Moreover, our theoretical analysis explains why degree bias is not observed for heterophilic graphs; in lines 220-222, we explain that high-degree nodes in heterophilic networks do not have lower $l$-hop prediction homogeneity levels due to higher local heterophily, hence we do not necessarily observe better performance for them compared to low-degree nodes. We would additionally like to note that [4] was released on 4 Jun 2024, which was after the NeurIPS submission deadline on 22 May 2024.
Regarding your questions:
1. We make connections between homophily and degree bias in the paper. For example:
- Lines 170-171: We show that to amplify degree bias, we need to make $R_{i, c’}$ larger, for which it is sufficient to make the prediction homogeneity $\sum_{l = 0}^L \beta_{i, c’}^{(l)}$ more negative, “e.g., when most nodes in the $l$-hop neighborhood of $i$ are predicted to belong to class $c$.” This can occur when node $i$ has high local homophily.
- Lines 175-179: Our proof of Theorem 2 shows that, in expectation, $\overline{RW}$ “produces similar representations for low and high-degree nodes with similar L-hop neighborhood homophily levels.” However, low-degree nodes have a higher representation variance (due to a lower inverse collision probability), which can amplify degree bias. This entails that factors beyond homophily (e.g., diversity of neighbors) induce degree bias.
- Lines 208-214, 222-226: Our proof of Theorem 3 likewise shows that for SYM, homophily alone does not induce degree bias.
- Lines 220-222: We explain that high-degree nodes in heterophilic networks do not have lower $l$-hop prediction homogeneity levels due to higher local heterophily, hence we do not necessarily observe better performance for them.
- Lines 693-696: We explain that models do not learn more rapidly for high-degree nodes under heterophily (Figures 11-12) because the “features of nodes in the neighborhoods of high-degree nodes and training nodes are dissimilar.”
2. The similarity matrix defined by [5] is _post-aggregation_, while our matrix is _pre-aggregation_. Moreover, our similarity matrix is an interpretable quantity that naturally arose during our theoretical analysis. We are happy to cite [5].
3. We perform extensive validation of the criteria in Section 6 on 8 real-world datasets:
- Figure 3 and the plots in Section G show strong positive associations between inverse collision probability and degree for the RW, SYM, and ATT filters, and Figure 1 and the plots in Section E show strong negative associations between degree and test loss for the homophilic datasets. Hence, we validate that there is a negative association between inverse collision probability and test loss.
- The lack of degree bias observed in Figure 5 for Chameleon and Squirrel (which are heterophilic networks), compared to Figure 1 and the other plots in Section E, confirms our theoretical finding that under heterophily, the prediction homogeneity for high-degree nodes is closer to 0, so high-degree nodes do not necessarily experience better performance.
- Figures 2, 6-10 empirically confirm our theoretical finding that disparities in the expectation and variance of node representations are responsible for performance disparities. Figures 11-12 suggest that smaller distributional differences among representations (due to heterophily) can alleviate degree bias.
- Figure 2 and the plots in Section F empirically validate the training discrepancies between low and high-degree nodes that we theoretically analyze in Section 5.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and I will raise my rating to 6. | Summary: Given the wide-adoption of GNN-based node classification and the potential risk of degree-related bias, this paper systematically reviews the degree-bias paper in previous literature and proposes a theoretical framework to analyze the degree-related bias. Several insightful observations have been drawn with empirical verification.
Strengths: (1) A systematic investigation of a long-standing yet not addressed research problem, degree bias. Comprehensive summary of previous works on this has been presented
(2) The degree-related bias is urgently related to social benefit.
Weaknesses: (1) Some experimental results are not supportive of some claims.
(2) Some theoretical analyses lack intuitive justification from a graph topology perspective, e.g., the connection with network homophily/heterophily.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Since the degree bias focused on here is node classification, would it be better to note this special case somewhere in the paper, or at least for the title, would it be better to reword and limit the task scope to node classification? Is there any insight or potential thought on degree-related bias in link prediction and graph classification (here the degree would correspond to graph density)?
(2) In Figure 5, when looking at the Squirrel and Chameleon datasets, there is no significant difference in high-degree and low-degree test loss, is there a specific reason for their difference from the other datasets?
(3) Maybe it is a typo, but in the Equation at the bottom of page 4, if we unify different layers' weight transformation matrices as of dimension $\mathbb{R}^{d^0\times C}$, would it cause some dimension mismatch during matrix multiplication? Why does every layer of convolution consider the project from space $\mathbb{R}^{d^0}$ to $\mathbb{R}^{C}$? Moreover, it might be better to add back the Equation number.
(4) It is not so clear about "the diversity of neighborhoods" until the formal definition in Line 168. It might be better to provide some concise illustrations to help readers understand the diversity of neighbors when it was mentioned for the very first time.
(5) Based on Theorem 2, in addition to decreasing the negative $\sum_{l=0}^{L}\beta_{i, c'}^{(l)}$ so that $R_{i, c'}$ could increase, could we also increase the positive $\sum_{l=0}^{L}\beta_{i, c'}^{(l)}$ so that $R_{i, c'}$ could increase? Are these two scenarios corresponding to some graph local structure? Based on my understanding, as $\beta_{i, c'}^{(l)}$ measures of the local subgraph difference between class $c'$ and class $c$, it really boils down to the local homophily and local heterophily, if the value is pretty small, then it might correspond to a very small $\beta$, which might correspond to a equal contribution between class c and c' and hence the subgraph is really like a mixture between nodes from these two classes and hence cannot work very well. To sum up this point, my thinking is that it would be better to provide some intuition for some theoretical findings from a topology perspective.
(6) It might be interesting to add some analysis when a distribution shift happens.
(7) For the training loss visualization in Figure 2, it is really hard to see the difference of low/high-degree nodes for RW and ATT.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper addressed all limitations with the following exceptions:
The paper mainly studies degree-related bias for node classification. There are some other tasks such as link prediction and graph classification worth similar analysis as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your comments and thorough questions! Regarding your questions:
(1) We will make the scope of the paper clearer by indicating in the title, introduction section, and limitations section that we focus on node classification. Our findings readily lend insight into the origins of degree bias in link prediction. For example, if one uses node representations and an inner-product decoder to predict links between nodes, our results (lines 208-226) indicate that:
- In the random walk filter case, link prediction scores between low-degree nodes will suffer from higher variance because low-degree node representations have higher variance. Hence, Theorem 1 suggests that predictions for links between low-degree nodes will have a higher misclassification error.
- In the symmetric filter case, the link prediction scores between high-degree nodes will be over-calibrated (i.e., disproportionately large) because high-degree node representations have a larger magnitude (i.e., approximately proportional to the square root of their degree). Hence, our proof of Theorem 3 suggests that over-optimistic and possibly inaccurate links will be predicted between high-degree nodes.
More research is required to understand the implications of our findings for degree bias in the context of graph classification; such research likely has rich connections to the literature on spectral expressive power [1].
[1] Balcilar, Muhammet, et al. "Analyzing the expressive power of graph neural networks in a spectral perspective." 2021.
(2) The reason that we do not observe degree bias for Chameleon and Squirrel is because unlike the other datasets, these datasets are heterophilic. We intentionally include these datasets to draw contrast to the other, homophilic datasets and validate our theory. For example, in lines 220-222, we explain that high-degree nodes in heterophilic networks do not have more negative $l$-hop prediction homogeneity levels due to higher local heterophily levels; hence, we do not necessarily observe better performance for them compared to low-degree nodes.
(3) We will ensure that equation numbers are visible for all equations. The weight transformation matrices should all be of dimension $d_0 \times C$; there is not a dimension mismatch issue. This is because we consider linearized versions of message-passing GNNs. In particular, in the equation between lines 132 and 133, if we set each $\sigma^{(l)}$ to be identity and recursively expand the matrix multiplications, we end up with the expression: $Z^{(L)} = \sum_{l = 0}^L P_{rw}^l X W^{(l)}$, where $W^{(l)}$ is the sum of all the weight terms in the expansion that correspond to $P_{rw}^l$; for simplicity, we collapse each sum of weight terms into a single weight matrix. Because each $W^{(l)}$ maps the input features $P_{rw}^l X \in \mathbb{R}^{n \times d_0}$ to the outputs $Z^{(l)} \in \mathbb{R}^{n \times C}$, they must all be of size $d_0 \times C$. It makes sense to have a different weight matrix for each $P_{rw}^l X$, as we may need to extract different information from features aggregated from neighborhoods at different hops.
(4) This is great feedback. We will include a figure at the beginning of the paper that visually illustrates the concepts of prediction homogeneity and collision probability, and their connections to homophily and diversity.
(5) In Theorem 2 (lines 118-119), we assume that $\mathbb{E} [Z_{i, c’}^{(L)} - Z_{i, c}^{(L)}] < 0$ (i.e., the model generalizes in expectation); this is necessary to make a mathematically rigorous statement about degree bias via tail bounds. Thus, we cannot make $\sum_{l = 0}^L \beta_{i, c’}^{l)}$ more positive to increase $R_{i, c’}$, as this would violate the assumption of our theorem. However, intuitively, it also would not make sense that RW and SYM reduce the misclassification error for a node by predicting its neighbors to be of a different class, since message passing smooths the representations of adjacent nodes.
Furthermore, per lines 153-154, “$\beta_{i, c’}^{(l)}$ measures the expected prediction score for nodes j, weighted by their probability of being reached by a _length-l random walk_ starting from i.” Because $\beta_{i, c’}^{(l)}$ depends on the distribution of random walks from $i$, it is intimately related to local graph structure. Indeed, $\beta_{i, c’}^{(l)}$ can be interpreted as a “local subgraph difference” and is highly influenced by the local homophily of $i$ (as we say in lines 170-174). However, $\beta_{i, c’}^{(l)}$ is also influenced by the presence of $l$-hop neighbors contained in the training set, as the model is more likely to correctly classify these nodes by a large margin (lines 172-173); hence, $\beta_{i, c’}^{(l)}$ does not _only_ boil down to local homophily. When revising our paper, we will include and expand on these clarifications on the similarities and differences between $\beta_{i, c’}^{(l)}$ and local homophily, to provide better intuition from a topological perspective.
(6) We can add some analysis for distribution shifts. For example, our results in Section 4 can be built upon to show that shifts in local homophily from train to test time reduce test-time prediction performance, which can bring $\beta_{i, c’}^{(l)}$ closer to 0; this can increase $R_{i, c’}$, thereby not inducing as much degree bias.
(7) We will make the training loss visualizations in Figure 2 larger. The primary takeaway from this figure is that, in the case of RW and ATT, the training loss curves for low and high-degree nodes (including error bars) overlap during the first ~20 epochs of training; however, for SYM, the loss curve for high-degree nodes descends more rapidly than the curve for low-degree nodes.
Regarding the weaknesses:
(1) Could you please elaborate on which experimental results are “not supportive of claims?” We would like to address or clarify any potential mismatches between our theoretical analysis and experiments.
(2) Please see (5) above.
---
Rebuttal Comment 1.1:
Title: Thank you for your response!
Comment: Despite most of the concerns have been addressed, I still have follow-up questions:
(1) **In the random walk filter case, link prediction scores between low-degree nodes will suffer from higher variance because low-degree node representations have higher variance. Hence, Theorem 1 suggests that predictions for links between low-degree nodes will have a higher misclassification error.**, there is some other work proving that local clustering coefficient of nodes decreases as node degree increases and since LCC is very related to node link prediction performance, I am not sure whether this intuition is right or not.
(2) **Could you please elaborate on which experimental results are “not supportive of claims?” We would like to address or clarify any potential mismatches between our theoretical analysis and experiments.** In Figure 6 middle row, I didn't see the training loss decrease more rapidly for SYM on low-degree nodes as claimed in the main paper.
---
Reply to Comment 1.1.1:
Title: Thank you for your follow-up questions!
Comment: We are glad that most of your concerns have been addressed! Regarding your follow-up questions:
(1) Thanks for bringing up this interesting perspective. Indeed, it has been observed that the local clustering coefficient (LCC) decreases as node degree increases [1]. This is in part due to real-world networks being sparse and high-degree nodes inducing a larger number of possible connections between neighbors (i.e., a larger denominator when computing the LCC).
Some works have studied the impact of clustering coefficients on link prediction performance [2, 3, 4]. However, these papers only consider the effect of the *global* clustering coefficient of a network on overall link prediction performance for that network. That is, these papers find that the overall link prediction performance of network embedding algorithms is often higher for networks with a larger global clustering coefficient (and even then, this is not observed for some algorithms like Matrix Factorization [2]). These papers do not consider disparities in link prediction performance across nodes in the same network with different local clustering coefficients; the trend of better link prediction performance with a higher clustering coefficient may not hold locally because the global clustering coefficient is a simple average of and does not account for variance in local clustering coefficients across nodes. Furthermore, [2, 3, 4] do not consider initial node features or graph neural networks, which often have a narrower receptive field than spectral embedding methods (e.g., random walk, eigendecomposition).
Moreover, the labels and evaluation for link prediction can confound intuition. Unlike node classification, the labels for link prediction (i.e., the existence or not of a link) make the task naturally imbalanced with respect to node degree; high-degree nodes have a much higher rate of positive links than low-degree nodes. This association between degree and positive labels can influence the misclassification error. In addition, many published link prediction evaluation results are based on label sampling methods that favor high-degree nodes [5].
Ultimately, more rigorous theoretical analysis and experimentation are needed to confirm the implications of node degree for link prediction performance.
(2) The training loss curves in Figure 6 still support our theoretical analysis. Theorem 4 reveals that for $\overline{SYM}$, node degree _and_ the (degree-discounted) expected feature similarity $\tilde{\chi}_i$ affects the rate of learning. On the other hand, Theorem 5 indicates that for $\overline{RW}$, while we do not expect node degree to impact the rate of learning, the expected feature similarity $\chi_i$ is still influential. Hence, interpreting Theorems 4 and 5 jointly, we expect and accordingly observe that the orange curve for SYM has a steeper rate of decrease *relative* to the orange curve for RW as the number of epochs increases. We will revise the interpretation of our results to make it more clear that while node degree highly affects the rate of learning, differences in $\chi_i$ across nodes of different degrees are also influential (lines 268-273).
[1] Vázquez, Alexei, Romualdo Pastor-Satorras, and Alessandro Vespignani. "Large-scale topological and dynamical properties of the Internet." Physical Review E 65.6 (2002): 066130.
[2] Robledo, O.F., Zhan, XX., Hanjalic, A. et al. Influence of clustering coefficient on network embedding in link prediction. Appl Netw Sci 7, 35 (2022). https://doi.org/10.1007/s41109-022-00471-1
[3] Feng, Xu, J. C. Zhao, and Ke Xu. "Link prediction in complex networks: a clustering perspective." The European Physical Journal B 85 (2012): 1-9.
[4] M. Khosla, V. Setty and A. Anand, "A Comparative Study for Unsupervised Network Representation Learning," in IEEE Transactions on Knowledge and Data Engineering, vol. 33, no. 5, pp. 1807-1818, 1 May 2021, doi: 10.1109/TKDE.2019.2951398.
[5] Aiyappa, Rachith, et al. "Implicit degree bias in the link prediction task." arXiv preprint arXiv:2405.14985 (2024). | Rebuttal 1:
Rebuttal: We thank the reviewers for their thoughtful and helpful comments! We are pleased to read that:
- Reviewer Ao8B finds our paper constitutes a “systematic investigation of a long-standing yet not addressed research problem.”
- Reviewer Xp6E finds our paper has good quality and clarity.
- Reviewer USpG finds our paper “provides a thorough theoretical analysis and supports its claims with empirical studies on eight datasets.”
- Reviewer Vv7z finds our paper “provides a very general theoretical analysis on the misclassification probabilities of nodes” and “really enjoyed reading this paper.”
We address the weaknesses and questions raised by each reviewer in individual responses below. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unlocking the Capabilities of Masked Generative Models for Image Synthesis via Self-Guidance | Accept (poster) | Summary: This work proposes a self-guidance method for masked generative models to improve the diversity and quality of class conditional image generation. The main challenge is to design a semantically meaningful smoothing for the discrete VQ token space such that coarse scale information can be extracted. The authors propose an auxiliary task – error token correction to minimize fine-scale details and utilize TOAST for efficient model fine-tuning.
Strengths: - The research problem of introducing self-guidance in discrete space is well-motivated. The background section is well-written, and the discussion of related work is thorough. I find the analogy to guidance of continuous diffusion model very helpful.
- The figures are well-made, such as the qualitative visualization is Fig.1.
- The quantitative metrics from the experiments also demonstrate the effectiveness of the method. The ablation studies on different hyperparameters are informative.
Weaknesses: - From the qualitative visualizations alone (Fig.4), the proposed method does not seem have too different outputs compared with MaskGIT (the middle column). I wonder if the guidance benefit is stronger on other resolutions’ results, like 128 x 128 or 64 x 64.
- Maybe some figures could be added to accompany section 3.2 to better illustrate the process of the auxiliary task.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is $\bf{m}_i$ in equation (7)? Could you provide more intuitions or visualizations of the claims in line 192- 195. Why does this auxiliary task naturally minimizes fine-scale details?
- What is the range of the experimented temperature values in Fig. 3?
From Fig.3, it seems that the proposed sampling method has very different performance based on the choice of temperature. The authors also mention that they choose different temperatures based on the resolution and sampling steps. I wonder why is the proposed method so sensitive to temperature?
- Have the authors tested the model on the 128x128 resolution ImageNet benchmark? What does the FID/IS curve look like?
- What happens with strongly guided samples? For instance, for classifier-free guidance in diffusion models, strongly guided samples exhibit saturated colors. I wonder if there is any artifact related to this proposed method when the choice of guidance scale or sampling temperature is high?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors already mention some directions for future work, such as demonstrating the guidance on text-conditioned image generation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thorough review. We address your concerns and questions below. All visual results can be found in the submitted PDF. Please zoom in on the figure of the submitted PDF for the best view.
$\\textbf{Clarified visual comparison.}$
In Fig. 6 of the submitted PDF, we directly compare the sampled images using MaskGIT and the proposed method with the same random seed on ImageNet 256 and 512 resolutions. With the proposed guidance, fine-scale details are enhanced, generating higher-quality images. Please refer to the Global Response for a detailed description.
$\\textbf{Impact of guidance in lower resolution.}$
Since the proposed guidance utilizes semantic smoothing, the effectiveness of the guidance can be affected by the resolution of the dataset or input token length. We first note that the proposed guidance is built upon the pre-trained vector quantizer and MGMs. However, we cannot find publicly available vector quantizers and MGMs trained on ImageNet 128 resolution. We tried to train both VQGAN and MaskGIT, which have to be trained sequentially due to the dependency. However, within the limited time frame and computational budget, we are unable to generate reliable results from MaskGIT, generating output with very low visual quality, which shows an inception score of around 20.
Though the VQGAN and MaskGIT pretraining is not converged, we measure the performance improvement by attaching the proposed guidance to the MaskGIT with ImageNet 128 resolution. We measure the IS (larger is better) and FID (lower is better) with a sampling temperature of 2.0. The IS increases from 21.2 to 29.5, and FID decreases from 44.0 to 33.6. Although the hyperparameter search is limited, the FID improvement is significant (23%) compared to the improvement in 256 (50%) and 512 (36%) resolutions. We expect that the performance improvement at lower resolutions will be similar if a well-pretrained quantizer and MaskGIT are utilized and sampling hyperparameters are suitable.
$\\textbf{More intuition for auxiliary task and visualization.}$
Thank you for your recommendation, which helps us better illustrate the proposed auxiliary task.
We thought it would be beneficial for all reviewers to share deeper insights into the underlying intuition and the visual results, so we provided the review in a global response. Thank you once again for your valuable recommendation!
$\\textbf{Choice of the sampling temperatures.}$
In Fig. 5 of the main manuscript, utilized sampling temperatures are listed below.
- 256: [3, 4, 5.5, 15, 20, 23, 25, 30, 40, 60, 80]
- 512: [6, 10, 15, 20, 25, 30, 35, 40, 45, 50, 70, 90]
The sampling process of MGMs is known to be sensitive to the choice of various sampling hyperparameters [1]. Sampling temperature plays a crucial role in resolving the multi-modality problem, as explained in section 2. Thus, moderate sampling temperature can increase the sample quality and diversity, simultaneously. After some point, when the temperature increases, diversity increases while quality deteriorates. This is because the large sampling temperature gives strong randomness for re-masking of MGMs ($p(m_{t−1}|m_t, \\hat{x}_{0,t}$) in line 99), potentially masking relatively accurate tokens while leaving unrealistic tokens unmasked. As a result, the sampling of MGMs is sensitive to the sampling hyperparameters. We believe that our extensive ablation experiment can provide valuable insights for the community.
$\\textbf{Impact of the strong guidance and high sampling temperature.}$
In Fig. 11 of the submitted PDF, we compare the samples with our default config, with strong guidance (s=5) and high sampling temperature ($\\tau$=50). We found that the strong guidance often leads to undesirably highlighted fine-scale details (left column in Fig. 11 b) or saturated colors (right column in Fig. 11 b), similar to the large scale of CFG. High temperatures often lead to the collapse of the overall structure.
As an extension of Fig. 5 a of the main manuscript, we further plot the FID and IS curves using larger guidance scales in Fig. 6 d. Using a large guidance scale decreases fidelity and diversity due to the phenomenon we mentioned above. We will add the analysis and figure in the final manuscript for a thorough investigation and to give further insight into the proposed guidance.
$\\textbf{Further clarification.}$
The $\\mathbf{m}\_{i}$ in eq. 7 denotes whether the $i$th input token $z\_t$ is masked or not. $\\mathbf{m}\_i=0$ if the $i$th input token is masked and $\\mathbf{m}\_i=1$ if not.
Since the masked index of $z\_t$ and $x\_t$ are identical, the $\\mathbf{m}\_{i}$ in eq. 7 equals to the $\\mathbf{m}\_i$ in eq. 1. We will clarify this in the final manuscript.
---
[1] Chang, Huiwen, et al. "Muse: Text-to-image generation via masked generative transformers." arXiv preprint arXiv:2301.00704 (2023).
---
Rebuttal Comment 1.1:
Title: thank you
Comment: Thank you to the authors for their detailed response and the valuable insights provided on the intuition behind the proposed method. Based on this additional information, I have decided to increase my rating to 7: Accept. | Summary: This paper explores the use of masked generative models for image generation. While these methods are typically efficient, they sometimes fall short in terms of quality compared to diffusion models. To reduce this gap, this paper proposes a self-guidance algorithm, which is further improved with semantic smoothing. These ideas are shown to be effective for improved image generation using MGMs. The method is compared to a wide variety of baselines on class-conditioned generation, on common metrics, and against a set of baselines.
Strengths: - This paper tackles an important problem in image generation, which is the use of Masked Generative Models for efficient image generations. This family of models have shown to be more efficient than diffusion models, but have so far struggled to catch-up with the quality they can achieve. This paper improves the quality of these models, to make them competitive in quality while preserving their benefits in terms of efficiency.
- At the core fo the method lies a self-guidance method, which I believe is a sound and well motivated idea, which can be potentially included in generative models for other applications.
- The ideas in the paper are simple in a good way, and they are not particularly tied to a specific application. I believe they could be generalizable to many other problems.
- I believe the related work is well analysed, as far as I am familiar with previous work on this topic.
- Evaluation is extensive, many different hyperparameters are evaluated with respect to their impact on generation quality.
Weaknesses: - The paper is at times hard to follow, lacking clarity in some sections (particularly 5). Images and figures are sometimes small and hard to see.
- I do not believe implementation details are enough for reproducibility. Code will not be provided, which will make this paper hard to reproduce and it will reduce its potential impact and room for future comparisons.
- Given that a particular benefit of the method of this paper is its computational efficiency in comparison to diffusion-based approaches, I believe a detailed analysis of these should be included.
- I believe the paper would benefit from showing results on text-to-image generation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How would this method generalize to text-to-image methods or image-to-image translation problems?
- How tied is this method to VQGAN? Could other encoders be used? If so, what would be their impact?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The paper is fine in this regard. It could, however, provide more details on how the limitations could be addressed in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address your concerns below. All visual results can be found in the submitted PDF. Please zoom in on the figure in PDF for the best view.
**Clarifications of implementation details, figures, and code release.**
Thank you for your suggestion to make our research clearer and more reliable. We will clarify and magnify the figures in the final manuscript. We will add more visual results for a clear comparison of the proposed method to the base MaskGIT model, as illustrated in Fig. 6 of the submitted PDF. Please refer to the Global Response for a detailed description.
The proposed method consists of two parts: MaskGIT and TOAST. Since we have not changed the MaskGIT architecture, we briefly explain the implementation details of TOAST below.
The TOAST modules consist of three parts: token selection module, channel selection module, and linear feed-forward networks.
- (i) The token selection module selects the task or class-relevant tokens by measuring the similarity with the learnable anchor vector $\\xi_c$. We generate class conditional anchor $\\xi_c$ with simple class conditional MLPs.
- (ii) The channel selection is applied with learnable linear transformation matrix P. Then, the output of the token & channel selection module is calculated via $z_i = P \\cdot sim(z_i,\xi_c) \\cdot z_i $, where $z_i$ denotes the $i$th input token.
- (iii) After the feature selection, the output is processed with $L$ layer MLP layers, where $L$ is equal to the number of Transformer’s layers. The output of the $l$th layer of MLP blocks is added to the value matrix of the attention block in $(L-l)$th Transformer layer (top-down attention steering). Following the previous work [1], we add variational loss to regularize the top-down feedback path. A more detailed process and theoretical background can be found in [1].
Most of the implementation can be found in the official TOAST repository. Since our work is also based on the public repository, as mentioned in the main manuscript, we believe that the proposed guidance can be easily implemented and applied in various code bases. We will add the implementation details in the final manuscript for understandability and reproducibility. We also plan to release the source code as soon as we complete the documentation.
**Measuring the computational efficiency.**
We measure the computational efficiency by calculating the sampling time on A6000, where the batch size is set to 50. In Fig. 9 of the submitted PDF, we plot sampling time with FID/IS values compared to diffusion-based methods and various MGMs. Note that we exclude Token-Critic and DPC since the code is not publicly available. Though each model has slightly different implementation details for the architecture, an order of magnitude smaller NFEs of MGMs (see Table 1 of the main manuscript) lead to a significantly efficient sampling process compared to diffusion-based methods. With the proposed self-guidance, the proposed method surpasses the diffusion-based approaches and MaskGIT in both sampling efficiency and performance.
**Generalization to text-to-image method or image-to-image translation.**
The proposed method can be generally adopted for various tasks of MGMs. We briefly explain how the proposed method can be extended to various tasks.
- Text-to-Image generation (T2I): Because the proposed guidance can be generally defined with text condition, the proposed method can be easily extended to the T2I generation by replacing the class conditional module in TOAST with the text conditional module. Except for the architecture, the training and sampling strategies illustrated in Fig. 2 of the main manuscript can be adopted without modification for text embedding.
- Image-to-Image generation (I2I): For Image-to-Image generation, such as style transfer, the proposed method can be adopted to further enhance the sample quality. For instance, MaskSketch [2] utilizes pretrained MaskGIT to generate images from the sketch input. The proposed method fine-tuned on the target dataset can be additionally attached for the enhanced sample quality.
**Choice of the quantizer.**
Similar to continuous diffusion, the (discrete) latent space brings several strengths, such as semantic richness and computational efficiency. As a result, modern MGMs and various discrete domain generative image models mostly utilize vector-quantize-based encoder-decoders such as VQGAN and VQVAE. In this regime, our work is built upon the MGMs, which operate on VQ space. The latent space of various VQ-based encoders shows similar characteristics, such as rich semantic information, and lacks continuous semantic structure in contrast to pixel space. Furthermore, the proposed guidance and the auxiliary task can be generally defined in any discrete space. As a result, we expect that the proposed guidance will not be tied to the specific quantizer architecture and that the performance improvement will be similar.
---
[1] Shi, Baifeng, et al. "TOAST: Transfer Learning via Top-Down Attention Steering."
[2] Bashkirova, Dina, et al. "Masksketch: Unpaired structure-guided masked image generation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. | Summary: The paper focuses on enhancing Masked Generative Models (MGMs) for image synthesis.
The authors identified several reasons for the underperformance of MGMs: 1) lack of sequential dependencies, 2) multi-modality problem, 3) compounding decoding errors, 4) limitations of low-temperature sampling and 5) ineffective guidance techniques.
To address these issues, the authors present novel techniques and methodologies to improve their performance.
The main contributions of the paper are as follows:
1) Generalized Guidance Formulation.
2) Self-Guidance Sampling Method.
3) Parameter-Efficient Fine-Tuning.
Strengths: - Originality
The paper presents a highly original approach by extending guidance methods from continuous diffusion models to MGMs. In particular, the use of self-guidance sampling method and an auxiliary task for semantic smoothing in the VQ token space are novel approach that address specific challenges in MGMs.
- Quality
The paper demonstrates a high level of technical quality. The authors have thoroughly explained why their methods can enhance performance and have supported their claims with extensive experiments and ablation studies on various variables.
- Clarity
The paper is clearly written and well-structured. The authors provide detailed context relative to prior work, facilitating a clear understanding of the background and contributions of the proposed approach. The detailed explanations of the equations and methodologies, accompanied by ample examples, significantly enhance the overall readability and accessibility of the paper.
- Significance
The contributions of the paper are significant for the advancement of the MGMs field. By addressing key challenges in MGMs, the proposed methods offer substantial improvements in the quality and diversity of generated samples.
Weaknesses: Particularly due to inadequate visual results, some aspects are difficult to comprehend. Section 3.3 asserts that TOAST is the most suitable model for the study's task, but lacks sufficient explanation on why TOAST is the only suitable model. It would be more convincing if the paper showed some visual results from using other models, which would provide sufficient justification.
Explanations for the choice of hyperparameters are insufficient. While the paper demonstrates trade-off relationships in hyperparameter settings across some datasets in removal experiments, the sampling time steps do not exhibit such trade-offs. It is crucial to provide adequate explanations for why these settings differ from those maintained for the ImageNet 512×512 dataset.
Technical Quality: 3
Clarity: 3
Questions for Authors: Most hyperparameters in Figure 5 show a trade-off relationship between FID and IS scores. However, increasing the number of sampling time steps improves performance in both metrics. So, could you explain why sampling time steps were set to 12 and 18 in each experiment? Even though other methods are known to use hundreds of sampling time steps, 50 seems quite small in comparison.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not fully address the limitations of the paper but did mention the need for attention to AI ethics due to the rapid growth of generative models, touching on the societal impact. To improve the paper, we suggest the following:
1) Although ImageNet is a comprehensive dataset, it is insufficient to prove generalizability. Therefore, conducting experiments on other datasets used in existing MGMs, such as MSCOCO or Conceptual Captions, will make the paper more complete.
2) Include more visual results. To clarify the reasons for selecting specific models or hyperparameters used in the paper, it is much more effective to show them through visual results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comprehensive review and for identifying the ambiguous aspects of our experimental designs. We address your concerns and questions below. All visual results can be found in the submitted PDF. Please zoom in on the figure of the submitted PDF for the best view.
**Include more visual results for clarification.**
To clearly show the improvement with the proposed guidance compared to MaskGIT, we add more visual results in Fig. 6 of the submitted PDF. Please refer to Global Response for a detailed description. We also visualize the impact of the different sampling hyperparameters in Fig. 11 of the submitted PDF. The high guidance scale may deteriorate the sample quality by overly enhancing fine details, and high sampling temperature often leads the overall structure to collapse.
**Using another model for the auxiliary task.**
Although we argued that TOAST could be efficiently and naturally adopted to learn auxiliary tasks, other PEFT methods can also be adapted to generate guidance similarly. Before validating, we note that the benefits of using TOAST are twofold:
1. MGMs exhibit training bias of only predicting the [MASK] tokens while treating other input tokens as ground truth. As a result, MGMs tend not to correct errors in the input tokens. We resolve this by simply masking all tokens of the 2nd stage of TOAST prediction.
2. Since the output of the 1st stage of TOAST is directly reused to sample $p_\theta(\hat x_{0,t})$, it brings more efficiency for sampling (see Fig. 2 of the main manuscript).
We experiment with the prompt tuning-based approach with the proposed auxiliary loss. The prompt tuning shows reasonable performance for transfer learning of MaskGIT [1]. The prompt token length is set to 128 following [1], and the batch size is reduced to 128 due to the memory capacity. We first forward the $z_t$ with prompt tokens to obtain ${\mathcal{H}}(z_t)$ in eq.7. Then, we forward the MaskGIT with the ${\\mathcal{H}}(z_t)$ to calculate the ${\\mathcal{L}}_{aux}$ in eq.7. To train the model, we directly input the ${\\mathcal{H}}(z_t)$ after the embedding layer of MaskGIT.
We set the guidance scale to 0.2 since we manually found the result guidance was too strong. The result in the table below and Fig. 8 of the submitted PDF show that the improvement is marginal compared to ours. We suspect that the naïve prompt tuning does not suitably deal with the error correction due to the bias of MGMs (see loss curve in Fig. 8). Furthermore, it requires 3 models forward for each sampling step.
With the above discussion and experiments, as well as the discussion in the main manuscript, we verify that the TOAST is a more suitable approach compared to the other PEFT methods in terms of performance and efficiency. We expect that a more meticulously designed model for the proposed guidance can further improve efficiency and performance and leave it for future work.
|| NFEs | FID$\\downarrow$|IS$\uparrow$|
|----------|----------|----------|----------|
|MaskGIT|18|6.56|203.6|
|Prompt tuning w/ ${\\mathcal{L}}_{aux}$|18$\\times$3|5.78|209.8|
|Ours (TOAST)|18$\\times$2|$\\textbf{3.22}$|$\\textbf{263.9}$|
**Explanations for the choice of hyperparameter (sampling step).**
In our preliminary research on the impact of sampling steps ($T$), we mainly investigated $T$ around $\sim$18, which is commonly adopted by prior works. We further explore the larger $T$ in Fig. 10 of the submitted PDF using $T$=24 and 36 with sampling temperature $\\tau$=35 and 60, respectively. The results indicate that using $T$, larger than 18, does not ensure a performance increase, as the metrics either saturate or deteriorate despite higher computational costs.
Similar to findings in MaskGIT, where the optimal performance-efficiency trade-off is observed around $T$=8, our method shows a "sweet spot" around $T$=18. Up to this point, both quality and diversity increase as sampling timesteps and computational costs increase while outperforming MaskGIT with similar NFEs. This demonstrates that the proposed method shows more scalable and efficient results compared to the MaskGIT. We also found a similar phenomenon for experiments on ImageNet 512x512, and the optimal timestep is around 12~18 steps. In this context, we opt for the sampling timestep of 12 and 18 for the results. We will add the above experimental results and our analysis in the final manuscript to give a thorough insight into choosing the sampling steps.
**Generalization to more complex datasets.**
Thank you for your valuable comments. Since the suggested datasets are primarily utilized for text-to-image (T2I) generation tasks, pretrained VQGAN and MaskGIT are not publicly available.
Instead, the T2I MGMs, MUSE [2], can be utilized to demonstrate the generalization. Though the main concern of the paper is to improve the generative capabilities of MGMs in class conditional image generation, the proposed guidance and utilized fine-tuning architecture can be extended to T2I generation. Given that the only difference is the replacement of the class condition with a text condition, we can simply substitute the class conditional module in TOAST with a text conditional module while maintaining other fine-tuning and sampling strategies. (Please refer to the response for Lc5P for a detailed TOAST implementation)
However, within the limited time frame, it is challenging for us to find reliable code (since the official MUSE is not publicly available), preprocess the large datasets, and ablate the generalizability of the proposed guidance. Therefore, we leave the generalization performance on T2I as future work.
---
[1] Sohn, Kihyuk, et al. "Visual prompt tuning for generative transfer learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Chang, Huiwen, et al. "Muse: Text-to-image generation via masked generative transformers." arXiv preprint arXiv:2301.00704 (2023). | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers and Area Chairs,
We thank the reviewers for their constructive feedback. We are glad to take various helpful reviewer comments to clarify and complete our work. Reviewers agreed on the originality, motivation, soundness, and significance of the paper. At the same time, reviewers are concerned about the broader applicability of the proposed guidance, the need for enhanced clarity in our visual representations and implementation descriptions, and the effect of a more diverse set of hyperparameters.
Here, we first deal with the most common concerns and questions from reviewers. Throughout all responses, we refer to the originally submitted paper as a $\textbf{main manuscript}$ and the submitted PDF file in the rebuttal period as a $\textbf{submitted PDF}$. Due to the page limit, figures and captions in the submitted PDF could be small. Please zoom in on the figures for the best view.
**More clarified visual results. (qSSj, Lc5P, cCTZ)**
All reviewers suggested providing more visual results to better clarify, demonstrate, and strengthen the effectiveness of the proposed guidance. In response, we have sampled visual results from ImageNet at 512x512 and 256x256 resolutions, which are shown in Fig. 6 of the submitted PDF. We generate paired samples from MaskGIT and the proposed method by fixing all random seeds. Qualitative results show that the proposed guidance enhances fine details, generating samples with high fidelity. Detailed facial attributes or local structures are generated accurately with the proposed guidance, whereas MaskGIT fails to generate plausible images.
**More intuition for the auxiliary tasks and visualization of the impact. (cCTZ)**
Reviewer cCTZ has raised a concern that further intuition or visualization for the auxiliary task is required to clear up the claim. We are glad to take helpful comments to clarify our work. Though it is not a common concern, we respond here to share more intuition and insights about the proposed guidance with the reviewers.
To learn the auxiliary task, we randomly replace a proportion of input tokens as error tokens, which act as semantic outliers of input tokens. To capture the semantic outlier, we argue that the model implicitly learns to smooth the vicinities of the data $z_t$ by leveraging coarse information from the surrounding context while minimizing the fine-scale details. For a simple example of detecting numerical outliers in a 1-dimensional signal, a straightforward way to correct outliers, such as impulse noise, is by measuring the distance to the local mean and correcting by smoothing the signal, such as low-pass filters. Similarly, to correct the unknown error tokens, which are semantic outliers, the model implicitly learns to smooth input $z_t$ to deal with unknown error tokens.
To demonstrate the above process, we visualize the effect of guidance using guidance scales 5 and -5. Given that the output guidance logit is semantically smoothed output, by its definition, the positive guidance means that it guides the sampling process toward enhancing fine details. Contrarily, the negative guidance scale means that it guides the sampling process toward reducing the fine details. More specifically, with the auxiliary task, the guidance logit includes semantically smoothed information. Thus, samples with a negative guidance scale result in reduced semantic information, such as fine details or local patterns. We visualize the effect of the positive and negative guidance scale in Fig. 7 of the submitted PDF.
We mask 80% of the input VQ tokens and visualize the one-step prediction outputs (b) without guidance, (c) with a positive guidance scale, and (d) with a negative guidance scale. The visualization shows that the positive guidance enhances the semantic fine details, while the negative guidance reduces the semantically fine details, such as patterns in feather and facial attributes. This demonstrates that the output logit trained with the auxiliary task can effectively capture the semantically smoothed logit, and the result guidance helps to improve the sample quality by enhancing fine details.
**Numerical error (typo) in the paper**
We apologize for the error in the main manuscript. In Table 2 of the main manuscript, we found that the FID and IS values for our method should be corrected from 3.04 and 240.8 to 3.22 and 263.9, respectively, following the values in the main table (Table 1). We note that it does not impact the discussion and conclusions of our study since it still outperforms other ablation results. We appreciate your understanding!
Pdf: /pdf/a23e3af1a68b2ee87674369f66fb6485a427dd51.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Does Egalitarian Fairness Lead to Instability? The Fairness Bounds in Stable Federated Learning Under Altruistic Behaviors | Accept (poster) | Summary: The paper builds on a previous model of FL to study trade-offs between core stability and egalitarian fairness when agents exhibit altruistic behaviors. The authors provide egalitarian fairness bounds under core stability solutions across different cases of altruism. There is also a small experimental part where the authors verify their theoretical results over heterogeneous altruistic behaviors as well.
Strengths: - The idea of introducing the altruistic behavior of the agents in an FL setting is very interesting and well-motivated.
- The modeling of the altruistic behaviors is very clear and reasonable.
Weaknesses: - I am not sure if the authors adequately address their intended question: Does Egalitarian Fairness Lead to Instability? As I read the text, I expected the authors to design egalitarian fair solutions and assess their stability. However, they focus on explaining how high levels of egalitarian fairness are expected under core stability.
- In my view, the most important results are in Section 5.2. However, I am not sure how significant and interesting these findings are. It is clear that there is a trade-off between core stability and egalitarian welfare since they can be seen as two different fairness constraints with quite different goals. So, it is quite expected that we cannot have perfect egalitarian fairness when the goal is to ensure that the grand coalition is stable. Moreover, regarding the bounds of lambda, which are these long expressions, I am quite unsure how intuitive and helpful they can be.
- The findings in Section 5.1 are not surprising at all. When the utility function of the agents changes, it is expected that a core stable solution will change, and different egalitarian welfare levels will be achieved under different solutions.
- The experiments are conducted with an extremely small number of agents, namely 4. I would expect experiments with at least, say, 15 agents for the results to be more interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What do we learn from the experiments? What is the take away message?
- Are the bounds that you provide in Section 5.2 tight?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: No limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for finding our work interesting and well-motivated. We also appreciate the detailed comments posed by the reviewer. Please find below the point-to-point responses to the reviewer's comments.
> **W1 (Question addressing):**
We'd like to clarify how the obtained egalitarian fairness bounds address our original question: *Does Egalitarian Fairness Lead to Instability?*
- We derives the lower bounds of achievable egalitarian fairness under different client behaviors for an FL system to maintain core stability. These bounds clarify that **"egalitarian fairness leads to instability" happens only when a model trainer prioritizes fairness below the derived fairness bounds**, answering the original question with **tight theoretical support**.
We develop egalitarian fairness bounds for stable FL systems **instead of** designing egalitarian fair solutions and assessing their stability for:
- Rationality: Core stability is a prerequisite for the healthy operation of FL systems. As a result, we investigate the egalitarian fairness bounds under core stability.
- Tightness:Designing egalitarian fair solutions and assessing their stability does not provide a tight solution about egalitarian fairness bound. As $\lambda$ is a continuous variable, it does not yield a tight solution to adjust the achieved egalitarian fairness $\lambda$ and monitor when stability is compromised.
> **W2 (Findings in Section 5.2):**
Thank you for your comment. The **significance** of Section 5.2 lies in offering **tightly theoretical support** to clarify the misconception that "egalitarian fairness leads to instability in FL" in previous work **rather than relying on intuition**.
Specifically, Section 5.2 theoretically analyzes the achievable egalitarian fairness bounds under **various client behavior scenarios**, providing insights into **establishing appropriate egalitarian fairness** in fair FLs. Additionally, we discuss how to extend the calculation of fairness bounds to accommodate clients with **heterogeneous behaviors** within the coalition.
Under behaviors such as equal altruistic or friendly altruistic, the complexity of the egalitarian fairness bounds increases because each individual must consider multi-related clients, which introduces additional variables into the calculations.
> **W3 (Findings in Section 5.1):**
Thank you for your comment. We provide Section 5.1 to **highlight** that clients' altruistic behaviors and friendship relationships affect optimal egalitarian fairness in a core-stable coalition and **empirically refute** the misconception that "egalitarian fairness leads to instability."
> **W4 (More clients):**
Following your suggestion, we add experiments on more clients. The results below verify that the egalitarian fairness bounds in a stable FL system are influenced by varying client behaviors and friendship networks and verify that the calculated egalitarian fairness bounds are closely aligned with the actual results (minor deviations exist due to discretization errors from adjustments in the fairness constraint parameter, $p$).
**Tab. 1 Calculated egalitarian fairness bound and Actual achieved egalitarian fairness under fully-connected**
| | Selfish | Purely Welfare Altruism | Friendly Welfare Altruism | Purely Equal Altruism | Friendly Equal Altruism | 6 Selfish + others Friendly Equal Altruism |
|--------------------------------------|---------|--------------------------|----------------------------|------------------------|--------------------------|-----------------------------------------|
| **10 Clients [10,20,30,40,50,60,70,80,90,100]** |
| Calculated egalitarian fairness bound | 1.27 | 1.02 | 1.04 | 1.16 | 1.17 | 1.27 |
| Actual achieved egalitarian fairness | 1.27 | 1.03 | 1.05 | 1.16 | 1.17 | 1.27 |
| **15 clients [10,10,30,30,40,40,50,50,60,60,70,80,90,100,100]** |
| Calculated egalitarian fairness bound | 1.18 | 0.99 | 1.04 | 1.10 | 1.12 | 1.13 |
| Actual achieved egalitarian fairness | 1.18 | 1.00 | 1.04 | 1.10 | 1.13 | 1.14 |
**Tab. 2 Calculated egalitarian fairness bound and Actual achieved egalitarian fairness under partially-connected**
| | Selfish | Purely Welfare Altruism | Friendly Welfare Altruism | Purely Equal Altruism | Friendly Equal Altruism | 6 Selfish + others Friendly Equal Altruism |
|--------------------------------------|---------|--------------------------|----------------------------|------------------------|--------------------------|-----------------------------------------|
| **0 - 2 - 4 - 6 - 8 - 0, 1 - 3 - 5 -7 - 9 - 1** |
| Calculated egalitarian fairness bound | 1.27 | 1.08 | 1.07 | 1.20 | 1.18 | 1.27 |
| Actual achieved egalitarian fairness | 1.27 | 1.09 | 1.07 | 1.20 | 1.19 | 1.27 |
| **0 - 2 - 4 - 6 - 8 - 10 - 12 - 14 - 0, 1 - 3 - 5 -7 -9 -11 - 13 - 1** |
| Calculated egalitarian fairness bound | 1.18 | 1.04 | 1.04 | 1.13 | 1.12 | 1.18 |
| Actual achieved egalitarian fairness | 1.18 | 1.04 | 1.05 | 1.13 | 1.12 | 1.18 |
>**Q1 (Experiments):**
Thank you for your question. Our experiments highlight two key findings:
- The theoretically derived achievable egalitarian fairness bounds align closely with the fairness achieved in a stable FL system, confirming their **tightness**.
- The **adaptability** of these bounds is further validated by Figure 5, which compares the theoretical and actual egalitarian fairness in an additional linear regression task.
>**Q2 (Tightness):**
Thank you for your question. The theoretical bounds are **tight**, which are rigorously derived under the framework set by Lemma 1 and adhere to the conditions for core stability specified in Lemma 2 to 4.
---
Rebuttal Comment 1.1:
Title: No further questions
Comment: I would like to thank the authors for their answer and the clarifications. I do not have any further questions at this point. I increase my score based on the new experiments and the further explanation regarding the scope of the paper. | Summary: The authors explored the impact of egalitarian fairness on the stability of Federated Learning (FL) systems. FL allows multiple clients to train a global model collaboratively without sharing their local data, thus preserving privacy. The authors addressed the concern that achieving egalitarian fairness, which aims to ensure uniform model performance across clients, might destabilize the FL system by prompting data-rich clients to leave. The authors modeled FL systems as altruism coalition formation games (ACFGs) and propose theoretical bounds for egalitarian fairness that maintain core stability under various types of altruistic behaviors. They disproved the notion that egalitarian fairness necessarily leads to instability and provided experimental validation of their theoretical bounds.
Strengths: 1. The authors presented a novel perspective by integrating concepts from game theory and social choice theory to analyze the stability of FL systems under the influence of egalitarian fairness.
2. The theoretical contributions are robust and well-supported by rigorous mathematical proofs. The proposed fairness bounds provide valuable insights into maintaining stability in FL systems.
3. The paper is well-organized, with clear definitions and logical progression of ideas. The use of examples and detailed explanations helps in understanding the complex concepts.
Weaknesses: 1. The experiments conducted to validate the theoretical bounds are limited to relatively simple tasks, such as mean estimation and linear regression with a fixed number of clients. While these experiments demonstrate alignment with the theoretical results, extending the validation to more complex and diverse FL scenarios would strengthen the empirical evidence.
2. The sensitivity of the proposed fairness bounds to variations in client behavior and network topology is not fully explored. While the authors provided insights into different types of altruistic behaviors and their impact on stability, a more detailed analysis of how changes in client behavior or the structure of the friends-relationship network affect the theoretical bounds would add depth to the findings.
3. The use of clients' and client's is not rigorous enough (such as Line 22 and Line 128) and needs to be carefully checked.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do the proposed fairness bounds perform in more complex FL scenarios?
2. How sensitive are the theoretical bounds to variations in client behavior and network topology? Are there specific conditions under which the bounds might fail to ensure stability?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and potential negative societal impacts of their work in Section 7. They have acknowledged the influence of clients' altruistic behaviors and the configuration of the friend-relationship network on the stability and fairness within federated learning (FL) coalitions. They also identified potential limitations such as the impact of proportional fairness and the necessity to design suitable incentive mechanisms to retain clients when high egalitarian fairness is mandatory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted that the reviewer found our work novel and valuable, with robust theoretical support.
Thank you for your positive opinions and insightful comments.
> **W1 & Q1 (More complex and diverse FL scenarios):**
Thank you for your comment. Following your suggestion, we first add more experiments on more complex scenarios, as in Tab.1 and Tab.2. The results confirm that the calculated egalitarian fairness bounds are closely aligned with the actual achieved fairness values (minor deviations due to discretization errors from adjustments in the fairness constraint parameter, $p$).
**Tab. 1 Calculated egalitarian fairness bound and Actual achieved egalitarian fairness under fully-connected**
| | Selfish | Purely Welfare Altruism | Friendly Welfare Altruism | Purely Equal Altruism | Friendly Equal Altruism | 6 Selfish + others Friendly Equal Altruism |
|--------------------------------------|---------|--------------------------|----------------------------|------------------------|--------------------------|-----------------------------------------|
| **10 Clients [10,20,30,40,50,60,70,80,90,100]** |
| Calculated egalitarian fairness bound | 1.27 | 1.02 | 1.04 | 1.16 | 1.17 | 1.27 |
| Actual achieved egalitarian fairness | 1.27 | 1.03 | 1.05 | 1.16 | 1.17 | 1.27 |
| **15 clients [10,10,30,30,40,40,50,50,60,60,70,80,90,100,100]** |
| Calculated egalitarian fairness bound | 1.18 | 0.99 | 1.04 | 1.10 | 1.12 | 1.13 |
| Actual achieved egalitarian fairness | 1.18 | 1.00 | 1.04 | 1.10 | 1.13 | 1.14 |
**Tab. 2 Calculated egalitarian fairness bound and Actual achieved egalitarian fairness under partially-connected**
| | Selfish | Purely Welfare Altruism | Friendly Welfare Altruism | Purely Equal Altruism | Friendly Equal Altruism | 6 Selfish + others Friendly Equal Altruism |
|--------------------------------------|---------|--------------------------|----------------------------|------------------------|--------------------------|-----------------------------------------|
| **0 - 2 - 4 - 6 - 8 - 0, 1 - 3 - 5 -7 - 9 - 1** |
| Calculated egalitarian fairness bound | 1.27 | 1.08 | 1.07 | 1.20 | 1.18 | 1.27 |
| Actual achieved egalitarian fairness | 1.27 | 1.09 | 1.07 | 1.20 | 1.19 | 1.27 |
| **0 - 2 - 4 - 6 - 8 - 10 - 12 - 14 - 0, 1 - 3 - 5 -7 -9 -11 - 13 - 1** |
| Calculated egalitarian fairness bound | 1.18 | 1.04 | 1.04 | 1.13 | 1.12 | 1.18 |
| Actual achieved egalitarian fairness | 1.18 | 1.04 | 1.05 | 1.13 | 1.12 | 1.18 |
We'd also like to clarify why we currently use mean estimation and linear regression for our experiments: the error in these tasks is exact and can be expressed in closed form, which allows us to do the following **tight** theoretical analysis. For more complex tasks, such as neural network training, the error is uncertain and influenced by model parameters, training methods, and specific client data distributions. However, the suggested point is well taken: We acknowledge that our paper focuses on FL tasks with errors exactly enough to facilitate rigorous theoretical analysis. We explained this point in Section 3, **Lines 131-133**, and gave this limitation in **Lines 349-351**. We would like to revise the paper, particularly the abstract and introduction, to further clarify that "we use the **stylized** model to generate the following **insights** in fairness."
>**W2 & Q2 (Sensitivity):**
Thank you for your comment. Based on Eq. (3) to (7), we'd like to analyze how the theoretical bounds are sensitive to client behavior and network topology. Taking the fairness bound under purely welfare altruistic behaviors in Eq. (4) as an example, the set of friends $F_i$ for each client $i$ changes with different network topologies, resulting the variance on parameters **$f^{opt} _{\pi_s,1}$, $f^{opt} _{\pi_s,2}$** (variables that identify the client with the smallest data volume within the network topologies of each sub-coalition), thereby leading to different theoretical bounds.
Following the same settings as the mean estimation task in Section 6, we constructed three types of network topologies:
- T1: fully connected;
- T2: 0 (20), 1 (40) - 2 (50) - 3 (100) - 1 (40) , where only clients 1, 2, and 3 are connected;
- T3: 0 (20), 1 (40), 2 (50) - 3 (100), where only clients 2 and 3 are connected.
According to our analysis, the values of $f^{opt} _{\pi_s,1}$, $f^{opt} _{\pi_s,2}$ will gradually increase from T1 to T3, leading to a decrease in the denominator value in the theoretical bounds. Consequently, the theoretical bounds are expected to increase.
The experimental results, as shown in the table below, are **consistent** with our expectations. The analysis and experiments demonstrate that the theoretical bound calculation is **effective** under varying client behavior and network topology.
| | Purely Welfare Altruism | Friendly Welfare Altruism | Purely Equal Altruism | Friendly Equal Altruism |
|---------|--------------------------|----------------------------|------------------------|--------------------------|
| **fully-connected** |
| Calculated egalitarian fairness bound | 1.08| 0.98| 1.30| 1.23|
| Actual achieved egalitarian fairness | 1.08 |0.98| 1.31| 1.24|
| **0,1-2-3-1** |
| Calculated egalitarian fairness bound | 1.12 | 1.16| 1.40| 1.32|
| Actual achieved egalitarian fairness | 1.12 | 1.16| 1.41| 1.33|
| **0, 1 , 2-3** |
| Calculated egalitarian fairness bound | 1.91 ↑ | 1.29 ↑ | 1.91 ↑| 1.43 ↑|
| Actual achieved egalitarian fairness | 1.91 ↑ | 1.30 ↑| 1.91 ↑| 1.44 ↑|
> **W3 (Check the use of "clients'" and "client's"):**
Thank you for your comment. We have checked the use of 'clients'' and 'client's' throughout the paper and verified their logical and grammatical correctness. | Summary: This paper examines the relationship between egalitarian fairness concepts and stability in federated learning, where multiple clients collaboratively train a shared model while retaining local data privacy. Egalitarian fairness promotes uniform model performance across clients, but this can reduce performance for data-rich clients, potentially causing instability, as these clients might leave for better-performing coalitions. This work employs cooperative game theory and social choice to frame FL systems as altruism coalition formation games, suggesting that core instability issues are linked to clients' altruism and their network of relationships. The authors propose optimal egalitarian fairness bounds that maintain core stability under various altruistic behaviors, suggesting that egalitarian fairness does not necessarily lead to instability.
Strengths: The main strength of this paper is its comprehensive approach to analyzing federated learning under both selfish and altruistic behavior. By considering both the performance of machine learners and the game-theoretic aspects of how federated learners interact, the authors provide a well-rounded analysis of the relationship between egalitarian fairness and stability. The use of game theory and social choice theory to frame FL systems as altruism coalition formation games is particularly innovative, linking instability issues to clients' altruism and their network of relationships. The proposed optimal egalitarian fairness bounds that maintain core stability under various altruistic behaviors are a significant theoretical advancement, disproving the assumption that egalitarian fairness inevitably leads to instability. The technical correctness of the results, subject to the model assumed, further solidifies the paper's contribution to advancing the state of the art in this field. Experimental validation then convincingly supports these theoretical findings with empirical outcomes.
Weaknesses: I'm left with some questions about the motivations for the model, though the authors do a good job motivating it with prior work. I'm also left wondering whether some results and assumptions can be generalized, as the analysis is somewhat rigid; the paper would be much stronger if some results held in greater generality. I also question the model of FL used here (FedAvg), as it simply averages over model parameters (it would be nice to see models where clients submit gradient updates rather than complete models, and also where the central authority itself optimizes for egalitarian or other fairness objectives, rather than averaging). Finally, I argue the client utilities don't fully reflect the impact of overfitting.
Technical Quality: 4
Clarity: 3
Questions for Authors: Clearly the FedAvg rule for $\theta$ (line 126) promotes privacy in some sense, but with modest added noise, is the mechanism differentially private?
I would argue that definition 1 is more closely related to the demographic parity concept of equalized error [10], not egalitarian fairness. I would claim that in loss contexts, egalitarian welfare (or more aptly, malfare) is simply the max over average losses of agents [8], but inequality between agents is not antithetical to egalitarian fairness, rather egalitarianism is indifferent to the performance of high performing groups so long as low performing groups cannot be improved. I suppose this is more of a terminology issue, though I do ask if an additive version of this could be explored, where error values are compared to the maximum per group error.
Lemma 1 is quite confusing to me. $\mu_{e}$ depends on i, but notation does not reflect this. By expected value of the variance, you seem to mean expected value of the raw variance, but it's not clear why I care about that quantity? Is this assuming realizability, and therefore the square here corresponds to square loss? Is the method specific to square error though? Why not work directly in terms of loss? These inferences are somewhat confirmed in section 6, though I suspect a method is not really specific to square error, and I claim a more general description in terms of generic loss and loss variance would be easier to follow.
I also think this result does not properly consider overfitting, so I don't think it's very well motivated in a machine learning context. Loss variances should be model (parameter) dependent, so if this is in terms of minimum variance, I suspect it's a reasonable lower bound, and if maximum variance is considered (and multiplied by the log of the model class cardinality or some similar capacity measure), it may be a reasonable upper bound? It's also not totally clear to me which quantities refer to distributions and which refer to data sets in this definition. I would be interested to see a more general analysis, in terms of quantities like Rademacher averages that better characterize overfitting in machine learning [1,2,5]. Some discussion of how this applies to fair learning in particular would be appreciated, especially as works like [9] show that the process of optimizing fair ML objectives has a regularizing effect and can actually reduce overfitting.
In lemmas 2,3,&4, the requirement is that this holds for all i, correct? These are a bit tricky to read, because $\mu_{e}$ depends on i, although $\sigma^{2}$ does not, and the notation does not reflect this. Moreover, it seems unfortunate that we reach the same conclusion in all three cases, is this necessary (i.e., are there counterexamples to improvement in each?).
I think these results can only be true if client distributions are drawn IID. This should be clarified in lemma 1. This is a big assumption, since I might expect friends’ distributions to be correlated (which incentivizes them to deviate, harming core stability). But I also need to ask, is the variance between distributions the sample variance or the distribution variance? I think this matters a lot, since presumably there are relatively few agents, and if the sample variance is what matters, then why can't we condition on the sample (in which case independence wouldn’t matter)?
In section 4.1, I think it would be nice to generalize this idea to define a client's utility function as some welfare function of their own value and the value of their friends. I think this would simplify the presentation, but also you would be able to use more sophisticated concepts, like weighted power-means.
Moreover, since the same result holds for purely altruistic and friendly altruistic cases, this immediately implies that the client utility functions can be the utilitarian maximum social welfare (malfare) function [6,11], and some set of weighted utilitarian welfare (malfare) functions. I suspect it could easily generalize to any weighted utilitarian welfare function. I would also be very interested to see if these results held for any unweighted power-mean welfare function, any weighted power-mean welfare function, or any Gini malfare function [6], as they do in some sense lie between welfare altruism and equal altruism, though I don't think these results follow directly from your stated lemmas.
Minor points:
149 “satisfy” should be “satisfies.” Moreover this definition feels a bit redundant with (1).
158 It seems strange to mix a sociological definition of friend with a mathematical definition. Moreover, while I am not a sociologist, it seems wrong that “friend” would be the most intimate trusted voluntary category. If so, what is a best friend, and where does that leave a partner? But is this discussion even necessary?
Perhaps definitions 2 3 and 4 can be merged to describe the coalitional game?
163 “does not exist” to “does not exist a”
171 I dislike this terminology. I would describe both types of altruism as considering different types of the welfare among friends, but I would term them “egalitarian altruism” and “utilitarian altruism,” respectively [1].
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: FedAvg
I don't love the FedAvg aggregation rule of line 126 (though I appreciate that the authors analyze an established model). This seems appropriate when groups train relatively similar models on their own, and the lost surface is relatively smooth. I suspect this is a decent rule when each client has the same optimal model, but due to limited sampling wouldn't identify this from their own data.
The approach seems particularly problematic for non-convex loss surfaces, or models with symmetry (e.g., neural networks or mixture models where averaging the parameters of two identical models can produce a completely different model). I’d like to see more discussion of these limitations (or generalization of the model).
However, I would like to see some consideration of other models of FL, for instance when clients send gradient updates, rather than raw model parameters. From reading the introduction, I expected the FL to be trained to incentivize egalitarian fairness, but this simple averaging rule is neither utilitarian optimal, nor is it egalitarian optimal. Works like [3] consider sampling implications for this, and biased SGD analyses [4,7] also seem appropriate to discuss, at least in related work.
Overfitting
I would also like to see some consideration of overfitting. It seems tautological that nonaltruistic agents would prefer to train their own model when only training loss is considered, and while altruism may incentivize them to share their data, there is also the non-altruistic effect of reduced overfitting. It seems this would also disincentivize the formations of small coalitions, so I would be very interested to see what impact these factors have on the work.
References:
[1] An axiomatic theory of provably-fair welfare-centric machine learning
C Cousins
Advances in Neural Information Processing Systems 34, 16610-16621
[2] Revisiting fair-PAC learning and the axioms of cardinal welfare
C Cousins
International Conference on Artificial Intelligence and Statistics, 6422-6442
[3] Jacob D Abernethy, Pranjal Awasthi, Matthäus Kleindessner, Jamie Morgenstern, Chris Russell, and Jie Zhang. Active sampling for min-max fairness. In International
Conference on Machine Learning, volume 162, 2022.
[4] A guide through the zoo of biased SGD
Y Demidovich, G Malinovsky… - Advances in Neural …, 2024 - proceedings.neurips.cc,
[5] Uncertainty and the social planner’s problem: Why sample complexity matters
C Cousins
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …
[6] Algorithms and Analysis for Optimizing Robust Objectives in Fair Machine Learning
Cyrus Cousins
[7] Hu, Yifan, et al. "Biased stochastic first-order methods for conditional stochastic optimization and applications in meta learning." Advances in Neural Information Processing Systems 33 (2020): 2759-2770.
[8] John Rawls. A theory of justice. Harvard University Press, 1971.
[9] Cousins, Cyrus, I. Elizabeth Kumar, and Suresh Venkatasubramanian. "To Pool or Not To Pool: Analyzing the Regularizing Effects of Group-Fair Training on Shared Models." International Conference on Artificial Intelligence and Statistics. PMLR, 2024.
[10] Dwork, Cynthia, et al. "Fairness through awareness." Proceedings of the 3rd innovations in theoretical computer science conference. 2012.
[11] Deschamps, R., Gevers, L.: Leximin and utilitarian rules: A joint characterization. Journal of Economic Theory 17(2), 143–163 (1978)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted that the reviewer found our motivations and methods innovative and significant towards an important research question. Thank you for your positive opinions and insightful comments.
**Weakness:** Thank you for your thoughtful comments. We provided responses to the points in Q4 and L2 (Overfitting), Q7 and Q8 (Welfare function), L1 (FedAvg).
>**Q1 (Privacy):**
FedAvg can provide differential privacy with modest added noise, i.e., DP-FedAvg [1] adding Gaussian noise to the final averaged update to ensure privacy.
[1] McMahan, H. Brendan, et al. "Learning Differentially Private Recurrent Language Models." ICLR. 2018.
>**Q2 (Terminology):**
The distinction between egalitarian fairness and demographic parity lies in the focus on fairness, whether it's between individual clients or between protected groups defined by sensitive attributes. In FL, improving the model's performance on some clients inevitably reduces performance on others; thus, minimizing the maximum loss over an average of agents becomes a sufficient condition for equalized error among clients and vice versa. Therefore, we adopt equalized error among clients to quantify egalitarian fairness, consistent with [2].
[2] Kate Donahue and Jon Kleinberg. Fairness in model-sharing games. WWW '23. 2023.
>**Q3 (Variables in Lemma 1):**
The notation $\mu_e$ does not specify a particular $i$ because, according to Lemma 1, the mean value $\theta_i$ and standard deviation $\epsilon_i$ of each client's dataset $\mathcal D_i$ is i.i.d. sampled from $\Theta$, resulting in a **consistent** $\mu_e$ across clients. Additionally, the error in Eq. (1) is derived from $err_i=\left(\boldsymbol{x}^{T} \hat{\boldsymbol{\theta}}_{i}-y\right)^{2}$, which is actually a generic loss.
>**Q4 and L2 (Overfitting):**
Overfitting in machine learning leads to uncertainty in the error outlined in Eq. (1), which is influenced by model structure and parameters, choice of training algorithms, and the specific data distributions of clients, etc. Centering on exploring the relationship between egalitarian fairness and stability, we use the stylized model in Lemma 1, which provides a closed-form error to derive precise relations and generate insights into fairness settings in fair FLs. We'd like to revise Lines 349-351's discussion by clarifying the overfitting challenges in more generalized machine learning scenarios. Thank you for highlighting this point!
>**Q5 (Lemmas 2,3,&4):**
The requirement $n_i\le \frac{\mu_e}{\sigma^2}$ holds for all $i=1,...,N$, as explained in Q3 that $\mu_e$ is consistent across clients. Lemmas 2–4 provide sufficient conditions for core stability across these behaviors. This setup ensures that each lemma is tailored to specific scenarios, supporting the tightness of our subsequent propositions.
> **Q6 (IID and Variance):**
Clients are **heterogeneous** as they have different parameters governing their data distribution and different
amounts of data they have noisily drawn from their own distribution in Lemma 1. Regarding your second question, the mentioned variance refers to **distribution variance** rather than sample variance.
> **Q7 and Q8 (Welfare function):**
We'd like to clarify how the key ideas in our work **extend** to more utility functions. Taking the weighted power-mean welfare function to construct utility functions as an example and considering scenarios where clients exhibit friendly equal altruism, the utility of the $i$-th client is:
$u_i(\pi_g) = \left( \sum_{i=1}^{|F_i|} w_i err_i^q(\pi_g) \right)^{\frac{1}{q}}$.
The coalition $\pi_g$ remains core-stable if,
$\exists i\in \pi_s,\left( \sum_{i=1}^{|F_i|} w_i err_i^q(\pi_g) \right)^{\frac{1}{q}} \le
\left(w_i \cdot err_i^q(\pi_s)+\sum_{f \in F_i \cap \pi_s} w_f err_f^q(\pi_s)+\sum_{f \in F_i \cap \pi_c} w_f err_f^q(\pi_c) \right)^{\frac{1}{q}}$.
Following a similar process in Appendix Eq. (16)-(27),
we have $\lambda \ge \frac{\left ( {N_s}^2\cdot{N_c}^{2}\cdot (N_g\cdot n_l+d(\pi_g,n_m)) \right )^q }{{N_g}^{2q}\left ( w_{k_{\pi_s}}\cdot N_c^{2q}\cdot \left (N_s\cdot n_l+d(\pi_s,n_{k_{\pi_s}}) \right )^q + N_c^{2q}\cdot \underset{f\in \hat{F_s}}{\sum} w_f\cdot \left ( N_s\cdot n_l+d(\pi_s,n_{f}) \right )^q
+N_s^{2q}\cdot \underset{f\in \hat{F_c} }{\sum} w_f\cdot \left ( N_c\cdot n_l+d\left ( \pi_c,n_f \right )\right )^q\right ) }$.
When setting weights w=[0.7,0.1,0.1,0.1] and $q$=1.2, the actually achieved egalitarian fairness was **1.45**, aligned with the calculated fairness bound of **1.44**. Minor deviations exist due to discretization errors of $p$ in Line 312.
> **Minor points (Sociological definition and terminology):**
Thank you for pointing out the typo errors. Regarding "friend" in Definition 3, we attempt to draw an analogy from sociological friendships to describe how "friends" are in FL. We think **removing** the sociological definition of "friend" will be better. Examples of "friendship" in FL, such as Lines 47-49, are enough for readers to better understand the "friend " in FL. We agree that "**utilitarian**" better reflects the focus on the worst-off client than "welfare" and revise it following your suggestion. Thank you for your suggestion!
> **L1 (FedAvg):**
Regarding the expansion into broader FL models, our mean estimation task model, adapted from Donahue & Kleinberg, serves as a prototypical task for analyzing the stability of federating coalitions. However, the broader point is well-taken: We should clarify that our paper is investigating an FL model with errors exactly enough to allow for **tight** theoretical analysis. We explained this point in Section 3, **Lines 131-133**, and gave this limitation in **Lines 349-351**. We would like to revise the paper, particularly the abstract and introduction, to further clarify that "we use a **stylized** model to generate the following **insights** in fairness."
> **L2 (Overfitting):**
Please refer to Q4 and L2 (Overfitting). | Summary: This paper rigorously answered a critical question regarding fair federated learning: Does egalitarian fairness lead to instability? It analyzed and presented the influence of clients’ altruistic behaviors and the configuration of the friend-relationship network on the achievable egalitarian fairness within a core-stable federated learning (FL) coalition.
The optimal egalitarian fairness bounds without compromising core stability are explored theoretically. These novel insights can be leveraged to establish appropriate egalitarian fairness in improving the alignment of FL processes with societal welfare and ethical standards.
Strengths: + A novel research on the influence of clients' altruistic behaviors is well explored, which offers deep insights and a significant impact on the domain of fair Federated Learning.
+ A solid Modelling of fair FL systems as altruism coalition formation games (ACFGs) based on game theory and social choice theory.
+ Concrete bounds of optimal egalitarian fairness that an FL coalition can achieve while maintaining core stability under various altruistic behaviors.
+ Novel key insights are obtained: (1) the instability issues emerging from the pursuit of egalitarian fairness are significantly related to the clients’ altruism within the coalition and the configuration of the friends-relationship networks among the clients; (2) the theoretical contributions clarify the quantitative relationships between achievable egalitarian fairness and the disparities in the sizes of local datasets, disproving the misconception that egalitarian fairness inevitably leads to instability.
+ Extensive experiments are conducted to evaluate the consistency of theoretically derived egalitarian fairness bounds with the empirically achieved egalitarian fairness in fair FL settings.
Weaknesses: - In subsections 4.2 and 5.1, it is better to briefly clarify what the colors (blue, red, orange) mean in this context. Do you intend to use colors to better illustrate the results in Table 1?
- In Line 261, though "Proofs for propositions are given in Appendix" was stated, and each proof can be located, it is better to make a clearer pointer for each proposition. For example, which location of the proof in the Appendix is for which proposition?
- In Line 307, Example 1 is used to show which proposition? Better to clarify.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the identified weak points for details.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are delighted that the reviewer recognized the significance of our research and its novel key insights. Thank you for your positive feedback and insightful comments.
> **W1 (Illustration):**
Thank you for your comment. In Sections 4.2 and 5.1, we utilize color text to better illustrate the results in Table 1. Different colors (blue, red, orange, green) are used to indicate stable coalition structures under different behaviors.
>**W2 (Navigation):**
Thank you for your suggestion. We'd like to revise Section 5.2 and connect the propositions to their corresponding proofs in the Appendix to improve clarity and navigation.
> **W3 (Example 1):**
Thank you for your comment. **Example 1 shows how to calculate the achievable egalitarian fairness bound for heterogeneous behaviors (Lines 303-306)**. To improve clarity, we'd like to revise this paragraph (Lines 303-306) by adding "An example to calculate the achievable egalitarian fairness bound heterogeneous behaviors is as follows:".
---
Rebuttal 2:
Title: My comments have been well addressed
Comment: Thank you for the authors' response. My comments have been well addressed. | Rebuttal 1:
Rebuttal: Dear Chairs and Reviewers,
We kindly thank all the reviewers for their time and for providing valuable feedback on our work. We appreciate that reviewers have pointed out that our work is novel (Reviewer LduT, r5t6), significant (Reviewer LduT, CtFP) with valuable insights (Reviewer LduT, r5t6), interesting and well-motivated (Reviewer X2VX), and our results are convincing with solid analysis (Reviewers LduT, CtFP, r5t6).
In response to the reviews, we ran a series of experiments to show the generality of our findings under weighted utilitarian welfare function (Reviewer CtFP), network topology (Reviewer r5t6), number of agents (Reviewer r5t6 and Reviewer X2VX).
Kind regards,
The authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner | Accept (poster) | Summary: This paper introduces FlowTurbo, a framework to accelerate flow-based generative models. The key contributions are:
1) a lightweight velocity refiner to estimate the offset of velocity efficiently during sampling;
2) a pseudo corrector to reduce the number of model evaluations while keeping the original second-order convergence;
3) sample aware compilation to compile model evaluations, sampling steps and cfg into a static graph for extra speedup;
Extensive experiments with both class-conditioned and text-to-image generation show significant acceleration (30-58%). Notably, on ImageNet 256x256 generations, FlowTurbo achieves the fastest sampling speed (38 ms/image) with FID 3.93. Moreover, due to the compatibility with multi-step sampling, it can also be applied to other tasks including image editing, inpainting, etc.
Strengths: Given the popularity of flow-based generation models, SD3 for image and Sora for video, flow-based model acceleration is an important area, which is under-explored compared with diffusion-based models.
Motivated by the stability of velocity prediction during sampling, the authors proposed a lightweight velocity refiner. The pseudo corrector technique reduces the model evaluation while preserving the second-order convergency. These ideas are novel and effective. The experimental results are comprehensive covering generation task with class-conditional and text-to-image generation, editing tasks including editing, inpainting, object removal. The ablation studies are thorough, examining each component's contribution and various hyperparameters. Finally, the code is also provided for reproducibility. The paper is easy to follow, for example, Figure 2 illustrates the motivation for light-weight estimation model very clearly.
Weaknesses: For text-to-image generation, insta-flow is adopted as the teacher model, which is built on top of sd1.5. It would be great to experiment with other flow-based models that are natively trained with flow matching, for example, sd3.
It would also be interesting to see how FlowTurbo performs when the model size changes. For example, SiT-XL vs SiT-L.
Finally, one missing ablation is the architecture of refiner. how to design a refiner for different architectures?
Technical Quality: 3
Clarity: 3
Questions for Authors: Can this method also be applied to video-based generation models? The inference is much slower for video generation compared with image generation. it would be great to see if there is a unified method for both image and video generation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The proposed method is limited to flow-based generation models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About experiments on other models.**
**[Reply]** Thanks for your advice. Since SD3 is open-sourced a few months later than the submission, we didn't include experimental results on it in our original paper. In the following table, we compare the sampling quality (measured by FID 30K $\downarrow$ on MSCOCO) and the sampling speed of the default sampler of SD3 (Euler's method) and our FlowTurbo:
| Method | FID 30K $\downarrow$ | Latency (s / img) |
| --------- | -------------------- | ----------------- |
| Euler | 28.4 | 2.25 |
| FlowTurbo | 28.3 | 1.35 |
Besides, we also provide qualitative comparisons on SD3 and the newly released SOTA model FLUX.1-dev in Figure A2 and Figure A3 in the attached one-page PDF, where we demonstrate FlowTurbo can generate more details than the baseline.
**Q2: About different model sizes.**
**[Reply]** Thanks for pointing out this. In our original paper, we conducted experiments with models of various sizes including SiT (675M), InstaFlow (860M), Lumina-Next-T2I (2B). We also agree that it is interesting to see the performance on similar architectures with different model sizes such as SiT-XL and SiT-L. However, SiT only released their SiT-XL-2 model and we failed to find another publicly available family of models that satisfied the requirement. We will add the comparison once the SiT releases another model such as SiT-L or SiT-B.
**Q3. About the refiner architecture**
**[Reply]** When designing the architecture for the velocity refiner, we followed a simple rule to make the refiner have a similar architecture as the base velocity predictor but with much fewer parameters (~5% of the base model). The detailed architecture is described in Section 4.1 and Appendix C. For example, since SiT consists of multiple transformer blocks, we simply use a single block as the refiner. For text-to-image generation, we reduce the number of layers and channels of the UNet. In our early experiments, we did have tried another architecture for class-conditional image generation, where a SiT-S (a smaller version of the base velocity predictor SiT-XL) is adopted as the refiner (as shown in the following table):
| Sample Config | Refiner Architecture | Params | FID |
| ------------- | -------------------- | ------ | ---- |
| $H_5 P_7 R_3$ | SiT-S | 33M | 2.53 |
| $H_5 P_7 R_3$ | a block of SiT-XL | 29M | 2.22 |
| $H_8 P_9 R_5$ | SiT-S | 33M | 2.24 |
| $H_8 P_9 R_5$ | a block of SiT-XL | 29M | 2.12 |
We find that using a block of SiT-XL as the refiner is slightly better than the SiT-S. These results demonstrate that our framework is robust to the choice of model architectures for the velocity refiner. We will add these discussions and results to Appendix C of the revised paper.
**Q4. About the application to video generation models**
**[Reply]** Thanks for your advice. Since our FlowTurbo is designed to accelerate flow-based generative models, it can be applied to video generation models as long as they are trained via flow-matching (i.e., has a stable velocity during the sampling). Specifically, we experiment with the recently released model Open-Sora and provide the qualitative results in Figure A4 of the attached one-page PDF, where we show our FlowTurbo generates video with higher fidelity under the same computational budgets.
---
Rebuttal Comment 1.1:
Comment: Thanks the authors for providing the detailed rebuttal and response to my questions. I will maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your time and postive feedback
Comment: We greatly appreciate your response and valuable suggestions, which will improve the quality and impact of our paper. We will incorporate your feedback and update our paper accordingly in the revision. | Summary: This paper explores enhancing flow-based generative models for image generation by accelerating the sampling process while maintaining or improving image quality. The key contribution, FlowTurbo, is a new framework that introduces a lightweight velocity refiner to adjust velocity predictions during sampling, reducing computational costs. Additionally, techniques like pseudo correctors and sample-aware compilation further optimize the sampling speed. The results show significant acceleration in image generation tasks with improved FID scores, establishing a new benchmark for real-time image generation.
Strengths: Overall I find that the writing is clear, concise, and well-structured, making it easy for readers to follow the arguments and understand the key points. This paper starts from the distinct features of flow models and naturally proposes corresponding solutions. Comprehensive ablations validate the effectiveness of the proposed methods.
Weaknesses: - The design of three types of blocks in sampling is somewhat confusing to me. Why do you have to use these types of blocks and arrange them in this order? Do you have to empirically decide the order and numbers of these blocks for each task/domain/model?
- The visualization in Figure 1 is interesting. However, visualizing the discretization error or curvature [1] during sampling seems to be a more suitable and accurate method.
- Some important related works [1,2], tailored for flow models, should be discussed and compared.
[1] Nguyen, Bao, Binh Nguyen, and Viet Anh Nguyen. "Bellman optimal step-size straightening of flow-matching models." *arXiv preprint arXiv:2312.16414* (2023).
[2] Shaul, Neta, et al. "Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models." *arXiv preprint arXiv:2403.01329* (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: If I understand correctly, the proposed method can be applied to diffusion models as well. (Although Figure 1 illustrates flow models may be more suitable.) Have you tested the results on diffusion models like SD or PixArt?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About the design of sampling blocks.**
**[Reply]** Sorry for the confusion. As illustrated in Figure 1, the velocity during the sampling process stabilizes in the final few steps. At this stage, we employ a lightweight refiner to adjust the velocity offset. Additionally, our pseudo corrector is crafted to efficiently achieve second-order convergence, which requires second-order intermediate results (achieved by several Heun's steps at the start) for initialization. Consequently, the sequence of the three block types should be: Heun's block, pseudo corrector block, and refiner block. As the results shown below, changing the order of the blocks will lead to worse sampling quality:
|Method|FID $\downarrow$|Latency (ms / img)|
|-----|----|-----|
|$H_5 P_7R_3$|2.24|72.4|
|$P_5 H_7R_3$|2.38|79.2|
|$P_5 R_7H_3$|13.66|53.7|
|$H_5 R_7P_3$|30.94|60.4|
Regarding the number of steps for each block, we have included comparisons in Table 3 and Table 4. Generally, we apply the refiner during the last 20-40% of the sampling steps, where the velocity is relatively stable. We can always adjust the ratio between Heun's steps and pseudo-corrector steps to balance sampling quality and complexity (as seen in Table 3d). Moreover, since reconfiguring the steps doesn't require additional training, we can select an optimal configuration based on computational resources and qualitative results. We will incorporate this detailed analysis and discussion into our revised paper to offer better guidance on designing the configuration of our FlowTurbo.
**Q2: About the visualization.**
**[Reply]** Many thanks for your valuable advice. We agree that visualizing the curvature is more suitable and accurate. We provide the comparisons of the curvatures of different models including DiT, SiT, SD3, FLUX.1-dev (a newly released SOTA flow model), Open-Sora (a flow-based video generation model) in the Figure A1 in the attached PDF. The visualization of curvature basically aligns with our original PCA visualization, where we find the flow-based models enjoys much smaller curvature, indicating the sampling trajectory is straighter. We will replace the visualization in Figure 1 with curvature in our revised paper.
**Q3: About the related work.**
**[Reply]** Thanks for pointing out this. We agree that these two papers are related to our paper and we will discuss them in the related work (L107) as follows:
> There are also some works to accelerate the sampling of flow-based models, such as computing the optimal sampling stepsizes [1] and solver distillation approaches [2].
In our FlowTurbo, we do not consider the variation of stepsizes during the sampling, which makes our method orthogonal to [1]. In other words, we can combine [1] and our FlowTurbo to improve the sampling efficiency. Following [1], we perform experiments on CelebA-HQ and the results are as follows:
|Method|NFE|FID$\downarrow$|
|--|---|---|
|Euler|6|127.01|
|Bellman [1]|6| 72.54|
|Bellman [1] + FlowTurbo (ours)|6|**68.74** |
|Euler|8|109.42|
|Bellman [1]|8| 49.80|
|Bellman [1] + FlowTurbo (ours)|8|**45.95** |
In the following table, we compare our FlowTurbo with Bespoke Non-Stationary (BNS) Solvers [2] on ImageNet. Note that BNS only provides the results on ImageNet 64x64 while we adopt the experimental while our experiment setting follows SiT (ImageNet 256x256) and we cannot find a good publicly available flow-based model trained on ImageNet 64x64. The different resolution and base models choices might slightly affect the results. We also report the number of function evaluations (NFE) in the table, where we use $f_1$ and $f_2$ ($f_2\approx 0.1 f_1$) to represent the velocity predictor and refiner, respectively.
|Method|Resolution|ImageNet FID 50K| NFE |Training Costs|
|-----|-------|-----|-----|------|
|BNS| 64x64| 2.62|16 | 2-10 GPU days|
|FlowTurbo ($H_5 P_5 R_2$)| 256x256 |2.46 | 15 $f_1$ + 2 $f_2$ | 2 GPU hours|
We also provide the result on MS COCO text-to-image generation:
|Method|Resolution|COCO FID 30K| NFE |Training Costs|
|-----|-------|-----|-----|-----|
|BNS| 512x512|14.68| 20 | 15-24 GPU days|
|FlowTurbo ($H_3 P_6 R_3$)| 512x512|11.95 | 12 $f_1$ + 3 $f_2$ | 5.5 GPU hours|
These results clearly show that our FlowTurbo is better in both sampling quality and training efficiency. Besides, BNS solvers struggle to generalize unseen NFEs while our FlowTurbo allows flexible adjustment of the sampling configuration after the velocity has been trained. We will add the above discussion and comparisons in our revised paper.
**Q4: Applying FlowTurbo for diffusion models.**
**[Reply]** That's a good question. In our early experiments, we did try to transfer a similar idea of FlowTurbo to diffusion models. Specifically, we adopted PixArt as the base model (diffusion transformer) and tried to learn a refiner for the $\epsilon_\theta$. However, just as Figure 1 suggests, the $\epsilon_\theta$ of diffusion models changes a lot during the sampling and it is really hard to regress the offset of $\epsilon_\theta$. We have also tried to convert the $\epsilon_\theta$ to an equivalent velocity through Equation (32):
$$\boldsymbol{v}=\frac{\dot{\alpha}_t}{\alpha_t}x + (\dot{\sigma}_t-\frac{\dot{\alpha}_t\sigma_t}{\alpha_t})\boldsymbol{\epsilon},$$
but the refiner still cannot converge. We think some techniques such like [3] to transform the sampling path to be straighter could be helpful, and we will leave this for future work.
[3] Shaul et. al, Bespoke Solvers for Generative Flow Models, arxiv 2023, ICLR 2024
---
Rebuttal Comment 1.1:
Comment: Thanks for this detailed response from the authors. This rebuttal indeed answers many of my questions. Although I still think introducing three types of blocks in a specific order is not the best solution for speeding up, the authors demonstrate its potential, such as in combination with other advanced methods or applying to more advanced models. So, I will raise my score acoordingly.
---
Reply to Comment 1.1.1:
Title: Thanks for raising your score and providing valuable feedback
Comment: Thanks for raising your score and providing valuable feedback. We are glad to know that most of your concerns have been resolved. Regarding the design of our method such as the types of sampling blocks, we will provide more detailed discussions and analyses through both visualizations and experiments (as we have shown in the rebuttal) in the revised paper. | Summary: The authors propose FlowTurbo, a method to adapt pre-trained flow-based generative models for faster sampling. The method is based on the observation that the predicted velocity field is quite stable throughout the integration time and hence at each timestamp only small refinement from the previous step is required. Therefore, the authors propose to train a lightweight refinement network and use it at certain timestamps instead of the original velocity field predictor. In addition to the refinement network the paper also suggests to speed up the Heun's integration method by reusing the velocities predicted at previous corrector steps. All the sampling blocks are compiled for further speed up. As a result, FlowTurbo demonstrates great trade-off between speed and quality compared to prior work and achieves sota results on certain tasks.
Strengths: The problem of speeding up generative models is of high importance nowadays. While the majority of the methods aims for decreasing the number of denoising steps, the authors of FlowTurbo propose to make each step significantly cheaper. This is achieved at low additional training costs and could have high impact on the methods that explicitly utilise the iterative nature of modern diffusion- or flow-based generative models (such as [1, 2, 3], see references in Weaknesses). The method is well-grounded and carefully ablated. The experiments demonstrate the effectiveness of FlowTurbo compared to default sampling.
Weaknesses: 1) **Prior work**: The paper does a good job with contextualizing it relative to the diffusion-based models. However, references to many flow-based methods are missing, which include:
- Models that leverage flow matching for generative process [2, 4, 5, 6, 7, 8, 9].
- And (which is more connected to the topic of the paper) the methods that try to speed up the sampling for flow-based models [10, 11, 12, 13].
2) **Method**: Despite the good results, it seems that it is not trivial how to choose the exact configuration of the sampling blocks, i.e. the order and the number of different steps. From the first sight the configurations presented in the paper are quite random. At least some intuition on how to setup those is required. Otherwise, if the configuration needs to be tuned for every other dataset, the method is not that straight-forward to apply.
3) **Evaluation**:
- The tables would be simpler to read if NFEs (number of function evaluations) were also included as a separate column.
- Is the step of the baseline method also compiled in tables 1a and 1b? If not, it would be nice to add this experiment, in addition to the ablations, to see the effect of other parts.
[1] Watson et. al, Novel view synthesis with diffusion models, ICLR 2023
[2] Davtyan et. al, Efficient Video Prediction via Sparsely Conditioned Flow Matching, ICCV 2023
[4] Hu et. al, Motion Flow Matching for Human Motion Synthesis and Editing, arxiv 2023
[5] Fischer et. al, Boosting Latent Diffusion with Flow Matching, arxiv 2023
[6] Davtyan et. al, Learn the Force We Can: Enabling Sparse Motion Control in Multi-Object Video Generation, AAAI 2024
[7] Davtyan et. al, Enabling Visual Composition and Animation in Unsupervised Video Generation, arxiv 2024
[8] Gui et. al, Depthfm: Fast monocular depth estimation with flow matching, arxiv 2024
[9] Hu et. al, Flow Matching for Conditional Text Generation in a Few Sampling Steps, arxiv 2024
[10] Shaul et. al, Bespoke Solvers for Generative Flow Models, arxiv 2023, ICLR 2024
[11] Lee et. al, Minimizing Trajectory Curvature of ODE-based Generative Models, ICML 2023
[12] Pooladian et. al, Multisample flow matching: Straightening flows with minibatch couplings, ICML 2023.
[13] Tong et. al, Improving and generalizing flow-based generative models with minibatch optimal transport, TMLR 2023.
[14] Lipman et. al, Flow matching for generative modeling, ICLR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) In the paper the order of blocks $H_{N_H}P_{N_P}R_{N_R}$ is fixed, and only the number of the blocks $N_H$, $N_P$ and $N_R$ changes. What is the reason behind this? Have you tried other schemes, e.g. sequentially using multiple blocks of type $H_{N_H}P_{N_P}R_{N_R}$?
2) A rather nitpicky comment: Multiple times throughout the paper it is mentioned that the linear interpolation corresponds to the optimal transport from the noise distribution to the data distribution. It is not precisely correct, as the linear interpolation corresponds to the optimal transport from the noise distribution to the distribution concentrated around a particular data point [14]. Methods that attempt to steer flow-matching towards optimal transport between noise and data involve additional tricks [12, 13].
Some typos:
1) Line 137: "interpolation between $x_0$ and $\cancel{v}$ $\epsilon$".
2) Line 170: "simulate the $\cancel{x_t}$ $x_{t_i}$".
3) Eq. 15: $\cancel{x_i \leftarrow x_{i-1}}$ $x_{t_i} \leftarrow x_{t_{i - 1}}$
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations and the potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on our work! We address the questions and clarify the issues accordingly as described below.
**Q1: About prior works on flow-based methods.**
**[Reply]** Thanks for your suggestions. We agree that these works are important in the area of flow-based models, and we will include the mentioned papers in the revised paper. Specifically, we will modify L31 to:
> Alongside the research on diffusion models, flow-based models have garnered increasing attention due to their versatility in modeling data distributions, and have been widely applied in various domains including image generation [5], video generation [6, 7], video prediction [2], human motion synthesis [4], depth estimation [8], text generation [9], etc.
And for the acceleration methods [10-13] for flow-based models, we will add the discussion of these methods in our related work (L107):
> There are also some work aim to obtain better flow paths which are straighter and more effective for sampling. These work includes some useful technique such as transformed sampling paths [10], minimizing the trajectory curvature [11], multisample flow matching [12], and minibatch optimal transport [13].
**Q2: About the configuration of the sampling blocks.**
**[Reply]** Sorry for the confusion. According to the observation in Figure 1, the velocity during the sampling would become stable at the final few steps, where we adopted a lightweight refiner to regress the velocity offset. Besides, our pseudo corrector is designed to efficiently achieve 2-order convergence, which requires a 2-order intermediate result as initialization. This explains why we need several Heun's steps at the beginning. We also provide some comparisons of different orders and schemes (including repeating the blocks) as follows:
|Method|FID $\downarrow$|Latency (ms / img)|
|-----|----|-----|
|$H_5 P_7R_3$|2.24|72.4|
|$P_5 H_7R_3$|2.38|79.2|
|$P_5 R_7H_3$|13.66|53.7|
|$H_5 R_7P_3$|30.94|60.4|
|Method|FID $\downarrow$|Latency (ms / img)|
|-----|----|-----|
|$[H_1 P_1R_1]_{\times 2}$| 8.15 | 34.8 |
|$H_2P_2R_2$|6.39| 34.8 |
|$[H_1 P_1R_1]_{\times 3}$| 4.34 | 45.4 |
|$H_3P_3R_3$|3.34| 45.4 |
As for the number of different steps, we have included some comparisons in Table 3 and Table 4. Generally speaking, we choose to apply the refiner at the last $20\sim 40$% sampling steps where the velocity is relatively stable, and we can always adjust the ratio between Heun's steps and pseudo-corrector steps to control the trade-offs between sampling quality and complexity (Table 3d). Besides, since adjusting the configuration does not require further training, we can always select a good configuration according to the computational budgets and qualitative results. We will add these detailed analyses and discussion to our revised paper to provide better guidelines on how to design the configuration of our FlowTurbo.
**Q3: About the evaluation.**
**[Reply]** Thanks for the comments. We provide the results in Table 1a and 1b with a column of NFEs and the baseline methods compiled as follows. We use $f_1$ to denote the function evaluation of the original velocity predictor and $f_2$ to denote the evaluation of our velocity refiner.
**_Table 1a: Class-conditional Image Generation, ImageNet (256x256)_**
| Method | Sample Config | FLOPs (G) | FID 50K $\downarrow$ | Latency (ms / img) | NFE |
|---------------|------------------|-----------|------|-----------------------|------|
| Heun's | $H_8$ | 1898 | 3.68 | 68.0 | 16 $f_1$ |
| FlowTurbo | $H_2 P_4 R_2$ | 957 | 3.63 | 41.6 | 8 $f_1$ + 2 $f_2$ |
| Heun's | $H_{11}$ | 2610 | 2.79 | 88.2 | 22 $f_1$ |
| FlowTurbo | $H_2 P_8 R_2$ | 1431 | 2.69 | 55.2 | 12 $f_1$ + 2 $f_2$ |
| Heun's | $H_{15}$ | 3559 | 2.42 | 115.3 | 30 $f_1$ |
| FlowTurbo | $H_5 P_7 R_3$ | 2274 | 2.22 | 72.5 | 17 $f_1$ + 3 $f_2$ |
| Heun's | $H_{24}$ | 5694 | 2.20 | 176.2 | 48 $f_1$ |
| FlowTurbo | $H_8 P_9 R_5$ | 3457 | 2.12 | 100.3 | 25 $f_1$ + 5 $f_2$ |
**_Table 1b: Text-to-image Generation, MS COCO 2017 (512x512)_**
| Method | Sample Config | FLOPs (G) | FID 5K $\downarrow$ | Latency (ms / img) | NFE |
|---------------|------------------|-----------|------|-----------------------|------|
| Heun's | $H_4$ | 3955 | 32.77 | 92.0 | 8 $f_1$ |
| FlowTurbo | $H_1 P_2 R_2$ | 2649 | 32.48 | 68.4 | 4 $f_1$ + 2 $f_2$ |
| Heun's | $H_5$ | 4633 | 30.73 | 108.0 | 10 $f_1$ |
| FlowTurbo | $H_1 P_4 R_2$ | 3327 | 30.19 | 84.5 | 6 $f_1$ + 2 $f_2$ |
| Heun's | $H_8$ | 6667 | 28.61 | 156.2 | 16 $f_1$ |
| FlowTurbo | $H_1 P_6 R_3$ | 4030 | 28.60 | 104.8 | 8 $f_1$ + 3 $f_2$ |
| Heun's | $H_{10}$ | 8023 | 28.06 | 188.4 | 20 $f_1$ |
| FlowTurbo | $H_3 P_6 R_3$ | 5386 | 27.60 | 137.0 | 12 $f_1$ + 3 $f_2$ |
These results clearly show that our method still surpasses the compiled baseline by large margins, demonstrating the effect of our velocity refiner and the pseudo corrector. We will modify these in the revised paper.
**Q4: About the description of the linear interpolation.**
**[Reply]** Thanks for pointing out this. We agree that linear interpolation connects noise distribution to the distribution concentrated around a particular data point in the conditional flow-matching defined in [14]. We will modify the description in L37, L89, L141 in the revision.
**Q5: About the typos.**
**[Reply]** Thanks for your careful reading. We will modify these typos accordingly in the revised paper.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Dear authors,
Thank you for your time and valuable additional clarifications in the rebuttal.
I will keep my original positive rating.
Best regards,
Reviewer
---
Reply to Comment 1.1.1:
Title: Thanks for your time and postive feedback
Comment: We greatly appreciate your response and valuable suggestions, which would improve the quality and comprehensiveness of our paper. We will update our paper accordingly in the revision. | Summary: The paper presents a new approach to accelerate the sampling process in flow-based generative models. Unlike diffusion models, flow-based models, which are based on learning velocity fields through flow-matching, have not seen extensive development in efficient sampling techniques. The authors introduce FlowTurbo, a framework that utilizes a lightweight velocity refiner to estimate velocity during sampling, significantly reducing computation time while maintaining or improving image quality. Additionally, the paper proposes pseudo corrector and sample-aware compilation techniques to further enhance sampling efficiency. Experimental results demonstrate that FlowTurbo achieves substantial speedups and high-quality results in both class-conditional and text-to-image generation tasks.
Strengths: 1. **Technical contribution**: The introduction of a lightweight velocity refiner to approximate the offset between the velocities between adjacent timesteps seems sound and is efficient as it requires only a light-weight model.
2. **Additional Techniques**: The integration of pseudo corrector and sample-aware compilation techniques shows a well-rounded approach to improving both speed and quality of the generative process.
3. **Extensive Experiments**: The paper provides thorough experimental results, demonstrating the effectiveness of FlowTurbo across different tasks and models, including detailed comparisons with existing methods.
4. **Real-time Performance**: Achieving real-time image generation with substantial improvements in inference time without compromising quality is a notable accomplishment.
5. **Versatility**: FlowTurbo’s compatibility with various applications such as image editing and inpainting highlights its flexibility and potential for broader applications.
Weaknesses: 1. **Limited Comparison**: There are many efficient diffusion distillation methods, e.g., consistency distillation [1,2], consistent trajectory model [3], distribution matching distillation [4], adversarial approaches [5,6] (which also distilled using model trained with rectified flow). However, the paper has not been compared with any of those, which makes the comparison incomplete. In order to describe the technical advantage of using lightweight refiner, the author should compare with full fine-tuning (i.e., distillation) or with parameter efficient fine-tuning, e.g., LoRA. Thus, Table 2 should be filled with various baselines in order to precisely state ‘comparison with the state-of-the-arts’.
2. **Complexity of implementation**: In addition, while the method seems sound, the method is somewhat complicated to use all Heun’s method, corrector step, etc. This might be an obstacle to scale this method to a large-scale.
**References**\
[1] Song, Yang, et al. "Consistency models." arXiv preprint arXiv:2303.01469 (2023). \
[2] Song, Yang, and Prafulla Dhariwal. "Improved techniques for training consistency models." arXiv preprint arXiv:2310.14189 (2023).\
[3] Kim, Dongjun, et al. "Consistency trajectory models: Learning probability flow ode trajectory of diffusion." arXiv preprint arXiv:2310.02279 (2023).\
[4] Yin, Tianwei, et al. "One-step diffusion with distribution matching distillation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\
[5] Sauer, Axel, et al. "Adversarial diffusion distillation." arXiv preprint arXiv:2311.17042 (2023).\
[6] Sauer, Axel, et al. "Fast high-resolution image synthesis with latent adversarial diffusion distillation." arXiv preprint arXiv:2403.12015 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: My major concern is that the paper considers improving the sampling efficiency of flow-based models which is good, but lacks comparison with other fast-sampling diffusion models. Given the theoretical or practical similarities between diffusion models and flow matching models, I believe the previous methods can be also applied to flow matching models. In that sense, there is a large gap in terms of performance, as previous works can sample within only 2-4 steps, e.g., consistency models. In order to elaborate the author’s claim that the velocity refiner is efficient and as good as those methods, the author should provide some additional comparison with those methods.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors mentioned some limitations that those method could not be used for diffusion models, yet it could be in a better version if they demonstrate with detailed analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We address the questions and clarify the issues accordingly as described below.
**Q1: About comparisons to distillation-based methods.**
**[Reply]** Thanks for your suggestions. We have detailedly discussed the difference between our FlowTurbo and distillation-based methods in our original paper (L62-L68, L100-L104, L205-L214). Although the previous diffusion distillation methods can be directly adapted to flow-based models, they do not investigate the unique properties of the flow sampling trajectory. Our method, however, aims to offer a training-efficient solution for accelerating flow-based image generation. As Reviewer PyDm suggests, the low additional training costs can make our FlowTurbo have a high impact on the acceleration of generative models.
We also agree that it would be better to include more distillation methods for comparison in Table 2. We summarize the comparisons between our FlowTurbo and [1-3] on ImageNet in the following table. Note that all these methods only perform experiments on ImageNet 64x64, while our experiment setting follows SiT (ImageNet 256x256) and we cannot find a good flow-based model trained on ImageNet 64x64. The different resolution and base model choices might slightly affect the results.
|Method|Resolution|FID 50K|Latency (ms / img)|Training Costs|GPU|
|-----|-------|-----|-----|------|----|
|CD [1]|64x64|4.70| 70.1 | 600K iters x 2048 batch size|64 A100 GPU|
|iCT-deep [2]|64x64|2.77|70.1 | 800K iters x 4096 batch size| A100 Cluster|
|CTM [3]| 64x64 | 1.73|70.1 | 30K iters x 2048 batch size|8 A100 GPU|
|FlowTurbo (ours)| 256x256 |2.22 | 72.5 | 30K iters x 18 batch size| 1 A800 GPU|
Besides, we also provide comparisons with more distillation methods [4-5,7-8] on text-to-image. We adopt the FID 30K on COCO as the evaluation metric. We do not include [6] since we cannot find the same evaluation metric in their paper and the models have not been released.
|Method|FID 30K |Latency (s)|Training Costs|GPU|
|-----|-----|-----|------|----|
|LCM [7]|11.1|0.19| 100K iters x 72 batch size |8 A100 GPU|
|LCM-LoRA [8]|23.62|0.19| - |-|
|DMD [4]|14.93|0.09|20K iters x 2304 batch size|72 A100 GPU|
|ADD-M [5]| 20.33 | 0.09 | 4K iters x 128 batch size| - |
|FlowTurbo (ours)| 11.95 | 0.14| 10K iters x 16 batch size|1 A800 GPU|
From the above results, we demonstrate that our FlowTurbo can achieve competitive sampling quality with previous diffusion distillation methods while requiring much fewer training costs. We will add these results to our revised paper to make the comparison clearer.
[7] Luo, Simian, et al. "Latent consistency models: Synthesizing high-resolution images with few-step inference." arXiv preprint arXiv:2310.04378 (2023).
[8] Luo, Simian, et al. "LCM-LoRA: A universal stable-diffusion acceleration module." arXiv preprint arXiv:2311.05556 (2023).
**Q2: About the implementation.**
**[Reply]** Thanks for the valuable comments. The motivation to adopt the velocity refiner is quite clear according to Figure 1, where we show the velocity tends to become more and more stable during the sampling. Heun's method is a well-known ODE sampler that is adopted in our baseline SiT, and the pseudo-corrector is proposed to reduce the inference costs while attaining the same convergence order as Heun's method. Therefore, the sampling paradigm of FlowTurbo is quite intuitive and reasonable. As for the scalability of our method, we have conducted experiments with SiT (675M), InstaFlow (860M), and Lumina-Next-T2I (2B) in our paper. Besides, we also conduct experiments on the provide the newly released Stable-Diffusion-3-Medium (8B) and compare our FlowTurbo with the default sampler Euler's method:
|Method|FID 30K $\downarrow$|Latency (s / img)|
|-----|-----|-----|
|Euler| 28.4 | 2.25 |
|FlowTurbo| 28.3 | 1.35 |
The sampling quality is measured by FID 30K $\downarrow$ on MSCOCO. Besides, we also provide some qualitative comparisons on SD3 and the newly released SOTA model FLUX.1-dev (12B) in Figure A2 and Figure A3 in the attached one-page PDF, where we demonstrate that FlowTurbo can generate more visual details.
We believe the results on SD3 and FLUX.1-dev, together with the experiments on various flow-based models in our original paper can demonstrate the effectiveness of our FlowTurbo at scale.
**Q3: Applying FlowTurbo for diffusion models.**
**[Reply]** Thanks for pointing out this. In our early experiments, we did try to transfer FlowTurbo to diffusion models. Specifically, we adopted PixArt as the base model (diffusion transformer) and tried to learn a refiner for the $\epsilon_\theta$. However, just as Figure 1 suggests, the $\epsilon_\theta$ of diffusion models changes a lot during the sampling and it is really hard to regress the offset of $\epsilon_\theta$. Alongside attempting to directly regress the offset of $\epsilon_\theta$, we have also tried to convert the $\epsilon_\theta$ to an equivalent velocity through Equation (32):
$$\boldsymbol{v}=\frac{\dot{\alpha}_t}{\alpha_t}\boldsymbol{x} + (\dot{\sigma}_t-\frac{\dot{\alpha}_t\sigma_t}{\alpha_t})\boldsymbol{\epsilon},$$
but the refiner still cannot converge. To better understand the difference between diffusion and flow-based models, we plot the curvature of the sampling trajectories in Figure A1 in the attached PDF, where we show the flow-based models enjoy lower curvature (indicating the sampling path is straighter). We will add more theoretical and empirical analysis in the revision.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal.
It is interesting that the proposed method does not applies to diffusion model, even when changed to velocity prediction. So if we want to apply FlowTurbo for pretrained diffusion model, we have to change to flow-based model as InstaFlow did, and then apply FlowTurbo. Elaborating on this point would make the paper more valuable.
I appreciate the authors' efforts on new experimental results for recent SOTA flow-based models. Most of my concerns were resolved and I update my rating to 5.
---
Reply to Comment 1.1.1:
Title: Thanks for raising your score and providing valuable feedback
Comment: We are glad to know that most of your concerns have been resolved. FlowTurbo is motivated by the visualization of the sampling trajectory (Figure 1), and leverages the unique property of flow-based models which have $v_t$ as a ''stable value''. As for applying FlowTurbo for pretrained diffusion models, we think the core idea is to perform some invertible transformation $f$ of the intermediate results $x_t$ such that $f(x_t)$ is a ''stable value'', and learn a lightweight refiner to regress the offset of $f(x_t)$. We will leave this for future work to make our FlowTurbo a more general framework for the acceleration of both diffusion and flow-based models. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for the positive feedback and valuable comments on our work. In the attached one-page PDF, we provide more visualizations and qualitative results to better illustrate the motivation and effectiveness of our FlowTurbo. In Figure A1, we compare the curvature (as suggested by Reviewer RYd7) of the sampling trajectories of different models including DiT, SiT, SD3-Medium, FLUX.1-dev, and Open-Sora. In Figure A2-A4, we provide more qualitative comparisons on SD3-Medium (a SOTA flow model by StabilityAI, released on June 12th), FLUX.1-dev (a SOTA flow model by Black Forest Labs, released on August 3rd), Open-Sora (a video-generation flow model). These qualitative clearly demonstrate our FlowTurbo generates visual content with higher fidelity and more details than the default sampler.
Pdf: /pdf/bdb3d92032a82731acde2a86fc6f95685b593693.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation | Accept (poster) | Summary: The paper proposes an LLM-based personal activity generation by utilizing existing OpenAI LLM through API, named as LLMob. The problem that authors are time and important research topic in the area of human mobility simulation. While existing generative solely trained on the given dataset, LLM agent has already trained internet-wide dataset that allows it to have possibility of generating human-like trajectories. The LLMob generates activity trajectories in two stages: the first stage identifies (learns) activity patterns with respect to different personas introduced by the authors, and in the second stage a synthetic version of the activity sequences generated by LLM agent. The experiments are performed on an activity dataset, and performance is benchmarked by multiple generative models.
Strengths: The paper is a good example of an agentic-LLM, showcasing the powerful capabilities of LLMs in human mobility modeling. This application is significant as it uses LLMs' data processing and predictive abilities to improve our understanding of human movement patterns, impacting urban planning, transportation, and public health. The paper is well-structured, with a clear and coherent flow, and features a promising presentation that enhances comprehension.
Weaknesses: These are the set of weaknesses of the paper from my perspective:
1-) The authors performed experiments on one dataset. It might be interesting to examine the performance with dataset from a different region.
2-) The model solely relies on the pre-trained LLM models. While the LLM capabilities and power are increasing, which is good for this work, separate fine-tuning might be required for concrete human mobility modeling.
3-) The motivation retrieval part is not clear, especially the learning-based motivation retrieval.
Technical Quality: 3
Clarity: 4
Questions for Authors: While Phase 1 and Phase 2 make sense. I have concerns about how much information was carried from Phase 1 to Phase 2. The following are the questions that can help me to understand the details:
1-) Did you utilize the whole dataset in Phase 1?
2-) How are the trajectories and personas sampled in Phase 2?
3-) How do the generated activities match with the input trajectories? Does LLM memorize exact activity sequences?
4-) LLMs are known for hallucination. Within generated activities, the LLM agent generates an activity name and location. How accurate is the LLM agent for the generated activities? For instance, when an activity says Japanese Restaurant (35.369, 140.306), does the GPS location have a Japanese Restaurant? I expect the generated trajectories to require extensive post-processing.
5-) Have you tried your model on a different dataset? Experiments are limited to one dataset, which limits generalizability.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I found two limitations in this paper and would love to hear authors responses on these:
1-) While LLMs are generally powerful, they often struggle to generalize in domain-specific applications. For a robust human mobility simulation, LLM agents might require a fine-tuning strategy to achieve more generalizable performance.
2-) The generated locations in the study are limited to those specified in the prompt (If they do not hallucinate). Since the model also takes into account the locations a person is likely to visit, as illustrated in Figure 2, the degree of synthetic realism in the generated trajectories raises questions.
3-) This approach also does not fully address privacy concerns associated with mobility trajectories as mentioned in the introduction, the underlying patterns and personal data might still be exposed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and acknowledgment of our work. We would like to address the concerns as follows.
> **Question 1** *Did you utilize the whole dataset in Phase 1?*
**Answer**: Phase 1 is targeted at identifying the persona of each person. For the sake of fair comparison, for each targeted person, we used 80% of the available data of this person in this phase.
> **Question 2** *How are the trajectories and personas sampled in Phase 2?*
**Answer**: In Phase 1, we start with 10 predefined personas (Table 4) representing diverse urban lifestyles. For each individual, we prompt the LLM to refine these personas based on information extracted from their historical data, resulting in 10 candidate personas. The final persona is then determined through our self-consistency evaluation process, which assesses how well each candidate aligns with the individual's actual behavior patterns.
In Phase 2, the identified persona from Phase 1 is incorporated into the prompt as an element. For trajectory generation, we employ two schemes:
1. Evolving-based scheme: We sample the individual's **latest one-week historical trajectories** to inform the LLM about recent activity patterns and motivations.
2. Learning-based scheme: We use a learning model (using 80% of available data) to identify **similar dates**, allowing the LLM to draw insights from historically similar contexts.
This approach allows us to generate personalized, context-aware trajectories that strike a balance between consistent behavior patterns and daily variations in motivation and circumstances.
> **Question 3** *How do the generated activities match with the input trajectories? Does LLM memorize exact activity sequences?*
**Answer**: During generation, the prompt consists of the identified pattern of the targeted person and the retrieved motivation (Please see Appendix B: Page 14, Line 515). We are not providing specific trajectories to the LLM but the retrieved motivations, thus the LLM will **not** need to memorize exact activity sequences.
> **Question 4** *LLMs are known for hallucination. Within generated activities, the LLM agent generates an activity name and location. How accurate is the LLM agent for the generated activities? For instance, when an activity says Japanese Restaurant (35.369, 140.306), does the GPS location have a Japanese Restaurant? I expect the generated trajectories to require extensive post-processing.*
**Answer**: We evaluated the generated activities by checking whether the generated one has appeared in the Tokyo area. We found that all the locations generated from the model exist in the area.
In addition, (35.369, 140.306) is the real location of a Japanese restaurant ("やきとり 福仙", an authentic Japanese yakitori restaurant).
> **Question 5** *Have you tried your model on a different dataset? Experiments are limited to one dataset, which limits generalizability.*
**Answer**: We conducted an experiment based on the data collected in Osaka, Japan. We generated 537 trajectories based on the 2102 daily activity trajectories from 30 persons. The results are reported as follows, where LLMob-L/E are ours and DiffTraj and TrajGAIL are the best-performing baseline methods.
| Model | SD | SI | DARD | STVD |
|----------------|-----------|-----------|-----------|-----------|
| LLMob-L | 0.035 | 0.021 | 0.141 | 0.391 |
| LLMob-E | 0.030 | 0.018 | 0.121 | 0.380 |
| DiffTraj | 0.080 | 0.177 | 0.406 | 0.691 |
| TrajGAIL | 0.281 | 0.063 | 0.525 | 0.483 |
The above results demonstrate that our framework can maintain superior performance in another city. In Section 5 - Limitations, we acknowledged that it is challenging to collect sufficient data from different areas, which limits our ability to conduct more extensive experiments. Additionally, we note that the model's generalization ability can be also demonstrated by its performance across different scenarios in our original experiments, such as under normal periods and under pandemic periods.
---
Rebuttal 2:
Title: Rebuttal by Authors
Comment: > **Limitation 1** *While LLMs are generally powerful, they often struggle to generalize in domain-specific applications. For a robust human mobility simulation, LLM agents might require a fine-tuning strategy to achieve more generalizable performance.*
**Answer**: We agree that fine-tuning the LLM for a specific domain may deliver better results. However, this needs careful design of instruction-tuning data, and the tuning may incur significant training costs, considering that we are using the GPT API. Indeed, we have tried fine-tuning GPT-3.5. After comparison, we chose the current approach for a better trade-off.
> **Limitation 2** *The generated locations in the study are limited to those specified in the prompt (If they do not hallucinate). Since the model also takes into account the locations a person is likely to visit, as illustrated in Figure 2, the degree of synthetic realism in the generated trajectories raises questions.*
**Answer**: We appreciate the reviewer's observation. Our approach to specifying likely locations is grounded in research by [1], which shows that individuals typically have characteristic geographical ranges for daily activities.
[1] Alessandretti et al. The scales of human mobility. Nature, 2020, 587(7834): 402-407.
However, our method does not limit generated locations to those specified. The LLM can reason beyond these locations, potentially introducing new, plausible ones based on the context and motivation. This balances realistic patterns with diversity.
The use of location priors enhances generation efficiency and grounds outputs in realistic spatial contexts. Our evaluations, comparing generated trajectories to real-world data, demonstrate that our method produces patterns closely matching real-world data across various metrics.
In future work, we plan to explore dynamically expanding the set of potential locations based on evolving trajectory contexts, further enhancing realism and flexibility.
> **Limitation 3** *This approach also does not fully address privacy concerns associated with mobility trajectories as mentioned in the introduction, the underlying patterns and personal data might still be exposed.*
**Answer:** We suppose that the reviewer's concern comes from the data exposed to the OpenAI.
To address this concern, we conducted experiments using Llama 3-8B on a local GPU to ensure data security. The results are reported as follows (we also tested GPT-4o-mini):
| Model | SD | SI | DARD | STVD |
|----------------|----------|-----------|--------|--|
| LLMob-L (GPT-3.5-turbo) | 0.049 | 0.054 | 0.126 | 0.570 |
| LLMob-L (GPT-4o-mini) | 0.049 | 0.055 | 0.141 | 0.577 |
|LLMob-L (Llama 3-8B) | 0.054 | 0.063 | 0.119 | 0.566 |
| LLMob-E (GPT-3.5-turbo) | 0.053 | 0.046 | 0.125 | 0.559 |
| LLMob-E (GPT-4o-mini) | 0.041 | 0.053 | 0.211 | 0.531 |
| LLMob-E (Llama 3-8B) | 0.054 | 0.059 | 0.122 | 0.561 |
We observed competitive performance of our framework when other LLMs were used. In particular, Llama 3-8B is overall the best when spatial and temporal factors are evaluated together (DARD and STVD). Such results demonstrate the robustness of our framework across different LLMs. On the downside, there are a few cases when Llama 3-8B generated locations that do not exist. Such hallucination was not observed on GPT-3.5-turbo and GPT-4o-mini. Nonetheless, we expect that the promising results of this open LLM can inspire further studies, particularly in developing better fine-tuning techniques for human mobility modeling.
---
Rebuttal Comment 2.1:
Comment: I would like to thank the authors for their detailed rebuttal. However, I will be maintaining my original score.
---
Reply to Comment 2.1.1:
Title: Re: Official Comment by Reviewer c5UM
Comment: Thank you very much for kindly taking the time to respond to our rebuttal. We greatly appreciate your valuable comments and suggestions. They will be reflected in the revised paper. | Summary: This paper proposes an LLM-based agent framework for generating personal mobility. It tries to address several problems in the domain of personal mobility generation, including aligning LLMs with the urban data in the real world, developing strategies with reliable activities, and exploring further applications in this field. For generating activities of personal mobility (i.e., trajectories), it proposes a framework of LLM-based agent with an action module, memory module, and planning module, which of all these modules can improve the alignment of the results of generation and real-world data. The paper provides several experiments to verify the effectiveness and alignment of the proposed methods and explores further enhancement.
Strengths: 1. This paper is the first simulation with LLM-based agents to generate personal mobility, which can contribute to many fields, such as recommender systems and human behavior studies.
2. The proposed pipeline is intuitive, where the action, memory, and planning modules are important for improving the simulation alignment.
3. The motivation is practical, and the experiments are abundant as well.
Weaknesses: 1. What about other LLMs performing as the core, besides GPT-3.5-turbo? I think GPT-4/GPT-4o can be utilized to generate data of higher quality.
2. For the method of Learning-based Motivation Retrieval, I'm a bit concerned about the calculation of similarity among dates. Although the locations are greatly important to identify, there seems to be too much information lost from their initial motivations. Could you try just using cosine similarities on the semantic embeddings of contents with language models? Or maybe using LLMs with prompting to get the score of similarity?
3. I think the presentation can be improved. A preliminary can be added, and the overview of LLMob can be more clear at the start of section 3.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the above "Weakness". I'm willing to improve my rating if the authors can address my concerns.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have discussed the limitations in the last section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments. We would like to address your concerns as follows.
> **Weakness 1** *What about other LLMs performing as the core, besides GPT-3.5-turbo? I think GPT-4/GPT-4o can be utilized to generate data of higher quality.*
**Answer**: We conducted experiments using GPT-4o-mini and Llama 3-8B. The results are reported as follows:
| Model | SD | SI | DARD | STVD |
|----------------|----------|-----------|--------|--|
| LLMob-L (GPT-3.5-turbo) | 0.049 | 0.054 | 0.126 | 0.570 |
| LLMob-L (GPT-4o-mini) | 0.049 | 0.055 | 0.141 | 0.577 |
|LLMob-L (Llama 3-8B) | 0.054 | 0.063 | 0.119 | 0.566 |
| LLMob-E (GPT-3.5-turbo) | 0.053 | 0.046 | 0.125 | 0.559 |
| LLMob-E (GPT-4o-mini) | 0.041 | 0.053 | 0.211 | 0.531 |
| LLMob-E (Llama 3-8B) | 0.054 | 0.059 | 0.122 | 0.561 |
We observe competitive performance of our framework when other LLMs are used. In particular, GPT-4o-mini is the best in terms of the spatial metric (SD); GPT-3.5-turbo is the best in terms of the temporal metric (DI). Llama 3-8B is overall the best when spatial and temporal factors are evaluated together (DARD and STVD). Such results demonstrate the robustness of our framework across different LLMs.
> **Weakness 2** *For the method of Learning-based Motivation Retrieval, I'm a bit concerned about the calculation of similarity among dates. Although the locations are greatly important to identify, there seems to be too much information lost from their initial motivations. Could you try just using cosine similarities on the semantic embeddings of contents with language models? Or maybe using LLMs with prompting to get the score of similarity?*
**Answer**: We agree with your comment and we indeed considered such scheme when doing this work.
We used the open-source sentence embedding model (sentence-transformers/paraphrase-distilroberta-base-v1) to embed the daily activities into a 768-dimensional vector, and used cosine similarity to measure the similarity. We evaluated this setting in the same Tokyo dataset as in the paper. The results are reported as follows:
| Situation | SD | SI | DARD | STVD |
|----------------|----------|-----------|--------|--|
| Normal Trajectory, Normal Data | 0.047 | 0.046 | 0.132 | 0.560 |
| Abormal Trajectory, Abormal Data | 0.058 | 0.052 | 0.148 | 0.581 |
| Abormal Trajectory, Normal Data | 0.061 | 0.050 | 0.133 | 0.551 |
We find that when we conduct learning-based motivation retrieval based on sentence embedding similarity, the performance is competitive with the original one. This result demonstrates that the learning-based motivation retrieval scheme is effective in personal mobility generation and that our framework is robust against varying similarity metrics. Considering there are various sentence embedding models and metrics, the learning-based motivation retrieval scheme based on the similarity of temporal information (i.e., date) is one of the key contributions of our framework.
> **Weakness 3** *I think the presentation can be improved. A preliminary can be added, and the overview of LLMob can be more clear at the start of section 3.*
**Answer**: We appreciate your feedback on improving the paper's presentation. We agree that adding a preliminary section would be beneficial to establish key concepts and definitions before delving into our method. We will also enhance the overview of LLMob at the start of Section 3.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal by the authors. I would like to raise my score to 5.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer's Engagement
Comment: Thank you very much for kindly taking the time to respond to our rebuttal! We also greatly appreciate your valuable comments and suggestions. They will be reflected in the revised paper. | Summary: This paper introduces LLMob, a framework for personal mobility generation using a large language model. The framework aims to leverage urban activity patterns for emulating urban residents, facilitating the human mobility trajectory generation. Using the Tokyo personal activity dataset, the effectiveness of the proposed framework are validated.
Strengths: - The authors proposed LLMob, an interesting human mobility generation framework with Large Language Models.
- The authors provide an illustrative analysis of how LLM can generate reliable activity strategies.
Weaknesses: - The proposed framework has only been validated in GPT-3.5. It remains unclear whether the performance would vary with different backbone models, such as Llama-2.
- The constructed Tokyo dataset statistic is not very clear in this paper, which makes it difficult for the reader to evaluate the true contribution of this paper.
- The difference between this work and existing work (such as [1]) is not well-discussed in the related work section, although the author claims why these methods cannot be easily adopted as the baselines.
[1] Shao, Chenyang, et al. "Beyond Imitation: Generating Human Mobility from Context-aware Reasoning with Large Language Models." arXiv preprint arXiv:2402.09836 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: - In Line 250 - Line 253, 100 users are chosen and 10 candidate personas are used for subsequent pattern generation. Why did the author select such a setting, I don't see a clear illustration of these settings in the paper.
- In Line 46 and Line 45, the author mentioned "unseen tasks" and "unseen scenarios". In my perspective, the personal mobility generation task is well-formulated in this paper, what does "unseen" mean here for those data-driven methods?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: In this paper, all experiments are conducted on a single collected dataset (perhaps thousands of trajectories) from Tokyo, which makes this paper not comprehensive enough. Extending the proposed method to more cities helps improve paper quality.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and time. We would like to address your concerns as follows.
> **Weakness 1** *The proposed framework has only been validated in GPT-3.5. It remains unclear whether the performance would vary with different backbone models, such as Llama-2.*
**Answer**: We conducted experiments using other LLMs (GPT-4o-mini and Llama 3-8B). The results are reported as follows:
| Model | SD | SI | DARD | STVD |
|----------------|----------|-----------|--------|--|
| LLMob-L (GPT-3.5-turbo) | 0.049 | 0.054 | 0.126 | 0.570 |
| LLMob-L (GPT-4o-mini) | 0.049 | 0.055 | 0.141 | 0.577 |
|LLMob-L (Llama 3-8B) | 0.054 | 0.063 | 0.119 | 0.566 |
| LLMob-E (GPT-3.5-turbo) | 0.053 | 0.046 | 0.125 | 0.559 |
| LLMob-E (GPT-4o-mini) | 0.041 | 0.053 | 0.211 | 0.531 |
| LLMob-E (Llama 3-8B) | 0.054 | 0.059 | 0.122 | 0.561 |
We observe competitive performance of our framework when other LLMs are used. In particular, GPT-4o-mini is the best in terms of the spatial metric (SD); GPT-3.5-turbo is the best in terms of the temporal metric (DI). Llama 3-8B is overall the best when spatial and temporal factors are evaluated together (DARD and STVD). Such results demonstrate the robustness of our framework across different LLMs.
> **Weakness 2** *The constructed Tokyo dataset statistic is not very clear in this paper, which makes it difficult for the reader to evaluate the true contribution of this paper.*
**Answer**: We agree that detailed data information is important. We have attached a file (please see the pdf file in the above "more detailed information about the experimental data") for more statistics about the data (also including another dataset of Osaka, as requested by the reviewer). We will include the above statistics into Appendix C.2 in the next version of the paper.
> **Weakness 3** *The difference between this work and existing work (such as [1]) is not well-discussed in the related work section, although the author claims why these methods cannot be easily adopted as the baselines.*
*[1] Shao, Chenyang, et al. "Beyond Imitation: Generating Human Mobility from Context-aware Reasoning with Large Language Models." arXiv preprint arXiv:2402.09836 (2024).*
**Answer**: We acknowledge that the related work section could benefit from a more detailed discussion of the differences between our approach and existing methods. For the related work mentioned by the reviewer, there are two major differences from our work:
1. Our framework is a data-driven framework to exploit the intelligence of LLM. All the components in our framework employs the LLM, except the learning-based motivation retrieval, which employs contrastive learning. In contrast, in [1], an analytical model is adopted to derive personal activities.
2. Our framework features a consistency evaluation scheme (Section 3.1.2) to align LLMs with the personal activity trajectory data. We did not find such consistency alignment component in [1].
> **Question 1** *In Line 250 - Line 253, 100 users are chosen and 10 candidate personas are used for subsequent pattern generation. Why did the author select such a setting, I don't see a clear illustration of these settings in the paper.*
**Answer**: We sample 100 users to balance between diversity and token consumption. The candidate personas are drafted by first asking the LLM to give us a pool of candidate personas and then we manually select a subset for representativeness. The candidate personas can be regarded as prior knowledge during the generation. They are reported in Table 4 (Appendix D.1).
> **Question 2** *In Line 46 and Line 45, the author mentioned "unseen tasks" and "unseen scenarios". In my perspective, the personal mobility generation task is well-formulated in this paper, what does "unseen" mean here for those data-driven methods?*
**Answer**: By "unseen tasks" and "unseen scenarios", we meant that LLMs generalize to the tasks and scenarios which are not in its training data [1].
[1] Sanh et al. Multitask Prompted Training Enables Zero-Shot Task Generalization. ICLR 2022.
By prompting the LLM to behave like a citizen, we expect that the model can generate activities in certain scenarios that may not have appeared in the data. For example, given the data during pre-pandemic period, the "unseen scenarios" are referred to the situations during the pandemic.
> **Limitations** *In this paper, all experiments are conducted on a single collected dataset (perhaps thousands of trajectories) from Tokyo, which makes this paper not comprehensive enough. Extending the proposed method to more cities helps improve paper quality.*
**Answer**: We conducted an experiment based on the data collected in Osaka, Japan. We generated 537 trajectories based on the 2102 daily activity trajectories from 30 persons. The results are reported as follows, where LLMob-L/E are ours and DiffTraj and TrajGAIL are the best-performing baseline methods.
| Model | SD | SI | DARD | STVD |
|----------------|----------|-----------|--------|--|
| LLMob-L | 0.035 | 0.021 | 0.141 | 0.391 |
| LLMob-E | 0.030 | 0.018 | 0.121 | 0.380 |
| DiffTraj | 0.080 | 0.177 | 0.406 | 0.691 |
| TrajGAIL | 0.281 | 0.063 | 0.525 | 0.483 |
The above results demonstrate that our framework can maintain superior performance in another city. In Section 5 - Limitations, we acknowledged that it is challenging to collect sufficient data from different areas, which limits our ability to conduct more extensive experiments. Additionally, we note that the model's generalization ability can be also demonstrated by its performance across different scenarios in our original experiments, such as under normal periods and under pandemic periods.
---
Rebuttal Comment 1.1:
Title: Response to The Author
Comment: Thanks for the responses, which partially demonstrate the effectiveness of the proposed method on other LLM backbones and datasets. However, the author fails to answer if previous methods cannot be easily adopted as the baselines (mentioned in weakness 3), and fails to provide more well-illustrative reasons for several settings in this paper (mentioned in questions 1 and question 2, the author just explains this again, but didn't provide a reasonable explanation). Considering this paper also has limitations on experimental settings, which still need time and effort to refine, I will keep my score, and I hope these comments could help the author further improve their paper quality.
Best,
---
Rebuttal 2:
Title: Response to new comments from reviewer xLxx
Comment: Thanks for acknowledging our response and comments. We would like to highlight our arguments:
1. Regarding Weakness 3, from the review comments, we did not find the reviewer requiring an explanation on whether previous methods can be easily adopted as baselines. In addition, for the paper mentioned by the reviewer, its source code and prompt were **not available** (the repo was expired when we accessed it, and the paper did not mention what LLM was used) at the time of our submission to NeurIPS.
2. We are sorry that the reviewer is still confused with Question 1.
3. For Question 2, our claim in the introduction is not about the experimental setting. It is essentially a claim on the advantages of using LLMs in real-world applications.
---
Rebuttal 3:
Title: Response to The Author
Comment: - **About weakness 3: the difference between this work and existing work**
Although the authors claim the difference in their framework (e.g., it is a data-driven framework and features a consistency evaluation scheme), they didn't explain the advantage behind this. The difference means for what aspect makes this work different from existing work and what its potential contribution is, instead of just explain the difference in module-desgin. Unfortunately, this suggestion appears to have been disregarded by the authors.
- **About question 1**
What I want to know is the motivation or reason behind this setting, e.g., if 100 users and 10 candidates is sufficiently representative of this problem. The author should present this but they didn't.
- **About question 2: "unseen" mean here for those data-driven methods?**
What I want to know is the definition of "unseen" for previous data-driven methods. It seems that the author explains the "unseen" for LLMs and ignores this. Is there a possibility that several traditional methods can deal with "unseen" tasks? Overall, I think the current explanation is not well-illustrative.
- **About limitations on experimental setting**
We the authors demonstrate the proposed framework on Osaka, but the validation part is still not comprehensive enough due to limited sample sizes (only thousands of trajectories here in Tokyo). Regarding the supplementary experiments on GPT-3.5, GPT-4, and Llama 3-8B, the author seems to not explain which setting they use. Is it conducted on the (Normal Trajectory, Normal Data) setting?
- **Other weakness**
1. The latency and the cost of invoking gpt3.5 API is not reported in this paper, which makes the efficiency of such an LLM-based framework remain a potential issue.
2. The details of dataset construction (section 4.1) are still not clear. For an LLM-based application, it is very important to clearly present these details, and this will make it easy for other researchers to follow.
---
Rebuttal Comment 3.1:
Title: Response to new comments from reviewer xLxx (1/2)
Comment: Thanks for the constructive discussion. We would like to respond as follows:
> **Weakness 3**: *Although the authors claim the difference in their framework (e.g., it is a data-driven framework and features a consistency evaluation scheme), they didn't explain the advantage behind this. The difference means for what aspect makes this work different from existing work and what its potential contribution is, instead of just explain the difference in module-desgin. Unfortunately, this suggestion appears to have been disregarded by the authors.*
In the previous response, we summarized two significant differences compared to the work mentioned by the reviewer (MobiGeaR): **data-driven** framework and **data alignment** scheme.
1. A data-driven model has the following advantage: it can continuously learn and improve from new data. As more data becomes available, the model's performance can be enhanced, making it adaptive to evolving patterns and trends. Specifically, we demonstrated that our model can automatically adjust to changes in mobility patterns caused by events like the pandemic. In contrast, the mechanistic gravity model used in MobiGeaR is an analytical model, which does not have the capability of learning from and adapting to data. This difference becomes crucial in real-world applications where urban mobility patterns are constantly changing due to factors like urban development, policy changes, or societal shifts.
2. The data alignment scheme has the advantage of ensuring the self-consistency of the identified pattern. The importance of ensuring self-consistency in using LLMs has been investigated in previous studies such as [1]. We did not find such a self-consistency mechanism in MobiGeaR. This feature is of importance for the fidelity of simulated trajectories to actual human behavior is paramount.
[1] Wang X, Wei J, Schuurmans D, et al. Self-consistency improves chain of thought reasoning in language models[J]. arXiv preprint arXiv:2203.11171, 2022.
To reflect the reviewer's concern, we would like to include a discussion on the differences from MobiGeaR in the next version of the paper.
> **Question 1**: *What I want to know is the motivation or reason behind this setting, e.g., if 100 users and 10 candidates is sufficiently representative of this problem. The author should present this but they didn't.*
Increasing the number of users and candidate personas improves the sufficiency and diversity of data. On the downside, this compromises the efficiency. We choose such numbers to balance between the two factors. We found that the choice of 100 users and 10 candidate personas is good enough for promising results.
In the next version, we would like to include the above explanation.
> **Question 2**: *What I want to know is the definition of "unseen" for previous data-driven methods. It seems that the author explains the "unseen" for LLMs and ignores this. Is there a possibility that several traditional methods can deal with "unseen" tasks? Overall, I think the current explanation is not well-illustrative.*
By the meaning of "unseen", we meant that the data or scenarios are unseen to the model.
For traditional mobility generation methods, we do not think they can handle unseen tasks, because "unseen" refers to data points or patterns that fall outside the distribution of the training data. These methods often struggle with extrapolation to out-of-distribution scenarios. In contrast, our LLM-based approach leverages semantic understanding and general knowledge to reason about the scenarios on which it has not been trained.
In addition, such adaptation to unseen scenarios is easy in LLMs by using only a prompt.
> **Limitation**: *We the authors demonstrate the proposed framework on Osaka, but the validation part is still not comprehensive enough due to limited sample sizes (only thousands of trajectories here in Tokyo). Regarding the supplementary experiments on GPT-3.5, GPT-4, and Llama 3-8B, the author seems to not explain which setting they use. Is it conducted on the (Normal Trajectory, Normal Data) setting?*
We used the (Normal Trajectory, Normal Data) setting. Due to the limited time during the rebuttal period, we cannot cover all the settings. Nonetheless, we would like to report a comprehensive evaluation of the Osaka dataset in the next version of the paper.
---
Rebuttal Comment 3.2:
Title: Response to new comments from reviewer xLxx (2/2)
Comment: > **Other Weakness 1**: *The latency of the cost of invoking gpt3.5 API is not reported in this paper, which makes the efficiency of such an LLM-based framework remain a potential issue.*
For GPT-3.5 API, the average generation time per trajectory for the results reported in Table 1 is 46 seconds.
We would like to report this in the next version of the paper. Moreover, if users seek better efficiency, we suggest using open models such as Llama 3-8B, seeing its promising results as well.
> **Other Weakness 2**: *The details of dataset construction (section 4.1) are still not clear. For an LLM-based application, it is very important to clearly present these details, and this will make it easy for other researchers to follow.*
For researchers who are interested in our work, we will provide such details in the open-source repo.
---
Rebuttal 4:
Title: Thank you for the comments
Comment: Thank you for your time and the valuable comments on this paper. We would like to address your concerns as much as possible:
> **Concern 1**: *The dataset construction details, which are very important for the researcher to follow this work.*
**Answer:** We agree with the necessity of explaining the dataset construction. In this paper, the data format has been given in Table 2, and dataset statistics has been reported in the attached pdf file in the above "more detailed information about the experimental data".
We obtained the dataset through Twitter (now X)'s Academic Research Product Track. The construction of the dataset is reported as follows:
1. **Filtering Incomplete Data**: Users with missing check-ins for a specific year were filtered out.
2. **Excluding Non-Japan Check-ins**: Check-ins that occurred outside of Japan were removed.
3. **Inferring Prefecture from GPS Coordinates**: Prefectures were inferred based on the latitude and longitude data of check-ins.
4. **Assigning Prefecture**: Users were assigned to a prefecture based on their primary check-in location; e.g., users whose top check-in location is Tokyo were categorized as belonging to Tokyo.
5. **Removing Sudden-Move Check-ins**: Check-ins showing abrupt, unrealistic location changes, such as from Tokyo to the Osaka within a very short time frame, were deleted to remove data drift (i.e., fake check-ins), following the criteria proposed by [1].
[1] Yang, D., Zhang, D., & Qu, B. (2016). Participatory cultural mapping based on collective behavior data in location-based social networks. ACM Transactions on Intelligent Systems and Technology (TIST), 7(3), 1-23
6. **Anonymizing Data**: Real user IDs and geographic location names were anonymized. Only category information of Geographic location was kept, and latitude and longitude coordinates were converted into IDs before being input into the model.
In the next version of the paper, we will cover the construction process in the appendix, including the raw data collection and the preprocessing. We will also provide open data demos and codes related to this research.
> **Concern 2**: *the Lack of statistically rigorous reason behind some settings, e.g., if 100 users and 10 candidates*
**Answer:** We would like to emphasize that the chosen settings, such as 100 users and 10 candidates, were selected based on practical considerations and the specific context of our study. These settings were chosen to manage a balance between computational efficiency and the ability to demonstrate the effectiveness of our model. The effectiveness has been shown in the experimental section, and the efficiency result has been reported in the above discussion.
> **Concern 3**: *The validation size is the obvious limitation of this paper.*
**Answer:** We agree that having a larger validation size would be beneficial in demonstrating the model's performance. However, due to constraints including the data availability and computational resources, we had to make practical decisions regarding the validation size in this research. We believe that our chosen setting is reasonable, which is partly supported by the similar evaluation setting taken within the 25-LLM-agent community constructed in [1].
[1] Park J S, O'Brien J, Cai C J, et al. Generative agents: Interactive simulacra of human behavior. Proceedings of the 36th annual ACM symposium on user interface software and technology. 2023: 1-22.
---
Rebuttal 5:
Title: Re: Thanks
Comment: Following our previous post, in which we tried to address the most recent concerns of the reviewer, we sincerely appreciate the reviewer's dedicated efforts in evaluating our paper and engaging in the discussion. Throughout the discussions, we have tried our best to address the raised questions, particularly regarding the dataset details (construction & statistics) and the inclusion of additional experiments. We find these discussions highly constructive for improving the quality of our paper.
Regarding the identified limitations, we acknowledge that expanding the number of users and candidate personas could potentially enhance the data's sufficiency and diversity. However, we wish to emphasize that our current selection of 100 users and 10 personas has already yielded **promising outcomes**, as demonstrated in our results. It is important to note that increasing these numbers would lead to **higher token consumption and computational costs**. This limitation is not unique to our study but is also **observed in many notable existing works** that utilize GPT 3.5/4 APIs, including the seminal work of generative agents [1], which involved only 25 agents.
[1] Park J S, O'Brien J, Cai C J, et al. Generative agents: Interactive simulacra of human behavior. Proceedings of the 36th annual ACM symposium on user interface software and technology. 2023: 1-22.
We are committed to addressing all concerns highlighted by the reviewer in the next version of our paper, incorporating all details from this discussion. We hope that these efforts meet the reviewer's expectations and merit consideration for a higher rating. | Summary: This paper presents a prompt engineering framework for generating synthetic human trajectories using LLMs. The framework is guided, in an overall manner, by the observation that human movement is affected by habitual activity patterns and motivations. In this way, the framework has two phases for considering these aspects. In phase 2, two varying strategies are proposed and compared. All in all, this is an interesting piece of application work of LLMs, in prompt engineering.
Strengths: 1) A general solid prompt engineering framework for an interesting application in cities.
2) The method is non-trivial and has a certain level of novelty.
3) The evaluation has been comprehensive.
Weaknesses: 1) There are some parts in the text that are difficult to decipher – I wouldn’t say the paper is easy to follow.
2) It is unclear which method actually yields the best performance across different metrics
3) The generalizability is unclear. Can this method be used in other cities around the globe?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) Is this method zero-shot?
2) How is the Td in 3.2 (habitual activity pattern) used subsequently?
3) What’s the relation between 3.2.1 evolving based motivation retrieval and 3.2.2 learning-based motivation retrieval?
4) Learning-based motivation retrieval appears to be dependent on embeddings obtained through a contrastive learning process. In what way the embeddings are actually used?
5) What’s the rationale behind the two retrieval strategies? If one is superior, why the other is being discussed?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful review and valuable feedback on our work. We would like to address the concerns as follows.
> **Weakness 2**: *It is unclear which method actually yields the best performance across different metrics.*
**Answer**: In our evaluation, we employ four metrics to comprehensively assess mobility generation: SD is a spatial metric, SI is a temporal metric, and DARD and STVD evaluate both spatial and temporal factors.
Although there is no method that outperforms all the others in these metrics, the proposed framework delivers an overall best performance across these metrics. In particular, it achieves the best in DARD and SI, and the runner-up in STVD. That is, our framework excels in reproducing temporal and spatio-temporal aspects of personal mobility. Such result has also been shown effective in a real-world application as event-driven generation (Section 4.2).
> **Weakness 3**: *The generalizability is unclear. Can this method be used in other cities around the globe?*
**Answer**: The proposed framework is inherently area-agnostic. The core idea of our framework is based on personal pattern identification and motivation retrieval, both of which are data-driven. This ensures that the method can be applied universally given sufficient data, as it leverages locally available check-in data to derive activity patterns.
To further validate our claim, we conducted an experiment based on the data collected in Osaka, Japan. We generated 537 trajectories based on the 2102 daily activity trajectories from 30 persons. The results are reported as follows, where LLMob-L/E are ours and DiffTraj and TrajGAIL are the best-performing baseline methods.
| Model | SD | SI | DARD | STVD |
|----------------|-----------|-----------|-----------|-----------|
| LLMob-L | 0.035 | 0.021 | 0.141 | 0.391 |
| LLMob-E | 0.030 | 0.018 | 0.121 | 0.380 |
| DiffTraj | 0.080 | 0.177 | 0.406 | 0.691 |
| TrajGAIL | 0.281 | 0.063 | 0.525 | 0.483 |
The above results demonstrate that our framework can maintain superior performance in another city. In Section 5 - Limitations, we acknowledged that it is challenging to collect sufficient data from different areas, which limits our ability to conduct more extensive experiments. Additionally, we note that the model's generalization ability can be also demonstrated by its performance across different scenarios in our original experiments, such as under normal periods and under pandemic periods.
> **Question 1**: *Is this method zero-shot?*
**Answer**: Yes, our method is zero-shot. While we use historical data to extract general activity patterns and create a pool of potential locations, the actual trajectory generation process is zero-shot. The LLM agent does not rely on seeing complete trajectory examples for the specific day or scenario it is generating for. Instead, it uses the extracted patterns and motivation retrieval to reason about and create entirely new, unseen trajectories. This zero-shot capability allows our method to generate plausible activities even for novel scenarios, such as the COVID-19 pandemic example in our experiments, without requiring any direct examples of trajectories in those conditions.
> **Question 2**: *How is the Td in 3.2 (habitual activity pattern) used subsequently?*
**Answer**: The habitual activity pattern derived from equation (2) in 3.2 is used as part of the prompt to retrieve the motivation, i.e., the planning of the daily activity (Please see Appendix B: Page 14, Line 509 <INPUT 0> and Line 513 <INPUT 0>). In this way, we expect the LLM to behave as a citizen following the specified habitual activity pattern.
> **Question 3-5**: *(1) What's the relation between 3.2.1 evolving based motivation retrieval and 3.2.2 learning-based motivation retrieval? (2) Learning-based motivation retrieval appears to be dependent on embeddings obtained through a contrastive learning process. In what way are the embeddings actually used? (3) What's the rationale behind the two retrieval strategies? If one is superior, why is the other being discussed?*
**Answer**: Regarding evolving-based and learning-based motivation, these two components were designed to explore in two directions to identifying the current motivation for daily activity generation, each dealing with different aspects of data availability and sufficiency. We considered them as two promising directions for designing solutions to real-world applications, rather than determining which is superior. The experimental results also show that neither approach always outperforms the other.
- **Evolving-Based Motivation Retrieval**: This method infers the motivation for daily activities based on the activity data from the past few days. It focuses on **short-term temporal patterns and trends**, adapting to recent changes in behavior or context. This method can be a good candidate when **short-term data is available but long-term data is limited**.
- **Learning-Based Motivation Retrieval**: This method leverages a learning-based model to infer motivations by comparing activities from similar dates. The similarity is evaluated through a model trained on historical data. The trained model is then used to evaluate the similarity of the targeted date and the historical dates. This scheme is designed to identify motivations from a broader temporal context, providing robust insights even when **short-term data is limited but long-term data is abundant**.
---
Rebuttal Comment 1.1:
Title: Thanks
Comment: I would like to thank the authors for the response, and I would like to raise my score.
---
Reply to Comment 1.1.1:
Title: Re: Thanks
Comment: Thank you very much for kindly taking the time to respond to our rebuttal! We also greatly appreciate your valuable comments and suggestions. They will be reflected in the revised paper. | Rebuttal 1:
Rebuttal: Thank you to all the reviewers for your comments. We have addressed each reviewer's feedback in the individual response page. Additionally, we have attached a file that provides **more detailed information about the experimental data** we used.
Pdf: /pdf/1dce1964bb827e1dd530c4e90781e890168381e1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf | Accept (poster) | Summary: The authors take a specific natural language game (One Night Ultimate Werewolf) and study the performance of RL-trained LLM agents with respect to the mathematically derived (by the authors) Nash equilibrium solutions.
Strengths: [1] The paper chooses ONUW as a game that has the Nash equilibrium solution to be used as an oracle/reference. The paper then applies learning to improve the 0-shot answers/behavior of LLMs. The learning is performed with offline RL over the collected game episodes.
[2] Even though the paper laser-focuses on ONUW, its methodology and results can be a valuable stepping stone to solving real-life problems.
[3] As negotiations are an integral part of the society, its automation and improvement of effectiveness is a valuable direction of research. This paper makes a small but definite step in the direction of automated LLM-powered negotiations.
[4] The authors do a great job of rigorous and comprehensive research of ONUW optimal strategies for players by proving theorems for several specific cases.
[5] The formalized way to assess the performance of the learned policies with the help of NashConv (distance to equilibria) is very valuable and insightful.
[6] In 6.3 and 6.4 the authors show that the policy learned with ChatGPT4 can be transferred to Gemini thus demonstrating the generalization across backend LLMs.
[7] As described in E.2 the use of text embeddings to encode the state is intriguing.
[8] The broader impact is well discussed.
Weaknesses: [1] Per se the entire paper is bound to the ONUW game and its rules. The ability of the bundle of proposed algorithms and proofs to transfer to different games or game-like environments like financial markets is not demonstrated.
[2] I have not found any examples of the discussions between the LLM players in the manuscript. Further, the code is not provided by the authors. Thus there is no way to see the actual discussions between the LLM players and assess the value and quality of the generated discussion.
[3] Artificial prior knowledge is inserted into the solution that constraints the model to choose between 6 options: Honest/Deceptive and Evidence/Accusation/Defense. The proposed model is seemingly limited by these options and does not allow learning or discovery of alternative discussion strategies.
[4] It would be good to have a random choice baseline and compare the proposed method to its performance.
[5] The paper studies a game with one round of negotiations that has limited practical use in the light of the fact that negotiations normally are conducted in several rounds.
Technical Quality: 4
Clarity: 3
Questions for Authors: [1] What was the reasoning behind choosing offline RL over online RL?
[2] How is the classification into Honest/Deceptive and Evidence/Accusation/Defense performed: manually or by an 0-shot LLM?
[3] In “An ε-Nash equilibrium is found under the
74 limitation of both human-side and werewolf-side strategies [28]
” maybe villager-side?
[4] As per Table 3 state dim is 1536 which seems quite high for ONUW. What is the reasoning behind selection of this specific number? Is there any graph of a sweep across the state dimension as a hyperparameter.
[5] In E.4 the USD cost per game is discussed. What is the total cost of the experiments described in the paper?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations are well discussed or clear in principle.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive comments.
**Response to W1: The transferability of proposed algorithms and proofs to different games or game-like environments like financial markets is not demonstrated.**
Thank you for your comment. As this work is initially motivated by our analyses of the ONUW game, we mainly focus on the ONUW game in the experiments. We recognize that the financial markets is an interesting and highly practical environment for demonstrating our agents, but it needs relevant knowledge to construct agents for this scenario. However, since the agent framework we proposed is modular, we believe it can be transferred to various scenarios with similar discussion-driven dynamics like bargaining in the financial markets, as long as we have certain knowledge or data to train a corresponding discussion policy.
**Response to W2: I have not found any examples of the discussions between the LLM players in the manuscript. Further, the code is not provided by the authors.**
Thank you for your constructive comment. We made further human analyses based on the discussion logs between LLM players. However, due to the page limits in the global rebuttal, we can only provide one analysis in the PDF file. More analysis results and discussion examples will be updated in the revision. Also, the anonymized link to our code has been sent to the Area Chair in a separate comment as required by the conference guidelines.
**Response to W3: The proposed model is seemingly limited by the predefined discussion tactics and does not allow learning or discovery of alternative discussion tactics.**
Thank you for your comment. As discussed in our work, the predefined six discussion tactics were intended to simplify the problem and facilitate the learning process, due to the lack of a standardized classification for discussion tactics and adequate human player data in games like ONUW. While this approach does constrain the agents to choose from a predefined set of tactics, it also provides a structured framework for understanding the dynamics of discussion. However, as part of future work, we consider it interesting to explore more flexible and adaptive methods that allow for the discovery of new discussion tactics as well.
**Response to W4: It would be good to have a random choice baseline and compare the proposed method to its performance.**
Thank you for your constructive comment. We have added an LLM-based agent whose discussion policy is to randomly choose discussion tactics as a new baseline (named *Random*). Please refer to the global response for the results.
**Response to W5: About the number of negotiation rounds.**
Thank you for your feedback. Actually, although the ONUW game has only one Night, one Day, and one Voting phase, there could be many rounds of discussions during the Day phase where players are allowed to discuss, depending on the game settings. And in our settings, we allowed players to have three rounds of discussions. Please refer to the PDF file in our global response for the detailed game process. Moreover, we consider our discussion policy can adapt to multiple rounds of discussions since it relies on the current state of discussion rather than a specific number of rounds.
**Response to Q1: The reason for choosing offline RL.**
Thank you for your comment. The main reason we chose offline RL is the slow interaction with LLMs, especially the interaction rate limits of GPT-4. Online RL requires agents to interact with the environment in real-time to collect data for training. However, as some parts of our agent framework are LLMs and each inference of LLMs takes a few seconds, it is almost impossible for our agents to train the discussion policy in an online manner. So we decide to collect game data first and then adopt offline RL for training.
**Response to Q2: How is the classification into six discussion tactics performed: manually or by an 0-shot LLM?**
The classification of players' discussion tactics is performed by a 0-shot LLM.
**Response to Q3: About the "human-side" or "villager-side".**
Thank you for pointing it out. The "human-side" is the name that reference [28] used to represent the "villager-side", oppositing to the "werewolf-side". As it might cause confusion for readers, we have replaced it with "villager-side" in our latest version to keep consistency with other parts of the paper.
**Response to Q4: What is the reasoning behind selection of this specific number 1536 for state embeddings?**
Thank you for your insightful comment. The 1536 state dim is actually decided by the model we selected for state embeddings. In our experiments, GPT's `text-embedding-ada-002` model is adopted to encode players' observations and beliefs into state embeddings. We believe it is well pretrained and could effectively represent the semantic information of the original text. We have not performed a hyperparameter sweep across different state dims as it is not the main scope of our work, but it is an interesting suggestion for future work to explore the impact of varying the state dim.
**Response to Q5: What is the total cost of the experiments described in the paper?**
Thank you for your comment. For each evaluation in our experiments, we repeated the game 30 times to obtain the final results. Since we conducted experiments on one three-player game (Sec 6.2), two five-player games in fixed settings (Sec 6.3), and one five-player game in a random setting (Sec 6.4), there are 50 sets of results. However, the cost of each agent version (such as *ReAct* and *LLM-instructed*) varies a lot due to their frameworks, and the Gemini API was free when we conducted our experiments, so we can only estimate the total cost be around 200.
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: Thank you for your response. As a side note, it is interesting to see ReAct performing worse than random for GPT4. I am keeping my score.
---
Reply to Comment 1.1.1:
Title: Thank you for your reply!
Comment: Thank you for your reply! And yes, judging by the win rates, the *Random* agent performs better than the *ReAct* agent for GPT-4. But it is also notable that the average number of votes got by the *Random* agent is greater than that of the *ReAct* agent. We believe that the performance between the *Random* and *ReAct* agents is actually similar, and the slightly higher win rate of the GPT-4-based *Random* agent might owe to their teammates during the gameplay. | Summary: This paper delves into the strategic aspects of discussion in the context of a population social deduction game, "One Night Ultimate Werewolf" (ONUW) game. By analyzing Perfect Bayesian Equilibria in scenarios with and without discussion, the authors highlights the pivotal role of discussion tactics in influencing players' beliefs and utilities. They propose a novel framework that employs reinforcement learning to instruct an LLM agent, enabling it to decide and execute strategic discussion policies. The authors empirically show that their framework can recognize and approximate the equilibira, and achieve strong performance across diverse ONUW settings.
Strengths: 1. The paper introduces a new complex communication game, ONUW, and provides a clear explanation of the problem formulation. The ONUW game offers interesting role deduction challenges and seems it can be a benchmark for evaluating the deduction ability of LLM agents.
2. The authors present thoroughly theoretical analyses on a specific setting of the ONUW game, which prove the significance of discussion on influencing other players' belief and provide a solid foundation for their following proposed method.
3. The idea of integrating RL policy into LLM agents to improve their strategic ability in communication games is innovative. It has the potential to be applied in future work related to strategic LLM agents in various situations. And the empirical results shows the efficacy of this method.
Weaknesses: 1. By integrating RL policy into the reasoning process, the authors aim to improve the discussion ability of LLM agents, which provides a new angle for constructing LLM agents. However, applying RL policy to select discussion tactic seems analogous to selecting action candidates in related work [1].
2. Since the RL policy is trained on game logs generated by LLM agents, it is possible that players in the training dataset appear again during testing, resulting in less convincing results.
3. Typos:
1. In line 115, 126, player i's information state should be $h^i \in \mathcal{H}^i$.
2. The symbol $q$ is used twice as the probability that Werewolves vote for Player 3 (line 194), and as the discussion policy in Equation 6.
3. In line 224-225, it seems that $\theta^i$ refers to player i's derived belief on all players' types, while $\theta$ refers to the groundtruth. These two symbols should be further distinguished.
> [1] Zelai Xu, et al. "Language agents with reinforcement learning for strategic play in the werewolf game".
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is a little confusing why the dimension of Player 2's belief $b^2$ is 6 while the dimension of Player 1's belief $b^1$ is 3 in the formal Theorem D.2.
2. Can the authors give a detailed explanation of why the NashConv values of GPT-4-based agents are higher than Gemini-based agents?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have stated the limitations about their work in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your insightful review.
**Response to W1: Differences from related work [1].**
Thank you for your comment. We have discussed the differences between our work and related work [1] (line 80-82). Here are more detailed differences. And these will be added to the main paper in a future version.
The main differences between our work and [1] lie in the model architecture and the training process. In related work [1]'s agent model, a set of action candidates is generated and the RL policy is trained to select one action among these candidates. However, the RL policy in our framework (*i.e.* discussion policy) is directly integrated into the thinking and decision-making process of agents. So the RL policy in [1] can be seen as an adjustment for the action distribution of the LLM-based agent, while our discussion policy is part of the agent's strategy. As for the training process, related work [1] adopts a population-based RL training method to adapt to the diverse actions of various players. We believe the discussion policy should be generic and invariant across players, so it is trained in a single-agent manner. Finally, the experimental results of our agents in both three-player and five-player games indicate good scalability, which is not explicitly shown in [1].
**Response to W2: It is possible that players in the training dataset appear again during testing, resulting in less convincing results.**
Thank you for your feedback. In fact, most agents in our experiments are different from those used for generating game logs for training. Specifically, the players in the training dataset only utilize the GPT-4-based LLM-instructed agent. On the contrary, the players in our experiments have various settings, including *ReAct*, *Belief*, *LLM-instructed*, and *RL-instructed*, while adopting Gemini or GPT-4 as the backend LLM in different experiments. For example, all players in Figure 4 use Gemini as the backend LLM to avoid the potential appearance of players in the training dataset.
We believe the results are convincing since these game logs only reflect the preferences of GPT-4 which are supposed to be different from Gemini. Also, the differences between agents' versions have an impact on players' performances as shown in our experimental results, so we can regard them as different players.
**Response to W3: About typos.**
Thank you for pointing out the mistakes. We have fixed these typos in our revision.
**Response to Q1: About the dimensions of players' beliefs.**
We appreciate the reviewer for pointing it out, as it might confuse readers who are not familiar with game theory. Each belief in our theorem corresponds to a probability distribution over a specific information set. Therefore, the dimension of belief is actually the number of states in its corresponding information set. According to the game rules, players should vote for other players simultaneously in the Voting phase, which can be seen as a normal-form subgame. However, the game tree is used to describe the extensive-form game, so we need to transform the Voting phase into the extensive form for better demonstration:
- In our game tree (Fig 2 and 5), we assume Player 2 votes after Player 1 and Player 3 votes after Player 2.
- For Player 1, since it does not know which action Player 3 (original Robber) takes at night, there are 3 potential states in its information set.
- For Player 2, it does not know Player 1's voting choice as they vote simultaneously in fact, and it also does not know Player 3's action at night, so there are 6 potential states in its information set in the game tree considering Player 1 votes before it.
Therefore, the different belief dimensions are actually due to the manually set "pseudo" voting order. But it has no impact on the final derived equilibria results.
**Response to Q2: Detailed explanation of why the NashConv values of GPT-4-based agents are higher than Gemini-based agents.**
The NashConv value is defined as $\text{NashConv}(\pi)=\sum_i\left[R^i(\text{BR}(\pi^{-i}), \pi^{-i}) - R^i(\pi)\right]$, which represents how much utilities players can gain by deviating to their best responses. It is notable that players' gains in NashConv actually relate to other players' strategies $\pi^{-i}$, so the NashConv value is a result of all players' strategies.
Let us still take the three-player game as an example. In this game, Player 1's action space is $A^1=\{V2, V3\}$, Player 2's action space is $A^2=\{V1, V3\}$, and Player 3's action space is $A^3=\{(NS, V1), (NS, V2), (S1, V1), (S2, V2)\}$. We ignore the $(S1, V2)$ and $(S2, V1)$ in $A^3$, since they are respectively dominated by $(S1, V1)$ and $(S2, V2)$ (this result is stated in Appendix D.1). Considering the following two strategy profiles:
1. $\pi_1$: Player 1 and Player 2 always vote for Player 3, while Player 3 adopts a random strategy. In this case, $\text{NashConv}(\pi_1) = 1/2+1/2+0 = 1$.
2. $\pi_2$: All players adopt random strategies. And in this case, $\text{NashConv}(\pi_2) = 1/2+1/2+1/4=5/4$.
We can see that even though the deviation gains of Player 1 and Player 2 are the same in $\pi_1$ and $\pi_2$, the deterministic strategies of Player 1 and Player 2 in $\pi_1$ make Player 3 can no longer find a better response, resulting the lower NashConv value comparing to $\pi_2$.
When we analyze the evaluation logs of the experiments in Section 6.2, we find that when employing Gemini, the two Werewolves (Player 1 and Player 2) are more likely to both vote for Player 3, while the GPT-4-based agents are more random. It suggests that the Gemini-based agents' strategy profile is closer to $\pi_1$, so its NashConv value is lower than the GPT-4-based agents in the same settings.
**References:**
[1] Zelai Xu, et al. Language agents with reinforcement learning for strategic play in the werewolf game. arXiv preprint arXiv:2310.18940, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. Most of my concerns have been addressed. I will raise the score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable feedback and for adjusting your evaluation of our work. We appreciate your acknowledgment of our efforts. | Summary: The authors investigate the social game ‘One Night Ultimate Werewolf’ (ONUW) as a testbed for their framework on RL-instruct-tuning an agent to select optimal discussion tactics. They prove the existence of a Perfect Bayesian Equilibria in the game ‘Werewolf’ when the game consists of a single round and show the effectiveness of their approach in a three-player setting as well as a five-player setting to prove generality.
Strengths: The authors introduce a novel RL-instructed LLM-based agent framework specifically tailored for strategic discussions in ONUW focused on identifying the best discussion tactics in communication scenarios as well as providing an environment for further testing.
In addition, the authors further provide a theoretical analysis of the ONUW game, formulating it as a Multi-Phase Extensive-Form Bayesian Game and establishing the existence of Perfect Bayesian Equilibria in different scenarios
Weaknesses: The primary weakness of the paper is noted by the authors, where the discretization of the space of discussion tactics results in what could be an oversimplification of the dynamics that would typically occur in such a game. For example, what could occur in a typical werewolf game is that the werewolf makes no effort to defend themselves, implying the other villagers are bullying them in an attempt to garner false sympathy.
The authors also acknowledge another weakness in that identifying and discretizing the space of discussion tactics in a game is a manual and time-consuming process. While the experiments in the paper imply generalization to X number of players, this generalization is still limited to the ONUW game.
A final weakness is that the results are somewhat limited by the experiments only being performed with LLMs rather than including human players as well. However, the reviewer acknowledges the additional time and monetary costs performing such experiments would occur.
Technical Quality: 3
Clarity: 3
Questions for Authors: How did the authors determine the specific categories for discussion tactics, and have they considered any automated methods for discovering these categories?
Did the authors consider including any human participants in their trials? If not, why?
How do the authors imagine the role-switching dynamics of ONUW impacted their results? What might change were the roles fixed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors touch on some limitations of their work, however it is unclear whether the topics addressed in the questions (the impact of human participants, the word required to pre-define the decision tactics) were not included due to being outside the scope of the work, or not relevant given some metric or detail the reviewer may have missed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive feedback.
**Response to W1: About the oversimplification and manual discretization of the discussion tactic space.**
Thank you for highlighting this limitation. As acknowledged in our work, the six discussion tactics are manually identified and simplified. This is primarily due to the lack of a standardized classification for discussion tactics and adequate human player data in games like ONUW. But the manual discretization of discussion tactic space provides a structured framework for understanding the dynamics of discussion during the game, and the experimental results have to some extent demonstrated the effectiveness of learning a discussion policy based on these tactics to improve the discussion ability of LLM-based agents. And as part of future work, we consider it interesting to explore techniques to automatically identify the discussion tactics during the gameplay as well.
**Response to W2: The generalization is still limited to the ONUW game.**
Thank you for your comment. Since the manually selected discussion tactics are specific to the ONUW game, the trained discussion policy is naturally limited to this game. However, the framework we proposed is modular and can be adapted to various scenarios with similar discussion-driven dynamics. For example, if we can somehow collect widely used bargaining tactics, then we could train a bargaining policy in the same way and apply it to our framework to construct an LLM-based agent for bargaining.
**Response to Q1: How did the authors determine the specific categories for discussion tactics?**
Thank you for your comment. The specific categories for discussion tactics were determined mainly through the inspiration from prior research on argumentation [1, 2] and analyzing human choices when playing similar games. And to highlight the potential deception in the ONUW game, we further divide these tactics into *honest* and *deceptive* ones. We have not yet explored automated methods for discovering these categories, but we highly acknowledge it is an interesting avenue for future research, as it could enhance the automation level of the LLM-based agent constructions and might discover unexpected tactics among humans' discussions.
**Response to Q2&W3: Did the authors consider including any human participants in their trials?**
Yes, we consider it would be interesting to include human players to play with the agents as well. As the main topic of our work is to demonstrate the importance of discussion in the ONUW games and to improve the discussion ability of LLM-based agents, we did not consider the impact of human participants in our experiments. However, we are planning to add a human interaction interface to our project code, which will be updated soon.
**Response to Q3: How do the authors imagine the role-switching dynamics of ONUW impacted their results? What might change were the roles fixed?**
Thank you for your insightful comment. As we analyzed in Section 4 and the game tree, the role-switching dynamics of ONUW do significantly impact the game's outcome, players' utilities, and the strategies employed by players. And it is the role-switching dynamics that make the three-player ONUW game possible. Imagine if roles were fixed in the three-player case, then two Werewolves would know the player left must be on *Team Village* and vote it out together, which would make the game meaningless.
Also, we believe that the reasoning difficulty brought by the role-switching dynamics is one of the key challenges that reflect the abilities of different language models. If roles are fixed, the ability gaps between different models in the experimental results would be narrowed down, and the need for strategic deception and counter-strategies would be reduced, possibly leading to simpler discussion policies.
**References:**
[1] Bolin Lai, et al. Werewolf among us: A multimodal dataset for modeling persuasion behaviors in social deduction games. arXiv preprint arXiv:2212.08279, 2022.
[2] Winston Carlile, et al. Give me more feedback: Annotating argument persuasiveness and related attributes in student essays. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 621–631, 2018.
---
Rebuttal Comment 1.1:
Comment: Thank you for your explanations. I will keep my current score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your reply! We sincerely appreciate your valuable reviews and acknowledgment of our efforts. | Summary: The paper presents an innovative framework for enhancing the discussion capabilities of language agents in the game "One Night Ultimate Werewolf" (ONUW) using reinforcement learning (RL). The authors propose a multi-phase extensive-form Bayesian game formulation for ONUW, analyze perfect Bayesian equilibria in both discussion and non-discussion scenarios, and develop an RL-trained discussion policy. Experimental results demonstrate the effectiveness and generalizability of the framework across various game settings.
Strengths: 1. The theoretical analysis of the game, including the formulation as a Bayesian game and the derivation of equilibria, is thorough and well-executed. The experimental design is robust, utilizing state-of-the-art LLMs and a novel RL training methodology.
2. The paper is well-structured and clearly written. The authors provide detailed explanations of the game mechanics, the theoretical framework, and the RL training process, which make the complex content accessible to readers.
Weaknesses: 1. Comparison with Related Work: While the paper provides an innovative approach to using RL and LLMs in the Werewolf game, there is a need for a more detailed comparison with closely related works, particularly those combining LLMs with reinforcement learning strategies. For instance:
Xu et al. (2023) [1] also explore strategic play in the Werewolf game using language agents trained with reinforcement learning. A comparative analysis highlighting what differentiates the current approach from Xu et al.'s methodology would clarify the novelty and the specific advancements made.
Wu et al. (2024) [2] utilize offline RL and a dataset-driven approach to enhance reasoning in LLMs within the same game context. Discussing how the methodologies differ, especially in terms of model training, dataset utilization, and resultant agent behavior, would strengthen the current work's positioning within the field.
2. Lack of Human Evaluation: The experimental section primarily focuses on win rates to demonstrate the effectiveness of the proposed framework. However, Werewolf (ONUW) involves complex human interactions and strategic discussions that might not be fully captured by win rates alone. The game's social and psychological aspects, such as bluffing and persuasion, are crucial:
It would be beneficial to include human evaluations to assess the quality of the AI's gameplay and its ability to mimic human-like strategic discussions. This could involve subjective assessments from experienced human players regarding the AI's ability to integrate seamlessly into human gameplay, its strategic depth, and its communication effectiveness.
LLM+RL:[1] Zelai Xu, Chao Yu, Fei Fang, Yu Wang, and Yi Wu. Language agents with reinforcement learning for strategic play in the werewolf game. arXiv preprint arXiv:2310.18940, 2023.
LLM+offline RL, dataset: [2] S Wu, L Zhu, T Yang, S Xu, Q Fu, Y Wei, H Fu, Enhance reasoning for large language models in the game werewolf. arXiv preprint arXiv:2402.02330, 2024
3. Dataset Size and Composition: Detailed information on the size and composition of the dataset used for offline RL is necessary. Understanding the diversity and representativeness of the game logs in the dataset would help in assessing the potential generalizability and robustness of the trained models. This includes the number of game sessions, variety of player strategies, and the range of game outcomes included.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Detailed Comparison with [1] and [2]: Can the authors provide a more detailed comparison of their work with the approaches in [1] Zelai Xu et al., 2023, and [2] S Wu et al., 2024? Specifically, how does the integration of RL and LLMs in your framework differ from these studies in terms of model architecture, training processes, and performance metrics?
2. Human-Like Gameplay Evaluation: Given the social and psychological complexities of the ONUW game, how do the authors plan to evaluate the AI's performance in terms of human-like behavior and strategic discussion quality? Are there plans to incorporate human player evaluations, and if so, what methodologies would be used to assess the AI's gameplay against human strategies and interactions?
3. The authors note "we additionally contribute a dataset featuring players employing various discussion tactics in the ONUW
game. ". Where is the dataset, can the authors opensouce the dataset?
4. Clarification on Offline RL Implementation: The paper mentions the use of offline RL due to the slow interaction with LLMs, but it lacks specific details on how the LLM embeddings are handled within the offline RL framework. Are the state embeddings used by the discussion policy generated by running LLMs on the offline dataset prior to training? If so, how is the freshness and relevance of these embeddings ensured over iterations of RL training?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors should consider expanding the discussion on the limitations related to the discretization of discussion tactics and the potential over-reliance on specific datasets. Suggestions for future work could include exploring methods for dynamic tactic generation or adjustment based on real-time gameplay feedback, which could help in developing more adaptable and robust AI agents for complex communication games. Additionally, addressing the computational demands and proposing more resource-efficient models could make the technology more accessible for broader applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback.
**Response to Q1&W1: Detailed comparison with related work [1] and [2].**
Thank you for your constructive comment. We have discussed the differences between our work and related work [1] (line 80-82). Here are detailed differences between our work with related works. And these will be added to the main paper in a future version.
- **Differences from Xu et al. (2023) [1]:** The main differences lie in the model architecture and the training process. In related work [1]'s agent model, a set of action candidates is generated and the RL policy is trained to select one action among these candidates. However, the RL policy in our framework (*i.e.* discussion policy) is directly integrated into the thinking and decision-making process of agents. So the RL policy in [1] can be seen as an adjustment for the action distribution of the LLM-based agent, while our discussion policy is part of the agent's strategy. As for the training process, related work [1] adopts population-based RL training method to adapt to the diverse actions of various players. We believe the discussion policy should be generic and invariant across players, so it is trained in a single-agent manner. Finally, the experimental results of our agents in both three-player and five-player games indicate good scalability, which is not explicitly shown in [1].
- **Differences from Wu et al. (2024) [2]:** From the perspective of the model architecture and usage, RL in related work [2] is utilized to train the *Thinker* module, which focuses on handling complex logical analysis and strategic planning in specialized tasks, while the RL policy in our work (*i.e.*, the discussion policy) mainly focuses on enhancing the discussion abilities of LLM-based agents. In the training process, the input and output of the RL policy in [2] are structured language features. But the input in our discussion policy is the embedding of current observation and belief and the output is the chosen discussion tactic. Meanwhile, as the context generated by the *Presenter* in [2] is supposed to be consistent with *Thinker*'s produced policies, the agent framework in [2] barely considers the significance of speaking strategically (*i.e.* being honest or deceptive), which is actually the motivation and core of our work.
**Response to Q2&W2: The human evaluation for agents' performance.**
Thank you for your comment. We understand the importance of human evaluations in assessing the quality of AI's performance and its ability to mimic human-like strategy discussions. We have conducted several human evaluations and analyses based on the game logs. However, due to the page limits in the global rebuttal, we can only provide one typical analysis in the PDF file. More results will be added to the appendix in the revision.
**Response to W3: Detailed information of the dataset.**
Thank you for your comment. We have provided the process of data collection, the dataset statistics and analysis in Appendix E.1. The dataset used for offline RL consists of 120 game logs (containing 1800 discussion turns) from the five-player ONUW game. And for each log, it includes the entire discussion history, players' discussion tactics, the initial and final role assignment, voting results, and game outcomes.
**Response to Q3: Can the authors opensource the dataset?**
Thank you for your feedback. We highly acknowledge the importance of opensourcing the code and dataset of our work. The anonymized link to our code has been sent to the Area Chair in accordance with the conference guidelines. And right now we are cleaning our dataset, which is scheduled to be opensourced after the rebuttal period.
**Response to Q4: Clarification on offline RL implementation.**
Thank you for your comment. Our offline RL training details can be found in Appendix E.2. In our implementation, we first use GPT's `text-embedding-ada-002` model to encode players' observations in the dataset into state embeddings, and then adopt these embeddings for further offline RL training. The reason we adopted a frozen encoder is that GPT's embedding model is well pretrained and we believe it could effectively represent the information of the original text in the semantic space. Also, related work [1] adopts prior embeddings before training. However, we agree with the reviewer's point that fine-tuning the embedding model may help further improve the performance of the agents. We will clarify this in the revision.
**References:**
[1] Zelai Xu, et al. Language agents with reinforcement learning for strategic play in the werewolf game. arXiv preprint arXiv:2310.18940, 2023.
[2] Shuang Wu, et al. Enhance reasoning for large language models in the game werewolf. arXiv preprint arXiv:2402.02330, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! After considering other reviews and responses, I've decided to maintain my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for your consideration. Please let us know if you have any further questions or concerns. We will respond to your inquiries as promptly as possible. If we can hopefully address your concerns, we hope you'll consider adjusting your rating. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful feedback! Your reviews have greatly improved the paper. And we are grateful for the appreciation of our theoretical analysis (`uoU5`, `ADZR`, `Tkt3`, `b5Fq`), the innovation of our framework (`ADZR`, `Tkt3`, `b5Fq`), and the recognition of our research scenario (`Tkt3`, `b5Fq`).
We have responded to each reviewer's respective questions below. Furthermore, we will also revise the paper accordingly to address all other suggestions and comments. For the code of our project, we have forwarded an anonymous GitHub repository to the Area Chair in accordance with the conference guidelines. It contains the implementation of the ONUW game environment, all agents used in the experiments, and the dataset process and policy training procedure. We hope our responses could clarify existing questions.
**Response to Reviewer b5Fq's suggestion on adding a random choice baseline.**
We added an LLM-based agent whose discussion policy is to randomly choose discussion tactics as a new baseline (named *Random*), and conducted experiments in the same settings as other agents in Table 1. Here are the results:
| Agents | Gemini (*Belief*) Win Rates | Gemini (*Belief*) Avg. Votes | GPT-4 (*Belief*) Win Rates | GPT-4 (*Belief*) Avg. Votes |
|-------|--------|--------|--------|--------|
| *ReAct* | 0.40 | 1.23 | 0.30 | 1.73 |
| *Belief* | 0.40 | 1.73 | 0.32 | 1.87 |
| *Random* | 0.37 | 1.53 | 0.32 | 2.03 |
| *LLM-ins.* | 0.62 | 1.10 | 0.37 | 1.90 |
| *RL-ins.* | **0.70** | 1.10 | **0.50** | 1.87 |
It can be seen that the performance of the *Random* agent is almost similar to that of the *ReAct* and *Belief* agents, demonstrating the significance of strategic discussion policy. Especially, when playing with GPT-4-based *Belief* agents as Player 3, the *Random* agent is more likely to be voted. We recognize it is because the random discussion policy increases the inconsistency in agent's discussions.
**About the PDF file.**
In the supplementary PDF file, we provided a graph about the detailed game process of the ONUW game, and a typical example of the agents' performance along with human analysis.
Pdf: /pdf/4dd761ab99f3a2e8c5558870cd09d80db2c57905.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization | Accept (poster) | Summary: This paper proposed MAGNET, a gradient-based tokenization method, to address an over-segmentation issue when handling multilingual text data written in different scripts. MAGNET learns to predict segment boundaries between byte tokens in a text sequence. MAGNET has a customizable architecture where byte-level sequence are routed through language-script-specific predictors, which implements a language-dependent tokenization and thus avoids over-segmentation in non-Latin scripts. The authors conducted extensive experiments on nine languages with distinct scripts. Their results show that the proposed approach works well in the downstream task and moreover contributes to the speed-up at inference time.
This paper is well organized and clearly motivated. The over-segmentation gets a bigger issue in multilingual model training so the proposed approach will be useful.
Strengths: - Well organized paper. easily to follow each section. enough description is provided.
- Extensive results with 9 languages with 3 different scripts.. the authors conducted extensive experiments in the downstream task
- Successfully reducing the inference time
Weaknesses: - experiments are carried out in three different scripts. Other interesting languages would be Chinese, Japanese and Korean scripts as another major examples that struggle with the out-of-vocabulary issues.
- despite addressing the oversegmentation, performance is still similar to that of byte-level or baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I see the proposed approach successfully overcoming segmentation disparities in subword tokenization though what else would be needed to improve the downstream task perfomance? For instance, Figure 3 shows that the byte-level model also achieves good scores across.
- Other interesting languages would be Chinese, Japanese and Korean scripts. Have you ever experimented with the languages? If yes, do you observe anything new?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 2xYq for taking the time to review our work and noting that our experiments and results are extensive with a successful reduction in inference time. We thank you for your suggestions and address your concerns below.
**Sufficient language coverage**
- Our choice of languages was not influenced by space separation, but rather by linguistic diversity and the availability of data for downstream evaluation. Whilst we agree that CJK languages like Chinese and Japanese could have been included in our experiments, we were limited by computational resources and future work should extend our current work to these languages.
- MAGNET is very flexible and applicable to all languages that can be expressed with UTF-8, we use byte-word-ratio as a simple proxy to train our boundary predictors to learn equitable tokenization. Indeed we cannot use byte-to-word ratio for languages without whitespace like CJK languages, but it's important to note that byte-word-ratio is not a compulsory proxy and for such languages other proxies can be used. Also computing the byte-to-word ratio is quite trivial and doesn’t require a huge amount of text. For CJK languages, one could employ state-of-the-art word-segmentation tools like Jieba to count the number of words before computing the byte-to-word ratio.
**Despite addressing the over segmentation, performance is still similar to that of byte-level or baselines**
- The goal of MAGNET is not to outperform byte-level models or DTP, but to encourage multilingual language models to learn equitable segmentations that result in equitable model representations across languages.We still, however, see that MAGNET outperforms DTP on most tasks .
**I see the proposed approach successfully overcoming segmentation disparities in subword tokenization though what else would be needed to improve the downstream task performance?**
- Due to computational limitations, we are restricted to pretraining on a smaller subset of data compared to state-of-the-art models. We believe that scaling the size of our models and pretraining data could significantly help improving downstream performance alongside overcoming segmentation disparities.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have read it along with the rest of reviewers' comments and I have decided to keep my score unchanged. While I understand the authors' computational limitations in conducting experiments on CJK languages, though, as Reviewer stTy also noted, including experimental results on CJK languages would enhance technical soundness of the work. | Summary: The authors propose MAGNET (multilingual adaptive gradient-based tokenization) to remove the common problem in non-Latin-script languages getting only tokens (subwords assigned in the vocabulary) representing short character sequences as opposed to English getting high-semantic-content tokens.
As opposed to previous gradient based tokenization strategies which globally minimize compression cost across all sequences, they use language specific segmentation predictors to reduce oversegmentation in non-Latin-script languages, to produce a better multilingual byte-level LM.
Within a byte-level LM the tokenization acts as an information bottleneck, where the input subword byte sequences have to be predicted again at the output using an "upsampling module", while next word prediction takes place over the tokenized-on-the-fly blocks.
They compare MAGNET to DTP (which doesn't have the same conditioning but otherwise has identical structure) and naive BPE over the vocabulary. Figure 2 shows that the avg # tokens per passage across all languages, as opposed to BPE's severe jump for Hindi, Bengali, and Telegu, and DTP's punishment of both Cyrillic and Indic script languages.
Strengths: This tokenization issue comes up a lot in multilingual NLP research. Different segmentation levels between languages can severely hamper LM performance---particularly open source, open weight LM performance---on many cross-lingual tasks including summarization, translation, QA, and reasoning, so fixing this with architectures such as this and scaling them will prove very impactful.
This simple proposal to fit the hourglass architecture to script-denominated segementation predictors is a simple but useful refinement.
Simple presentation of their method's superior results for simple tokenization, which also generalized to better performance on most multilingual tasks such as XNLI, PAWS-X, and SIB.
The savings in token cost for passages translate into better inference time as well.
Weaknesses: Language ID is still an issue here. While they are able to fit these individual segmentation predictors based on implicit language family by script (which can be inferred from position in the UTF-8 table), I think the case could be made that they're basically just kicking the unequal tokenization can down a level. Now words in Telegu might be worse segmented than words in Hindi, etc. Really an inline language identification-conditioned segmentation module would be best here. But that's probably better as a direction for future work anyway.
Technical Quality: 4
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, they mention language-specific issues to expanding their method beyond the alphabetic languages they examined. I would like to see them mention the further potential improvements to using inline language ID prediction (conditioned in training, maybe predicted implicitly in inference?) to control the segmentation module and maybe ensure more equitable performance across the languages.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer KW7H for reviewing our work and noting that MAGNET is very impactful and useful in reducing over-segmentation in multilingual language models. They also note that MAGNET results in better performance and lower inference costs. We appreciate your suggestions and address your concerns below.
**Resolving Language ID issues**
- Yes, this is a valid point and a great suggestion. Currently, the number of boundary predictors in MAGNET can be scaled based on the linguistic properties that the languages in the pretraining data share. This is straightforward given that the boundary predictor is a small module within the entire model. However, a language identification-conditioned segmentation module that doesn’t require adding too many individual boundary predictors could also be relevant here, and this would be a great avenue for future work. We will include this discussion in the limitations and future work section of the final version.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response! | Summary: In this paper, the authors propose a multilingual adaptive gradient-based tokenization approach to reduce over-segmentation for non-Latin language texts. In particular, they improve the previous Dynamic Token Pooling method by inferring token boundaries with different predictors for different languages. They conduct extensive experiments to demonstrate that not only the over-segmentation issue is reduced, the inference efficiency also increases.
Strengths: * Apply different boundary predictors for different languages/scripts, which is a natural improvement over the previous approach, DTP
* The results look reasonable and similar to human tokenization
Weaknesses: * Basic token prediction tasks or text generation tasks are not included in the evaluation
* Downstream tasks' performances aren't different
* Inference time improvement isn't much compared to DTP
Technical Quality: 3
Clarity: 3
Questions for Authors: In Figure 2, are some lines missing or overlapped?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer WSTY for taking time to review our work and noting that our improvements over previous work yield better tokenization. We address your concerns below.
**Basic token prediction tasks or text generation tasks are not included in the evaluation**
- We limited our evaluation to text understanding tasks following lots of prior work on tokenization Godey et al. (2022), Clark et al. (2022), Tay et al. (2022). We believe that future work should focus on text-generation tasks. Text generation tasks are indeed byte-level and slower, and follow-up ideas similar to Fleshman et al. (2023) can address them.
**Downstream tasks' performances aren't different**
- The goal of Magnet is not to outperform byte-level models or DTP, but to encourage multilingual language models to learn equitable segmentations that result in equitable model representations across languages. However, we still see that Magnet outperforms DTP on most tasks.
**Inference time improvement isn't much compared to DTP**
- Inference time improvements compared to DTP do not appear significant due to some in-effciencies in our implementation. We will invest more time into making the implementation more efficient for the final version of the paper.
Godey, Nathan, et al. "MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling." EMNLP 2022-The 2022 Conference on Empirical Methods in Natural Language Processing. 2022
Clark, Jonathan H., et al. "Canine: Pre-training an Efficient Tokenization-Free Encoder for Language Representation." Transactions of the Association for Computational Linguistics 10 (2022)
Fleshman, William, and Benjamin Van Durme. "Toucan: Token-Aware Character Level Language Modeling." arXiv preprint arXiv:2311.08620 (2023) | Summary: This paper presents multilingual adaptive gradient-based tokenization (MAGNET), which aims to reduce over-segmentation in non-Latin script languages in multilingual settings. MAGNET processes byte-level sequences and routes them through language-script-specific predictors, each optimized for its respective script, such as Latin, Cyrillic, and Indic. The segmentation is modeled using a sigmoid function, making it differentiable. Experimental results show that MAGNET can maintain downstream task performance while reducing inference latency.
Strengths: 1. The main contribution of MAGNET is its ability to maintain performance on downstream tasks while improving inference time by reducing tokenized sequence length. Compared with byte-level tokenizers, inference is more than twice as fast while maintaining or even improving performance.
2. The idea is simple and straightforward, making the paper easy to follow.
Weaknesses: 1. The proposed method doesn't conduct experiments on languages with sufficient coverage. The predictors only include Latin, Cyrillic, and Indic, raising concerns about its applicability to languages lacking spaces as word boundaries, such as Chinese and Japanese.
2. The importance of byte-level tokenization and gradient-based tokenizers as research directions is unclear. The experimental results don't demonstrate the significance of byte-level tokenization in terms of downstream performance and inference latency.
3. The training objective remains byte-level without segmentation. This raises questions about whether MAGNET still needs to generate very long byte sequences during inference.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any analysis of the segmentation? For example, can the segmentation find phrase boundaries?
2. Is it possible to incorporate prior domain knowledge (such as word dictionaries or tokenization from other tokenizers) to improve gradient-based tokenization?
3. Are these three predictors sufficient to achieve good performance across all languages? What's the performance for languages that don't have spaces in the sequence, such as Chinese or Japanese?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer stTy for taking the time to review our work and noting that MAGNET maintains performance on downstream tasks while improving efficiency and combating over-segmentation. We address your concerns below.
**Sufficient language coverage**
- Our choice of languages was not influenced by space separation, but rather by linguistic diversity and the availability of data for downstream evaluation. Whilst we agree that CJK languages like Chinese and Japanese could have been included in our experiments, we were limited by computational resources and future work should extend our current work to these languages.
- MAGNET is very flexible and applicable to all languages that can be expressed with UTF-8, we use byte-word-ratio as a simple proxy to train our boundary predictors to learn equitable tokenization. Indeed we cannot use byte-to-word ratio for languages without whitespace like CJK languages, but it's important to note that byte-word-ratio is not a compulsory proxy and for such languages other proxies can be used. Also computing the byte-to-word ratio is quite trivial and doesn’t require a huge amount of text . For CJK languages, one could employ state-of-the-art word-segmentation tools like Jieba to count the number of words before computing the byte-to-word ratio.
**Byte-level and gradient-based tokenization research directions**
- Several works Ahia et al. (2023), Petrov et al. (2023) have pointed out flaws of subword level tokenization algorithms, particularly over-segmentation in non-Latin script languages. Tokenization in general is an active research area and there has been recent efforts to make it more robust and easily adaptable across languages and data domains Clark et al. (2022), Xue et al. (2022), Tay et al. (2022), Yu et al. (2023). Byte-level models are very relevant because of their high coverage since UTF-8 supports most of the world’s scripts. They have also generally been shown to match or surpass subword-models in performance. Moreover, gradient-based tokenization makes byte/character level models fairer and more efficient. We will highlight this better in our contributions in the final draft.
**Training objective**
- The training objective is indeed byte-level and slower, and follow-up ideas similar to Fleshman et al. (2023) can address them. For this reason, we limited our evaluation to text understanding tasks following lots of prior work on tokenization Godey et al. (2022), Clark et al. (2022),Tay et al. (2022) . We believe that future work should focus on text-generation tasks.
**Is there any analysis of the segmentation? For example, can the segmentation find phrase boundaries?**
- MAGNET is customizable and the boundary predictors can be trained to learn different granularities of segmentations including phrase segmentations. We provide qualitative analysis of the segmentations learned by MAGNET in comparison to DTP and subword-level models in Table 5 and 6 in the Appendix.
**Is it possible to incorporate prior domain knowledge (such as word dictionaries or tokenization from other tokenizers) to improve gradient-based tokenization?**
- Yes, this is very much possible. Although we train our boundary predictors via stochastic reparameterization, we note that one can also incorporate supervision from other sources such as word dictionaries or even BPE tokenizers. DTP Nawrot et al. (2023) experimented with this in their work.
**Are these three predictors sufficient to achieve good performance across all languages?**
- We used three predictors in our experiments because the languages we covered belonged to three language subfamilies. First, the number of boundary predictions could be scaled based on user preference. Second, the trained boundary predictors are sufficient to cover languages with properties similar to those in the training data. Beyond languages in our pre-training data, we also conducted downstream experiments on 5 other Indo-Aryan languages. We observed that the predicted segmentations on these languages are close to the word-level segmentations we see in language.
---
Petrov, Aleksandar, et al. "Language model tokenizers introduce unfairness between languages." Advances in Neural Information Processing Systems (2023)
Ahia, Orevaoghene, et al. "Do All Languages Cost the Same? Tokenization in the Era of Commercial Language Models." Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023.
Clark, Jonathan H., et al. "Canine: Pre-training an Efficient Tokenization-Free Encoder for Language Representation." Transactions of the Association for Computational Linguistics 10 (2022)
Xue, Linting, et al. "Byt5: Towards a token-free future with pre-trained byte-to-byte models." Transactions of the Association for Computational Linguistics 10 (2022)
Tay, Yi, et al. "Charformer: Fast Character Transformers via Gradient-based Subword Tokenization." International Conference on Learning Representations.
Yu, Lili, et al. "Megabyte: Predicting million-byte sequences with multiscale transformers." Advances in Neural Information Processing Systems 36 (2023)
Fleshman, William, and Benjamin Van Durme. "Toucan: Token-Aware Character Level Language Modeling." arXiv preprint arXiv:2311.08620 (2023)
Godey, Nathan, et al. "MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling." EMNLP 2022-The 2022 Conference on Empirical Methods in Natural Language Processing. 2022
Nawrot, Piotr, et al. "Efficient Transformers with Dynamic Token Pooling." Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023
---
Rebuttal 2:
Title: Unclear on the importance of your weaknesses
Comment: I think it's worth reconsidering the first two weaknesses, as the authors note.
1. Showing improvement on a meaningful set of languages is a useful contribution, even if it isn't applied to a comprehensive set of popular languages. Additionally, CJK languages are much more well-studied than Indic languages as is; and they also have more information content per byte so segmentation is less of an issue than for alphabetic languages. The authors might want to note this
2. Byte-level tokenization is important in a lot of production, non-generic LLM language-model based applications such as ASR retrieval and others. The authors also note a lot of recent related work. Improving it is an important direction.
Hope you consider these when responding to the authors and considering your score! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
I2EBench: A Comprehensive Benchmark for Instruction-based Image Editing | Accept (poster) | Summary: This paper presents a comprehensive benchmark for instructional image editing. The benchmark contains a high-quality dataset with over 2000 images and 4000 instructions. In addition, the benchmark presents a new evaluation pipeline that leverages GPT to act as the judge to validate the performance of different instructional image editing methods. Within the evaluation, there are two major factor groups considering both high-level and low-level editing.
Strengths: Due to the lack of high-quality benchmarks in image editing, this work fills the gap and is of great significance to the community. The evaluation pipeline is reasonable and clear to follow. The idea of using multimodal large language models to evaluate image editing results is interesting. The overall presentation is clear.
Weaknesses: 1. The provided Google Drive link cannot be opened, making the benchmark images inaccessible for review.
2. Image editing evaluation remains a challenge due to few benchmarks. However, the authors ignore comparing their work with a related work [1] due to similar technical pipelines used with that work. Both this and that work first perform human collection, then automated evaluation using GPT, then human evaluation, and then alignment evaluation. The authors should explain the missing comparison.
3. In the high-level editing, the authors did not discuss the evaluation dimension of action change or shape and size change, which are also quite essential editing types.
[1] Diffusion Model-Based Image Editing: A Survey. https://arxiv.org/abs/2402.17525
Technical Quality: 2
Clarity: 3
Questions for Authors: I am confused about the definition of low-level image editing, which is actually restoration and enhancement.
First, low-level restoration tasks usually involve fine-grained operations with rich details. Can GPT really see the slight difference when two similar restored results only vary slightly in terms of visual observation or PSNR evaluation?
Second, many editing methods are not designed for low-level vision tasks. It is not appropriate to use them to perform these tasks. The authors should check more methods specially designed for these tasks.
I hope the authors can perform additional experiments to address the above concerns. For example, for low-light image enhancement, using two leading methods (such as [2] and [3]) to enhance one image with similar outputs visually (or similar PSNR) and then using GPT for evaluation.
[2] Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement (https://arxiv.org/abs/2303.06705)
[3] Low-Light Image Enhancement with Wavelet-based Diffusion Models (https://arxiv.org/abs/2306.00306)
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The paper should be submitted to the NeurIPS Dataset and Benchmark track.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to extend our heartfelt gratitude for your thoughtful and encouraging feedback on our paper. We are deeply appreciative of your commendation, recognizing our work as a significant contribution that fills a notable gap in high-quality benchmarks for image editing. Your praise regarding the reasonableness and clarity of the evaluation pipeline, as well as the innovative use of multimodal large language models to assess image editing results, is highly encouraging. Additionally, we are grateful for your positive remarks on the overall clarity of our presentation.
Now, let us address the specific weaknesses and questions you have highlighted and provide further clarification:
> ***Q1:** The provided Google Drive link cannot be opened, making the benchmark images inaccessible for review.*
>
**A1:**
We sincerely apologize for the inconvenience caused by the inaccessibility of the Google Drive link. Upon investigation, we discovered that our anonymous Google account was mistakenly suspended by Google's system on June 21, 2024, resulting in the data on Google Drive becoming inaccessible. We have since appealed this suspension, and the Google Drive link is now accessible once again. If you have any further questions or encounter any other issues, please do not hesitate to contact us.
> Q2: Image editing evaluation remains a challenge due to few benchmarks. However, the authors ignore comparing their work with a related work [1] due to similar technical pipelines used with that work.
>
>
> *[1] Diffusion Model-Based Image Editing: A Survey.*
>
**A2:**
Thank you for pointing out the relevant and intriguing paper. We apologize for not referencing and comparing our work with this study prior to submission. We will incorporate a thorough discussion and comparison with this work in our revised manuscript. Specifically, the comparison between our benchmark and the one presented in [1] can be outlined as follows:
1. **Evaluation Methodology:**
- **Our Approach:** For each evaluation dimension, we meticulously designed questions to compare GPT’s responses with annotated answers to assess the accuracy of the edits.
- **[1]’s Approach:** The related work uses GPT to score the editing results.
2. **Number of Dimensions:**
- **Our Benchmark:** We propose a comprehensive benchmark with 16 high-level and low-level dimensions to thoroughly evaluate different types of edits.
- **[1]’s Benchmark:** The related work covers 8 dimensions.
3. **Number of Images per Dimension:**
- **Our Benchmark:** I2EBench includes approximately 120 images for each dimension, providing robust validation for each type of edit.
- **[1]’s Benchmark:** The related work includes 50 images per dimension.
We will ensure these detailed discussions are included in the final version of our manuscript. Your suggestion will undoubtedly contribute to a more comprehensive and complete presentation of our work.
> ***Q3:** In the high-level editing, the authors did not discuss the evaluation dimension of action change or shape and size change, which are also quite essential editing types.*
>
**A3:**
Thank you for your valuable feedback. You are absolutely correct that the evaluation dimensions of action change and shape and size change are crucial for a comprehensive assessment of high-level editing. We appreciate your insightful recommendation.
In light of your suggestion and the mentioned related works[1], we will actively incorporate these evaluation dimensions into our benchmark in future iterations. This will ensure that all essential aspects of high-level editing are thoroughly evaluated, ultimately contributing to more accurate and useful benchmarking for the research community.
> ***Q4:** First, low-level restoration tasks usually involve fine-grained operations with rich details. Can GPT really see the slight difference when two similar restored results only vary slightly in terms of visual observation or PSNR evaluation? … For example, for low-light image enhancement, using two leading methodsto enhance one image with similar outputs visually (or similar PSNR) and then using GPT for evaluation.*
>
**A4:**
Thank you for your insightful question. It appears there may have been some misunderstanding regarding our evaluation methodology for low-level restoration tasks. As indicated in lines 186-187 of our manuscript, we employed the Structural Similarity Index (SSIM) between the ground truth image and the edited image to measure the quality of low-level edits. Consequently, we did not rely on GPT-4V for this aspect of the evaluation. Therefore, whether GPT can discern slight differences when two similar restored results vary minimally in visual observation or PSNR evaluation does not impact the accuracy of our low-level evaluation.
To prevent further misunderstandings, we will clarify our evaluation methodology as detailed on lines 186-187 in the revised manuscript.
> ***Q5:** Second, many editing methods are not designed for low-level vision tasks. It is not appropriate to use them to perform these tasks.*
>
**A5:**
Thank you for your insightful suggestion.
For various independent low-level tasks, there are already well-established datasets in existence. Models specifically designed for these tasks can be appropriately evaluated using these datasets.
However, the advent of large models capable of handling multiple tasks has become a significant trend in the field. Our primary objective is to encourage research on versatile models that can support both high-quality high-level and low-level image editing. Therefore, in designing our benchmark, we focused on assessing the capabilities of general-purpose editing models across a range of evaluation dimensions, encompassing both high-level and low-level editing capabilities.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Most of my concerns are addressed. Hence I increase my rating.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer HHca
Comment: We are delighted that our response has addressed your concerns. Thank you for increasing your rating.
---
Rebuttal 2:
Title: Sincere Request for Further Discussions
Comment: Dear Reviewer HHca,
Thank you for your invaluable efforts and constructive feedback on our manuscript.
As the discussion period draws to a close, we eagerly anticipate your thoughts on our response. We sincerely hope that our response meets your expectations. If there are any remaining concerns or aspects that require clarification, we are ready to address them as soon as possible.
Best regards,
The Authors | Summary: This paper proposes I2EBench, which is a new benchmark for evaluating Instruction-based Image Editing (IIE) models. It offers a large dataset with over 2,000 images and 4,000 instructions across 16 detailed evaluation dimensions. The benchmark is designed to assess image editing quality automatically and aligns with human perception through extensive user studies. I2EBench aims to provide insights for improving IIE models and will be open-sourced for community use.
Strengths: I commend the authors for their insightful paper, particularly the proposed benchmark, which is remarkably comprehensive and has the potential to significantly advance the field of instruction-based image editing (IIE).
1.The benchmark addresses a wide array of evaluation dimensions, offering a holistic and multifaceted assessment of IIE model capabilities. This comprehensive approach ensures that the strengths and weaknesses of various models are thoroughly examined from multiple perspectives.
2.It strongly emphasizes aligning with human perception, ensuring the benchmark's relevance to human preference.
3.The large collection of images and instructions forms a solid foundation for comprehensive testing. This large and diverse dataset provides a robust foundation for thorough and rigorous testing, enabling models to be evaluated across a broad spectrum of scenarios.
Weaknesses: Although the paper proposes a amazing benchmark, I think the following points can further improve the paper:
1.The results of multiple models on the benchmark were presented in the paper; however, the specifics of their evaluation were not described in sufficient detail. For instance, information regarding the hyperparameters used for the evaluation process and the types of GPUs employed for model testing would provide greater clarity and reproducibility.
2.The rationale behind the use of GPT-4V for supplementary evaluation requires further elucidation. Additionally, whether alternative models could serve as suitable substitutes for GPT-4V?
3.In the section on Instruction, the contributions of the paper are not sufficiently highlighted. A more thorough summarization of the paper's contributions would significantly aid in conveying its impact to the readers.
Technical Quality: 4
Clarity: 4
Questions for Authors: I would like to inquire if the I2EBench tool or framework will be open-sourced in the future. This information is crucial for understanding its potential accessibility and contributions to the community.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes. The limitations of the paper described by the author in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to extend our heartfelt gratitude for your thoughtful and encouraging feedback on our paper. We are deeply appreciative of your commendation, recognizing our work as insightful and valuable. Thank you for acknowledging that our benchmark encompasses a wide range of evaluation dimensions and places a strong emphasis on alignment with human perception. Additionally, we are grateful for your praise regarding the extensive collection of images and instructions, which indeed forms a robust foundation for comprehensive testing.
Now, let us address the specific weaknesses and questions you have highlighted and provide further clarification:
> ***Q1:** The results of multiple models on the benchmark were presented in the paper; however, the specifics of their evaluation were not described in sufficient detail. For instance, information regarding the hyperparameters used for the evaluation process and the types of GPUs employed for model testing would provide greater clarity and reproducibility.*
>
**A1:**
Thank you for your valuable feedback. We utilized the hyperparameter settings and official weights as specified in the original papers for each model. All editing results were generated using A800 GPUs.
We will include these implementation details in the revised version of our paper to enhance transparency and facilitate the reproducibility of our results.
> ***Q2:** The rationale behind the use of GPT-4V for supplementary evaluation requires further elucidation. Additionally, whether alternative models could serve as suitable substitutes for GPT-4V?*
>
**A2:**
Thank you for your insightful question.
Our decision to use GPT-4V in our supplementary evaluation was based on its status as the most advanced model available at the time of our study. At that point, GPT-4o had not yet been proposed. We chose GPT-4V for its sophisticated capabilities, including its nuanced understanding of language and proficiency in handling complex tasks. As technology continues to evolve, it is indeed possible that other multimodal large language models may emerge, offering even more accurate assessments in the future.
> ***Q3:** In the section on Instruction, the contributions of the paper are not sufficiently highlighted. A more thorough summarization of the paper's contributions would significantly aid in conveying its impact to the readers.*
>
**A3:**
Thank you for your valuable feedback. In light of your comments, we have outlined our key contributions as follows:
- We have proposed a comprehensive benchmark encompassing 16 evaluation dimensions specifically designed to assess instruction-based image editing (IIE) tasks.
- We have conducted extensive experiments on eight popular IIE models, accompanied by a thorough analysis of their performance.
- We have implemented Alignment Verification, demonstrating that our benchmark scores are aligned with human perception.
We will incorporate this summarized list of contributions into the revised manuscript to ensure that the readers can clearly understand the impact and significance of our work.
> ***Q4:** I would like to inquire if the I2EBench tool or framework will be open-sourced in the future. This information is crucial for understanding its potential accessibility and contributions to the community.*
>
**A4:**
Thank you for your suggestion. The evaluation scripts for I2EBench have already been included in the supplementary materials. We are committed to open-sourcing all I2EBench codes and datasets to foster progress in this field. This initiative will ensure the accessibility and usability of our tool for the broader research community, promoting further advancements and collaborative developments.
---
Rebuttal 2:
Title: Sincere Request for Further Discussions
Comment: Dear Reviewer A2Kh,
Thank you for your invaluable efforts and constructive feedback on our manuscript.
As the discussion period draws to a close, we eagerly anticipate your thoughts on our response. We sincerely hope that our response meets your expectations. If there are any remaining concerns or aspects that require clarification, we are ready to address them as soon as possible.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer A2Kh:
Thanks for reviewing this work. Would you mind to check authors' feedback and see if it resolves your concerns or you may have further comments?
Best, AC
---
Rebuttal 3:
Comment: Thank you for the author's response. All my concerns have been addressed. After carefully reviewing all the reviewers' comments and the author's response, I found that all reviewers acknowledge the value of I2EBench. I further believe that I2EBench can significantly contribute to the image editing community. Therefore, I will further raise my score and hope the authors can incorporate all the feedback into the revised version.
---
Rebuttal Comment 3.1:
Title: Response to Reviewer A2Kh
Comment: We greatly appreciate your efforts in reviewing and your recognition of I2EBench's contribution to the community. We are pleased that our response has resolved your concerns. | Summary: The paper addresses the challenge of evaluating models in the field of Instruction-based Image Editing (IIE) by proposing a comprehensive benchmark called I2EBench. It features: 1) Comprehensive Evaluation: Covers 16 evaluation dimensions for a thorough assessment of IIE models. 2) Human Perception Alignment: Includes extensive user studies to ensure relevance to human preferences. 3) Research Insights: Provides analysis of strengths and weaknesses of existing IIE models to guide future development. I2EBench will be open-sourced, including all instructions, images, annotations, and a script for evaluating new models.
Strengths: 1 The supplementary materials provided by the authors offer an in-depth explanation of the evaluation process, which is very beneficial for readers seeking to understand the evaluation details of I2EBench thoroughly.
2 The benchmark proposed in the paper covers a wide range of evaluation dimensions, providing a holistic and multi-faceted assessment of IIE model capabilities.
3 The authors commit to open-sourcing I2EBench, including all relevant resources. This openness will facilitate fair comparisons and knowledge sharing within the community.
4 By systematically evaluating the models, the paper provides valuable research insights that can guide future model architecture design and data selection strategies.
5 The inclusion of user studies in the evaluation process adds depth, ensuring that the evaluation results accurately reflect the real-world experiences of end-users.
6 The benchmark encompasses multiple types of image editing tasks, including both high-level and low-level editing, which makes it versatile and comprehensive.
7 A large number of images and instructions are provided, forming a solid foundation for comprehensive testing of the models.
Weaknesses: 1 If the research code or data is not made publicly available, it may limit the usability of the benchmark for other researchers.
2 The paper only provides radar charts for the category experiments. Including the corresponding quantitative data would make the comparisons more precise and understandable.
3 I observed a significant performance gap in the Object Removal dimension between using the original instruction and diverse instruction. The authors could use methods like CLIP similarity or Jaccard similarity to measure the similarity between the original and diverse instructions. This could help determine whether the performance variance is due to significant changes in the instructions or due to the sensitivity of some models to specific vocabulary changes in the Object Removal instructions.
4 The font size in Figure 3 (c) is too small, which might hinder readers from clearly viewing the information presented in the figure.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1 Why did the authors choose to sample high-level editing images from the COCO dataset, while most low-level editing images are sampled from existing low-level datasets?
Others please ref to the weaknesses.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to extend our heartfelt gratitude for your thoughtful feedback on our paper. We sincerely appreciate your recognition of its strengths, including the comprehensive explanation of the evaluation process, the wide array of evaluation dimensions, the provision of evaluation code, and the valuable research insights we have provided. Additionally, we are thankful for your acknowledgment that I2EBench is a holistic benchmark encompassing both low-level and high-level editing tasks.
Now, let us address the specific weaknesses and questions you have highlighted, and provide further clarification:
> ***Q1:** If the research code or data is not made publicly available, it may limit the usability of the benchmark for other researchers.*
>
**A1:**
Thank you for raising this important concern.
We fully appreciate that the accessibility of research code and data is crucial for the usability and reproducibility of our benchmark by other researchers. In response to this, we have included the code in the supplementary materials accompanying our submission. Additionally, we are committed to fully open-sourcing both the code and data.
> ***Q2:** The paper only provides radar charts for the category experiments. Including the corresponding quantitative data would make the comparisons more precise and understandable.*
>
**A2:**
Thank you for your insightful suggestion.
To provide a more precise and comprehensible comparison of our benchmark's performance across different categories, we have included the corresponding quantitative data. **Tab. III and Tab. IV of the rebuttal pdf** present the quantitative results for different model categories, supplementing the radar charts displayed in Figure 7. These detailed metrics will be incorporated into the supplementary materials of the final version of our paper to enhance clarity and further substantiate our findings.
> ***Q3:** I observed a significant performance gap in the Object Removal dimension between using the original instruction and diverse instruction. The authors could use methods like CLIP similarity or Jaccard similarity to measure the similarity between the original and diverse instructions. This could help determine whether the performance variance is due to significant changes in the instructions or due to the sensitivity of some models to specific vocabulary changes in the Object Removal instructions.*
>
**A3:**
Thank you for your valuable suggestion.
Following your recommendation, we have employed multiple metrics to measure the similarity between the original and diverse instructions, including CLIP cosine similarity, Jaccard similarity, TF-IDF, Word2Vec, and FastText. **Tab. V of the rebuttal pdf** presents the similarity scores, illustrating that there are no significant abnormalities across these metrics when comparing the original and diverse instructions in the Object Removal dimension. Thus, the observed performance gap in the Object Removal dimension is not due to textual discrepancies. Instead, it indicates that these models exhibit a lack of robustness in interpreting the instructions for this particular task.
> ***Q4:** The font size in Figure 3 (c) is too small, which might hinder readers from clearly viewing the information presented in the figure.*
>
**A4:**
Thank you for your valuable feedback.
We understand that the readability of figures is crucial for conveying information effectively. In response to your observation, we will revise Figure 3(c) to increase the font size, ensuring that all text is clearly legible.
> **Q5:** Why did the authors choose to sample high-level editing images from the COCO dataset, while most low-level editing images are sampled from existing low-level datasets?
>
**A5:**
Thank you for your insightful question.
For low-level editing tasks, we sourced images from existing low-level datasets because these datasets provide ground truth (GT) images. The availability of GT images allows for a more precise evaluation by directly calculating metrics such as SSIM (Structural Similarity Index) between the GT images and the edited images, ensuring accuracy in assessment.
In contrast, high-level editing tasks often lack clearly defined ground truth images, making it challenging to perform precise evaluations. Therefore, we chose to sample high-level editing images from the COCO dataset, a widely recognized and accepted dataset in the research community. The use of COCO ensures a broad and diverse range of images, facilitating a more representative assessment of high-level editing capabilities.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for responding to my concerns.
For A1, thanks for your commitment. I look forward to the development of I2EBench in the IIE field.
For A2, I think quantitative tables can better show the absolute difference in performance than qualitative charts. I suggest replacing Figure 7 with a table.
For A3, the response has resolved my issue regarding the performance gap in the object removal dimension. The response is reasonable and supported by experimental evidence.
For A4 and A5, thanks for your response.
---
This paper proposes a comprehensive benchmark to fill the gap in evaluating high-level and low-level image editing. Based on the author's response, which addressed my concerns, I have decided to increase my score from 6 to 8.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer 7DEf
Comment: Thank you for acknowledging our work. We will open-source I2EBench in the near future and incorporate your suggestions to further refine our paper. | Summary: This paper proposes I2EBench, a comprehensive benchmark designed to automatically evaluate the quality of edited images produced by IIE models from multiple dimensions. I2EBench comprises 16 evaluation dimensions, covering both high-level and low-level aspects. Additionally, through user studies, the authors assess the alignment between the proposed benchmark and human perception. The I2EBench dataset consists of over 2000 images for editing, along with corresponding original images and diverse instructions.
Strengths: 1. The proposed I2EBench represents a significant advancement over previous works, providing a large and comprehensive benchmark for instruction-based image editing. This contribution will greatly benefit the research community.
2. When establishing I2EBench, the authors have thoughtfully included often overlooked low-level edits, such as rain removal, and have implemented thorough evaluations.
3. The paper is clearly written and easy to follow.
Weaknesses: 1. The technical novelty of the evaluation method is insufficient. A significant part of I2EBench's evaluation of high-level edits relies on GPT-4V, such as Direction Perception (line 158) and Object Removal (line 164). Therefore, the authors' contribution appears more like a new prompt engineering method.
2. The insights provided in Section 5 are not informative for the research community. The authors state that "the editing ability across different dimensions is not robust," and Figure 7 shows that current IIE methods perform better on high-level editing tasks (e.g., Object Removal) but struggle with low-level tasks (e.g., Shadow Removal). However, most existing editing datasets are high-level, making it difficult for IIE methods to learn low-level tasks during training. Therefore, the insights in Section 5 seems not constructive.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Does I2EBench consider the aesthetic quality of image edits? For example, in Object Replacement (line 164), how does the benchmark evaluate whether the edited object is appropriately and naturally integrated into the image, rather than simply copied and pasted?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to express our sincere gratitude for your positive feedback on our paper and for recognizing its strengths. We appreciate your agreement that I2EBench represents a significant advancement over previous works, greatly benefiting the research community. Additionally, we are thankful for your observation that we have thoughtfully included often overlooked low-level edits. We are also grateful for your praise regarding the clarity and readability of our paper.
Now, let us address the specific weaknesses and questions you have raised and provide further clarification:
> ***Q1:** The technical novelty of the evaluation method is insufficient.*
>
**A1:**
Thank you very much for your insightful feedback. In fact, GPT-assisted evaluation methodologies are widely recognized and utilized in numerous influential published works [1,2,3], demonstrating considerable scientific validity. While it is true that we leverage GPT-4V for certain aspects of the evaluation, it is crucial to understand that our approach represents more than mere prompt engineering. To elucidate further, the effective utilization of GPT-4V in our benchmark necessitated several significant contributions:
- **Image Selection:** For each evaluation dimension, we meticulously select and filter suitable images. For example, for the evaluation dimensions of "Object Removal" and "Object Replacement", it is necessary to ensure that there is at least one object in the image instead of a simple landscape photo. This rigorous curation process ensures the relevance and accuracy of the images used, thereby enhancing the integrity of our evaluation.
- **Instruction Annotation:** Each image is annotated with a specific editing instruction that aligns precisely with the corresponding evaluation dimension. This careful annotation is essential for ensuring that the assessments are accurate and meaningful.
- **Question-Answer Pairing for High-Level Edits:** For dimensions involving high-level edits, we annotate each image-instruction pair with corresponding question-answer pairs. The accuracy of the edits is then evaluated by comparing GPT-4V's responses to these annotated answers. This process allows us to rigorously assess the correctness and effectiveness of the edits.
Thus, I2EBench should be viewed not merely as a prompt engineering method but as a well-designed, thoroughly annotated benchmark. It leverages GPT-4V to reduce evaluation costs and enable the automation of the evaluation process, which is in line with current practices in the field.
*[1] Q-bench: A benchmark for general-purpose foundation models on low-level vision. ICLR. 2023.*
*[2] Evalcrafter: Benchmarking and evaluating large video generation models. CVPR. 2024.*
*[3] Judging llm-as-a-judge with mt-bench and chatbot arena. NeurIPS. 2023.*
> ***Q2:** The insights provided in Section 5 are not informative for the research community.*
>
**A2:**
Thank you for your constructive feedback.
We agree that the current emphasis on high-level tasks in existing IIE datasets may limit the capacity of IIE methods to effectively learn and execute low-level editing tasks. This limitation actually reinforces the validity and reliability of our analysis. Our proposed I2EBench aims to address this gap by offering a comprehensive evaluation framework that includes both high-level and low-level editing tasks.
Therefore, far from being non-informative, we believe our proposed insights serve as a crucial signal to the research community, highlighting the clear need for balanced dataset development and methodological advancements that can better handle low-level tasks. These insights provide a clear direction for future research and innovation, emphasizing areas that require further exploration and improvement. Your valuable comments have enabled us to more effectively convey the significance and implications of our findings.
If you have any questions or need further clarification, please do not hesitate to let us know, and we will do our utmost to address them.
> ***Q3:** Does I2EBench consider the aesthetic quality of image edits?*
>
**A3:**
Thank you for your insightful comments. Aesthetic quality is indeed an important criterion in image editing. In response to your suggestion, we have integrated the Aesthetic Predictor’s Score (AP) to evaluate the aesthetic quality of edited images, similar to the approach used by InstructDiffusion [1]. Specifically, the AP score assesses the aesthetic quality of the generated images, employing a methodology akin to that used by LAION-5B [2], which utilizes the CLIP+MLP Aesthetic Score Predictor. A higher AP score indicates a better perceptual quality.
We calculated the AP score for the edited images generated by different methods and averaged the scores for each evaluation dimension. This allows us to derive the AP scores for each method across various evaluation dimensions. As illustrated in **Tab. I and Tab. II of the rebuttal pdf**, our findings are twofold:
- The difference in the Aesthetic Predictor’s Score between images edited using original instructions and those edited with diverse instructions is relatively small.
- The variations in the Aesthetic Predictor’s Score across different dimensions are relatively large.
These insights will enable us to refine our evaluation methodology and ensure a more comprehensive assessment of the aesthetic quality of image edits. Following your excellent suggestion, we will include the above experimental results and discussion in the revised manuscript.
*[1] Instructdiffusion: A generalist modeling interface for vision tasks. CVPR. 2024.*
*[2] Laion-5b: An open large-scale dataset for training next generation image-text models. NeurIPS. 2022.*
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I lean to keep my rating (5).
---
Reply to Comment 1.1.1:
Title: Response to Reviewer riYa
Comment: Thank you for your efforts in reviewing and for giving I2EBench a positive rating. Your insightful comments will significantly enhance the paper.
---
Rebuttal 2:
Title: Sincere Request for Further Discussions
Comment: Dear Reviewer riYa,
Thank you for your invaluable efforts and constructive feedback on our manuscript.
As the discussion period draws to a close, we eagerly anticipate your thoughts on our response. We sincerely hope that our response meets your expectations. If there are any remaining concerns or aspects that require clarification, we are ready to address them as soon as possible.
Best regards,
The Authors
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer riYa:
Thanks for reviewing this work. Would you mind to check authors' feedback and see if it resolves your concerns or you may have further comments?
Best, AC | Rebuttal 1:
Rebuttal: ### Response To All Reviewers and Table data
We would like to extend our heartfelt gratitude to the reviewers for their valuable feedback and positive comments on our paper. Their insightful reviews have significantly enhanced the clarity and overall quality of our work.
We thank Reviewer riYa $\color{red}{\mathbf{(5: Borderline Accept)}}$ for acknowledging the strengths of our paper. They highlighted the clear and easy-to-follow nature of our writing and recognized that I2EBench represents a significant advancement over previous works, benefiting the research community. They also appreciated our thoughtful inclusion of often overlooked low-level edits.
Reviewer 7DEf $\color{red}{\mathbf{( 6: Weak Accept)}}$ praised our comprehensive explanation of the evaluation process and the wide range of evaluation dimensions covered by the benchmark. They also commended the valuable research insights provided, which can guide future model architecture design and data selection strategies. Additionally, they noted that the benchmark encompasses multiple types of image editing tasks.
Reviewer A2Kh $\color{red}{\mathbf{(7: Accept)}}$ recognized the remarkable comprehensiveness of I2EBench and its potential to significantly advance the field of instruction-based image editing. They appreciated the benchmark's wide array of evaluation dimensions and its strong emphasis on aligning with human perception.
Reviewer HHca $\color{red}{\mathbf{(4: Borderline Reject)}}$ highlighted the novelty and efficacy of our proposed technical components. They acknowledged the importance of our work in addressing the lack of high-quality benchmarks in image editing. Furthermore, they found the evaluation pipeline reasonable and clear to follow, praised the innovative use of multimodal large language models for evaluating image editing results, and appreciated the overall clarity of our presentation.
We sincerely thank the reviewers for recognizing these strengths and for their positive feedback on the clarity, novelty, and effectiveness of our proposed methods. Their comments have further motivated us to address the concerns and improve upon the weaknesses pointed out in their reviews. We are dedicated to thoroughly addressing their concerns and providing a detailed response in our rebuttal. ***The tables involved in the rebuttal are all located in the PDF submitted during the rebuttal phase.***
Pdf: /pdf/67c3e382f9b0f7d49ba7f4c8fa89bda40a2c782b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
An Adaptive Approach for Infinitely Many-armed Bandits under Generalized Rotting Constraints | Accept (poster) | Summary: This paper studied an extension of multi-armed bandit problem by introducing infinitely many arms and generalized rotting constraints. It provides explicit regret lower bounds and proposes an algorithm with regret upper bound matching the lower bound when $\beta \geq 1$. It has been claimed that closing the gap between the lower and upper bounds when $\beta \in (0,1) $ remains an open problem.
Strengths: - Extending the multi-armed bandit problem to infinitely many arms with rotting mean rewards is feasible in many real-world applications. Solid theoretical analyses and empirical results are presented to justify the proposed solution.
- The paper in general is well-written and easy to follow. Since I have not checked the supplementary material step by step, I can not guarantee the correctness of the proofs, but the theoretical results make sense to me.
Weaknesses: - The paper could be viewed as an extension of [1], and the impact of the paper might not be extremely significant. As mentioned in the paper, when $\beta \in (0,1)$, the proposed algorithm can not be proved near optimal at the current stage. The theoretical result could be more impactful if this issue can be solved.
[1] Jung-hun Kim, Milan Vojnovic, and Se-Young Yun. Rotting infinitely many-armed bandits. ICML, pages 11229–11254. PMLR, 2022.
- When the environment parameters $\beta, V_T,$ and $S_T$ are unknown, a significant amount of additional regret is generated with proposed Algorithm 2.
- The paper could benefit from a more extensive experiment study. For example, the algorithm performances can be compared under different rotting processes (perhaps by varying the rotting rate and adding randomness).
Technical Quality: 3
Clarity: 3
Questions for Authors: The following questions are raised simply for discussion. There is no need to address them in the paper.
- Why the rotting budget $V_T$ and $S_T$ are defined for all selected arms? I would imagine it can fit better with real-world problems to treat the rotting process for each arm independently. Saying if an arm is selected $N$ times, the rotting budget is $f(N)$ where $f$ is a nondecreasing function.
- Corresponding to my second point in Weakness, is it possible to address the unknown parameters without incurring large additional regret? Can the methodology in the following paper [2] be applied in this paper to address the nonstationary rotting rewards? Can $\beta$ be estimated by selecting multiple new arms?
[2] Chen, Yifang, et al. "A new algorithm for non-stationary contextual bandits: Efficient, optimal and parameter-free." Conference on Learning Theory. PMLR, 2019.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and positive evaluation. Below, we address each comment.
**The paper is an extension of [1]. When $\beta\in(0,1)$, the proposed algorithm can not be proved optimal:**
First of all, we highlight that we consider rotting constraints with $V_T$ or $S_T$ and initial mean rewards with $\beta>0$, both of which are not considered in [1]. As mentioned in Remark 2.2, the rotting constraint with $V_T$ ($\sum_t \rho_t \le V_T$) is more general than the maximum rotting constraint in [1]. Furthermore, the constraint with $S_T$ ($1+\sum_t 1(\rho_t \neq 0) \le S_T$) is fundamentally different from that in [1] because $S_T$ considers the total number of rotting instances rather than the magnitude of the rotting rate.
While, in our work, we have demonstrated optimality only for $\beta \ge 1$, we believe that the suboptimality for $\beta \in (0,1)$ arises from the looseness of our lower bounds rather than the regret upper bounds. The gap between lower and upper bounds comes from that when $\beta(<1)$ decreases, the regret lower bounds (in Theorems 4.1, 4.2) also decrease while the upper bounds (in Theorems 3.1, 3.3) remain the same. As discussed in Appendix A.1, the phenomenon where the regret upper bound remains the same as $\beta$ decreases has also been observed in many infinitely many armed bandits [5, 24, 10]. This is because, as mentioned in [10], although there are likely many good arms when $\beta$ is small, it is not possible to avoid a certain amount of regret from estimating mean rewards. Therefore, we believe that our regret upper bounds are near-optimal across the entire range of $\beta$, and achieving tighter lower bounds for $\beta < 1$, considering such unavoidable regret, is left for future research. Notably, the optimality proven only for $\beta \ge 1$ has also been observed in stationary infinitely many armed bandits [5, 24].
**When $\beta,V_T,S_T$ are unknown, additional regret is generated with Algorithm 2:**
While our algorithm (Algorithm 2) incurs an additional regret to estimate the threshold parameter when parameters are unknown, if $V_T$, $S_T$ are large enough (e.g. $V_T\ge T^{1/4}$, $S_T\ge \sqrt{T}$ for $\beta=1$), then the regret bound of Algorithm 2 matches that of Algorithm 1 in Corollary 3.5 (for known parameters). This is because the additional term becomes negligible compared to the main term involving $V_T$ and $S_T$. We leave it as future work to obtain a tighter regret bound for the entire range of $V_T$ and $S_T$ when the parameters are unknown.
**Extensive experiment study by varying the rotting rate with randomness:**
The main purpose of our experiments is to validate some claims of our theoretical analysis such as Remark 3.2. Based on your comments, we have conducted an additional experiment for random rotting rates in Figure 3 in the attached pdf. Following the rotting rates used in the experiment section in our paper, which rotting rates are convenient to compare the theoretical results of algorithms, we introduced random rotting rate $\rho_t$ uniformly sampled from $[0,3/(t\log(T))]$ for each time $t$. In Figure 3, our algorithms outperform other benchmarks. We will include further experiments regarding rotting rates in our final version.
**The rotting budget $V_T$ and $S_T$ are defined for all selected arms. How about treating the rotting process for each arm independently with the budget of each arm, $f(N)$, for selected $N$ times:**
In our setting, the constraints on $V_T$ and $S_T$ quantify the rotting rate of all selected arms for all $t$, as you mentioned. We note that similar quantities related to overall nonstationarity across all arms have also been considered in standard non-stationary bandits [7, 19, 4]. These quantities help quantify the overall difficulty of the problem in our regret analysis. As you suggested, addressing individual budgets for each arm would be an interesting research question.
We believe that our algorithm and analysis provide helpful insights for solving such a problem. This is because the core of our algorithm involves determining whether a selected arm is good or bad based on a threshold parameter tuned by the value of the constraint budget. If we know $f(N)$ for each arm, we can use our method to determine the status of the arm using a threshold adjusted by the budget information.
**Addressing the unknown parameters without incurring large additional regret. Can the methodology in [2] be applied in this paper? Can $\beta$ be estimated?**
Here, we provide our thoughts regarding the applicability of [2]. In our problem, it is required to determine whether a selected arm is bad or not. In our algorithm, using a threshold, we can determine this with some confidence. Although we assume that nonstationarity can be detected using techniques from [2], it may not be clear how to determine the status of an arm whose initial mean reward is bad (small). In our setting, which includes infinitely many arms and many near-optimal ones, an absolute threshold (rather than a relative value between arms) may be required, as used in our algorithm to determine the status of arms. Without knowledge of parameters of $V_T$, $S_T$, and $\beta$, how to determine such a threshold combined with the detection method is an unresolved problem.
Another issue is that our setting allows for potentially large negative rewards from rotting with sub-Gaussian noise, whereas [2] considers rewards within $[0,1]$. This difference makes it challenging to apply the concentration inequality of Lemma 12 in [2] to our problem. As you suggested, applying the detection-based algorithm [2] to our setting while addressing those issues would be an interesting direction for future work.
Regarding the unknown $\beta$, estimating $\beta$ directly was considered in Algorithm 3 in [10]. However, it requires additional information on a (non-zero) lower bound of $\beta$, and the performance depends on the tightness of this lower bound.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you to the authors for their reply. I am writing to confirm that I've read the author's rebuttal and my scores remain unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you for your comment.
Comment: Thank you for maintaining the positive rating of our paper. We will incorporate the discussion, including additional experimental results, into our final version. If there are any other questions, please let us know. | Summary: This paper considers the extension of the classic stochastic multi-armed bandit problem to the case where there is a) an infinite set of arms and b) rotting of the arm means. Specifically the rotting behaviour is of the 'rested bandit' variety, where the mean reward of an arm may fall as an immediate consequence of playing it, but the mean of an unplayed arm will not change, in contrast to the restless bandit. The infinite set of arms have i.i.d. mean rewards (no structural assumptions on the set of arms or existence of a reward function over them etc.) and reward observations are sub-Gaussian.
This paper considers a more general rotting behaviour than Kim et al. (2022, ICML). In Kim et al. (2022) the magnitude of rotting in each round was bounded, in the present paper the more general setting where the total amount of rotting in T rounds is bounded is considered, and the somewhat different setting where the number of rounds in which rotting occurs is considered. These two settings are referred to as the 'slow' and 'abrupt' rotting cases respectively.
An important parameter in the infinitely-many-armed bandit problem is $\beta>0$ which controls the probability of an arm being $\delta$-near optimal. All regret bounds in the paper for mean rewards in [0,1] and consider dependence on the number of rounds $T$ and parameter $\beta$. $\beta=1$ corresponds to a uniform distribution on arm means.
The paper derives a sliding-window-UCB-based approach, which can be tuned to both the slow and abrupt rotting cases when the bounds on rotting behaviour are known, and an adaptive version which aims to learn the rotting behaviour when these parameters are not known. The paper shows tight (in terms of order) regret guarantees on these algorithms, and improved empirical performance over sensible competitors.
Strengths: The paper studies an interesting extension of prior work to allow for more general rotting constraints. This is likely to be of interest to the multi-armed bandit community and both the methodological and theoretical work present some novelty and careful algorithmic design that merit publication.
I was pleased to see a treatment of the case where key problem parameters were not known, and that largely comparable bounds were achievable in this setting.
Generally, the paper is written well and concisely, with useful clarifications made around the most important details and a clear relationship to the most closely related prior work.
Weaknesses: There are three main weaknesses which I would like to see commented on the rebuttal phase. I think the first is difficult to address fully in a rebuttal phase, but I would like to be reassured that potential issues do not limit the scope of the contribution.
1. All of the theoretical results are ultimately order results only, without identification of the constants in the appendices, and some being somewhat vaguely described as needing to be 'large enough'. The paper would be stronger if these constants (or upper bounds on them) could be identified, to guarantee for which values of $T$ the bounds are meaningful. It would be helpful if the bounds could thus be realised for the cases considered in the experiments and compared to the actual regret of Algorithms 1 and 2.
2. If I am not mistaken the experiments only consider $\beta=1$? It would be interesting to see if the convergence of the algorithm is replicated for various $\beta \neq 1$.
3. The motivation in terms of real applications is not especially strong. It is difficult to imagine a setting where the precise assumptions of the paper are realistic. In a recommendation setting, is it likely that there will be sufficient information to be confident that the click through rate of an item is always non-increasing, yet there is no contextual information available to supplement the model or exploit similar click through rates across similar items? In the clinical setting, is it likely that the rested assumption is realistic (i.e. any loss in efficacy is only determined by the actions of a single decision maker) alongside the immediacy of feedback and time horizons being of a scale such that the algorithms actually exhibit non-linear regret? If it is the case that the benefit of the paper is more fundamental - it presents an understanding of a simpler problem and its insights serve as a foundation to tackling these more complex real-world challenges, then perhaps it would be more helpful to make that clear?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Can you provide clarity on whether your constants in the regret analysis are identifiable, and suggest the values of these for some simple instance of the problem? How would the resultant bounds compare to the empirical performance of Algorithms 1 and 2?
2. What would the experimental results look like for $\beta \neq 1$?
3. Is Prop 2.3 missing some kind of additional condition, or clarity over what is meant by worst-case in line 453? It would seem to be that you could construct an example where $\rho_1=T+1$ and all other $\rho=0$ and one could achieve $\sqrt{T}$ regret while satisfying $\sum{t=1}^{T-1}\rho_t >T$?
4. Can you put the WUCB definition in an equation display and reference its equation number in the statement of Algorithm 1?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and positive evaluation of our work. Below, we address each comment.
**All of the theoretical results are ultimately order results only, without identification of the constants in the appendices. It would be helpful if the bounds could thus be realised for the cases considered in the experiments and compared to the actual regret of Algorithms 1 and 2:**
In our work, we provide regret bounds in terms of the horizon time $T$ and the rotting constraint parameters $V_T$ and $S_T$ (to the power of $\beta$ terms), which hold up to constant factors whose values are not asserted in the statements of our theorems.
Based on your feedback, we will include more details regarding the values of these constant factors in the final version of our paper. For instance, in Lemma A.5, instead of using $m^{\mathcal{G}} = C_3$ for some sufficiently large constant $C_3 > 0$, we can specify $m^\mathcal{G}=3$ with $V_T\le \max\\{1/T^{1/(\beta+1)},1/\sqrt{T}\\}$ and $\delta=\max\\{1/T^{1/(\beta+1)},1/\sqrt{T}\\}$. Then, we have $m^{\mathcal{G}}\min\\{\delta/2,V_T\\}=3\min\\{\delta/2,V_T\\}>V_T$, which can conclude the lemma.
We will also provide additional experimental results related to this. For now, we present an experiment in Figure 1 in the attached pdf that compares the performance of our algorithms with theoretical regret upper bounds with a constant of 1. We observe that the performance of our algorithms (blue and green solid lines) is better than the theoretical regret upper bounds (light blue and light green dashed lines), respectively, because the theoretical bounds represent worst-case regret regarding rotting rates, while the experiment is conducted under a specific instance of rotting rates.
It is noteworthy that providing regret bounds that hold up to constant factors is an important first step to establishing fundamental bounds for the underlying learning problem. Note that previous work on stationary infinitely-armed bandits (e.g. [24, 10]) also focused on providing regret bounds up to constant factors, whose values are not identified explicitly.
**What would the experimental results look like for $\beta\neq 1$:**
We appreciate your suggestion. We have included additional experiments in which we varied the value of $\beta$ in Figure 2 of the attached pdf. In Figure 2, our algorithms outperform other benchmarks for various $\beta$. We will incorporate this into our final version.
**The motivation in terms of real applications is not especially strong:**
We appreciate your comments and suggestions. We highlight that (rested) rotting rewards in bandits have been studied [16, 20, 21, 13], motivated by real-world applications such as recommendations and clinical trials. For instance, in recommender systems, the click rate for each item may diminish due to user boredom with repeated exposure to the same content. Similarly, in clinical trials, the efficacy of a medication can decline due to drug tolerance induced by repeated administration. However, the previous work regarding rotting bandits, except for [13], focused on MAB with finite arms, which have limitations when the number of items is large, as in recommender systems. In contrast, we consider the case with infinitely many arms without contextual information.
We believe our algorithm, as you mentioned, provides insights into handling rotting cases in real-world scenarios. Specifically, our adaptive SW-UCB and threshold approach offers valuable insights for designing bandit algorithms regarding when to explore new arms and how to estimate rotting mean rewards and determine the status (good or bad) of each arm when rotting occurs. Furthermore, our study on optimizing the threshold parameter provides practical guidance for setting threshold values in real-world applications.
**Is Prop 2.3 missing some kind of additional condition, or clarity over what is meant by worst-case in line 453?:**
Our regret analysis focuses on the *worst-case* regret concerning rotting rates. As outlined in Assumption 2.1, we consider an adaptive adversary that determines the rotting rate $\rho_t$ arbitrarily immediately after the agent pulls an action $a_t$ at each time step $t$. In Proposition 2.3, we show that there always exists a rotting adversary (or instances of rotting rates over $T$) with $\sum_t\rho_t>T$ such that it incurs at least $T$ regret for any algorithm, implying an $\Omega(T)$ regret lower bound. This does *not* imply that any rotting adversary (or any rotting instances) occur $T$ regret. This worst-case lower bound can be demonstrated by providing *a* rotting adversary (or rotting rate instances) where any algorithm will incur at least $T$ regret. Additionally, we show that this lower bound can be achieved by a simple algorithm that pulls a new arm every round. Therefore, Proposition 2.3, which is regarding the worst case, does not contradict the example you mentioned. We appreciate your helpful comments to improve clarification. We will include the explanation to clarify Proposition 2.3 in our final version.
**Can you put the WUCB definition in an equation display and reference its equation number in the statement of Algorithm 1?:**
We appreciate your helpful comments for improving the clarity of our paper. We will display the WUCB definition in an equation and reference its number in Algorithm 1.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you for your detailed reply to my comments. I appreciate the commitment to make modifications and am in agreement with your response. I have also read the other reviews and your responses and consider these to be suitably well addressed. As such I will retain my score and increase my confidence. (Justification: I agree with the remarks that finding results without constants and algorithms that can lead to further application-specific work are important fundamentals, but I feel for a score of 7+ the paper would have made more progress in one of these areas.)
---
Reply to Comment 1.1.1:
Title: Thank you for your comment.
Comment: Thank you for maintaining a positive rating on our paper and increasing your confidence score. We appreciate your detailed feedback. We will incorporate the discussion into our final version, including more details regarding constant factors, a detailed explanation of Prop 2.3, and additional experimental results. | Summary: Authors investigated an adaptive approach for the rotting bandits problem under the infinitely arms assumption.
Strengths: Authors introduced a new ucb like policy for the mentioned problem, additionally a lower bound analysis has been carried on.
Weaknesses: No clear weaknesses
Technical Quality: 3
Clarity: 3
Questions for Authors: I wonder how the regret bound behaves with respect to the effective rotting instead of the V_T or S_T. Assuming their values to be large and far from the real rotting, would it be possible to extend this analysis and solution to propose an adaptive result? Additionally, I wonder if also the regret lower bound would be tight in that case.
Similarly, I was wondering how good these results would be for the infinitely many afrmed bandit with no rotting.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No clear limitations have been found.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your feedback and positive evaluation of our work. Below, we address each comment.
**I wonder how the regret bound behaves with respect to the effective rotting instead of the $V_T$ or $S_T$:**
In this problem, as mentioned in Assumption 2.1, we consider an adaptive adversary who determines $\rho_t$ arbitrarily, subject to the constraint $\sum_t \rho_t \le V_T$ or $1 + \sum_t 1(\rho_t \neq 0) \le S_T$ for given $V_T$ or $S_T$. As a side remark, these constraints are more general than the stricter conditions $\sum_t \rho_t = V_T$ or $1 + \sum_t 1(\rho_t \neq 0) = S_T$. Our regret analysis, for both upper and lower bounds, addresses the *worst-case* scenario concerning the rotting rates under the general constraint of $V_T$ or $S_T$. Hence, our regret bounds are expressed in terms of $V_T$ or $S_T$. We also note that standard nonstationary bandits have been studied under a similar concept of a nonstationary budget upper bounded by $V_T$ [7,19].
We can consider a scenario where an adversary determines $\rho_t$ under the stricter constraints of $\sum_t \rho_t = V_T$ or $1 + \sum_t 1(\rho_t \neq 0) = S_T$. Our (upper and lower) regret bounds apply with these values of $V_T$ and $S_T$, respectively corresponding to the effective rotting amounts of $\sum_t\rho_t$ and $1 + \sum_t 1(\rho_t \neq 0)$. When there is no rotting such that $V_T=0$ and $S_T=1$, then our bounds are the same as the bounds for the stationary case (e.g. $\sqrt{T}$ for $\beta=1$ as in [24, 5]). Furthermore, if $V_T$ and $S_T$ are unknown, Algorithm 2 (in Appendix A.6) can achieve regret bounds that depend on the values of $V_T$ and $S_T$ (Theorem A.15).
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their reply. I am writing to confirm that I've read the author's rebuttal and my scores remain unchanged.
---
Reply to Comment 1.1.1:
Title: Thank you for your comment
Comment: Thank you for maintaining a positive rating on our paper. We will incorporate the discussion into our final version. | Summary: The paper studies infinite-armed bandits with rotting rewards. They show a lower bound on regret in terms of the total-variation and number of abrupt changepoints in the change in rewards. They also provide regret upper bounds: (1) a UCB-like algorithm tuned with knowledge of problem parameters gets optimal regret in some regimes and (2) a parameter-free bandit-over-bandit algorithm can attain optimal regret in some regimes when the level of non-stationarity is large.
Strengths: * The infinite-armed bandit problem is well motivated and the problem and results are clearly presented.
* There are matching upper and lower bounds on regret, at least for the better-understood $\beta \geq 1$ setting.
Weaknesses: * The definition of $V_T,S_T$ are a bit unclear and may not be fully rigorous. They are bounds on the total amount and count of rotting, but the rotting $\rho_t$ depends on the chosen arm $\rho_t$ and is thus random (in fact, determined by an adaptive adversary according to Assumption 2.1). But, meanwhile, $V_T,S_T$ seem to be treated as deterministic constants in the whole paper given that they are, for example, used to tune Algorithm 1 to get the best regret bounds. In fact, it would not even make sense for $V_T,S_T$ to _appear_ in the expected regret bounds if they were random. So, the only way the results of this work can rigorously hold is if $V_T,S_T$ bound the respective random quantities $\sum_{t=1}^{T_1} \rho_t$ and $1 + \sum_{t=1}^{T-1} 1\{\rho_t\neq 0\}$ for all realizations of the randomness, which then means there are some missing assumptions for the results of this paper. For instance, even for a "nice" algorithm, $\sum_{t=1}^{T_1} \rho_t$ could be very large with some probability in which case $V_T$ is forced to be large as well because of a "bad realization". The only other way around this is to make further assumptions about the adversary/design of non-stationarity. In either case, there should be more discussion as the rigor of results presented versus the given assumptions seems unclear to me.
* There are no optimal regret upper bounds without parameter knowledge. Also, the bandit-over-bandit result for parameter-free result requires further assumptions on the non-stationarity and so does not apply as generally as the other results of this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to strengths and weaknesses above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No broader impact concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your time to review our paper and comments. Below, we address each comment.
**The definition of $V_T$, $S_T$
are a bit unclear and may not be fully rigorous. They are bounds on the total amount and count of rotting, but the rotting
depends on the chosen arm
and is thus random**:
We appreciate your detailed comments. However, we argue that the definitions of $V_T$ and $S_T$ are rigorous in our context, as they serve as *constraints* on the rotting adversary. These quantities are not random outcomes based on the adversary's behavior but rather imposed conditions. To clarify this, we restate our Assumption 2.1 concerning the adaptive adversary as it is.
---
**Assumption 2.1.** At each time $t\in[T]$, the value of rotting rate $\rho_t>0$ is arbitrarily determined immediately after the agent pulls $a_t$, subject to the constraint of either slow rotting for a given $V_T$ or abrupt rotting for a given $S_T$.
---
According to this assumption, we consider $V_T$ and $S_T$ to be given a priori to the adversary, and the adversary determines $\rho_t$ subject to these constraints. $V_T$ and $S_T$ are deterministic quantities that act as *constraints*, which are determined before the game begins. This is why we refer to the two scenarios as the slow rotting constraint ($V_T$) and the abrupt rotting constraint ($S_T$). In our setting, if $V_T$ or $S_T$ is large, then, as we have shown in our regret lower bounds, any algorithm naturally cannot avoid a large regret bound in the worst case.
**There are no optimal regret upper bounds without parameter knowledge. Also, the bandit-over-bandit result for parameter-free result requires further assumptions on the non-stationarity and so does not apply as generally as the other results of this paper:**
In the case when the parameters are unknown, there is an additional regret for Algorithm 2, which stems from learning the threshold parameter. However, if $V_T$ and $S_T$ are large enough (e.g. $V_T\ge T^{1/4}$, $S_T\ge \sqrt{T}$ for $\beta=1$), then the regret bound in Theorem A.15 for Algorithm 2 matches that in Corollary 3.5 for Algorithm 1 (for known parameters). It is an open problem to achieve tight regret bounds for entire ranges of $V_T$ and $S_T$.
Now we discuss the further assumptions.
For the unknown parameter case, we consider Assumptions A.10 and A.12 rather than Assumptions 2.1 and 2.4 (for the known parameter case). In Assumption A.10, we still consider an *adaptive adversary* for the rotting rates, such that $0 \le \rho_t \le \varrho_t$ for given $\varrho_t$ to the adversary, where $\sum_t \varrho_t \le V_T$ and $1 + \sum_t 1(\varrho_t \neq 0) \le S_T$. Here $\varrho_t$'s are assumed to be determined before the algorithm is run.
As we describe in Remark A.11, this assumption is more general than that in [13], where $\varrho_t = \rho$ with a maximum rotting rate $\rho$ for all $t$. In the special case where $\rho_t = \varrho_t$, the assumption represents an oblivious adversary.
In Assumption A.12, we consider $\sum\_{t \in \mathcal{T}\_i} \rho\_t \le H$ for all $i \in [ \lceil T/H \rceil]$, where $\mathcal{T}\_i=[(i-1)H+1,iH]$ representing the $i$-th block of times of length $H$ within horizon time $T$. As we mention in Remark A.13, this assumption is satisfied when mean rewards are constrained by $0 \le \mu_t(a_t) \le 1$ for all $t$ because $0\le\rho_t\le1$. The positive mean reward constraint is frequently encountered in real-world applications such as click rates in recommendation systems. We also note that, as mentioned in Remark A.14, the assumption is still more general than the one with a maximum rotting rate constraint of $\rho_t \le \rho = o(1)$ in [13]. This is because, in our setting, each $\rho_t$ is not necessarily bounded by $o(1)$ but $\sum_{t \in \mathcal{T}_i} \rho_t \le H$.
Lastly, we emphasize that our study is the first work to consider $V_T$, $S_T$, and $\beta$ in the context of rotting bandits with infinite arms, which is fundamentally different from finite-armed bandits due to the necessity of exploring new arms (details are described in lines 58–70). Notably, when the parameters are known, we achieve tight results using a novel approach and provide regret lower bounds.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Dear Reviewer g8GP,
Thank you again for taking the time to review our paper.
We sincerely hope our responses have adequately addressed your questions and comments. If they have, we appreciate it if you could reconsider your evaluation. If you have any last-minute questions, please let us know.
Sincerely,
Authors
---
Rebuttal 3:
Comment: * **On $V_T/S_T$**: thank you for the clarification. It seems one assumes the adversary is not fully adaptive then and there are limitations on how much rotting it can force on the rewards. I would say wording this as a "adaptive adversary" can be a bit misleading then as it is really a constrained adversary.
* **optimal regret upper bounds without parameter knowledge**: I would carefully explain in the presentation why the further Assumptions A.10 is needed. It seems like it is an artifact of the bandit-over-bandit approach as you cannot have the adversary respond in an overly strong way to the master's choice of base over $H$ rounds.
Even under the broader above mentioned constrained adversary there seem to be no parameter-free results without this A.10. As such, I feel the scope of result remains limited in this work even while being the first to study this particular setting/parametrization.
---
Rebuttal 4:
Title: Official Comment by Authors
Comment: Thank you for your comment. We would like to address a few points regarding your comments:
**On $V_T$, $S_T$:** Based on your comment, we believe your initial concern regarding $V_T$ and $S_T$ has been resolved. However, we would like to provide additional explanation for clarity. We strongly believe that the adaptive adversary under the slow ($V_T$) or abrupt rotting constraint ($S_T$) is a natural and general assumption in our adversary rotting scenario. The adaptive adversary determines an arbitrary rotting rate at each time, immediately after the agent's action is determined, under either $V_T$ or $S_T$. From the values of $S_T$ and $V_T$, we can appropriately quantify the difficulty of our problems, as demonstrated by our regret lower bounds.
Without such constraints (i.e., fully adaptive according to your comment), the adaptive adversary could easily make the problem trivial in the 'worst-case' scenario of our setting because whenever an algorithm finds a good arm, the adversary could cause that arm to rot and become a bad arm or even have a negative mean reward adaptively, without the constraints. We also note that similar quantities of $V_T$ and $S_T$ have also been considered in standard nonstationary bandit literature [7,19,4], where they are treated as determined quantities, not random variables, as in our setting. If necessary, we are more than happy to clarify this further in our final version to prevent any potential confusion.
**Assumption A.10 for parameter-free:** The additional constraint in Assumption A.10 is required due to our adaptive adversary, which selects the rotting rate arbitrarily and adaptively in response to the selected action at each time, within the bandit-over-bandit framework. If we consider an oblivious adversary instead of an adaptive one, where the values of rotting rates $\rho_t$ are predetermined such that $\sum_t \rho_t\le V_T$ and $1+\sum_t 1(\rho_t\neq 0)\le S_T$ before the game begins, then this satisfies Assumption A.10 with $\rho_t=\varrho_t$. In other words, as we mentioned in Remark A.11, Assumption A.10 is a more general assumption than this oblivious adversary and that in [13].
As mentioned in lines 831~834, the well-known black-box framework proposed for addressing nonstationarity [25] is not applicable to this problem, and attaining the optimal regret bound under a parameter-free algorithm for all ranges of $V_T$ and $S_T$ remains an open problem. However, we respectfully disagree with your comment that the results remain limited. We again highlight that our study is the first work to examine $V_T$, $S_T$, and $\beta$ in the context of rotting bandits with infinite arms, which is fundamentally different from finite-armed bandits (details are described in lines 58–70). Notably, we achieve tight results through a novel approach when the parameters are known, and we establish regret lower bounds. We also examine the case of unknown parameters. We believe our work is crucial for the community. | Rebuttal 1:
Rebuttal: We appreciate you taking the time to review our paper. We are encouraged by the feedback indicating that our problem is well motivated by many real-world applications, solid theoretical analyses and empirical results are presented, and the paper is written well and concisely. We have attached a pdf with additional experimental results to address the reviewers' comments. In the following, we provide detailed responses to each comment from the reviewers.
Pdf: /pdf/156db4aaa7afc8f6fde6208c3e186a53cd909a2e.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Warm-up Free Policy Optimization: Improved Regret in Linear Markov Decision Processes | Accept (poster) | Summary: This paper improves the Policy Optimization methods for learning MDP by eliminating the undesired warm-up phase and replacing it with a simple and efficient contraction mechanism. For linear MDP, it is shown that the proposed Policy Optimization algorithm achieves regret with improved dependence on problem parameters (the horizon and function approximation dimension) under the settings of adversarial losses with full-information feedback and stochastic losses with bandit feedback. The contraction mechanism serves the purpose of ensuring the Q-value estimates are bounded and yield simple policies in the sense of efficient computation and easy implementation. The regret bound improves upon the best known ones for policy optimization.
Strengths: **Significance**:
**1.** This paper improved the regret bound for policy optimization algorithms for linear MDPs. Specifically, the CFPO algorithm achieves a $\sqrt{H^3 d}$ improvement over the best known regret for adversarial setting, and matches the performance of the value-iteration based algorithm.
**2** The policy optimization algorithm is computationally more efficient and easier to implement. This is because the algorithm uses all samples and is reward-aware (thus no waste of information).
**Clarity**: This paper is clearly written.
**1.** All necessary related work is properly discussed in my opinion.
**2.** The algorithm is introduced with a clear explanation. In section 4, a detailed walk-through of the CFPO algorithm is given with explicit items comparing the new algorithm with existing one, making it clear why the new one enjoys improved performance. A simple example is given (bottom of page 6) to demonstrate why reward-awareness is beneficial in PO.
Weaknesses: There is no major technical flaw detected in this paper.
Weakness:
There is no experimental result to corroborate the theoretical results. Though PO algorithms are a bit more complicated to implement than the value-iteration algorithms in my opinion, for linear MDP, a numerical simulation or simple synthetic experiment should not seem too difficult.
And I think experimental results is especially important for this work, since it claims the deviced algorithm is easier to implement compared to existing PO algorithms. Therefore, experimental results verifying the correctness of the improved regret and the easier implementation than existing PO baselines are anticipated. For example, for the improved regret, it would be good to see from experiment that the dependence on the problem parameter is indeed improved. A very simple synthetic environment should be enough.
Technical Quality: 2
Clarity: 4
Questions for Authors: Please see Weaknesses.
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort put into the review.
Regarding synthetic experiments, we are not aware of existing benchmarks that are specific to linear MDPs rather than tabular ones. However, in tabular MDPs the contraction is not necessary and thus we do not expect to see improvement. We agree that coming up with interesting experimental setups for linear MDPs would be valuable to the community but leave this to future research. Please note that most papers on regret minimization in MDPs do not include experiments.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for the rebuttal. I have no additional concerns and thus raised the score. | Summary: This paper equips the rare-switching mechanism with a novel feature shrinkage technique to achieve efficient policy optimization (PO) for linear MDPs with adversarial losses or bandit feedback. By shrinking features in directions of high uncertainty, the authors show that the proposed algorithm has its regret optimal in terms of $K$, the number of episodes, up to a logarithmic factor. Compared with prior work on PO for adversarial linear MDPs, the proposed algorithm does not invoke sophisticated black boxes or incur any $\text{poly}(d, H)$ burn-in cost and enjoys a regret upper bound with lower dependence on the horizon $H$ and the ambient dimension $d$.
Strengths: 1. The proposed feature shrinkage technique, which enables the estimated state-action value to be bounded without resort to truncation techniques, is simple and does not incur any additional statistical or computational overhead up to constant factors.
2. The authors notice that the extended value difference lemma [1] is applicable (for the proposed contracted subMDP) even if the transition kernel is sub-stochastic, which is a key observation that might be of independent interest and useful for the reinforcement learning (RL) community.
[1] Shani, L., Efroni, Y., Rosenberg, A., & Mannor, S. (2020, November). Optimistic policy optimization with bandit feedback. In International Conference on Machine Learning (pp. 8604-8613). PMLR.
Weaknesses: ## Weaknesses
1. The authors should make it clear in the main text whether the analysis or the final regret bound depends on $|\mathcal{X}| < \infty$ in any significant way. As far as I can tell, the regret bound does not rely on the finite $\mathcal{X}$ assumption, but the authors should further clarify this point.
2. Line 266: [1] does not utilize any feature shrinkage technique, so it is confusing and not adequate to mention "the corresponding claim" and use the $\bar{\phi}^{k_e}$ notation in equation (6) without further explanation.
3. Line 49: Though the results in this paper is significant, it is way too assertive to say that no simpler algorithm with lower regret exists for adversarial linear MDPs. The authors should at least elucidate their conjecture about the optimality of the proposed algorithm in a more appropriate manner.
4. Minor issue: Technically speaking, the two $\sum_{h}$ signs between Line 276 and 277 should not be put outside the expectation in that your expectation is taken with respect to the randomness of a trajectory.
5. Between Line 151 and 152: the $\Delta P(x_h, a_h)$ part seems to be a typo
6. The key connection between the proposed technique, especially Lemma 18, and the boundedness of the estimated state-action value function (i.e., the arguments between Line 549 and 550) is not explained in any way in the main text. (BTW, in the proof of Lemma 18, it seems that $K\geq 1$ should be changed to $K\geq \mathrm{e}$)
7. Minor typo on Line 265: $\succ$ -> $\leq$
8. Minor typo on Line 579: $\hat{V}$ -> $\hat{V}_h$
[1] Sherman, U., Cohen, A., Koren, T., & Mansour, Y. (2023). Rate-optimal policy optimization for linear markov decision processes. arXiv preprint arXiv:2308.14642.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Any reference on the proposition mentioned on Line 134-135 that the Markov $\pi^*$ in hindsight is optimal among all policies?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: not applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review and helpful comments, we will incorporate them in our revision. The following responds to your individual points:
1. Finite $\mathcal{X}$: You are correct, the regret does not depend on the assumption that $\mathcal{X}$ is finite. In short, the assumption is purely technical as it helps avoid measure theoretic notations and allows for a cleaner and more approachable analysis. The explanation for this was deferred to an existing paper but we agree that it should be included in our paper for the sake of completeness.
2. L266: Thank you for pointing this out, we will revise this explanation in the final version. Our intention was to say that the reward free warm-up in [1] gives a guarantee of the form in eq.(6) but where $\bar{\phi}_h^{k_e}(x,a)$ is replaced with $\phi(x,a) \mathbb{1}(x \in \mathcal{Z}_h)$. The overall implication is that, through the lens of contracted sub-MDPs, the reward free warm-up would give an overly conservative contraction that incurs additional cost compared to our approach.
3. L49: The purpose of this phrase was to say that while our rates are not minimax, improving them further likely requires more delicate algorithmic techniques that are not well-understood for PO even in tabular MDPs. In particular, we are not aware of any method besides the mentioned variance reduction technique that achieves better rates. We will soften the phrasing of the final version such that it is more clear that this is a conjecture rather than a proven fact.
4. L276-277: The expectation is indeed over trajectories. However, because their length is fixed, we can take the $\sum_h$ outside due to the linearity of the expectation. As you pointed out, the arguments work either way.
5. L151-152 $\Delta P$: We overloaded notation here (perhaps excessively). We’ll disambiguate it in the final version.
6. Key connection… boundedness of $\hat{Q}$: We omitted this due to space constraints. We’ll include an explanation in the final version. Thanks for pointing out that $K \ge e$ in Lemma 18.
7. Minor typos (points 7-8): Thanks for finding these! We’ll fix them for the final version.
8. Markov $\pi^\star$ is optimal even among history dependent policies: There are probably several sources for this claim. For example, Reinforcement Learning: Foundations (p.56) by Shie Mannor, Yishay Mansour, and Aviv Tamar.
---
Rebuttal Comment 1.1:
Title: Thanks.
Comment: The authors have appropriately answered all questions I asked. I will keep my positive evaluation and recommend this paper for acceptance. | Summary: This paper presents a new policy optimization algorithm called Contracted Features Policy Optimization (CFPO) for reinforcement learning in linear Markov Decision Processes (MDPs). The key contribution is eliminating the need for a costly warm-up phase used in previous state-of-the-art methods, while achieving improved regret bounds.
Strengths: The paper addresses a significant issue in reinforcement learning theory, improving on recent findings for policy optimization in linear MDPs.
The proposed CFPO algorithm eliminates the need for a separate warm-up phase, offering a substantial practical advantage over previous methods.
The regret bounds show improvement compared to prior work, achieving $O(\sqrt{H^4 d^3 K})$ regret, which represents a $\sqrt{H^3 d}$ enhancement.
The analysis introduces novel techniques, such as the contracted MDP concept, which may have broader applications in the field.
Weaknesses: While the regret bound in this paper shows a better dependency on K compared to previous works, it has a worse dependency on d and H. Although K is the most critical factor, it might be more appropriate to discuss the improvement in regret when K exceeds a certain threshold.
The paper is purely theoretical, lacking experimental results to validate the practical performance of the proposed algorithm. More discussion on practical implications and potential applications would be beneficial.
Despite the improvement, there remains a gap between the upper and lower bounds for this problem. Further discussion on potential approaches to address this gap would be valuable. For example, can previous variance reduction techniques be applied to this method?
The paper could be strengthened by adding a conclusion section to summarize the key findings
Technical Quality: 3
Clarity: 3
Questions for Authors: na
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors should include a more detailed discussion of the study's limitations in the conclusion part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time and effort put into the review. The following addresses the points made in your review:
1. Regarding dependence on $K,d,H$: Are you referring to the additive regret term that is logarithmic in $K$? As you mentioned, most works assume that $K$ is the dominant factor and thus this term is omitted. For our algorithm it becomes a low order term when $K \ge \sqrt{H^4 d^3}$. Notice that the regret bound of $\sqrt{H^4 d^3 K}$ is trivial, i.e., $ \ge KH$, when $K \le \sqrt{H^2 d^3}$ and thus the additional term is non-trivial only between these two values. The regret bound of Sherman et al. (2023) is non-trivial only for $K \ge \sqrt{H^5 d^4}$ and thus our regret bound is equivalent or better for all values of $K,d,H$. Overall, you bring up a subtle point. If you think this will improve the paper then we are willing to explain this in the final version.
2. Discussion: We will add a discussion about potential applications, limitations, and the gap from the lower bound. We conjecture that it is possible to use variance reduction methods for PO. However, this seems quite complicated and thus left for future research.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. I hope the authors will consider incorporating these clarifications in their revision. I will maintain my positive assessment. | Summary: This paper studies online learning for linear MDPs with stochastic and adversarial full-information losses. The authors propose a new contraction mechanism, avoiding the costly initial exploration phase in previous papers and achieving a better regret bound.
Strengths: 1. This paper studies an important problem with improved regret bounds.
2. The new algorithm gets rid of a costly initial exploration phase in previous papers using new techniques, which may give more insights for future works in both theory and practice.
Weaknesses: I do not see obvious weakness. For writing suggestions, I feel it is beneficial to discuss more about the difficulty of bounding covering numbers when applying PO for linear MDP. Although [Zhong and Zhang 2023] and [Sherman et al. 2023a] gave comprehensive discussions, it is good to discuss more in this paper to ensure the readers have a whole picture (e.g. discuss more why clipping Q-functions does not lead to good covering numbers for policy class).
Technical Quality: 3
Clarity: 3
Questions for Authors: Currently, I do not have questions.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the positive comments. We agree that reiterating the explanations on the effect of clipping on the covering number will make the paper more self-contained and we will include it in the final version of the paper. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Wide Two-Layer Networks can Learn from Adversarial Perturbations | Accept (poster) | Summary: In this paper, the authors theoretically investigate the perturbation learning phenomenon from a future hypothesis perspective. Perturbation learning means that a classifier can be learned from adversarial examples with incorrect labels (the labels used to generate adversarial examples, but seems to be incorrect in human eyes). Their theory includes two parts. First, they show that the adversarial perturbations are the linear combination of training samples, thus containing the information of the clean training dataset. This theorem provides an intuition for the feature hypothesis. Then, they show that, under some conditions, the predictions of a classifier trained on adversarial perturbations (or adversarial examples) are consistent with those of a classifier trained on a clean dataset.
Some experiments on a synthetic Gaussian dataset are provided to show the effect of the input dimension and hidden dimension, validating their theorem.
Strengths: * Understanding perturbation learning and the feature hypothesis is quite important in the domain of adversarial learning. This paper provides a deeper understanding of this problem.
* In this paper, the authors provide a more general theory about perturbation learning compared to prior work. The theory is based on fewer constraints in training data distribution, training time, etc.
Weaknesses: My main concern is whether the kernel regime is a suitable and extensible tool to study the feature hypothesis in adversarial training and explain other interesting phenomena such as the transferability of adversarial examples and the trade-off between robustness and accuracy.
According to the feature hypothesis of adversarial training, the data contains a set of features that could used for prediction. Humans and all kinds of neural networks use different feature subsets to classify the image (the subsets have overlaps). The feature hypothesis may provide a **unified** explanation for several open problems in adversarial training. However, it seems that the theory in this paper only focuses on perturbation learning and is difficult to extend to explain other phenomena. In my opinion, the framework should have a deeper exploration of the feature subsets that are used by the trained models to make predictions.
Technical Quality: 3
Clarity: 2
Questions for Authors: Experimental Questions:
* What is the effect of the sampling of adversarial labels on the accuracy of the model $g$? If we set the adversarial labels of all negative data to be positive and vice versa, will the accuracy of $g$ be very low?
* What is the effect of the structures of $f$ and $g$? When these two networks have more divergent structures, I think the accuracy of $g$ must be lower. Is it correct in experiments? If it is, this is similar to the transferability of adversarial examples, i.e., the adversarial examples are easier to transfer to similar networks.
Theoretical Questions:
* What do the functions $\hat{f}$ and $\hat{g}$ mean? What are the relationships between $f$ and $\hat{f}$, $g$ and $\hat{g}$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors state the limitation on the assumption of the width of the network in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's constructive comments.
> My main concern is whether the kernel regime is a suitable and extensible tool to study the feature hypothesis in adversarial training and explain other interesting phenomena such as the transferability of adversarial examples and the trade-off between robustness and accuracy. ... The feature hypothesis may provide a unified explanation for several open problems in adversarial training. However, it seems that the theory in this paper only focuses on perturbation learning and is difficult to extend to explain other phenomena. In my opinion, the framework should have a deeper exploration of the feature subsets that are used by the trained models to make predictions
First, we would like to emphasize the following: **the target of this study is perturbation learning, not others**, such as adversarial training or transferability. While the feature hypothesis and perturbation learning are foundational for understanding several phenomena of adversarial examples and training, they are originally different research topics. We would appreciate it if the reviewers evaluated our work fairly, taking into account its scope.
Nevertheless, we found the questions insightful for our future work because our analysis is compatible with the setups of adversarial training and other related topics. Recall that all of our discussions stem from the update equation of gradient descent (flow), which is a fundamental and common component of deep learning. In addition, the main assumption is only wide width, which is only used to control each gradient descent step and not specific to perturbation learning. These advantages of our framework generally do not confxlict with the analysis of adversarial training and examples. Therefore, we consider that our framework is helpful for considering the features (subsets) of adversarial examples and adversarial training. If there are specific barriers that the reviewer foresees, we would greatly appreciate if they could be shared during the discussion period.
For these reasons, we believe our theoretical framework has the potential to address other problems and phenomena, including adversarial training and examples. However, we would like to respectfully emphasize that the primary focus of our research is perturbation learning and the feature hypothesis. We kindly ask the reviewer to consider evaluating our work from this perspective. We believe that our research offers novel and profound theoretical insights into perturbation learning.
> What is the effect of the sampling of adversarial labels on the accuracy of the model $g$? If we set the adversarial labels of all negative data to be positive and vice versa, will the accuracy of $g$ be very low?
Empirically, the accuracy of $g$ decreases along with the increase of label flips (i.e., negative to positive and vice versa). However, it is known that deep neural networks can sometimes achieve above-chance accuracy even in such environments. Detailed experimental results are presented in [13] and [18]. On the other hand, for simpler models like two-layer networks, it is empirically difficult to achieve accuracy significantly above chance in such environments. Indeed, our theoretical results, similar to prior work [18], indicate that two-layer networks are unlikely to achieve prediction matching under flipped label conditions.
> What is the effect of the structures of $f$ and $g$? When these two networks have more divergent structures, I think the accuracy of $g$ must be lower. Is it correct in experiments? If it is, this is similar to the transferability of adversarial examples, i.e., the adversarial examples are easier to transfer to similar networks.
The more different the structures of $f$ and $g$ are, the lower the accuracy of $g$ tends to be. As the reviewer pointed out, this behavior is similar to that observed in transferability, suggesting that the features captured by models during learning (and exploited by adversarial attacks) are dependent on the structure. These results are provided in Figure 3 [13].
> What do the functions $\hat{f}$ and $\hat{g}$ mean? What are the relationships between $f$ and $\hat{f}$, $g$ and $\hat{g}$?
$\hat{f}$ and $\hat{g}$ can be interpreted as the main components of $f$ and $g$ respectively. Specifically, $f(z) = \hat{f}(z) + \Delta_f(z)$ and $g(z) = \hat{g}(z) + \Delta_g(z)$, where $\Delta_f(z)$ and $\Delta_g(z)$ are, in many cases, relatively smaller than $\hat{f}(z)$ and $\hat{g}(z)$. Therefore, if the signs of $\hat{f}(z)$ and $\hat{g}(z)$ match (i.e., the agreement condition holds), in most cases, the signs of $f(z)$ and $g(z)$ also match (i.e., perturbation learning succeeds). The functional margin conditions represent the conditions under which $\hat{f}(z)$ and $\hat{g}(z)$ become sufficiently large relative to $\Delta_f(z)$ and $\Delta_g(z)$. | Summary: This work aims to provide an alternative theoretical analysis to justify feature hypothesis and perturbation learning. The analysis is based on approximation theory in the kernel regime (i.e., infinite width). They show that the adversarial perturbation contains sufficient data information, which can be retrieved by the perturbation learning when y labels are uniformly sampled.
Strengths: The paper is theoretically sound and relaxed certain conditions required by prior works.
Weaknesses: What is the contribution beyond technical novelty? Are any new insights obtained via the new analysis compared to existing results?
The conditions in the main theorems lack interpretability. I can understand the technical reason behind them, but how reasonable are these conditions, especially the agreement condition? The author argues that the agreement condition depends on the consistency of the correlation between z and y_nx_n. This statement is not rigorous, since z is a function argument, but not a r.v. and there is no correlation involved. I suppose a more meaningful question will be, if z follows the data distribution of x (e.g., a mixture of two Gaussian), what is the chance these conditions hold?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The limitation is clearly statement in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer's suggestive questions. The reviewer seems to appreciate our technical contributions, and the questions are thus more posed on the high-level understanding of the results. We here address this concern. We are willing to address any feedback and requests for further clarification upon your response during the discussion period, which will hopefully lead to improved the reviewer’s understanding and evaluation of our work.
> What is the contribution beyond technical novelty? Are any new insights obtained ... compared to existing results?
Indeed, our work not only proposes a technically advanced analysis but also provides many insights into perturbation learning as follows:
- A large dimension and longer training time strengthen the alignment between perturbations and training samples.
- Similar samples in the training set are emphasized in perturbations.
- Our results explicitly reveal the dependncy between the success of perturbation learning and each variable (dimension $d$, sample size $N$, perturbation size $\epsilon$, and confidence $\delta$)
We elaborate on them.
**Feature hypothesis (Theorem 3.3).**
Our results offer three new insights regarding the feature hypothesis.
First, the residual term $\xi_n$ in our result offers new insights into the alignment between perturbations and training samples. The direction of the perturbation vector is described by two components: the weighted sum of the training samples (main term) and the residual term. Our results suggest that as the input dimension increases, the residual term becomes smaller than the main term, and the alignment strengthens. In other words, perturbations more robustly contain class features. This insight was not obtainable from the existing research due to the absence of a residual in their limited problem setting.
Second, our result suggests that longer training time strengthens the directional agreement, which is supported by intuition and experience but has not been addressed in existing research.
Third, we identify the coefficient $\Phi(x_n, x_k)$ for each training sample. Our explicitly derived $\Phi(x_n, x_k)$ reveals that the coefficient of each training sample is determined by the slope of the activation function and depends on the similarity between $x_n$ and $x_k$ (cf. Eq. (4)). This implies that the more similar samples are included in the training set, the stronger their influence is reflected in the perturbations. While similar coefficients existed in previous research, they are could not be explicitly obtained.
**Perturbation learning (Theorems 3.4 and 3.5)**
The last insight is the explicit connection between the successful perturbation learning and training factors, including training time $T$, perturbation size $\epsilon$, input dimension $d$, sample size $N$, and confidence level $\delta$, enhancing our understanding of their impacts on perturbation learning. For example, perturbation learning succeeds more easily with a sublinear speed with respect to the dimension $d$ and sample size $N$. Furthermore, our findings indicate that either a large $d$ or $N$ alone is insufficient since Eqs. (9) and (10) include $d$- and $N$-irrelevant terms. In contrast, existing research does not explain the impact of these variables, only demonsrtating that it succeeds when both $d$ and $N$ are infinite.
Additionally, technical improvements in data distribution, perturbation design, training procedures, and network settings directly contribute to understanding the broad applicability of the feature hypothesis and the success of perturbation learning. These improvements are insightful in their own right.
> The conditions in the main theorems lack interpretability. ... how reasonable are these conditions, especially the agreement condition? ... if z follows the data distribution of x ..., what is the chance these conditions hold?
In this context, we used "correlation" only to abbreviate (the sign of) the inner product of two vectors (not necessarily random variables). We will improve our manuscript to reduce any misleading.
We consider $z$ as an arbitrary $d$-dimensional vector rather than a random variable to provide general results that do not depend on a specific probability distribution. This approach allows for easy consideration of $z$ as a random variable in subsequent analyses.
Let us consider the agreement condition in the following setting.
1. The positive sample size is $N/2$, and the negative is $N/2$.
2. $x_1, \ldots, x_N$ are i.i.d. and sampled from $\mathcal{N}(y_n\mu, I_d)$, where $\mu$ is a $d$-dimensional vector.
3. $z$ follows $\mathcal{N}(\mu, I_d)$; namely, $z$ is positive.
In this setting, informally, the following holds:
1. $\langle x_n, z\rangle = y_n \| \mu \|^2 \pm O(\sqrt{d}\ln(1/\delta))$
2. $\Phi_+ = \Phi(x_n, z)$ if $y_n = +1$, otherwise $\Phi_- = \Phi(x_n, z)$
3. $\Phi_+ = \Phi(x_n, x_k)$ if $y_ny_k = +1$, otherwise $\Phi_- = \Phi(x_n, x_k)$
(1) is derived from a concentration inequality, where $\delta$ is the confidence level. Since $x_n$ has the same probabilistic properties within each class, $\Phi(x_n, z)$ should have similar values for all $n$ satisfying $y_n = 1$ (or $y_n = -1$). While there would be probabilistic fluctuations, for simplicity, we fix $\Phi_+ := \Phi(x_n, z)$ if $y_n=1$, and $\Phi_- := \phi(x_n, z)$ if $y_n = -1$. Similarly, we fix $\Phi(x_n, x_k)$ as in (3).
Noting that $\Phi_+ = \Theta(1)$ and $\Phi_- = \Theta(1)$, and if the sign of $\sum_k y_k \Phi(x_n, x_k) \langle x_k, z \rangle$ is the same for all $n$, then the sign of $\hat{g}_a(z)$ is equal to it, we can derive:
- $sgn(\hat{f}(z)) = (\Phi_+ + \Phi_-) \| \mu \|^2 \pm O(\sqrt{d}\ln(1/\delta)) = \Theta(d) \pm O(\sqrt{d}\ln(1/\delta))$
- $sgn(\hat{g}(z)) = (\Phi_+ + \Phi_-) \| \mu \|^2 \pm O(\sqrt{d}\ln(1/\delta)) = \Theta(d) \pm O(\sqrt{d}\ln(1/\delta))$
This implies that if the input dimension $d$ is large, the agreement condition easily holds with high probability. | Summary: Perturbation learning, where classifiers are trained on adversarial examples with their associated incorrect labels, results in non-trivial generalization. This work theoretically tackles the perplexing former phenomenon for wide two-layer networks in the kernel regime. The authors first prove that adversarial perturbations are parallel to a meaningful feature vector (that can contain whole dataset information under some additional assumptions) up to an error term. Second, the authors provide some conditions on when the predictions with perturbation learning match those of standard learning.
Strengths: 1. The problem is a very interesting one and lacks more analysis, so this paper addresses a significant problem in an original way.
2. The setting and assumptions are very well described and clear.
Weaknesses: 1. I am overall confused about the choice of kernel regime to explain perturbation learning. As the authors acknowledge, there is no feature learning for the choice of width in this paper (the output of hidden units remains the same). It's not clear how such a framework can explain perturbation learning and, in particular, the "feature hypothesis," which authors claim that they do. Since there is no feature learning in this regime, there should not be any "feature hypothesis." This is not to say the analysis is uninteresting; it applies to perturbation learning with kernels. But this requires a completely different contextualization than the authors provide, one that focuses on the adversarial robustness of kernels rather than neural networks. And for the reasons above, I find the comparison with prior work a bit misleading as they tackle perturbation learning for different scenarios.
2. It is difficult to judge the validity of the assumptions in the paper.
* Assumption 3.2. is very convoluted, and I don't see how one can justify this assumption in finite width settings. I equate this assumption to an infinite-width assumption, which is perfectly reasonable, but as I discussed above, I believe it changes the object of study.
* The assumptions of Theorem 3.3. and Theorem 3.4. are discussed, and some intuitions are provided. But it is not discussed why they would hold for a small $\delta$.
3. The $\delta$ dependencies in Theorems 3.3, 3.4, and 3.5 are confusing. Let's focus only on Theorem 3.3. When we take $\delta \to 0$ while everything else remains fixed, the statement is trivial. So, the interesting part of the statement is when $T_f$ or $d$ scales with $\delta$. I believe it is more reasonable to consider $T_f \to \infty$. So, this verifies that Assumption 3.2. is an infinite width assumption.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you discuss again the choice of kernel regime vs. feature learning for your analysis after my comments in the Weaknesses section?
2. Could you provide references or evidence towards why $\epsilon$ should scale with $1/\sqrt{d}$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I think the authors overclaim by saying, "We provided a theoretical justification for perturbation learning and the feature hypothesis" (L293). Their analysis is limited to the kernel regime, which is acknowledged, but its implications for the feature hypothesis are not well explained. In addition, scaling with $\delta$ in main theorems is not discussed, and as I pointed out, it points to other limitations towards infinite width.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful comments. The reviewer seems to highly evaluate our problem and analysis as interesting but also has concerns about the theoretical principle, which results in a low initial score.
Regarding the concerns, the reviewer seems to have several fundamental misunderstandings about the kernel regime. We would like to address this in the rebuttal.
In short, a) network parameters change during training, b) feature learning occurs in the kernel regime, and c) our claim does not require infinite width, $\delta \to 0$, and $T \to \infty$. We elaborate on each below.
**Network parameters change during training.**
If the reviewer considers that the outputs of hidden units remain the same during training as at initialization, this is not correct. In our framework, hidden weights are trained and updated during training. Thus, the outputs of hidden units change from their initialization.
**We employ gradient flow.**
The reviewer might misunderstand that we employ a kernel method. We use gradient flow. Our training scenario is the same as in prior work [18].
**Feature learning occurs even in a kernel regime**
In our framework, i.e., a kernel regime, feature learning occurs. In other words, network parameters are updated and learn class features from training samples.
For clarity, we offer the following definitions:
- Feature learning: A process in neural networks where parameters change during training according to the training dataset via gradient flow, extracting class features, and determining predictions.
- Kernel regime: A situation where parameters in neural networks change within a small (but not infinitely small) margin from their initialization in a (finitely) large width setting.
The key distinction between our work and previous studies is that prior work considers weights that can change freely (unrestricted feature learning), while our work examines weights that can change freely only around their initialization point (feature learning in a kernel regime).
**Our assumptions and theorems do not require infinite network width.**
Assumption 3.2 does not imply infinite width. For the identity loss, it requires $m > O(d^2T^2)$, where $d$ is the input dimension and $T$ is the continuous training time. Given the finiteness of $d$ and $T$, $m$ remains finite.
Our theorems also do not necessitate infinite width. Let us consider simplified Assumption 3.2 and Theorem 3.4.
Assumption 3.2: Network width $m$ satisfies $m > T^2$.
Theorem 3.4: Let $\delta$ be a small positive number. Under Assumption 3.2, for any $z \in \mathbb{R}^d$, if $g_a(z) > 1 / T \delta$ holds, then, with probability at least $1 - \delta$, perturbation learning succeeds.
$\delta = 0$ leads to $T = \infty$ and thus $m = \infty$. However, for any small $\delta > 0$, we can choose finite $T$ such that $g_a(z) > 1 / T \delta$. Furthermore, for any $T > 0$, we can select finite $m$ such that $m > T^2$. Thus, for any $\delta > 0$, a finite width is sufficient to consider Assumption 3.2 and Theorem 3.4. Note that strict positivity of $\delta$ is assumed in these theorems.
We also note that $\delta$ is the confidence level, and it is not necessary to consider it as $0$ or infinitesimal. For example, with $\delta = 0.01$, we can choose a finite width $m$ that guarantees the success of perturbation learning with probability at least 99%. We believe that the impacts of $T$, $d$, and $N$ on perturbation learning with a fixed $\delta$ are more insightful than considering the dynamics as $\delta \to 0$. The primary implication of Theorem 3.4 is that a large $T$ leads linearly, and $d$ and $N$ lead sublinearly, to more pronounced prediction matching with the same probability (fixed $\delta$).
**Our results hold even for infinite network width.**
The reviewer might consider that in sufficiently wide width, weights do not change from initialization, and thus networks do not learn class features. This is not correct.
For any $m$ and $T$, trained weights can be represented as $v_i(T) = v_i(0) + \Delta_i(T)$ with $\\|\Delta_i(T)\\| = O(T/\sqrt{m})$. $\Delta_i$ is not $0$ and aligns with data patterns, indicating that feature learning occurs. While $\\|\Delta_i(T)\\|$ approaches zero as $m \to \infty$, it does not become zero for any $m$. If $\\|\Delta_i(T)\\| = 0$ for every $i$, the trained network output $f(T)$ would be identical to the initialized output $f(0)$, meaning training would not work at all. For any $m$ (even infinitely large), each hidden weight changes slightly, and network outputs as the summation of outputs of hidden units change significantly during training. Therefore, neural networks learn class features, and our theorems are not invalidated for any width.
**Feature learning is not necessary to justify the feature hypothesis.**
The reviewer suggests that theoretical approaches without feature learning dynamics cannot explain perturbation learning and the feature hypothesis. However, we consider that feature learning is not necessary to explain them. The key requirement to justify them is that the image gradients through a classifier (i.e., the direction of the perturbation) contain data information (like Theorem 3.3). For example, we believe that for kernel methods (which our analysis is not), we can empirically and theoretically justify them because the image gradients contain training sample information. Note that feature learning and the feature hypothesis are unrelated concepts.
**Scaling of $\epsilon$ (Question 2)**
As shown in Theorems 3.4 and 3.5, the L2 perturbation size $\epsilon$ needs to scale with $\sqrt{d}$. This scaling is a consequnce of the property of the L2 norm, which scales with $\sqrt{d}$. To maintain the signal-to-noise ratio, L2 perturbation sizes must scale accordingly. For example, please refer to Table 2 in [1].
[1] V. Sehwag et al., Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ICLR22.
---
Rebuttal 2:
Comment: Thanks for your detailed explanations.
I do agree with both of your claims in a) and c), and I did not claim otherwise in my review. I will detail what I meant in my original review.
**Assumption 3.2.**
It is true that Assumption 3.2. is satisfied by some $m$ for any $d$ and $T$ (let's grant that $\ell'$ is bounded). I do not claim otherwise. But I do not see why the finite $m$ regime satisfied by Assumption 3.2. is **any more** interesting than infinite $m$. There is nothing in the practice of ML that would make me think Assumption 3.2. is more reasonable than the infinite $m$ assumption. Again, there is nothing wrong with assuming infinite $m$: it is extremely hard to do good theory, and kernel regime is only one of the few approaches available.
**Assumptions in Theorems 3.4., 3.5.**
Let $\delta$ be a fixed small number. For any $z$, there are two competing conditions on $T_f$ and $T_g$ now: First, you assume them to be $\Theta(1)$. Then, you also assume the margin conditions, which depend on $T_f$ and $T_g$. There is no discussion on why these margin conditions would hold for the regime imposed on $T_f$ and $T_g$. In addition, there are remainder terms in margin conditions even if $T_f$ and $T_g$ go to infinity, which are not discussed.
**Scaling with $\delta$**
Theorems 3.4. and 3.5. have $\delta$ dependency only in multiplication with $T_f$ and $T_g$. For a choice of confidence level $\delta$, $T_f$ and $T_g$ will need to be larger than some quantity that depends inversely on $\delta$. This means that $m$ itself has to be larger than some quantity that depends on $\delta$.
This is further evidence, along with Assumption 3.2., that authors need an infinite-width approximation, and the result holds with probability degrading depending on the validity of this approximation. This should not come across as a surprise to the authors: this is the core gist of the kernel regime. All this considered, my main point is the fact that for some confidence levels, there are large numbers of $m$ verifying the assumptions of the paper, is **not** more interesting than an infinite $m$ assumption.
**Network parameters change during the training.**
From now on, I will only consider the $m \to \infty$ limit. As the Kernel Regime paragraph details in the Sketch of Proof section, the outputs of hidden neurons change negligibly. This is the basis of my first objection to the kernel regime and how it can be used to explain perturbation learning. In this regime, one cannot claim feature learning in the usual sense. The final output changes, but this does not imply that the network has learned specific features.
My understanding of perturbation learning was that it is equal to "learning" the features present in adversarial perturbations, the features that are claimed to exist by "feature hypothesis." And since there is no actual feature learning happening in the kernel regime, this cannot be studied in such a framework. However, the authors use a more nuanced definition, and more specifically, the word "enable," i.e., indicating that the features in adversarial perturbations can somehow help the classifiers (in contrast to the case where classifiers simply learn these features). This is exactly why I brought up the kernels, which would have no learning of features but alignment of the features such that they align with these features.
**Final conclusion**
I will increase my score to 4 from 3, as my initial judgment on the relationship between perturbation learning and the kernel regime was harsh. The authors do not explicitly claim features posited by "feature hypothesis" are learned by wide, two-layer networks.
I am willing to increase my score to 5 from 4 if the authors agree to incorporate
i) a discussion on the kernel regime and what it can model in neural networks in the related work section,
ii) more nuanced discussions on the limitations of their results, including the ones I have highlighted in the text.
---
Rebuttal 3:
Comment: We sincerely appreciate the reviewer's prompt response and the effort they put into engaging with the discussion.
First reply: We will first address the two conditions that the reviewer has set for increasing the score. Following that, we will summarize our perspective on the generality of finite width, which may be a point of misunderstanding between us and the reviewer.
Second reply: We will provide responses to the specific questions raised by the reviewer.
**The reviewer's suggestion (important)**
We address the two conditions that the reviewer required for raising the score as follows:
(i) We will include studies related to the kernel regime, such as [14], in the related work section. Rather than focusing on studies related to training convergence, we believe it would be more effective to reference works like [8], which first discussed the invariance of hidden units, and [1] (see below), which uses the kernel regime to analyze the properties of adversarial examples.
(ii) In the limitations section, we will acknowledge that the "features" we focused on are primitive, and higher-level features potentially included in perturbations and learning from them may not be theoretically clear (cf. (III)).
If there are any discrepancies between the reviewer's requests and our understanding, we would be happy to address them.
[1] H. Zhang et al. Adversarial Noises Are Linearly Separable for (Nearly) Random Neural Networks. AISTATS23.
**Generality of finite width (important)**
Furthermore, we sincerely appreciate the reviewer's thoughtful suggestion for an additional discussion, and we will certainly incorporate it into the main text (please see the elaboration at the end of this reply).
However, we believe that a critical point has been missed by the reviewer: one of our main contributions is the proposal of a general theory that encompasses both finite and infinite-width cases. While the reviewer has interpreted our work with $m \to \infty$, it is important to note that the finite case is at the very core of this research. The reviewer may argue that the finite case is neither more interesting nor more practical than the infinite case, but we respectfully and strongly object to this.
a) First of all, all networks that can be realized in practice are constrained by finite width. Although the assumption of infinite width is commonly used for theoretical convenience, the finiteness of width is indeed significant in practical machine learning. For instance, if we had infinitely many samples, there would be no generalization gap; if we had infinitely long training time, simulated annealing would discover a global optimum with probability one.
b) Second, by deriving conditions that depend on specific variables rather than assuming infinity, we can understand how assumptions about width change. For example, in this study, we have $m > O(d^2T^2)$, which suggests that our assumptions become stricter as they are proportionate to squared $d$ and $T$. This insight is not evident under the naive assumption of $m \to \infty$.
Title: Official Comment by Authors (1/2)
---
Rebuttal 4:
Comment: **(I) Assumption 3.2 and scaling with $\delta$.**
> (scaling with $\delta$) Theorems 3.4. ... on $\delta$.
> the result holds with probability degrading depending on the validity of this approximation
First, these are correct.
> (in "scaling with $\delta$") for some confidence levels, there are large numbers of $m$ verifying the assumptions of the paper, is not more interesting than an infinite $m$ assumption.
> (in "Assumption 3.2") I do not see why the finite $m$ regime satisfied by Assumption 3.2. is **any more** interesting than infinite $m$.
As the reviewer pointed out, for some confidence levels, there are infinitely many values of $m$ that satisfy the conditions, naturally including infinite width. However, this does not imply that we must choose infinite width. We can select the smallest $m$ that satisfies the conditions.
Moreover, we do not claim that the finite width regime is more *interesting* than infinite width. The finite width is discussed in this paper because it is not theoretically necessary to consider infinite width. There may be some misunderstanding between us and the reviewer regarding the term "interesting." If our response does not address the reviewer's concerns, we would appreciate further clarification.
> There is nothing in the practice of ML that would make me think Assumption 3.2. is more reasonable than the infinite $m$ assumption.
We have interpreted this concern as follows:
*In practice, the experimenter does not know in advance how large $T$ (training time) needs to be for perturbation learning to succeed. If the experimenter continues training until perturbation learning succeeds, $T$ could become very large. Therefore, the experimenter may need to select a very large $m$ in advance, which is essentially equivalent to assuming infinite width.*
This concern is valid when $T$ is variable during training and unknown in advance, and the experimenter is free to choose an arbitrarily large value.
However, it should be noted that our theory does not claim how the training evolves over time (i.e., $T$; variable). It characterizes the model trained with a designated setup (including the training time $T$; constant). In other words, we fix $T$ before training, as is typical in actual network training. For example, it is natural to set $T$ in advance to 100 epochs for MNIST. The experimenter selects a constant $T$ (although it does not always lead to the success of learning) and, accordingly, a finite $m$, which is more reasonable than an infinite width assumption.
**(II) Assumptions in Theorems 3.4., 3.5.**
> there are two competing conditions on $T_f$ and $T_g$ ...
The reviewer might consider that $T$ is substantially constrained to $c < T < C$ for some $c,C$, which prevents the satisfaction of the functional margin conditions. We should note that the assumption $T = \Theta(1)$ is introduced only for notational simplicity of the functional margin conditions. Essentially, no assumption on $T$ without $T > 0$ is required. Simply speaking, this assumption is introduced to derive $O(T + (1/\sqrt{d})) = O(T)$. Although not realistic, if $T = \Theta(1/d)$, then $O(T + (1/\sqrt{d})) = O(1/\sqrt{d})$. Without any assumptions, we would need to write unnecessarily intricate conditions, which we believe should not be included in the main text. We will revise any misleading assumptions.
> there are remainder terms ... which are not discussed.
These time-independent terms are discussed in Lines 179--181.
**(III) Network parameters change during the training.**
The reviewer's understanding is generally correct. The reviewer seems view "feature learning" as the extraction of higher-level features that are potentially latent within the data. In other words, they interpret features not as raw data vectors or their parts but as more complex combinations of these elements. According to the reviewer, kernel methods and our framework (cf. Eq. (22) and (23)) perform "feature alignment" with raw data rather than "feature learning" as a higher-level feature extraction.
I partially agree with this assertion. The difference in intepretation between us and the reviewer seems to be in the scope of what is referred to as "features." In addition to these high-level features, we also consider the data vectors themselves as features and regard their simple extraction as a form of "learning." For example, in the binary classification of vertical and horizontal lines or in MNIST, the raw data itself would serve as features. However, we will revise any misleading terms that may lead to misinterpretation. Furthermore, we will explicitly state in the limitations section that we prove the feature hypothesis and perturbation learning from primitive feature perspective, but do not fully explain them with higher-level features.
Note that prior work shares the same limitation, and the extraction of higher-level features might require relaxing constraints related to depth rather than width.
Title: Official Comment by Authors (2/2) | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multilingual Diversity Improves Vision-Language Representations | Accept (spotlight) | Summary: The paper conducts a systematic study to explore the performance benefits of using translated non-English data for English vision tasks. By translating multilingual image-text pairs from a raw web crawl to English and re-filtering them, the authors show that continual pre-training on this data increases the performance of the English model on a diverse set of tasks with a single model.
Strengths: * The authors show that by incorporating translated multilingual and multicultural VL dataset can improve English model's results empirically on English only tasks.
* The paper provides interesting analysis showing that using diverse training data improves benchmark results that require geographical or cultural diversity.
Weaknesses: - The improvement from using translated data diminished as the training converged, which is not surprising. Similarly, using more training data to improve the evaluation results on downstream tasks is also expected. Hence, the novelty of insights is limited in this paper.
- The paper only evaluated on one LLM, which more evaluations are required to validate the generality of the claim of this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Possible experiments to make the paper better:
- Add more multicultural evaluation datasets
- Add more evaluations to show the benefits of using diverse training data could benefits multiple models sizes and types
- Perform similar training and evaluation in other languages, to show all languages would benefits from diverse training images and captions
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: While this research may be interesting for those working primarily on monolingual research (i.e., English), its framing should be more sensitive to the many efforts related to multilingual and multicultural NLP. Personally, I think the benefits of using diverse images and captions can be achieved by collecting diverse images and paraphrasing captions or through better sampling of the training data, rather than just using multilingual or multicultural datasets and translating them. As a community, we already lack people working on multicultural and multilingual models, yet this work uses data and research from other cultures to improve **English-only** models (although with good intentions). Perhaps there could be some discussions on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback.
It is very important that we clarify a misconception: our work deals with vision-language models and vision benchmarks. We do not train any LLMs or make claims about the state of multilingual NLP research.
**Framing should be more sensitive to multilingual efforts in NLP/ This work uses data to improve English-only models while we already lack people working on multilingual models.** To clarify,
(i) We support multilingual and multicultural efforts. While benchmarking and improving performance on multilingual tasks is not the focus of our work, as one step in this direction, we enable English-language models to gain exposure to more samples of diverse origins and consequently, do better on geographically diverse benchmarks (e.g. GeoDE and DollarStreet). We discuss how to adapt models from our paper to further enhance their multilingual capabilities under the “Future work” section. Comparing the multilingual performance of these post-training adaptation methods to directly training on non-English captions merits a separate investigation.
(ii) Instead, the goal of our work is to challenge the status quo: many ML practitioners seek to improve state-of-the-art performance on popular benchmarks, which happen to be English vision tasks like Imagenet. In the process of overfitting to these benchmarks, previous work discards a lot (if not all) of non-English samples due to the belief that non-English data does not benefit English evaluations. We question this practice. But in order to *incentivize change*, we need to show that it is possible to improve the English-centric “state-of-the-art” by using more data of non-English origins. We hope that our positive findings could help multilingual and multicultural efforts become the default design choice for data curation, instead of existing as a second priority or as a separate (societal) consideration.
**The improvement from using translated data diminished as the training converged, which is not surprising.** We want to clarify that the results we reported show the opposite. From Table 1 of the main paper, our best baseline, “Filtered raw captions & Filtered translated captions”, outperform “Filtered raw captions” by 2.0% on ImageNet and 1.1 percentage points on average across 38 tasks. When training for 10x more steps, the performance gaps increase to 4.2% and 2.1 percentage points respectively.
**Using more training data to improve the evaluation results on downstream tasks is also expected.** For our baselines, we fix the training budget and the amount of raw data available (i.e. initial pool). With each caption distribution, we tune the filtering threshold, and consequently the size of the resulting filtered subset used for training (this was reported in Table 1 of the main paper). We note that with this setup, increasing *filtered data quantity* is *not* guaranteed to lead to better performance on downstream tasks, as larger subset (with less strict filtering) means fewer passes through each datapoint. In Table 1 we only report the *best performance obtainable* after filtering each data distribution. Table 6 (Appendix) contains the full results, accounting for the number of samples used. While “Top 20% raw captions & Top 20% translated captions” indeed uses more training data than “Top 20% raw captions” (51.2M versus 25.6M), when controlling for data quantity taken from the latter distribution, “Top 20% raw captions & Top 20% translated captions” still significantly outperforms “Top 40% raw captions” (both having 51.2M samples).
**Add more multicultural evaluation datasets.** Given the reviewer’s concern, we have added a new evaluation on Google Landmarks dataset (Weyand et al., 2020). When training for 10x longer, using filtered raw captions yields 16.9% classification accuracy, while our best baseline (using a mix of filtered raw captions and translated captions) gets 18.5% accuracy.
**Show the benefits of using diverse training data could benefit multiple models sizes and types, and other languages.** Due to compute constraints, we did not experiment with more model types and translating to other languages, but we will add these points to our Discussion section. For context, obtaining the baseline numbers for our paper (see Appendix Tables 6 & 7) took about 850 GPU hours with 8 A40 GPUs. Besides, we find that extending our approach to other languages (e.g. translating all captions to Chinese and measuring performance on Chinese tasks) is sufficient scope for a separate paper.
---
Rebuttal Comment 1.1:
Title: Re
Comment: Thank you very much for the additional clarifications. I have revised the score to reflect my current assessment of the paper.
---
Reply to Comment 1.1.1:
Title: Author Response
Comment: Thank you for acknowledging our rebuttal. As mentioned in our comment, we are happy to engage in further discussion at any point in the future as well. | Summary: This paper points out an important issue in current training of CLIP models---the push for including more english-centric / english-only data in the pretraining dataset. The paper points out that this is mainly driven by the downstream evaluation test-beds primarily being english-focused and hence the need to include multi-lingual data has not been found so far. The paper conducts a thorough study of using multilingual data into the pretraining datasets of CLIP models by using caption translation models. The paper evaluates and shows improvements on several downstream datasets from the 38-Datacomp benchmark evaluation suite, and provides analysis on the key differences between multilingual and english-only pretraining datasets.
Strengths: - The paper is very well written and easy to follow
- All the claims in the paper are verified with substantial empirical evidence
- The paper's main message is adequately represented throughout the paper without any distracting claims, hence all the experiments done in the paper flow very smoothly
- The paper points out a very important issue in current pretraining datasets of CLIP models, and showcases a simple intuitive fix for future models and pretraining datasets
Weaknesses: - The paper could benefit from some additional analysis. I note down some analayses that I think might strengthen the paper below.
- For a particular concept is it possible to showcase the diversity afforded by looking at only English data and multilingual data? For instance, take the concept of “stove”. You could perhaps manually take 10-30K images of a stove (assuming that many exist), by filtering the texts for this particular concept, and do the same for both English only and multilingual data. Then, quantify the diversity of images, by perhaps taking a strong Dino-v2 pretrained encoder and then clustering them, and perhaps looking at some intra- and inter-cluster distance metrics. I think this would nicely validate the qualitative visualisation in figure 1 with some empirical evidence that multilingual data indeed boosts the diversity of visual concepts in pretraining datasets. It would be great to see this analysis on perhaps one or two concepts.
Why is this an important analysis? My reasoning would be that while it is totally plausible that multilingual concepts add diversity, it is unclear to me how much of this is actually true? Since it is plausible that English webpages also contain such diverse visual concepts, I think it would be nice to see if there are “new visual concepts” being included in the pretraining corpus, or is it mostly that these “culturally-diverse” visual concepts were present in the english-only pretraining datasets, but we just boost up their frequency in the pretraining dataset? I think either of the two cases is a valid justification for the claim that multilingual data increases diversity, but it would be nice to have a precise answer for what exactly out of the two is the case.
- The main claims of the paper that multilingual data improves VL representations is sufficiently backed up, but benchmarking on culturally diverse / multilingual-sourced data is still limited. The paper only considers GeoDE and DollarStreet for evaluation. I would recommend to also benchmark and report the performance on some more culturally and geospatially diverse datasets like [1,2,3,4]. Some of these datasets might need to be ported into a classification / retrieval format, but in general I think this is an important experiment to do to further validate the important of multilingual-sourced pretraining data.
- By filtering on top of the translated image-text pairs, it seems likely that the training data diversity increases, and potentially train-test similarity [5] also increases. This could be an added confounder in the takeaways of the paper. Could the authors comment/discuss this a bit more in the paper?
- The analysis in the paper would further be improved by checking how the distribution of CLIPScores look before and after the translation. Potentially there would be a shift to the right in the similarity distribution, but it would be a good analysis to include for better intuitions on what might be good filtering thresholds if such a model were to be used in the future for vector arithmetic / data filtering itself.
[1] Kalluri et al, GeoNet: Benchmarking Unsupervised Adaptation Across Geographies
[2] Weyand et al, Google Landmarks Dataset v2 -- A Large-Scale Benchmark for Instance-Level Recognition and Retrieval
[3] Yin et al, Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning
[4] Romero et al, CVQA: Culturally-diverse Multilingual Visual Question Answering Benchmark
[5] Mayilvahanan et al, Does CLIP’s generalization performance mainly stem from high train-test similarity?
Technical Quality: 4
Clarity: 4
Questions for Authors: I have a few additional questions/comments which I include below:
- How do you estimate the english part of the web-crawl? You mention it is in 1/3rd in the introduction.
- Is there an explanation for why filtered translated captions might improve english-centric performance over the filtered raw captions? This seems counter-intuitive given as you say in the paper, that ImageNet is mostly western-centric. This also seems to contradicts the results of the No-Filter paper [1].
- Could the model trained on this filtered raw subset itself be used as a DFN, and potentially further improve the performance when training on the raw multilingual pool? The idea being that you might have to do less tuning on the filtering threshold since the model is more robust to noise from the translated captions.
- In fig 3, are the results from the "filtered raw captions" and "filtered translated captions" methods from tab 1? Doesn’t seem so since the performance improvements on the 38 tasks mentioned in the caption of fig 3 is 1.5% whereas the performance improvements from tab 1 is only 0.9%.
- Why does performance on Food101 go down? Is it because all the food classes included in Food101 are primarily English-centric? I am certain that is not the case. I would be very interested in a more detailed analysis of why the performance on Food101 goes down when we include more multilingual data since that is one aspect that I would have thought improves quite a lot.
[1] Pouget et al, No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors are explicit about their limitations. Any further limitations I think are mentioned in the weaknesses section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for all the valuable suggestions!
**Showcase the diversity for certain concepts.** Per your suggestion, we ran additional analysis to compare the diversity of images for the same concept. We sampled 1K images from each data distribution for which the corresponding (translated) captions mention “stove”. With DINOv2, we obtained embeddings of these images and used them to cluster each data distribution into 20, 50 and 100 clusters. Figure 2 in our global response shows the average inter-cluster distance of “stove” images in English data versus non-English data. Overall the latter group of images yields higher inter-cluster distance, suggesting that the clusters are more well-separated and the non-English “stove” images are more diverse. We will add this experiment to the paper.
**Benchmarking on culturally diverse data is still limited.** Our work does not explicitly seek to improve performance on culturally diverse tasks. We explore whether increasing the diversity of the data origins, and consequently allowing the training set to be dominated by non-English samples, can improve performance on vision tasks *in general*, especially on those that dictate the field’s state-of-the-art like ImageNet. Improvement on geographically diverse benchmarks is an added benefit in this process. However, given the reviewer’s feedback, we have added a new evaluation on Google Landmarks dataset (Weyand et al., 2020). When training for 10x longer, using filtered raw captions yields 16.9% classification accuracy, while our best baseline (mixing filtered raw captions and translated captions) gets 18.5% accuracy.
**By filtering on top of the translated image-text pairs, potentially train-test similarity also increases.** Changes in train-test similarity may happen as a result of using translated captions, but that should not be a confounder in the paper takeaways because (i) the proposed translation method does not directly optimize for train-test similarity, (ii) we apply the same filtering process (i.e. based on DFN score) to both raw and translated captions, (iii) we measure performance on a large number of vision tasks (38).
**Checking how the distribution of CLIPscore look before and after the translation.** We have added this analysis in our global response (see Figure 1 of the attached PDF). We sampled 10K images with English captions and 10K with non-English captions from the initial pool, and compared how the DFN scores change with translation. Unsurprisingly, DFN scores for non-English samples generally increase after the captions are translated into English. This in turn leads to a right shift in the distribution of image-text similarity scores when looking at the score histograms before and after translation. As image-text alignment (measured by DFN score) tends to help with empirical performance, this shift suggests that translation increases the availability of good training data.
**Estimate that English takes up ⅓ of any web crawl.** This estimate is computed from using the NLLB model to detect languages in the DataComp dataset. Since DataComp performed minimal preprocessing (i.e. NSFW removal and test set deduplication), we hypothesize that the initial DataComp pool is representative of the natural distribution of raw data on the web that is appropriate for training.
**Improvement on ImageNet contradicts the results of the No-Filter paper.** We note some differences between our experiment setup and that of concurrent work:
(1) We compare training on filtered translated captions (which is *dominated* by non-English data) to training on filtered raw captions (which is *dominated* by English data). In contrast, the No-Filter paper compares multilingual data (globe) to its strict subset containing *only* English data (en).
(2) We perform filtering and retain only a small fraction of the original pool (20-30%), whereas the No-Filter paper trains on web data with “minimal filtering”. We posit that filtering is important to obtain competitive performance with translated captions, especially given the noise introduced by the translation process.
We believe our setup better mimics how pre-training data curation is commonly done in practice (e.g. LAION, which was also heavily filtered and contains a mix of English and non-English data).
**Why filtered translated captions might help improve English-centric performance.** While the fraction of images of English origins is lower when we use filtered translated captions instead of filtered raw captions (60% → 40%, Figure 2 of our paper), the number of such images in the final training set is still significant. As a result, the model can still learn English-centric concepts to a certain extent. We hypothesize that the performance on English vision tasks is further reinforced by training on high-quality, diverse multilingual data, which helps induce better visual features and robustness in general.
**Average performance improvement in Figure 3.** The analysis in Figure 3 of our paper is performed on models trained with the top 30% of the initial pool (see description in Lines 251-254), whereas the first section of Table 1 involves training on the top 20% of the pool. We pick the 30% threshold for further analyses because (i) it is the baseline that yields the best performance when training until convergence, (ii) the average performance gap between filtered translated captions and filtered raw captions is larger at this threshold, allowing us to observe the differences on individual tasks better.
**Why does performance on Food101 go down?** We note that (1) the accuracy change for the Food101 task between using filtered translated caption and using filtered raw captions is relatively small (-0.2%), and (2) the Food101 class names are still dominated by English-centric concepts. Besides, the noise in the translation process may affect whether the class mentions in the original languages are still preserved after translation.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: I thank the authors for their thorough analysis and further experiments.
- The clustering experiment now adds a very strong demonstration that the non-English data improves the diversity of visual samples even for concepts that are already present in the English pool. This is a great result!
- GLD experiments look great and provide additional validation for the significance of including non-english data.
- The points discussing the differences between your and the No-Filter paper are quite valid and important. I would encourage the authors to add this as an explicit discussion section either in the related works or the appendix.
After reading all the other reviewer comments and the author's responses to them, I am inclined to strong accept this paper as it makes a significant contribution to the data-centric VL community and provides relevant, practical insights. I am increasing my score to 8.
---
Reply to Comment 1.1.1:
Title: Follow up
Comment: Thank you for acknowledging our additional results and for adjusting the rating! We will make sure to include the new analyses as well as a discussion of the No Filter paper results in the next version of our work. | Summary: The authors investigate whether *multilingual* vision-language data improves the *English-only* performance on a model in vision-language tasks.
They translate captions from DataComp from English to other languages and train a CLIP model on these multilingual captions.
They find that this action boosts performance on both English-only tasks like imagenet matching and geographically diverse tasks like GeoDE.
Strengths: This is a super efficient way to gain a performance boost on DataComp (for revs who might not be aware, this is a cool benchmark for assessing efficient training of vision-language representations evaluated over a bunch of downstream datasets within a fixed compute budget). Any method that boosts DataComp performance across so many tasks for free (just translation instead of collecting new captions) is awesome in my opinion.
Performance boost on GeoDE (geographically sorted img classification) is improved considerably in all regions, justifying the purpose of the intervention.
Weaknesses: Unclear how the proposed automated translation pipeline ties in to the original motivation (culturally specific items not being captured in English data)
Could benefit from a bit deeper description of what the GeoDE task is for unfamiliar readers to improve reach.
Technical Quality: 4
Clarity: 4
Questions for Authors: Figure 1: where are these examples from? Are they actually in the datacomp datasset? Doesn't translating datacomp captions into other languages keep the learned representations in the "English concept space" of sorts?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes, limitations are sufficiently covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and for recognizing the strengths of our work!
**How the proposed automated translation pipeline ties in to the original motivation (culturally specific items not being captured in English data).** We discuss the presence of culturally salient concepts in the Introduction mainly to motivate how multilingual data is inherently enriching and complements English data. Improving the availability of culturally specific items in the final training set is not what our method specifically optimizes for, but is likely to happen with the inclusion of substantially more samples of non-English origins after filtering (see Figure 2 of main paper).
**Deeper description of what the GeoDE task is for unfamiliar readers.** Thank you for raising this point. GeoDE is a geographically diverse dataset containing 61K images spanning 40 common object categories, collected from 6 different world regions via crowd-sourcing. We will add more details of this task to the paper.
**Where are the examples in Figure 1 from? Are they actually in the datacomp dataset?** Yes, these examples are taken from our training data, i.e. DataComp.
**Doesn't translating datacomp captions into other languages keep the learned representations in the English concept space?** We interpret this question as whether translation would reduce the richness of the concept space (especially when it comes to translating culturally salient concepts). We empirically observe that after translating multilingual captions to English, some culturally salient concepts are still preserved - refer to Figure 1 (left) of the main paper for examples. Our work has also acknowledged this potential limitation, that “translation can sometimes be too literal, subject to losing the intent and richness of the original phrasing”. Nevertheless, we hope that our findings inspire future work to (i) be more inclusive of data of non-English origins during training, (ii) investigate other ways to effectively leverage the diversity of multilingual data.
---
Rebuttal Comment 1.1:
Comment: **Motivation of translating from English**
This explanation is reasonable. I'd like to see this clarified in the intro of the CR.
**Other answers**
Reasonable, thank you for clarifying. I forgot that datacomp pre-filtering does indeed contain multilingual data.
**Translating datacomp captions**
After the clarification, I understand. You are indeed augmenting the concept space by translating into English here.
Thank you for your responses, I will update my soundness score.
---
Reply to Comment 1.1.1:
Title: Author response
Comment: Thank you for acknowledging our rebuttal and for adjusting the rating! We are glad to hear that our response has provided clarification to your earlier questions. | Summary: In this paper, the authors conduct a thorough exploration of how multilingual image-text pairs benefit English vision tasks. They first present how to effectively utilize translated data to improve performance on standard vision tasks and derive valuable conclusions through detailed ablation studies. Second, they illustrate the differences in image and text distributions between English and non-English image-text pairs. These findings highlight the potential of leveraging multilingual datasets to enhance the robustness and accuracy of vision models.
Strengths: 1. The paper presents effective strategies for utilizing translated data to enhance performance on standard vision tasks
2. Detailed ablation studies are carried out to derive valuable conclusions, which help the community understand the impact of various factors.
3. The paper is well-written and easy to follow.
Weaknesses: 1. No quantitative assessment for the quality of translation.
2. Only evaluation on representation task. Does such data strategy also work for the training of Multimodal LLM?
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and feedback!
**No quantitative assessment for the quality of translation.** Given your concern, we have performed additional analysis of the translation quality. We sampled 100K multilingual captions from our raw data pool and backtranslated the English-translated caption into the original language (e.g. Chinese text -> English translation -> Chinese translation of the English-translated text). Then we computed the cosine similarity between the initial web caption and the backtranslated caption using embeddings from the multilingual Sentence-BERT model (Reimers et al., 2019), to assess how much semantic meaning is preserved after translation. We find that on average the cosine similarity (thus, translation quality) remains relatively high (0.63). Below we report the top 5 and bottom 5 languages that observe the highest and lowest translation quality as captured by our metric (each has at least 30 samples). We will add the full analysis in the next version of the paper.
| Language | Text cosine similarity after backtranslation |
| -------- | ------- |
| English | 0.886 |
| Norwegian Nynorsk | 0.883 |
| Bengali | 0.883 |
| Russian | 0.860 |
| Norwegian Bokmål | 0.839 |
| | |
| Marathi | 0.271 |
| Irish | 0.240 |
| Standard Latvian | 0.233 |
| Chechen | 0.0595 |
| Karachay-Balkar | 0.00280 |
**Only evaluation on representation task. Does such data strategy also work for the training of multimodal LLM?** We note that our evaluations already contain 38 datasets involving recognition and classification of a wide range of domains (e.g. texture/ traffic sign/ scene/ metastatic tissue/ geolocation/ animal/ etc.), in addition to image-text retrieval and commonsense association.
Extending the findings from this work study to multimodal LLM training would be an interesting direction for future work, but is orthogonal to our contribution; we will add this to the Discussion section. Our results demonstrate that there is so much diversity and visual information to be leveraged from multilingual data, to the extent that training on predominantly non-English samples can improve CLIP’s performance on standard English benchmarks. We hypothesize that leveraging this diversity will also be beneficial for training multimodal LLMs, especially since many of which still rely on English-dominated image-text datasets (e.g. LAION) and still use CLIP as the image encoder (e.g. LLaVA-1.5). | Rebuttal 1:
Rebuttal: We would like to thank all reviewers again for providing thoughtful reviews of our work. Here we highlight several new results/ analyses taking your feedback into consideration:
\
1. Reviewer xoEn wondered about the translation quality. In response, we sampled 100K captions from our raw data pool and backtranslated the English-translated caption into the original language (e.g. Chinese text -> English translation -> Chinese translation of the English-translated text). To assess the translation quality, we computed the cosine similarity between the initial web text and the backtranslated text using embeddings from the multilingual Sentence-BERT model (Reimers et al., 2019). We find that on average the cosine similarity (and thus, translation quality) remains relatively high (0.63). Below we report the top 5 and bottom 5 languages that observe the highest and lowest translation quality measured by our metric:
| Language | Text cosine similarity after backtranslation |
| -------- | ------- |
| English | 0.886 |
| Norwegian Nynorsk | 0.883 |
| Bengali | 0.883 |
| Russian | 0.860 |
| Norwegian Bokmål | 0.839 |
| | |
| Marathi | 0.271 |
| Irish | 0.240 |
| Standard Latvian | 0.233 |
| Chechen | 0.0595 |
| Karachay-Balkar | 0.00280 |
2. Reviewer 7AUA asked about DFN score changes before and after translation. We analyzed the score differences for 10K English and 10K non-English samples, and found that translation improves the image-text similarity score of non-English samples, as well as the overall score distribution of the data pool. Refer to Figure 1 of the attached PDF for more details.
\
3. Reviewer 7AUA suggested adding another metric for quantifying the diversity of multilingual data by measuring clustering distances. In Figure 2 of the response PDF, we showed that for a specific concept such as “stove”, non-English images form more distinct clusters than English ones. We randomly sampled 1K images with English captions and 1K with non-English captions, such that the (translated) captions mentioned “stove”, and embedded them with the DINOv2 model. Across different numbers of clusters uncovered, non-English data generally yields higher inter-cluster distance, suggesting that the “stove” images with multilingual captions are more heterogeneous compared to those with English captions.
\
4. Since Reviewers 2aum and 7AUA requested more multicultural evaluations, we ran a new evaluation on Google Landmarks dataset (Weyand et al., 2020). When training for 10x longer, our best baseline (using a mix of filtered raw captions and translated captions) yields 18.5% classification accuracy, outperforming the baseline trained on just filtered raw captions by 1.6%. We note that the performance on this task is limited given that our best data mix only uses ~51M samples.
\
5. Last but not least, Reviewer 2aum had concerns about our investigation not being sensitive to the many efforts related to multilingual and multicultural NLP. **Here we would like to restate the goal of our work:**
We advocate for increasing the cultural and linguistic diversity of the training data for vision-language models (VLMs), and view our findings as complementary to other multilingual and multicultural efforts. While these efforts have led to new geographically diverse benchmarks as well as new ways to increase the availability of high-quality non-English data, their adoption is still relatively limited. This is evident from the fact that many popular models that are considered state-of-the-art (e.g. MetaCLIP, SigLIP, ALIGN) were still trained on entirely English image-text pairs. In order to change this status quo, it is important to demonstrate that training on predominantly data of non-English origins can do better than training on predominantly English data, especially on standard vision tasks that define the field’s state-of-the-art (e.g. ImageNet). Our work provides such evidence. We hope this can lead to more active exploration and adoption of multilingual and multicultural data in mainstream VLM training, instead of using this data only when under-served populations or tasks are involved.
Pdf: /pdf/ef6eb9d68460a18f145f2dc7879e7f304bed1546.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
KrwEmd: Revising the Imperfect Recall Abstraction from Forgetting Everything | Reject | Summary: This paper introduces the KrwEmd algorithm, a novel approach to hand abstraction in Texas Hold’em-style games. The main contribution is the integration of historical information using K-recall winrate features and earth mover’s distance, addressing the limitations of previous imperfect recall abstraction methods. The algorithm demonstrates significant performance improvements over state-of-the-art methods.
Strengths: 1. The integration of historical information via K-recall win rate features enhances the accuracy and reliability of hand abstraction.
2. KrwEmd significantly outperforms existing methods, showcasing its practical value and potential for real-world applications.
3. The algorithm is technically sound, with strong experimental validation supporting its claims.
Weaknesses: Scalability: While the paper demonstrates KrwEmd's effectiveness in Texas Hold’em-style games, its scalability to more complex and diverse game scenarios remains to be explored.
Comparative Analysis: The paper could benefit from a more detailed comparison with existing hand abstraction methods to better highlight KrwEmd's unique advantages and limitations.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How well does KrwEmd scale when applied to more complex and diverse game scenarios beyond Texas Hold’em? Could the authors provide any preliminary results or insights into this?
2. While KrwEmd achieves significant performance improvements, are there scenarios or tasks where additional training data might be beneficial or required? If so, what kind of training data would be most useful?
3. Are there other game models or systems that KrwEmd could enhance similarly? What challenges might arise in such integrations?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper primarily focuses on Texas Hold’em-style games. While this is a significant achievement, a discussion on the model's scalability and generalization to more complex and varied game scenarios would be beneficial. Including potential strategies to address these challenges would strengthen the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of our work, which has greatly encouraged us.
You mentioned that we should provide a more detailed comparison with previous hand abstraction algorithms. We will adjust the hyperparameters and conduct more comparisons. The previous experiments were somewhat limited mainly because the computational cost of evaluating exploitability in the Numeral211 environment is very high, making each set of experiments take about a week. We will supplement with more experiments in the future.
Below are the answers to your questions:
1. How well does KrwEmd scale when applied to more complex and diverse game scenarios beyond Texas Hold’em? Could the authors provide any preliminary results or insights into this?
Regarding the scalability of KrwEmd, we can confirm that the algorithm can be used in environments with large data volumes, such as Heads-Up No-Limit Texas Hold'em (HUNL) and Heads-Up Limit Hold'em (HULHE), as we have developed an acceleration algorithm. The main reasons for not providing a performance comparison are: 1) exploitability is not computable in HUNL & HULHE environments; 2) the computational cost of strategy solving in HUNL & HULHE is extremely high. Therefore, we validated it in a toy environment, but KrwEmd can be applied in HUNL & HULHE environments.
We are currently developing a new AI that replaces the hand abstraction module. Here is a preliminary observation: our previous AI version used a "zero-recall" or "forget everything" hand abstraction paradigm (PaEmd), which in the HUNL river phase could identify a maximum of 20,687 different categories. In contrast, using KrwEmd, the potential identifiable categories can reach up to 577,366,243, showing the significant information loss caused by previous algorithms. We plan to conduct ablation experiments on Slumbot (a public Texas Hold'em AI evaluation platform) after replacing the hand abstraction module (using the same abstraction scale configuration as before) to demonstrate the effectiveness of our algorithm. This work is currently in progress.
2. While KrwEmd achieves significant performance improvements, are there scenarios or tasks where additional training data might be beneficial or required? If so, what kind of training data would be most useful?
KrwEmd is a clustering algorithm and currently does not require additional data. However, to facilitate the computation of KrwEmd, it would be helpful to pre-store the potential winrate isomorphism (POI) and its winrates, as well as the k-recall winrate isomorphism (KRWI), in a database, rather than computing the winrates for each POI category in real-time. By adjusting the $w_j$ parameters in equation (5), we can generate KrwEmd data with different configurations.
3. Are there other game models or systems that KrwEmd could enhance similarly? What challenges might arise in such integrations?
We have not yet validated and applied our approach in environments beyond Poker, although we have tried to model the signal abstraction problem in a general way during the writing of the paper. In the future, we may look for other game environments to test the idea of "using historical information for abstraction." There are some important points to consider when finding such validation environments: 1) the game should have a mechanism to divide it into phases; 2) the game environment can be stochastic but should be static, meaning players know the state transition distribution; 3) the game should be able to extract states that are independent of other elements, such as hand cards in Texas Hold'em being independent of actions, pot size, and other elements. | Summary: This paper introduces a novel approach to hand abstraction in Texas Hold'em-style poker games, addressing the limitations of current methods that often disregard historical information. The authors make two primary contributions: First, they develop KRWI (K-Recall Winrate Isomorphism), a new abstraction method that incorporates historical information from previous game phases. Second, they present KrwEmd, the first hand abstraction algorithm to effectively combine K-recall win rate features with earth mover's distance for hand classification. Through experiments conducted in the Numeral211 Hold'em environment, the authors demonstrate that KrwEmd significantly outperforms state-of-the-art algorithms such as Ehs and PaEmd in terms of exploitability, while maintaining the same number of abstracted information sets. This work shows that incorporating historical information can substantially enhance the performance of hand abstraction algorithms, potentially leading to more advanced strategic computation in large-scale adversarial games and stronger poker AI systems.
Strengths: * Overall, the paper provides a new technique that is promising for an important area of research
* The results indicate strong improvements over alternative methods
Weaknesses: For me, the paper's primary weakness is the presentation method. I had trouble understanding the significance and nature of the contribution from the current submission. In general, a clearer description of this area of research for people who, e.g., work on games but don't focus on poker would be quite helpful. Some specific suggestions/areas for improvement are:
* Clearer introduction of key concepts: The paper jumps into technical terms like 'imperfect recall abstraction' and 'hand abstraction' without adequately explaining them for a broader audience. A brief explanation of why these concepts are important in poker AI would be beneficial.
* More intuitive explanations of the algorithms: The descriptions of PWI, KRWI, and KrwEmd are highly technical. Including some simple examples or diagrams to illustrate how these algorithms work could greatly improve understanding.
* Better contextualization of the contribution: While the paper claims to outperform existing methods, it's not clear how significant this improvement is in the broader context of poker AI. A discussion of the practical implications of this improvement would be valuable.
* Clarification of experimental setup: The Numeral211 Hold'em environment is not well-known. A clearer explanation of how this relates to standard poker variants would help readers understand the relevance of the results.
* More accessible presentation of results: The graphs and tables are dense with information but lack clear explanations. Simplifying these visualizations or providing more guidance on how to interpret them would be helpful.
* Glossary of terms: Given the many technical terms used (e.g., 'earth mover's distance', 'K-recall winrate feature'), a glossary could be a valuable addition to help readers keep track of these concepts.
Technical Quality: 3
Clarity: 1
Questions for Authors: Please address my comments about clarity overall and address any specifics that seem appropriate.
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: There's limited discussion of limitations. It would be good to include an explicit limitations section. In particular, it would be good to discuss potential computational challenges in more detail, any limitations on the scope of the evaluation, and future work that might be planned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of my work and for providing many constructive suggestions. These suggestions are very helpful, and we will take them seriously.
You and other reviewers mentioned that this paper is somewhat obscure for researchers outside the Poker AI field. We realized that a significant reason for your appreciation of our work, yet not awarding a higher score, is due to our insufficient presentation. We accept this criticism and will improve it in future versions. We will directly incorporate and improve upon the points 1, 2, 5, and 6 you mentioned in the weakness section.
For the points 3 and 4 you raised in the weakness section, we will also make improvements. We noticed that these points contain some aspects that were not clearly explained, and we will provide a brief clarification here.
- Contribution of this Work to the Poker AI Community
The current most mainstream Texas Hold'em AI construction paradigm revolves around the CFR algorithm and its variants (including recent CFR algorithm variants that incorporate deep learning). Using this paradigm to construct a Texas Hold'em AI involves first determining the solution space, which means constructing an abstracted game. This step is divided into state abstraction (also known as infoset abstraction) and action abstraction, which are independent of each other. Our work falls into the category of state abstraction. After constructing the abstract game, we use the CFR algorithm in the solution space to perform strategy solving, noting that the strategy obtained at this point pertains to the abstracted game. Finally, we map the strategy from the abstracted game back to the original game for application. The construction of the abstracted game and strategy solving are independent tasks.
The poker AIs that caused a sensation between 2016 and 2019 (such as Deepstack, Libratus, and Pluribus) were primarily focused on strategy solving. In their construction of the abstracted game, they used the PaEmd abstraction algorithm for state abstraction. The PaEmd algorithm can be seamlessly replaced with KrwEmd, which could significantly improve the performance of the resulting AI, as we consider more comprehensive information. This represents a significant innovation and contribution to the field of Texas Hold'em AI construction.
- Regarding the Choice of the Numeral211 Hold'em Environment
The choice of this environment was based on the following considerations: Previously, the toy environment was Flop Hold'em (which limits two-player limit Texas Hold'em to 2 phases). The issue with this environment is that with only two rounds, it cannot reflect the deficiencies of POI abstraction (the number of information sets has a spindle-shaped rather than triangular distribution). Therefore, we needed an environment with at least 3 phases.
The next problem is that in environments with at least 3 phases, the computational cost of calculating the best response is very high. To create a graph showing exploitability changes with training epochs, we needed a simpler environment. Initially, we chose a game with a deal sequence of [2,1,1] using a standard deck, but the best response calculation was still slow in this environment. One variant of poker is the Royal Deck (which only includes [T, J, Q, K, A] × [♠, ♣, ♥, ♦], a total of twenty cards). However, with this setup and a deal sequence of [2,1,1], the number of recognizable LI is very small, making k-means clustering less convincing. Therefore, we designed a custom environment with 40 cards. This environment is ideal in terms of best response calculation time and the number of recognizable LIs.
---
Rebuttal 2:
Title: Thanks for reviewer
Comment: Dear Reviewer,
We noticed that you haven't yet provided a response, and we appreciate your recognition of our work. Currently, our submission is on the borderline for acceptance. Would it be possible for you to consider increasing one more score? This would greatly helpful for the work.
We are confident in the contribution of our research. The field of hand abstraction has seen little progress in the past decade, and our work offers a seamless application to current poker AI systems, potentially improving their performance significantly. Moreover, our research introduces new ideas that could inspire a series of further advancements in hand abstraction.
Thank you for your consideration.
Best regards, | Summary: This paper focuses on the problem of hand abstraction for Texas Hold-Em style poker games. Hand abstraction is the process of partitioning game histories into infosets which still contain enough information to make strategically advantageous decisions. Previous approaches have focused on abstractions that primarily focus on the future outcomes from each hand, but the authors suggest that it may instead be beneficial to also include past information. They design a hand abstraction algorithm called KwrEmd that outperforms previous work by incorporating historical information.
Strengths: The results seem to show that KrwEmd outperforms other imperfect recall hand abstraction algorithms in terms of exploitability.
Weaknesses: As somebody who is unfamiliar with the subfield of imperfect recall abstraction, I found the paper to be quite confusing throughout. The authors do not often provide intuition or examples for their method, and I found it difficult to tell exactly which contributions were novel compared to Fu et al. While it's reasonable for a paper to use technical language at times, well-written papers are usually understandable by a broader set of readers than just those in the specific subfield.
Technical Quality: 3
Clarity: 1
Questions for Authors: No specific questions.
Confidence: 1
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: Not sure.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We apologize for the difficulties you experienced while reading my paper, and we appreciate that you did not give a very negative score, giving me the opportunity to further present my work.
Your suggestions are excellent. In the revised version, I will incorporate more intuitive examples (especially examples from Texas Hold'em) to better illustrate the symbols and definitions.
Our writing may have made it difficult for you to grasp the structure of this work, so I will highlight the key points here. This paper builds on the work of Fu et al., who were the first to discover that the mainstream hand abstraction algorithms (signal abstraction algorithms) such as Ehs and PaEmd misused the term "imperfect-recall". In reality, these algorithms follow a "never-recall", "zero-recall", or "forget-everything" paradigm, meaning they abstract hands based solely on the future states and outcomes of the game. This paradigm results in significant information loss. In light of this, they developed the **common refinement** tool and constructed the **k-recall outcome features** and **k-recall outcome isomorphism (KROI)** to demonstrate that considering historical information can greatly facilitate hand abstraction.
However, the work of Fu et al. also has its limitations. Primarily, k-recall outcome features cannot be further clustered (as k-recall outcome features only represent a probability distribution, making it difficult to define the concept of distance), and k-recall outcome isomorphism is too data-intensive to be directly applied in large-scale games like Texas Hold'em. Therefore, there is a need in the community to develop an abstraction algorithm that considers historical information and can be further simplified through clustering. Moreover, since 2014, there have been no updates to applicable hand abstraction algorithms. This is the motivation behind this paper.
We creatively discovered that simplifying k-recall outcome features to k-recall winrate features only reduces a small amount of information, but it allows us to introduce the concept of distance (i.e., the difference in win rates). Based on this discovery, we developed the **k-recall winrate isomorphism (KRWI)** and the **KrwEmd** algorithm. KrwEmd is the first hand abstraction algorithm that considers historical information and can be used in large-scale games. We validated it in the Numeral211 toy environment, and the results show that KrwEmd can significantly improve the performance of poker AIs that originally used Ehs and PaEmd.
It is also important to note that although we have only validated KrwEmd in the Numeral211 environment, this is because the computational cost of evaluating the exploitability metric in larger-scale poker games is quite high. Indeed, KrwEmd can be applied to larger-scale games as well.
Since the poker AIs that caused a sensation between 2016 and 2019 (such as Deepstack, Libratus, and Pluribus) used the PaEmd abstraction algorithm, the PaEmd module can be seamlessly replaced with KrwEmd. This is because hand abstraction is a relatively independent task in the domain of poker AI, with Deepstack, Libratus, and Pluribus primarily focused on strategy training and search. Consequently, the resulting AI could significantly improve performance. This represents a substantial innovation and contribution to the field of poker AI development.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. I think that the paper's presentation could be significantly improved by incorporating some of the explanations given in your rebuttal—this will clarify the significance of the paper to a wider audience.
---
Rebuttal 2:
Title: Thanks for the review
Comment: We are pleased to have had the opportunity to explain our work to you. Thank you for raising your evaluation of our work, which is very important and helpful to us.
May you continue to find success in your work and happiness in your life.
Best regards,
the authors. | Summary: This paper proposes KrwEmd, a novel hand abstraction algorithm for imperfect recall settings in Texas Hold'em poker. The algorithm leverages K-recall winrate features, incorporating historical information in addition to future information for constructing hand abstractions. The authors introduce two new isomorphism frameworks: Potential Winrate Isomorphism (PWI) and K-recall Winrate Isomorphism (KRWI). They demonstrate that KRWI outperforms existing methods like POI in identifying distinct infosets. KrwEmd, which combines KRWI with Earth Mover's Distance (EMD) for hand classification, shows superior performance compared to POI, Ehs, and PaEmd in the Numeral211 Hold'em environment.
Strengths: *Originality*: The paper presents a novel combination of K-recall winrate features and EMD for hand abstraction in imperfect recall settings, addressing a critical limitation of current approaches that solely rely on future information.
The introduction of KRWI and PWI provides valuable new tools for understanding and constructing hand abstractions in poker AI.
*Quality*: The experimental results in the Numeral211 environment demonstrate a clear improvement over existing methods, supporting the claims of the paper. The paper includes an appendix with algorithm details and supplementary experimental data.
*Significance*: The proposed KrwEmd algorithm advances the state-of-the-art in hand abstraction for imperfect recall settings, offering a potentially significant improvement for developing stronger poker AI agents.
The incorporation of historical information is a valuable contribution that benefit positively future research in poker and other imperfect information games.
Weaknesses: I found the paper challenging to understand, though this may be due to my limited background knowledge in poker AI and game theory. While the authors provide a background section, the density of the technical content and the numerous specialized terms make comprehension difficult.
The description of the accelerated algorithm in Appendix A.3 could be expanded for better understanding. Additionally, a clear discussion of the limitations of the accelerated algorithm would be beneficial.
The paper provides limited information about the proposed algorithms, particularly KrwEmd. While the core concepts are presented, the details regarding implementation and specific design choices are limited. More in-depth explanation and pseudocode would enhance the paper's quality.
Technical Quality: 2
Clarity: 2
Questions for Authors: What are the limitations of the accelerated algorithm? Does it introduce any approximations or trade-offs in performance or accuracy?
Have you explored the application of KrwEmd to other poker variants beyond Numeral211? Are there any specific challenges or adaptations needed for different game settings?
How sensitive is KrwEmd to the choice of hyperparameters (w0, w1, w2)? Is the algorithm robust to different hyperparameter configurations?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors acknowledge the computational complexity of clustering with EMD and introduce an accelerated algorithm. However, the paper lacks a dedicated section addressing the limitations of the proposed methods and the accelerated algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive evaluation of my work. Your feedback is very constructive and helpful.
To address the weaknesses you pointed out, I will make the following improvements:
You and other reviewers pointed out that my paper is not very easy to read. I acknowledge this issue and appreciate the feedback. Specifically, you mentioned that the high density of background information and the overly abstract and symbolic descriptions contribute to the difficulty. To address this, I will include some examples in the appendix to clarify which components of the Texas Hold'em game each symbol corresponds to.
I will add an intuitive description of the acceleration algorithm (mainly because the distance calculations for k-POI classes given a centroid with all points are repeatedly applied, with the acceleration algorithm avoiding redundant EMD distance calculations). Additionally, I will include a case study demonstrating how the distance calculation for a centroid to all inputs is reduced in the acceleration algorithm, and provide a more accurate complexity comparison. Regarding the limitations of the acceleration algorithm, there are some considerations I need to mention. We noted that this acceleration algorithm is designed to run in a distributed environment (although it can also run in a non-distributed environment, it allows for distributed splitting in centroids). In a distributed environment, the algorithm cannot globally select the most suitable top k initial centroids during the initial centroids selection process. You mentioned a similar issue in question section, and I will discuss this point further later. However, apart from the initial centroids selection process, the results of the clustering process of the acceleration algorithm are equivalent to those of the non-accelerated algorithm, whether in a distributed or non-distributed environment. In a non-distributed environment, the acceleration algorithm is completely equivalent to the non-accelerated algorithm.
Your suggestions on the algorithm description are very important, and we will take them seriously. In the revised version, I will provide a more readable algorithm description. Additionally, if you have any specific suggestions for improvements, I would greatly appreciate them.
Answer to the questions:
- What are the limitations of the accelerated algorithm? Does it introduce any approximations or trade-offs in performance or accuracy?
This acceleration algorithm does not introduce approximations; its behavior in distance calculations and clustering is identical to the original Kmeans++ algorithm. The acceleration algorithm makes a compromise in the selection of initial centroids compared to the Kmeans++ algorithm (however, this issue exists in almost all distributed Kmeans++ algorithms, which is another problem). Our approach to this scenario is for each distributed node (a total of m nodes) to provide (k/m) initial centroids. When confirming a new local centroid, after updating the minimum distances from the points in each local dataset to the selected local centroids, we use these distances with the softmax operator to probabilistically select the next centroid, instead of simply choosing the point with the largest minimum distance as the new centroid. The benefits of this approach are evident.
- Have you explored the application of KrwEmd to other poker variants beyond Numeral211? Are there any specific challenges or adaptations needed for different game settings?
The choice of this environment was based on the following considerations: Previously, the toy environment was Flop Hold'em (which limits two-player limit Texas Hold'em to 2 phases). The issue with this environment is that with only two rounds, it cannot reflect the deficiencies of POI abstraction (the number of information sets has a spindle-shaped rather than triangular distribution). Therefore, we needed an environment with at least 3 phases.
The next problem is that in environments with at least 3 phases, the computational cost of calculating the best response is very high. To create a graph showing exploitability changes with training epochs, we needed a simpler environment. Initially, we chose a game with a deal sequence of [2,1,1] using a standard deck, but the best response calculation was still slow in this environment. One variant of poker is the Royal Deck (which only includes [T, J, Q, K, A] × [♠, ♣, ♥, ♦], a total of twenty cards). However, with this setup and a deal sequence of [2,1,1], the number of recognizable LI is very small, making k-means clustering less convincing. Therefore, we designed a custom environment with 40 cards. This environment is ideal in terms of best response calculation time and the number of recognizable LIs.
- How sensitive is KrwEmd to the choice of hyperparameters (w0, w1, w2)? Is the algorithm robust to different hyperparameter configurations?
We have not explored this hyperparameter extensively, primarily because each experiment takes about a week more, limiting our exploration. The current conclusion is that setting (w0 > w1 > w2) yields better results, indicating that information from later phases is more important than information from earlier phases. However, further research is needed to determine the optimal hyperparameter settings.
Regarding the limitation section:
The main limitation of our algorithm actually comes from the scale of Texas Hold'em (the data volume is extremely large). However, in a distributed environment, we can utilize several weeks to obtain an abstraction (our distributed environment consists of a cluster of 10 servers, each with 96 logical cores). Although this computational cost is indeed very high, it is also acceptable because, in the process of constructing a high-performance Texas Hold'em AI, calculating the abstraction data is a one-time data preprocessing task. Spending over a month to obtain high-quality abstraction data is worthwhile. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper develops new hand abstraction techniques for Texas Hold'em-style games (in general: games with ordered signals), which fare better than previous methods in both the number of hands identified, and performance (exploitability) in a simplified version of the game.
Hand abstraction is a technique aiding the strategy construction in a Texas Hold'em, where concrete hands, or rather concrete signal infosets (i.e. "possible words" according to the information revealed so far), are replaced by abstract infosets, represented in an abstract feature space (here $\mathbb{R}^n$).
The core idea of the paper is to use the features of hands from previous rounds in the construction of the current round feature. More precisely, the paper investigates a simple method (KRWI = *k-recall winrate isomorphism*) of maintaining, at a given round, the collection of all potential-winrate isomorphims (PWI) features from previous $k$ rounds by concatenating them all together. PWI for an $n$-player game is a categorical probability distribution over $n+1$ events of a form *"this player outperformed exactly $l-1$ other players while losing to none"* for $l = 1, 2, \ldots n$ and *"this player lost to at least one player"*. Those distributions can be computed by a dynamic programming method. To reduce the cardinality of the space, the paper later clusters KRWI features with k-means using the Wasserstein distance, naming it the KrwEmd method.
All of the methods are benchmarked against currently used techniques that do not use historical information. Experiments find that KRWI identifies similar proportion of signal infosets as the previously used KROI. Using the metric of exploitability of the equilibrium (how it deviates from Nash equilibrium) of the strategy found by an imperfect-information game solver, authors find that 2-RWI-based approach performs almost the same as 2-ROI, and that KrwEmd outpefrorms previously used Ehs and PaEmd by a relatively large margin.
Strengths: The paper proposes a reasonable extension of the currently used techniques for hand abstraction in Texas Hold'e, and shows that the new idea beats SOTA. It does not shy away from introducing the reader to the relevant background in a rigorous way (which is what made it possible for me to even start reading it). Experiments show a meaningful improvement, and provide additional insights (such as the decreasing worth of historical information).
Weaknesses: From the perspective of someone not at all acquainted with the field of imperfect information games/games with ordered signals, the paper was quite hard to read and understand - even though (assuming that the authors agree with my summary), the contribution is a relatively straightforward idea.
The introduction was uninformative and confusing (I would recommend rewriting the whole second paragraph); the preliminaries, although presented in-depth and trying to be formal, also posed quite a few questions; sections 4 and 5 describing the main contribution lacked detail and justification (i.e. ideally I would like to see definition/theorem/proof style - otherwise the text is impossible to read for someone unfamiliar with the field), and the experimental setup is assuming a lot of background knowledge that was not explained neither in the main paper nor in the appendix (it was also difficult to gauge if the comparison between SOTA and the new approach was fair from the resources pov - the paper reports some numbers, but never an aggregated "memory/time used" for all methods).
Please see Questions below for a detailed explanation of what I found lacking or hard to understand.
Technical Quality: 2
Clarity: 2
Questions for Authors: Intro:
- What are the "clustering settings" referred to in line 66? (It's difficult to understand how important it is for the claim 'outperforms the SOTA')
Prelims:
- I think the definition of the game $\Gamma$ should include the initial distribution over signals, to start iterating the map $\varsigma
: \Theta \to \Delta(\Theta)$
- I did not understand how exactly the map $\varsigma$ interacts with the shape of the game tree given by $X, \tau$ and the order $\sqsubseteq$ on $\Theta$ - are we assuming that $\theta \sqsubseteq \theta'$ for all $\theta'$ in the codomain of $\varsigma(\theta)$ (dividing the signal space appropriately)? Are the terminal signals $\tilde{\Theta}$ take wrt to the order $\sqsubseteq$, or wrt to the "realisable" order of '$\theta$ is final if it can only ever appear as the final signal of any game trajectory'? Is the order $\sqsubseteq$ assumed to be tree-shaped? (line 12 of algorithm A2 requires a unique predecessor of $\vartheta$)
- Why is the order $\preccurlyeq$ assumed to be partial? Shouldn't it be total on $\tilde{\Theta}$?
- Are the survival status and the $\preccurlyeq$ relevant to the paper? Is the signal abstraction refinement relevant to the paper?
- Why is $\gamma$ provided in the definition of the game and not just derived from other structures?
- What happens to observations $O$ and the map $\varsigma$ in the signal abstracted game $\Gamma^{\alpha)?
- Line 121 - "such that the union of those" should be just "such that they"
- Is the criterion of the signal perfect recall for a game $\Gamma$ equivalent to saying that "players can be Markov"? I.e. that for any non-Markov (wrt to signals) strategy profile $\pi = (\pi_i, \pi_{-i}$) there exists a Markov (wrt to signals) strategy $\pi'_i$ such that $(\pi'_i, \pi_{-i})$ is equivalent to $\pi$?
- I think the signal space $\Theta$ has to be assume to be finite - otherwise, the definition of perfect recall breaks.
Winrate Isomorphism:
- What are the outcome-based features? (never explained, but used throughout this and next sections). What is POI?
- Line 179 - "an identical Winrate-based feature uniquely determines an abstracted signal infoset" - is that a definition of the abstracted signal infosets for winrate-based features? If so, this requires proof or justification (that they partition nicely, satisfy order conditions, interact with $\varsigma$ in the right way etc).
- The above question is even more relevant for KrwEmb, which uses a complicated clustering mechanism inside.
- Why is $\mathcal{D}$ an isomorphism?
- Line 196 - typo in "classify"
- What is the intuition behind constructing PWI in this particular way (i.e. with "lose at least to one other player"/"win with with $l-1$ players and tie with the rest")? Is this a natural choice? Why not "lose to $l$ players$" or some other metric?
- What is the River phase? What is HUNL&HULHE?
- It would be good to include at least the definition of the Wasserstein distance, instead of linking to Wikipedia
Experiments:
- What is "mb/g" in Figures 4 and 5?
- The plots do not show LI (potentially overshadowed?) and overshadow 2-RWI. I would recommend changing the style, since this is one of the few most important results in this section.
- Line 299 - "We gauge the performance over exploitability" - what does that mean?
- Line 331 - typo in "infosets"
- Line 336 - "the final number of abstracted infosets is set to" - are these numbers for Ehs, PaEmd and KwrEmd respectively? In what order?
- Line 338 - why does KrwEmd use so many parameters here, instead of just 3?
- If the costs of 1000x1427.7s, 12x11.2s and 96.7x341.4s are total costs, wouldn't it be more fair to set the hyper-parameters of the algorithms such that their total computational budget is approximately equal? Otherwise, it's difficult to judge the performance improvement.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing such a thorough review. Your diligence and responsibility are truly impressive, and I am very grateful for your positive evaluation of my paper and the highly constructive feedback. This is extremely helpful for me to improve my work, regardless of whether my paper is accepted for the conference. I would be honored to further introduce my work to you, answer any questions you may have, and correct the mistakes and inaccuracies you pointed out.
I will first address the issues you pointed out in the **Weaknesses** section.
You and the other reviewers mentioned that my writing tends to be somewhat opaque. I acknowledge and appreciate your critique. I attempted to use symbolic language to convey the background and methods. The advantage of this approach is that it allows for a relatively rigorous description of algorithms. For example, in my paper, I refer to the "k-recall winrate feature." If described in prose, this might look like, "We characterize the signal based on the results of the player's order with others derived from all terminal signals that can be exported. This feature has n+1 dimensions, representing the probability of n+1 different events, which are categorized as...". While this might generally convey the motivation of the work, it often ends up being ambiguous and confusing in detail. On the other hand, using symbolic language, as long as the reader is willing to spend time studying it, there is a higher likelihood of understanding the detailed algorithm I describe. However, I may have overly relied on this symbolic descriptive style and neglected the readability of the paper. Going forward, I will include more examples and diagrams to assist in the explanation, making the paper easier to read and less likely to deter readers.
In the **Weaknesses** section, you mentioned that the ideas presented in the paper are straightforward, which is correct. I am not sure if you meant that the idea seems somewhat trivial, potentially leading to a perception of lower innovation and contribution. Just to clarify, I would like to address this aspect briefly.
The main issue addressed in this paper is the significant information loss associated with methods like Ehs and PaEmd, as highlighted by Fu et al., due to their failure to utilize historical information[^1]. The community has struggled to provide abstraction methods that incorporate historical information (Fu et al.'s work cannot flexibly adjust the scale of abstraction, making it impractical for real-world applications). While the motivation behind the abstraction algorithm is relatively straightforward, our novel discovery is that winrate-based isomorphism results in less information loss compared to outcome-based isomorphism (we believe this finding is significant, akin to lifting a veil that has obscured the truth). Based on this phenomenon, we developed KrwEmd, the first practical abstraction algorithm that incorporates historical information. Since hand abstraction in the Poker AI community is relatively independent and previous work failed to take the advantage of historical information, our work can systematically improve the performance of Poker AI by replacing these modules, thereby making a comprehensive contribution to the entire field.
You provided many writing suggestions in the **Weaknesses** section, and we will take them seriously and make necessary revisions, including but not limited to: restructuring the entire second paragraph, carefully considering the content of the background knowledge section, and removing or simplifying parts that are less relevant to this paper while enriching parts that are highly relevant. We will add more details in Sections 4 and 5 (or supplement the relevant content in the appendix) and enhance the description of the experimental setup.
Regarding the fairness of computational resources in experimental comparisons, we can confirm that while KrwEmd consumes more computational resources during the preprocessing stage (constructing the KrwEmd abstraction compared to Ehs and PaEmd abstractions), this overhead is a one-time cost and will not recur during actual application. In the evaluation phase (i.e., during the actual AI competition, which is repeated in applications), KrwEmd will not introduce additional overhead. Using KrwEmd will systematically improve the AI's performance, and we will emphasize this point in the revised paper.
Due to the large number of questions you have raised, I am unable to address all of them in this thread. I will answer these questions in the comments, so please check them there.
# References
[^1]: Yanchang Fu, Junge Zhang, Dongdong Bai, Lingyun Zhao, Jialu Song, and Kaiqi Huang. Expanding the resolution boundary of outcome-based imperfect-recall abstraction in games with ordered signals. arXiv preprint arXiv:2403.11486, 2024.
[^2]: Finnegan Southey, Michael Bowling, Bryce Larson, Carmelo Piccione, Neil Burch, Darse Billings, and Chris Rayner. Bayes’ bluff: opponent modelling in poker. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 550–558, 2005.
[^3]: Andrew Gilpin and Tuomas Sandholm. Lossless abstraction of imperfect information games. Journal of the ACM (JACM), 54(5):25–es, 2007.
[^4]: Kevin Waugh. A fast and optimal hand isomorphism algorithm. In AAAI Workshop on Computer Poker and Incomplete Information, 2013.
[^5]: Jiefu Shi and Michael L Littman. Abstraction methods for game theoretic poker. In Computers and Games: Second International Conference, CG 2000 Hamamatsu, Japan, October 26–28 2000 Revised Papers 2, pages 333–345. Springer, 2001.
---
Rebuttal 2:
Title: The explanations of some confusing terms
Comment: Thank you for pointing out the typos, which will be corrected in the revised version. It seems that many of my terms and abbreviations were not clearly explained. In the revised version, I will include a glossary in the appendix. For now, I will provide preliminary explanations for the terms you found confusing.
- What are the outcome-based features? (never explained, but used throughout this and next sections). What is POI?
In line 164, we expand the abbreviation POI to "potential outcome isomorphism," which originates from the paper [^1]. Outcome-based features is a general term for k-recall outcome features and potential outcome features, introduced in the same paper.
To explain these concepts, we take the Leduc Hold'em environment[^2] as an example, focusing on potential outcome features and potential outcome isomorphism. Due to space constraints, we will not discuss k-recall outcome features. We denote a player's hand in the first phase as [x] and in the second phase as [x;y], where x is the hole and y is the board. Potential outcome features are equivalent to potential winrate features in the terminal phase. Before the final phase, a potential outcome feature describe a distribution of potential outcome features for the next phase (or, equivalently, a distribution of potential outcome isomorphisms). Potential outcome isomorphism refers to classifying hand categories using potential outcome features.
For example, in Leduc Hold'em, there are nine possible hand combinations in the final phase (phase 2): [J;J], [J;Q], [J;K], [Q;J], [Q;Q], [Q;K], [K;J], [K;Q], and [K;K]. However, their potential winrate (outcome) features fall into only three categories. Therefore, the potential outcome isomorphism in this phase consists of three classes:
Table.1
| POI | hands | (loss, draw, win) |
|:---:|:-------------------:|:-----------------:|
| 2-1 | [J;J], [Q;Q], [K,K] | (0, 0, 1) |
| 2-2 | [Q;K], [K;J], [K,Q] | (0.25, 0.25, 0.5) |
| 2-3 | [J;Q],[J;K],[Q;J] | (0.75, 0.25, 0) |
In the first phase, there are three possible hand combinations: [J], [Q], and [K]. Their potential outcome features also fall into three distinct categories. Therefore, the potential outcome isomorphism in this phase consists of three classes:
Table.2
| POI | hands | (2-1, 2-2, 2-3) |
|:---:|:-----:|:------------------------------------------:|
| 1-1 | [J] | (0.2 {1[J;J]}, 0, 0.8 {2[J;Q]+2[J;K]}) |
| 1-2 | [Q] | (0.2 {1[Q;Q]}, 0.4 {2[Q;K]}, 0.4 {2[Q;J]}) |
| 1-3 | [K] | (0.2,0.8 {2[K;J]+2[K;Q]},0) |
- Why is an isomorphism?
Using the term isomorphism, we aim to express a classification equivalence. For example, in Table.1, the hands [J;J] and [Q;Q] have the same potential outcome (winrate) features, and therefore, they are isomorphic in terms of potential outcome (winrate) features.
- What is the River phase? What is HUNL&HULHE?
HUNL stands for Heads-Up No-Limit Texas Hold'em, and HULHE stands for Heads-Up Limit Hold'em (see line 225). The rules regarding hands are the same for both HUNL and HULHE, which is why I group them together. In both HUNL and HULHE, the game is divided into four phases: Preflop (phase 1), Flop (phase 2), Turn (phase 3), and River (phase 4).
- What is "mb/g" in Figures 4 and 5?
We indeed overlooked the explanation of this unit. mb/g means milli-blind per game, which can be understood as the number of blinds won or lost per thousand games. In HUNL and HULHE, a similar unit is mbb/g (milli-big-blind per game). However, in Numeral211 Hold'em, we do not distinguish between small and big blinds.
---
Rebuttal 3:
Title: Answers for the questions
Comment: ## Intro
- What are the "clustering settings" referred to in line 66? (It's difficult to understand how important it is for the claim 'outperforms the SOTA')
By "clustering settings," we refer to the approach of using clustering techniques for hand abstraction, with representative techniques being Ehs and PaEmd. Although clustering-based abstraction has become mainstream, there are still other approaches, such as the Lossless Isomorphism methods developed by Gilpin and Sandholm and Waugh[^3][^4], as well as rule-based abstraction methods based on human expertise[^5]. This confusion arises from my choice of words; a more accurate expression would be "In the context of clustering-based abstraction algorithms,...".
## Prelims
- I think the definition of the game should include the initial distribution over signals, to start iterating the map.
The distribution of signals is fixed and known to everyone, but you cannot have the opponent's observation of the given signal (opponent's hole), and therefore cannot determine the distribution of subsequent signals. For example, in Leduc Hold'em, if a player receives the hole [J] and knows that the opponent's hole is [Q], they can accurately determine that the probabilities of the board being [-; J], [-; Q], and [-; K] are 0.25, 0.25, and 0.5, respectively. However, since the player cannot know the opponent's hole, they can only estimate a prior probability: the board has a 0.2 probability of being [-; J], a 0.4 probability of being [-; Q], and a 0.4 probability of being [-; K]. This distribution is typically known in poker games, similar to looking up a table, so there is no need to discuss the initial distribution separately.
- I did not understand how exactly the map $\\varsigma$ interacts with the shape of the game tree given by $X, \\tau$ and the order $\\sqsubseteq$ on $\\Theta$ - are we assuming that $\\theta \\sqsubseteq \\theta^{\\prime}$ for all $\\theta^{\\prime}$ in the codomain of $\\varsigma(\\theta)$ (dividing the signal space appropriately)? Are the terminal signals $\\tilde{\\Theta}$ take wrt to the order $\\sqsubseteq$, or wrt to the "realisable" order of ' $\\theta$ is final if it can only ever appear as the final signal of any game trajectory'? Is the order $\\sqsubseteq$ assumed to be tree-shaped? (line 12 of algorithm A2 requires a unique predecessor of $\\vartheta$ )
Signals can indeed be understood as a tree structure because we require each signal to have a unique predecessor. In Leduc Hold'em, a possible signal is [J, Q; K], which is a second-phase signal indicating that in the first phase, player 1 received the hole J and player 2 received the hole Q, and in the second phase, the board K was dealt. The unique predecessor of this signal is [J, Q], a first-phase signal indicating that player 1 received the hole J and player 2 received the hole Q. Symbolically, we have [J, Q] $\\sqsubseteq$ [J, Q; K].
$\varsigma$ is only related to the current signal and is independent of $X$ and $\\tau$. As illustrated in the above example, $\\varsigma([J, Q]) = \\{ 0.25[J, Q; J], 0.25[J, Q; Q], 0.5[J, Q; K] \\} $.
The description of terminal signals is indeed not very detailed, which might confuse readers unfamiliar with poker game rules. We can illustrate this with an example: in Leduc Hold'em, terminal signals are all the second-phase signals, and an order can be defined on these signals. For instance, we have $\\succcurlyeq ([J, Q; K], 1, 2) = false$ and $\\succcurlyeq ([J, Q; K], 2, 1) = true$, meaning that for the terminal signal [J, Q; K], the order of player 1 is lower than that of player 2.
Are all signals at terminal nodes terminal signals? No. For example, if a player folds in the first phase, it will also create a terminal node, but there is no need to compare orders at this terminal node. Are all terminal signals at terminal nodes? Not necessarily; the terminal signal [J, Q; K] can also appear at internal nodes in the second phase (i.e., when player 1 and player 2 are still making decisions). If we consider signals as a tree structure, we can assert that signals without successors are terminal signals, while those with successors are not. Terminal signals are the leaf nodes of the signal tree.
- Why is the order $\\preccurlyeq$ assumed to be partial? Shouldn't it be total on $\\tilde{\\Theta}$ ?
You are right, that was a typo. $\\succcurlyeq$ is a total order on terminal signals (i.e. $\\tilde{\\Theta}$), but what I meant is that it is a partial order on $\\Theta$. The domain of $\\succcurlyeq$ projected onto $\\Theta$ is $\\tilde{\\Theta}$ exactly.
---
Rebuttal 4:
Title: Answers for the questions
Comment: - Are the survival status and the relevant to the paper? Is the signal abstraction refinement relevant to the paper?
Survival status is part of the definition of ordered signal games. It determines at which nodes the order of signals will affect the player's payoff, thus it is relevant to our discussion. Although it might seem less important when studying signal abstraction alone, omitting this concept might lead to misunderstandings regarding the orthogonal relationship between the signals and public tree.
Signal abstraction refinement is a relationship between signal abstractions that describes the inclusion relationship in terms of the number of infosets (information sets) identified. In our paper, we use the conclusion that POI is a signal abstraction refinement of the mainstream abstraction algorithms Ehs and PaEmd (see line 311). Applying this conclusion, we only need to demonstrate that if KrwEmd performs better than POI while identifying the same number of information sets as POI, then KrwEmd is a stronger abstraction algorithm than Ehs and PaEmd.
- Why is $\gamma$ provided in the definition of the game and not just derived from other structures?
In imperfect information games, there is no concept of phases. However, in games with ordered signals, we define phases as the number of times the chance player (nature) takes actions during the game. The definition of phases was first formally described by [^3], and a more detailed definition was provided by [^1]. With the definition of phases, the problem of signal abstraction gains a more mathematical description.
- What happens to observations $O$ and the map $\varsigma$ in the signal abstracted game $\Gamma^{\alpha}$?
We can use the Table.1 to answer this question. Let's assume $\alpha = (POI, \varTheta_2)$ (i.e., player 1 uses POI abstraction while player 2 does not use any abstraction). $O_2([J,Q;K]) = \\{[J,Q;K], [Q,Q;K], [K,Q;K]\\}$ and $O_1([J,Q;K]) = \\{[J,Q;K], [J,J;K], [J,K,K]\\}\cup \\{[J,J;Q], [J,Q;Q], [J,K;Q]\\} \cup \\{[Q,J;J],[Q,Q;J], [Q,K;J]\\}$。In another representation, we can write $O_2([J,Q;K]) = \\{[-,Q;K]\\}$ and $O_1([J,Q;K]) = \\{[J,-;K]\\}\cup \\{[J,-;Q]\\} \cup \\{[Q,-;J]\\} = POI_{2-3}$.
The $\varsigma$ in $\tilde{\Gamma}$ and $\tilde{\Gamma}^{\alpha}$ is the same; signal abstraction only affects observation.
- Is the criterion of the signal perfect recall for a game $\Gamma$ equivalent to saying that "players can be Markov"? I.e. that for any non-Markov (wrt to signals) strategy profile $\pi=\left(\pi_i, \pi_{-i}\right)$ there exists a Markov (wrt to signals) strategy $\pi_i^{\prime}$ such that $(\pi^\prime_{i}, \pi_{-i}) \equiv \pi$?
In imperfect information games, the intuitive meaning of perfect recall is remembering all the information observed by the players throughout the game. In contrast, in stochastic games, the Markov property indicates that the future outcome of the game depends only on the current state (i.e., the current state includes all necessary information). The two concepts can indeed be compared. Regarding the strategy comparison you mentioned, I don't quite understand its specific meaning.
- I think the signal space $\Theta$ has to be assume to be finite - otherwise, the definition of perfect recall breaks.
You are right. Although we aim to discuss the signal abstraction problem more generally, for now, $\Theta$ should be restricted to a finite set.
---
Rebuttal 5:
Title: Answers for the questions
Comment: ## Winrate Isomorphism
- Line 179 - "an identical Winrate-based feature uniquely determines an abstracted signal infoset" - is that a definition of the abstracted signal infosets for winrate-based features? If so, this requires proof or justification (that they partition nicely, satisfy order conditions, interact with $\varsigma$ in the right way etc).
We can argue this intuitively, albeit not rigorously. Each signal infoset has a unique potential winrate feature or k-recall winrate feature (obtained by enumerating all opponent holes and then calculating the statistics, a deterministic and non-random process; note that these features are defined on signal infosets, not on individual signals). We then arbitrarily define signal infosets with the same potential winrate feature or k-recall winrate feature as equivalence classes, i.e., an abstracted signal infoset. Therefore, each signal infoset will be in one and only one abstracted signal infoset.
- The above question is even more relevant for KrwEmb, which uses a complicated clustering mechanism inside.
You can think of it this way: KROI is an input to a K-Means problems, and the K-Means algorithm will assign each input data point to a unique class. In the previous question, we also argued that each signal infoset will definitely be assigned to one of KROI's class. Therefore, KrwEmd will also assign each signal infoset to a unique abstracted signal infoset.
- why is $\mathcal{D}$ an isomorphism
As mentioned above, we use isomorphism to represent the equivalence with respect to a certain feature. Signal infosets determined to be isomorphic by $\mathcal{D}$ have the same certain feature, such as potential winrate/outcome feature or k-recall winrate/outcome feature.
- What is the intuition behind constructing PWI in this particular way (i.e. with "lose at least to one other player"/"win with with $l-1$ players and tie with the rest")? Is this a natural choice? Why not "lose to $l$ players" or some other metric?
Since Texas Hold'em is a winner-takes-all game, during the showdown, if you lose to any other player who reaches the showdown, you will lose your chips (though it may not be all your chips due to the all-in and side pot rules, but we are not discussing such complexities here. In academic discussions, the simplified problem typically assumes that all players bring the same amount of usable chips to the game. In this case, if your hand is lower than that of any other player at the showdown, you will lose all your chips). Therefore, as long as your hand is not the highest, it means you lose. A tie occurs only when a player's hand is the highest but not the sole highest hand.
## Experiments
- Line 299 - "We gauge the performance over exploitability" - what does that mean?
We use the metric of exploitability to evaluate the performance of each signal abstraction algorithm.
- Line 336 - "the final number of abstracted infosets is set to" - are these numbers for Ehs, PaEmd and KwrEmd respectively? In what order?
The word "final" is a typo that was mistakenly added and it introduced ambiguity. (100, 225, 396) As described in line 320, phase 1 sets 100 abstracted signal information sets, phase 2 sets 225 abstracted signal information sets, and phase 3 sets 396 abstracted signal infosets.
- Line 338 - why does KrwEmd use so many parameters here, instead of just 3?
We apply KrwEmd in the Numeral211 Hold'em environment. Clustering is not needed in the first phase due to the small number of singal infosets, so we directly use LI. We cluster the signal infosets for the second and third phases separately. Clustering in the second phase needs to consider two phases, phase 1 and phase 2, requiring two parameters $(w_{2,0}, w_{2,1})$. Clustering in the third phase needs to consider three phases, phase 1, phase 2, and phase 3, requiring three parameters $(w_{3,0}, w_{3,1}, w_{3,2})$. A total of five parameters are needed.
- If the costs of 1000x1427.7s, 12x11.2s and 96.7x341.4s are total costs, wouldn't it be more fair to set the hyper-parameters of the algorithms such that their total computational budget is approximately equal? Otherwise, it's difficult to judge the performance improvement.
As we mentioned in the **Weakness** section, signal abstraction is a data preprocessing process that is computed only once in the entire Poker AI construction. As long as it is computable, even if it takes months to compute high-quality hand abstraction data, it is worth the effort. These hand abstraction data can then be used for AI iterations with algorithms such as CFR, as well as for real-time strategy solving. We acknowledge that our algorithm has a very high computational cost, mainly because the input data considering historical information is several orders of magnitude larger than the input data that ignores historical information. However, this results in much higher quality abstraction data compared to previous algorithms.
---
Rebuttal Comment 5.1:
Title: Response
Comment: I thank the authors for their thorough response and addressing all of the points raised in my review. I think the paper can be significantly enhanced by incorporating those explanations into the text, as promised by the authors.
Two minor points regarding the above:
- In the question about $\gamma$, I meant that, as far as I understand, it is possible to deduce it from $\rho$ and $\tau$ - so I am puzzled as to why it is (seemingly unnecessarily) included in the game definition.
- Even though for poker specifically, the initial distribution is known, an abstract definition of a game with imperfect information still requires it to be specified.
I decided to keep my original rating.
---
Reply to Comment 5.1.1:
Title: Thanks for review
Comment: During our discussion, we identified another point that could enhance the rigor of the paper, specifically related to your question about $\gamma$.
$\gamma$ is indispensable in the definition, as it is one of the key concepts that distinguishes imperfect information games from sequential signaling games. Our work on hand abstraction is also based on $\gamma$ in games with ordered signals. However, your point is also valid—$\gamma$ can indeed be deduced from $\tilde{\rho}$ and $\tilde{\tau}$ (in fact, its definition originates from this). The issue lies in whether $\gamma$ appears in other components of the definition of $\tilde{\Gamma}$. If it does not, then it could be excluded from the definition and treated as an extended concept. However, if $\gamma$ does appear in other components of $\tilde{\Gamma}$, then it should be included in $\tilde{\Gamma}$. Our original intention was the latter, but this may not have been clearly reflected in the current version. We are considering adjusting the definition of $\Theta$ to $\Theta = (\Theta^{(1)}, \dots, \Theta^{(\max{\mathfrak{r}})})$, so that $\gamma$ would appear in other parts of the definition of $\tilde{\Gamma}$. This adjustment would also allow us to avoid the need for an additional definition of the terminal signal set $\tilde{\Theta}$, as we could directly use $\Theta^{(\max \mathfrak{r})}$ (we have always felt that introducing $\tilde{\Theta}$ in the introduction was somewhat inelegant).
Regarding your question about the initial state, after careful consideration, we agree that your point is well-founded. Introducing an initial state does enhance the generalizability of the model, allowing it to be applied to different initial states in poker games. For Texas Hold'em, we can define $x^o = (\tilde{x}^o, \phi)$, where $\tilde{x}^o$ is the root public node, and $\phi$ indicates that no cards have been dealt.
Once again, we sincerely appreciate your thorough and detailed review. Your suggestions have prompted us to think more deeply and improve our paper. Furthermore, our paper is currently in a borderline state with a reasonable chance of acceptance. If you could give us one more score, it would be greatly helpful. We are confident in the contributions of this work, as it could significantly enhance the performance of AI in Texas Hold'em (after all, previous hand abstraction methods have resulted in significant information loss). If this paper could be accepted sooner, it would promptly impact the field (since work in hand abstraction has stagnated for the past decade). We sincerely hope you can lend us your support.
Best regards,
the authors | null | null | null | null | null | null |
FineStyle: Fine-grained Controllable Style Personalization for Text-to-image Models | Accept (poster) | Summary: The paper tackles the problem of single-shot fine-tuning of text-to-image models for diverse subject-driven renditions. It first discusses the problem of image-text alignment present in the few-shot fine-tuning paradigm for text-to-image models. It then presents FineStyle. More specifically, it introduces
* a novel data augmentation technique to synthetically increase the number of image-text pairs from just a single pair
* concept-oriented masking during the (parameter-efficient) fine-tuning phase
Strengths: * The paper identifies the problems present in the existing few-shot fine-tuning frameworks for text-to-image models.
* The proposed method is simple and is well demonstrated.
* The derivation of segmentation maps from cross-attention maps for concept-oriented masking is beautiful.
Weaknesses: * Minimal details available on the pre-trained model being used. The paper just mentions MUSE. It didn't mention its capacity. Similarly, it didn't provide any details on the VQGAN being used.
* Lack of references provided to the works that leverage parameter-efficient fine-tuning for controlled generation in the domain of text-to-image models. Some examples include [1], [2], and [3]. I believe this is relevant since the authors use parameter-efficient fine-tuning as well.
* FineStyle was demonstrated for masked models like MUSE. But the image generation community doesn't use MUSE that much. So, I am a little concerned about its adoption at scale. It would be very nice if the authors could also showcase some results obtained from applying FineStyle to open text-to-image models such as [4] and [5].
* Timing information would have been nice to include as this study aims to avoid the limitations of the iterative fine-tuning scheme introduced in StyleDrop.
## References
[1] https://github.com/cloneofsimo/lora
[2] Using LoRA for Efficient Stable Diffusion Fine-Tuning, https://huggingface.co/blog/lora
[3] Implicit Style-Content Separation using B-LoRA, https://arxiv.org/abs/2403.14572
[4] SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis, https://arxiv.org/abs/2307.01952
[5] PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation, https://arxiv.org/abs/2403.04692
Technical Quality: 4
Clarity: 3
Questions for Authors: ## Suggestions
* Could the Figure 1 be modified as follows? Each row would have a starting image (style image can be overlapped on it as it is currently), and their variants would be obtained by testing different aspects such as color, compositionality, etc.
* Figure 1 could also hint about the augmentation scheme being used to upscale the dataset. Just writing a single image-text pair (as is the case right now) doesn't sound technically right, especially after reading the nice data augmentation introduced in the paper.
* In Figure 2, for the bottom row, the content leakage isn't very evident to my eyes. Is it possible to include a stronger example that immediately establishes the point, like the former row?
* I included this point in the "Weaknesses" too but for clarity, I will include it here as well. The parameter-efficient fine-tuning scheme sounds extremely similar to LoRA [1]. It might be worth clarifying the differences if there are any. Additionally, I think it would be sensible to include the concurrent works that make use of parameter-efficient fine-tuning in the context of text-to-image generation (provided some references in "Weaknesses").
*
## Questions
* There is human evaluation involved yet checklist pts. 14 and 15 are written as NA. Is this expected?
* Is using the pre-trained MUSE model sufficient to extract the segmentation maps from images coming from non-natural domains? If so, it might be worth mentioning it with a few examples.
* Could the image-text misalignment problem be mitigated if the text encoder was also fine-tuned? Since MUSE uses T5-xxl and it already supports longer prompts, I believe this is worth trying to compare. Additionally, T5-xxl displays fine-grained understanding of text, as shown in Imagen [2].
* Is FineStyle particularly effective for style images with multiple concepts? How about simple style images?
* 145 - 151: I like the approach. However, it appears tedious. Have the authors explored automating this using an LLM? If so, I would appreciate some results.
* Have the authors tried using/re-purposing the prior-preservation loss introduced in DreamBooth [3] in eqn. 4?
* As per equation 4, it seems like we need to keep two models in memory for sampling. This appears to be memory-expensive. Or do the authors just use one base model and enable and disable the adapter layers when needed (reference [4])?
* In Figure 4, in the last row, "Christmas decoration" is still present in the first example. Why is that? Is that a failure case?
* How is the notion of "unwanted concepts" implemented in practice with FineStyle? Is it similar to negative prompting (implemented through classifier-free guidance)?
## References
[1] LoRA: Low-Rank Adaptation of Large Language Models, https://arxiv.org/abs/2106.09685.
[2] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, https://arxiv.org/abs/2205.11487.
[3] DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation, https://arxiv.org/abs/2208.12242.
[4] StackLLaMA: A hands-on guide to train LLaMA with RLHF, https://huggingface.co/blog/stackllama#reinforcement-learning-from-human-feedback.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No comment here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful comments and suggestions! We appreciate them and will make the necessary revisions to ensure our work is presented with better accuracy and completeness.
W1: As detailed in Section 3 of our paper, Muse employs a cascaded design of generative modules. FineStyle only adapts the lowest resolution model, which operates at a resolution of 256x256 and comprises 3B parameters. This model features a masked generative image transformer that serves as the image decoder and a VQGAN for converting images from pixel to latent space and vice versa. The VQGAN encodes a 256x256 image into a sequence of 16x16 latent visual tokens for the image decoder to consume.
W2: Thank you for your valuable feedback regarding the citation of related works. We acknowledge the oversight in not sufficiently referencing prior studies that have utilized parameter-efficient fine-tuning in the domain of text-to-image models.
Unlike the typical application of LoRA layers across each transformer block of the image decoder, our method employs a singular main LoRA layer but modifies it with distinct biases for each transformer block. This adaptation not only reduces the number of trainable parameters but also aims to mitigate potential overfitting issues, a critical aspect in maintaining model generalizability. We will carefully revise the relevant sections of our paper to include these references and articulate the different design choices of our adapter compared to LoRA.
W3: See Figure 2 of the attached PDF. We showcase some results from SD3.
W4: Although iterative human feedback can enhance style fine-tuning performance by expanding the training dataset with human-annotated synthetic images, this approach is cumbersome, risky, and time-consuming. The limitations of human feedback include:
Human intervention during the training process is necessary to curate a set of synthetic images, which incurs significant labor costs.
The effectiveness of using synthetic data for iterative human feedback depends critically on the quality of the synthetic images produced by the style fine-tuned model. If the images do not meet a certain standard, performance may deteriorate. In practice, even the most advanced text-to-image models can struggle to accurately replicate the compositions of certain reference style images.
The time required for labeling through human feedback is contingent on the quality of the synthetic training data; poorer quality requires more time.
Regarding timing, StyleDrop, with one round of iterative human feedback, takes at least three times as long as FineStyle with additional labor costs.
Q1: The answer to checklist point 14 should have been Yes. We apologize for the mistake. For checklist point 15, given that the paper is approved for submission, I assume that the inclusion of human preference study is also approved.
Q2: As demonstrated in Figure 3(d), using cross-attention weights of the pre-trained model as segmentation map might not be accurate like the bottom right corner outliers. We do notice that it can get worse for non-natural domains, possibly resulting in reduced fine-grained controllability.
We expect to explore the potential of fine-tuning a pre-trained model to improve its cross-attention alignment between visual and textual tokens corresponding to the same concept as future works, which could land a better foundation to answer this question.
Q3: This is a valid point, since outputs of T5-xxl are fed right into cross-attention layers modified by our kv adapter. However, the downside is T5-xxl actually contains more parameters than Muse image decoder, 4.6B v.s. 3B. Tuning the full text-encoder is computation prohibitive for most users. We would like to study light-weighted adapter for text-encoders in future works.
Q4: FineStyle can effectively process simple style images based on straightforward descriptions. For example, the second and third rows of Figure 12 show the styles “rainbow color flowing design” and “bookshelf in watercolor painting”, which are simple concept compositions.
Q5: See Author Rebuttal and Figure 3 of the attached PDF.
Q6: We tried using a style descriptor, e.g. “in watercolor painting style.”, as a prior-preservation anchor. It can mitigate concept collapsing to some extent, but doesn’t show concrete evidence of improving fine-grained style controllability. Unlike prior-preservation loss that requires auxiliary training data, FineStyle does not require any other image but the one style reference image.
Q7: Correct, the FineStyle adapter operates as a standalone layer with its parameters. It is designed to be dynamically attached or detached from the base model as needed.
Q8: We apologize for the confusion with the use of the double negative for listing “tree WITHOUT christmas decoration” under unwanted concepts. We will rephrase this to be listed as “bare tree” under unwanted concepts. The first example is NOT a failure as based on the synthesis prompt we expect generated trees to contain a form of christmas decorations. In contrast, StyleDrop mode collapses to the exact pine tree of the reference image.
Q9: We don’t use negative prompting during inference. The "Unwanted concept" column in Figure 4 is designed to demonstrate concepts that are implicitly inferred from the synthesis prompt. Our preliminary experimental findings have shown that negative prompting at inference time does not effectively counter mode collapse, which actually prompts us to seek an efficient fine-tuning strategy that can better disentangling style and visual elements of a subject.
---
Rebuttal Comment 1.1:
Comment: Glad that LLMs helped! Looking forward to seeing this section in the main paper. Additionally, it might have been better to try it out with an open vLLM.
## [W1]
I agree that Section 3 has details on MUSE, but I think there is room for improvement. For example, you could specify the MUSE variant you used in your work.
3B parameters for a model for generating 256x256 images seems significantly high given other models (diffusion) such as SDXL [1], Stable Diffusion 3 [2], etc., can operate on 1024x1024 resolution but have a significantly lower number of parameters. I think this questions the use of such as a pre-trained backbone because it hinders applicability. So, it might have been prudent to consider a more realistic model justifying the operating resolution and their parameter count. Furthermore, I think this is a fair comparison with latent diffusion models because your work also features a VQGAN operating on the latent space.
(I stand with my general point here i.e., the use of a better backbone with better applicability compared to MUSE. I acknowledge that I have seen Figure 2 of the Rebuttal Document that shows SD3 [3] results.)
Was the original VQGAN from [4] used? If not, I think the differences deserve a place in the main text.
## [W2]
Agreed with the overall point conveyed. Thank you.
## [W3]
Could this be reflected more quantitatively in the paper? I think this aspect of your work is equally important to highlight more explicitly.
For the answers provided to my questions, I would like to suggest the authors to consider including some of those details in the paper. For example, Q7.
## References
[1] Podell et al., 2023, SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis
[2] Esser et al., 2024, Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
[3] Esser et al., 2020, Taming Transformers for High-Resolution Image Synthesis
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer
Comment: We thank the reviewer again for supporting this work and their insightful suggestions in the first response that helped us identify new areas of improvement: leveraging open vLLMs such as GPT4o for automated sub-prompt generation. We believe FineStyle will offer greater benefits to the AI/art community by integrating these additional technologies. We also thank the reviewer for the new comments and suggestions on the details and backbones, we will answer them as follows and also integrate them into our revision!
## More details on Muse
### Why we used MUSE?
We used MUSE for this work because our major baseline StyleDrop (published NeurIPS 2023) was built on top of it. We leveraged the same backbone to ensure our study is comparable and thorough. In the StyleDrop paper, they did not use other open-source t2i backbones and also there is no official codebase for us to compare with.
A caveat about this line of research topic the reviewer might already be aware is that a series of such influential works like DreamBooth, StyleDrop were never open-sourced and they were also built entirely on close-sourced models like Imagen and MUSE which limits the impacts and follow-up works in the community.
#### W1(1): I agree that Section 3 has details on MUSE, but I think there is room for improvement. For example, you could specify the MUSE variant you used in your work.
Since the official Muse model we obtained for this research is not openly accessible, it was our oversight not clarifying the details enough in the submission and rebuttal. The Muse model variant we used is “tigg_v2_5_3b_512”. It has several sub-models: a pair of low-res and high-res VQGAN operating at 256x256 and 512x512 resolution respectively, a base transformer for decoding low-res image tokens at 256 and a superres transformer for translating low-res image tokens to high-res at 512. All the sub-models comprise the 3B parameters, while the low-res base transformer contains the bulk of the parameters. FineStyle only adds adapters to the low-res base transformer, and also we ensured the exact same configuration for our experiments related to StyleDrop for fair comparison and better understanding of the effects of our proposed method.
#### W1(3): Was the original VQGAN from [4] used? If not, I think the differences deserve a place in the main text.
We follow the exact setup of StyleDrop/MUSE and both VQGAN models used were trained on internal text-to-image datasets, which differ from those used in [3]. We will explicitly note this difference in the main text.
Overall, We will definitely include more relevant details from the answers to review questions in the revised version of this paper or its supplementary.
## More on Backbones and Our Commitment to Open-Source
#### W1(2, modified): add comparisons with latent diffusion models
Yes, we agree! While our baseline method StyleDrop did not compare with latent diffusion models, we strongly agree with the reviewer’s suggestion and plan to include a comparison with latent diffusion models in the revised version. We are currently actively working on implementing FineStyle with the latest open text-to-image models, including SD3 [2] and Flux [1] - for SD3 we were able to show our initial results in the rebuttal PDF. In future revisions, we will open-source our implementation and include additional quantitative results, such as CLIP scores and human evaluations.
Lastly, we want to emphasize our commitment to open-source, which is different from some related prior works including StyleDrop. We are committed to open-source this project to contribute to the AI community, and that is exactly the reason why we spent a significant amount of extra time and effort in the summer after the NeurIPS submission deadline to rewrite the entire code-base and run many experiments based on the most recent state-of-the-art open-source models like SD3 [2] (released in July and published in the summer in ICML 2024, not fully open on training codes). We are also exploring the most recent powerful models Flux series [1] (released August 1, no training code) and we hope to incorporate those open models in the open-source project to contribute to the AI + Art/Design community. To this end, we fully agree with the reviewer on the importance of utilizing more applicable and openly accessible backbones for implementing and evaluating FineStyle and we hope our efforts here are evident and will gain the support of the reviewers.
(In any case even if this work is unfortunately rejected by NeurIPS, we may still wrap up this project and open-source this project before we move on to other research topics since we believe in the value of this work to the community.)
Thank you!
[1] Flux, https://github.com/black-forest-labs/flux
[2] Esser et al., 2024, Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
[3] Esser et al., 2020, Taming Transformers for High-Resolution Image Synthesis
---
Rebuttal 2:
Comment: I acknowledge your reply and truly appreciate the efforts you took to explain each and every point in detail. I truly respect your willingness to make your work open so that the entire community can benefit from it. I think it's safe to say that it was because of open-source codebases like LDM [1], the diffusion community, in particular, progressed very rapidly over the past few years. So, your commitment towards open-source is quite endearing and I am sure it will be helpful.
I have nothing further to add and I offer the authors my best wishes.
References
[1] https://github.com/CompVis/stable-diffusion | Summary: This paper proposes a few-shot fine-tuning paradigm called FineStyle for controllability-enhanced style personalization that requires only a single reference image. A concept-oriented data scaling scheme and a parameter-efficient adapter are two key components of the proposed method to achieve this goal.
Strengths: 1. This paper is well-motivated and well-organized.
2. This idea of scaling the number of training images by creating multiple sub-images and corresponding separate concepts is interesting and reasonable.
3. The controllability of the proposed method is good.
Weaknesses: 1. Only one baseline model (i.e., StyleDrop) is adopted to compare with the proposed method. Many highly related SOTA methods are not introduced or compared in this paper, such as DreamStyler [1*], ControlStyle [2*], StyleAligned [10], and IP-Adapter [40]. They can perform the same task as the proposed method. \
[1*] DreamStyler: Paint by Style Inversion with Text-to-Image Diffusion Models. AAAI 2024. \
[2*] ControlStyle: Text-Driven Stylized Image Generation Using Diffusion Priors. ACM MM 2023.
2. The proposed method requires very detailed and complex text description for each input style reference image, which is inconvenient and needs human intervention. Moreover, as the authors say, ‘it is often challenging to faithfully describe the visual look of a style in pure text form.’ In contrast, many other methods do not require additional text description of the input style image, such as DreamStyler [1*], ControlStyle [2*], StyleAligned [10], and IP-Adapter [40].
3. The proposed method is inferior to previous method StyleDrop in style learning, which can be observed from both qualitative results and quantitative results.
4. Detailed information about human evaluation is not provided. How many image-text pairs and participants are involved in the conducted human evaluation? In addition, the sum of the user preference proportions reported in Table 2 is not 1.
5. I am curious about the running time of the proposed method. Is it comparable or superior to previous methods in speed at inference?
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see **Weaknesses**.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The limitations and potential negative societal impact are discussed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, particularly regarding the need for comparison with extra baselines. We appreciate the opportunity to add new baseline results and clarify the advantage of our method.
W1: See Figure 1 of the attached PDF. We include results of DreamStyler, StyleAligned, and IP-Adapter. To the best of our knowledge, there are no openly available codes for ControlStyle. We use SDXL as the base model for StyleAligned and IP-Adapter, while DreamStyler only supports SD 1.5.
Both StyleAligned and IP-Adapter show the symptoms of content and structure leakage. The bay and the mountains keep appearing in the first row, even if not in the prompts. For StyleAligned, the bay water is fixed at the bottom left of the generated images. In the second row, IP-Adapter hallucinates mountains and the moon from the reference. In the third row, IP-Adapter almost copies the texture and shape of the house, which doesn’t look like an office building. StyleAligned correctly follows the semantics for the third and fourth rows but doesn’t quite follow the target styles.
On the other hand, Dreamstyler fails in both semantic and style consistency and is of lower quality, which is partially due to the less powerful model SD 1.5.
W2: Our method utilizes detailed text descriptions to empower users to control fine-grained styles. This is a key distinction from methods like StyleAligned, IP-Adapter, etc., which interpret the styles of an image as a whole. Although crafting detailed descriptions may require an extra step, this provides users additional fine-grained control in text-to-image generation over how much granularity in a prompt they want rather than requiring exhausting every detail of a reference image. Thus, users can focus on which style aspects they wish to capture, making our method versatile to different user needs.
Furthermore, detailed descriptions can also be beneficial in other methods requiring image inversion, such as StyleAligned. As in Figure 4 of the attached PDF, when a simple inversion prompt (a) is "a tall house in watercolor painting style.", gable roofs of the reference leak into synthesized images, which is uncommon for an office building. In contrast, when a detailed prompt (c) containing "gable roof," etc., is used., the problem is improved. We also use a control prompt (b) containing a "<random token sequence>" to make the length of the tokenized sequence the same as (c). This demonstrates it's "gable roof" in the detailed descriptions that improves the situation instead of any random, longer description.
W3: We acknowledge the discrepancies in certain results between StyleDrop and FineStyle in terms of style learning.
In the Eiffel tower example of Figure 6, StyleDrop exhibits a darker blue tone, whereas FineStyle presents a lighter one. In fact, the reference image features a range of tones from dark to light, evident in various elements such as the bookshelf, plant leaves, pot, and laptop. This tone spectrum informs the lighter blue color in FineStyle. While StyleDrop tends to present a darker tone inheriting from the laptop, it does not capture the legit composition of the reference, particularly in the Eiffel tower example with recurring appearance of leaves.
As in the quantitative human evaluation of Table 2, although FineStyle scores lower on Style than StyleDrop, it has a significant edge over StyleDrop in Text and Structure/Common Sense scores. In our human study, we found that out of the image pairs where StyleDrop was deemed as having a superior style, 73% of the time raters found the FineStyle image to follow the prompt the best, 27% percent of the time both methods followed the prompt equally and 0% of the time did the StyleDrop image get rated as better at following the prompt. The same is found when looking at the axis of compositional structure and common sense of the generated images. Out of the image pairs where StyleDrop was deemed as having a superior style, 81% of the time raters preferred the FineStyle image for structure, 19% percent of the time both methods both had equally good structure and 0% of the time did the StyleDrop image get rated as better at structure and common sense.
We think that StyleDrop sacrifices composition for style which can often create images that are far from the intended prompt. These findings underscore that FineStyle is a better method with balanced style learning and composition, with fine-grained style controllability.
W4: In our study, we recruited 14 participants, each of whom evaluated 24 image-text pairs of distinct styles. This resulted in a total of 336 evaluations, ensuring a robust dataset for evaluation.
Regarding your observation about the sum of user preference proportions in Table 2 not equaling 1, the proportions indeed sum to 0.998, 0.999, and 0.999 across different comparisons. This slight discrepancy arises from the presentation without rounding the figures to maintain numerical precision. We recognize the need for clarity and will address this in the revised version of our paper, ensuring all totals are rounded appropriately.
W5: Regarding the inference speed, it is true that our method incorporates an additional 0.15M adapter parameters compared to StyleDrop. However, when considered in the context of Muse's overall 3B parameters, this increase is relatively minor. Consequently, FineStyle maintains an inference time that is comparable to that of StyleDrop, ensuring that the enhanced control over style elements does not come at the cost of efficiency. | Summary: Existing style-tuning based style transfer methods often result in content leakage because of the coupled style and content. To address this, FineStyle proposes a decomposition conception of style and content in images and fine-tune a kv adapter in cross-attention on MUSE. FineStyle demonstrates better fine-grained control in visual results compared to other methods.
Strengths: 1.FineStyle focuses on fine-grained style transfer and achieves excellent control effects, showcasing the future potential of style transfer.
2.This paper is well written, easy to follow.
3.The experiments are comprehensive, and the appendix provides detailed supplementary information.
4.This paper shows that the kv adapter on cross-attention is better than the feat adapter in hidden states for more fine-grained style or content control.
Weaknesses: 1.The author mentioned in Section 4.1 that T2I models require a very large image-text dataset for concept learning.Therefore, some style tuning methods use human feedback to scale up the dataset to achieve better learning outcomes. However, the differences between these two datasets are still substantial. For instance, human feedback often adds only a few images, whereas the dataset for a large T2I (Text-to-Image) model typically contains millions of image-text pairs. Although scaling up such a small dataset can theoretically reduce overfitting and enhance the style transfer effect, it is insufficient to achieve the concept learning emphasized by the author.
2.The author mentions in line 50 that the adapter is fine-tuned using clearly defined pairs of content and style concepts, anticipating the learning of associations between text and image concepts. However, in the methods section, it is described merely as a data augmentation technique. The clip scores for variant (a) in Table 1 also suggest that this data augmentation technique does not significantly enhance performance relative to StyleDrop. Furthermore, the paper lacks a cost-time analysis for the concept division.
3.This paper demonstrates that the KV adapter is more effective than the feat adapter in providing fine-grained control. However, it lacks additional experimental analysis to substantiate this claim, such as visualizations of attention maps.
4.The concept pair data scaling up seems like un-necessary, which reduces the innovativeness of this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.I observed that StyleDrop, as shown in Table 1, did not employ data scaling. Was human feedback omitted in this case? It seems essential to include a comparison involving human feedback for both StyleDrop and FineStyle, along with the concept scaling up discussed in this paper, to evaluate the contribution of each component.
2.I observed that FineStyle does not align with the reference style as closely as StyleDrop does. For instance, regarding color, the Eiffel Tower in FineStyle depicted in Figure 6 is blue, whereas the reference is purple. Concerning the painting style, the oil painting by FineStyle in Figure 7 is distinctly different from the reference. What could be causing these discrepancies? Could it be due to improper concept pair data scaling up techniques or the KV adapter?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have already discussed limitations and societal impact in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful and detailed reviews. We appreciate the opportunity to address the weaknesses and questions raised and to clarify aspects that may have been unclear.
W1: The pre-training and human feedback datasets exhibit significant size disparities, which leads to differing methodologies in concept learning. Pre-training with a large dataset requires a vast amount of time and resources for a careful curation. On the contrary, style fine-tuning with human feedback requires significantly less time and resources to collect a few images of the same style yet of various contents to learn a model with a custom style. Nevertheless, this requires human intervention for every new style, which could be cumbersome. We believe that these two regimes (large once-for-all fine-tuning vs a few-shot case-by-case fine-tuning) exhibit a trade-off. In this paper, we aim to match the training efficiency and remedy the limitations of human feedback dataset with proposed data-scaling and concept-weighted loss strategy.
W2: In our approach, we not only scale up the single text-image pair to be multiple pairs with each one having a concept-oriented text, but also re-weight the loss computation by concept-oriented masking, detailed in line 152. This strategy boosts the learning of associations between text and image concepts by the gradient flows stemming from concept-oriented text and masking. Utilizing masks derived from cross-attention weights naturally complements the use of a kv adapter over a feature adapter. According to Table 1, the combination of data scaling and the kv adapter yields the highest clip text scores.
Concept division is efficient in both time and computation. Upon completing the writing of a prompt, a user can quickly annotate desired concept words within the text, typically requiring only a few seconds. Subsequently, the concept mask is generated by a single, zero-mask pass through the Muse model, during which the attention weights corresponding to the annotated concepts are retrieved.
W3: Theoretically, data scaling with concept-oriented masking works better with kv adapter due to the intuitions and facts: concepts are activated during cross-attention between visual and textual tokens. Furthermore, as demonstrated in Table 1, the strategy achieves the best clip text scores.
W4: Concept data scaling and concept-oriented masking are designed to complement each other effectively, as discussed in the response to W2.
Q1: In practice, human feedback is undesirable due to human intervention requirements. While it can enhance style fine-tuning, it comes at a prohibitively high cost and a risk of performance deteriorating with low-quality synthetic data. For a fair comparison, we have excluded the human feedback component of StyleDrop, recognizing that including human feedback in our comparative analysis could obscure the individual contributions of each component. In this work, our goal is to develop an efficient method that facilitates improved concept learning without the need for costly human feedback.
Q2: For Figure 6, the reference image features a range of hues from blue to purple, evident in various elements such as the bookshelf, plant leaves, pot, and laptop. This spectrum informs the lighter blue color of the Eiffel Tower in the FineStyle. While StyleDrop tends to present a darker tone, it does not capture the legit composition of the reference, particularly in the Eiffel tower example with recurring appearance of leaves.
Regarding Figure 7, a similar principle applies. The oil painting style produced by FineStyle, though different, draws from the diverse stylistic cues within the reference. Theoretically, enhancing our style dataset with a broader range of subjects and finely annotated concepts could allow for more precise control over such discrepancies through tailored prompts. This suggests a need for improved methods in explicitly regulating each concept component in our KV adapter, which could address the issues you highlighted.
---
Rebuttal 2:
Title: Response to author's rebuttal.
Comment: For W3:
I hope the authors provide visualizations to prove that the kv adapter truly focuses on the conceptual content (as mentioned in their paper) rather than performance enhancements brought about by data augmentation methods and conceptual masks. I noticed in Figure 6 that the FineStyle generated an Eiffel Tower with a blue circle even without the prompt word 'circle'; and in Figure 4, parts of a mountain appeared. However, the authors have **ignored this point** and have **not responded** to my question.
For Q1:
The authors claim that a 'fair comparison' is that StyleDrop **does not use data scaling** while FineStyle **uses data scaling**. I do not think such experimental results are convincing.
For Q2:
The authors claim that the color differences in Figure 6 depend on the spectrum, but for the different styles in Figure 7, they state that a similar principle applies. How does the spectrum of color apply to clearly different styles? In my view, if the effectiveness of style transfer cannot be ensured, then the significance of the proposed method is limited. Theoretically, StyleDrop could achieve fine-grained control and maintain a better standard of style transfer by more precise and carefully designed human feedback and prompts.
---
Rebuttal 3:
Title: Response to reviewer's feedback
Comment: We thank the reviewer for your additional clarification and comments and we address them as follows.
*W3 (1): This work demonstrated that the KV adapter is more effective in performance but lacks visualization to prove that the KV adapter truly focuses on the conceptual content.*
Sorry indeed it was our oversight and we did not address this question as requested clearly enough; our previous answer mostly focused on emphasizing the performance improvement and our intuition. We do not claim in the paper that the performance enhancement from the KV adapter necessarily has concrete causal relations with the attention weight changes of conceptual contents. Our current observation shows that there could be a strong connection but we can’t be certain that is true in all cases; but we are certain about the performance enhancement and also our intuition. We thank the reviewer for this suggestion and will try to add further ablation and visualization comparisons in the revision to study and clarify whether our KV adapter could also be a robust and visually interpretable approach besides its performance and intuition.
*W3 (2) in Figure 6 that the FineStyle generated an Eiffel Tower with a blue circle even without the prompt word 'circle'; and in Figure 4, parts of a mountain appeared.*
The comments regarding Figure 6 was a misunderstanding, likely due to our terse caption for Figure 6: the prompt word “circle” is actually part of the style descriptor, so the blue circle generated with Eiffel Tower is expected. We assumed the readers will refer to Figure 3 for the full prompt when checking Figure 6, and the style description is “in flat cartoon vector art inside a light blue circle on a white background.” Due to space limits, we only include incremental changes in prompt as “….eiffel tower…”. We will include these full prompts in Figure 6 in our revision to clarify this.
As for figure 4, while there seems to be a small green corner region that could be viewed either as a tree or a green forest/mountain, at viewers’ discretion, we believe it is very different from the light grayish-brown stony mountain in the original reference image, which was generated by StyleDrop. We hope the reviewer will agree with us that this could be considered as a successful example of our method, or at least an imperfect but effective example :-)
*Q1: The authors claim that a 'fair comparison' is that StyleDrop does not use data scaling while FineStyle uses data scaling. The reviewer does not think such experimental results are convincing.*
We did mention “fair comparison” but that was only referring to the part that we have decided to exclude the human feedback component of StyleDrop so we can better understand the difference between the two methods. For better context, when conducting this work, we discussed iterative human feedback with the authors of StyleDrop. They recognize the limitations of iterative human feedback in practice: expensive labor cost, extra annotation time, dependency on quality of synthetic images and risk of performance deteriorating due to human selection bias. The last point is also noted at the end of Sec. 3.3 of StyleDrop paper. These limitations actually motivated us to seek a way of improving fine-grained control without iterative human feedback.
As for whether it is fair to compare FineStyle (with data scaling) and StyleDrop (without data scaling), we beg to differ with the reviewer. Data scaling along with concept-oriented masking are part of the contributions of our proposed work, we compared with StyleDrop as a baseline and did not add data scaling to this baseline because that was not part of their original method. It is reasonable to assume that data scaling will also help StyleDrop and combining our proposed work with StyleDrop can potentially produce a StyleDrop V2 with fine-grained control without the need of iterative human feedback. In theory, we can also add iterative human feedback to both FineStyle and StyleDrop under controlled environment to further study the capacity of FineStyle, and we intend to explore this in our future revisions.
*(to be continued)*
---
Rebuttal 4:
Title: Response to reviewer's feedback (continued)
Comment: *Q2: The authors claim that the color differences in Figure 6 depend on the spectrum, but for the different styles in Figure 7, they state that a similar principle applies. How does the spectrum of color apply to clearly different styles? In my view, if the effectiveness of style transfer cannot be ensured, then the significance of the proposed method is limited. Theoretically, StyleDrop could achieve fine-grained control and maintain a better standard of style transfer by more precise and carefully designed human feedback and prompts.*
By “similar principle”, we did not mean the spectrum of color apply to clearly different styles. We simply meant that similar to the example in Figure 6 that a range of different colors contributed to the fine-grained control in FineStyle , the reference image in Figure 7 features a range of fine-grained style elements (e.g. different colors, texture, shape, and other stylistic cues) and the oil painting image produced by FineStyle also draw from the diverse stylistic cues within the reference.
We understand the concern from the reviewer that it could appear to some viewers (maybe including the reviewer) that the global style seems better captured in the StyleDrop than in FineStyle in the oil painting example and hence it could imply a limitation of FineStyle in terms of style transfer compared to StyleDrop. To our defense, the perception of what is a good style transfer is quite subjective here, especially when FineStyle is focused on fine-grained style elements and StyleDrop is focused on global styles in the reference image. While our proposed method is not perfect, we believe it clearly achieved its goal of enabling fine-grained controllability with proper style preservation. However, we also acknowledge the insight from the reviewer that we could potentially further improve our method to better preserve the global styles if that is what users may desire, and we leave this to future extensions of this work.
Finally, while we beg to differ that StyleDrop is stronger than FineStyle in style transfer for the reasons we mentioned above, we agree that both StyleDrop and our FineStyle could theoretically achieve better fine-grained control with additional iterative human feedback and carefully designed prompts. This, however, again goes back to part of the motivation of our proposed work that we want to gain fine-grained control without the need for iterative human feedback, which has a set of limitations we discussed above when addressing Q1.
Overall, we thank the reviewer for sharing their insight to help us to improve this work and we hope we have clarified some of the misunderstandings and we will adopt the valuable feedback from the reviewer in our revision.
---
Rebuttal 5:
Title: Response to author's feedback.
Comment: For Q1:
In fact, the Iterative Training with Feedback in StyleDrop is another form of data scaling. As the only baseline method compared in the paper, training without feedback is highly unreasonable and leads to unfair assessments.
Moreover, incorporating Iterative feedback training does not require much cost. I do not understand why the authors are unwilling to provide the corresponding experimental results.
For W3 & Q2:
The authors also acknowledge their in-sufficient experiments (visual analysis for KV adapter) and inferior style transfer results.
Why the authors claim in rebuttal, Q2 that 'Theoretically, enhancing our style dataset with a broader range of subjects and finely annotated concepts could allow for more precise control over such discrepancies through tailored prompts,' but then do not provide actual improved visual results? This also makes me doubt their responses.
In the rebuttal, the authors only repeatedly emphasize the contributions mentioned in their paper once and once again, but they do not address the issues I raised, which only needs very low costs and is important for this paper. Overall, I will decrease my rating.
---
Rebuttal 6:
Title: response to reviewer's response
Comment: We are extremely disappointed that the reviewer VFWr has completely ignored our contributions and rebuttals while we politely and professionally pointed out the misunderstandings from the reviewer's side, including they misjudged the prompt in Figure 6 etc., and the their reading of the Figure 6 and Figure 4 etc. (which were not even so important from the beginning, because our blue is lighter than StyleDrop then our generation is worse? the prompt word "circle" was there so generating it in Figure 6 is absolutely reasonable; the small green corner of trees in the background is not the brown rocky mountain in the reference and our results are simply effective ...)
Also we want to emphasize that **we never acknowledged our style transfer is inferior** and that which one is better is just the difference of subjective evaluation between the reviewer VFWr and the authors, which again support our claim that iterative human feedback could be problematic as pointed out by the original StyleDrop authors. And we hope the other reviewers and AC will examine this and provide their subjective evaluation and preference as well while clearly the reviewer VFWr's subjective evaluation is not in favor of this technical work but more nitpicking on the subtle visual difference from their own perspective with misunderstanding.
We also do not agree to the unreasonable standard from the review VFWr that we have to add these attention visualization to prove that our method works: our method apparently work and it showed performance improvement, effective visual output and we carefully explained our intuition behind it. While as we agreed we could add visualization to demonstrate this point in some additional ablation, it is neither really required to prove that our proposed method work (it works!) and nor rigorous (we believe the consensus in the ML community is that visualization generally is a nice tool to have but it is often less robust and can be easily cherry-picked and have strong bias).
Removing iterative human feedback training is desiderata from the original StyleDrop work and we proposed a valid approach to make it work. We are extremely disappointed at the reviewer's response in **strongly downplaying our contributions**, **holding our work to unreasonable standards despite their misunderstanding** while **dismissing our improvement over a strong prior work StyleDrop**.
We've shown our best intention in the last few responses to reviewer VFWr, but we also want to remind the reviewer VFWr that we believe they don't understand the NeurIPS review standards:
"3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and/or incompletely addressed ethical considerations." and "6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."
We'll stop engaging with the reviewer VFWr. We've put in a significant amount of time and effort into this work and we are confident about its contributions and its applicability to users. We leave this indignant and frank feedback with the sincere hope that other reviewers and ACs will review the conversation above and judge the paper fairly in its own merit.
Thank you all! (including the reviewer VFWr for your reviewing effort, we provide this frank and fair feedback to everyone instead of just sending to AC with the hope to convey our point in the spirit that conversations in the ML community should be unbiased, professional and bi-directional. It is the only way that our community can move forward.) | null | null | Rebuttal 1:
Rebuttal: Reviewer PDcY, Q5
We agree using an LLM can improve efficiency and reduce human work. Therefore we use an internal multi-modal LLM and prompt it with the image from Figure 3 and a prompt outlined below.
The output shows we can use fairly simple prompt to a multi-modal LLM we can automate the original image description including the entities and the style description. This procedure generates an overall caption for the image included all entities contained in the image along with style descriptors. Then the model outputs captions for K=3 entities with the style descriptor as well.
We plan to show the results of this procedure in the paper as it would eliminate any work for a human user to do and we thank the reviewer for the suggestion.
Pdf: /pdf/595bb496315c00e332acd2dada46a6d5b037ebab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Evidential Stochastic Differential Equations for Time-Aware Sequential Recommendation | Accept (poster) | Summary: The authors note that existing methods assume a uniform time interval among user behaviors. This paper posits that the time intervals in sequential recommendation increase the uncertainty of users' behavior. Therefore, it proposes NSDE to learn users’ fine-grained time-evolving behavior, while evidential learning quantifies both aleatoric and epistemic uncertainties. The authors believe that this is something other sequential learning methods have not considered.
Strengths: 1. The paper starts from the uncertainty of time intervals to capture the evolution and uncertainty of user behavior, which is a valuable insight. However, this perspective lacks innovation, as "Sequential recommendation via stochastic self-attention" (2022) also approaches the problem from the same insight.
2. The straightforward derivation of the relationship between increasing time intervals and uncertainty to guide learning demonstrates a degree of innovation and robustness in modeling.
Weaknesses: 1. There are formatting issues in lines 99-105. The abbreviation for Normalized Discounted Cumulative Gain is generally NDCG.The combination of metrics chosen in the paper is uncommon and lacks persuasiveness. According to line 294, the metrics are derived from Bert4Rec. However, Bert4Rec's metrics are Hit Ratio (HR), Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR).
2.The selected baselines could include more recent state-of-the-art models, such as STOSA, DuoRec, CoSeRec, GDERec,TiCoSeRec and GCG-ODE. The baselines particularly lack ODE-related sequential recommendation models. Among the two ODE-related models, LT-OCF is not specifically designed to address sequential recommendation.
3.The increase in time intervals may lead to changes in user interests. Addressing this issue within NODE-based recommender systems, the paper proposes corresponding solutions and offers a relatively novel perspective.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1.Why do the authors believe that sequential recommendation methods typically use uniform time intervals and overlook changes in user interests? Many earlier works have addressed this issue, such as "Uniform Sequence Better: Time Interval Aware Data Augmentation for Sequential Recommendation".
2.See weeknesses
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The abbreviation for Normalized Discounted Cumulative Gain is generally NDCG.The combination of metrics chosen in the paper is uncommon and lacks persuasiveness. According to line 294, the metrics are derived from Bert4Rec. However, Bert4Rec's metrics are Hit Ratio (HR), Normalized Discounted Cumulative Gain (NDCG), and Mean Reciprocal Rank (MRR).**
Thanks for the suggestion to clarify the metrics. We follow a similar setup in Bert4Rec as a next-item prediction. However, we selected three popular metrics (i.e., Precision, Recall, and NDCG) that are widely used in evaluating recommendation models. Here, Precision is equivalent to Hit Ratio (HR), and due to next-item prediction, it is also equivalent to Recall. Further, NDCG is a metric that considers position by assigning greater weights to higher positions. We will clarify these in the revised paper.
**Q2: The selected baselines could include more recent state-of-the-art models, such as STOSA, DuoRec, CoSeRec, GDERec, TiCoSeRec and GCG-ODE. The baselines particularly lack ODE-related sequential recommendation models. Among the two ODE-related models, LT-OCF is not specifically designed to address sequential recommendation.**
Thanks for pointing out recent baselines. Limited by time, we have provided results for three of the additional baselines, including STOSA, DuoRec, and GDERec, and compared them with our approach (please refer to the table presented in the overall rebuttal to all reviewers). The results clearly show that the proposed E-NSDE model outperforms these baselines by effectively leveraging the interaction time gap in novel ways to offer uncertainty-aware recommendations with diverse items that better align with the user interest. We will provide a more complete comparison in the revised paper.
**Q3: Why do the authors believe that sequential recommendation methods typically use uniform time intervals and overlook changes in user interests? Many earlier works have addressed this issue, such as "Uniform Sequence Better: Time Interval Aware Data Augmentation for Sequential Recommendation".**
Thank you for the comment. In the paper suggested by the reviewer, the authors propose to address the issue of varying time intervals for sequential recommendations. Instead of leveraging the varied intervals, it argues that sequences with uniformly distributed time intervals are more beneficial for performance improvement and introduces new approaches to transform non-uniform sequences into uniform ones. While there may also be other works that consider time intervals in user interactions, none of these works address the increasing uncertainty arising from the extended interaction intervals. We believe our paper is the first work that establishes a correlation between the interaction time interval and the model uncertainty, and leverages this important connection to improve recommendations through time-aware uncertainty guided exploration.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer zE4M,
Thank you once again for your insightful comments and questions. In our rebuttal, we have:
- provided a comparison with the suggested baselines (STOSA, DuoRec, and GDERec).
- compared the paper "Uniform Sequence Better" with our work, focusing on the varying uniform time intervals of user interactions.
- offered a detailed explanation of the metrics used.
We believe that addressing your comments will improve the paper's readability, and we appreciate your support. We hope you find our responses satisfactory and consider updating the score accordingly. We are more than happy to address any further questions you may have. | Summary: This paper investigates the modeling of interaction time intervals in sequential recommendation. Considering both time interval and model uncertainty, this paper formulates E-NSDE to integrate NSDE and evidential learning to model effective time-aware sequential recommendation. Experimental results on four real-world datasets prove the effectiveness of proposed method.
Strengths: 1. The problem is interesting. The time interval between interactions is really an important factor in the sequential recommendation. Meanwhile, time-aware uncertainty guides the recommendation to adapt to users' changing preferences.
2. Combining two existing techniques is well-motivated. Overall, both user NSDE and item NSDE are introduced here to encode the evaluation of the preference and inherent noise. The evidential module then provides uncertainty-aware rating prediction. All the stack techniques are combined reasonably.
3. Experiments are conducted to verify the effectiveness. And the results are statically reliable.
Weaknesses: 1. The motivation in the introduction is unclear. As the authors argue, GRU-ODE recommends genres that come from past behaviors, while NSDE recommends genres which have potential future benefits in Table 2. It is not sure if some genres recommended by NSDE are in accordance with long-term interests. For example, Sci_Fi, Mystery, and Crime in Table 2. My concern is whether motivation is not the real need of the user.
2. As to method, in Figure 1, users also have evolved into representation. It is clear that the user interfaces with items in an evolving manner. But not so clear that the users also evolved into different user.
3. The experiments in Table 4 report the P@5, nDCG@5. However, in the movielens datasets, 1m and 100k are used, so more top-n results should be given to verify the effectiveness. In line 316, "Uncertainty vs. Interaction gap vs." may be a typo.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please refer to weakness 1, and give more clarification on Table 2.
2. Please explain the relationship between this work and [1],[2],[3]. The baselines lack the time-interval awarded by the sequential model and diversity model.
3. Pleas refer to weakness3, more n in top-n results should be given.
[1]Time interval aware self-attention for sequential recommendation,WSDM20
[2]Learning Graph ODE for Continuous-Time Sequential Recommendation,TKDE
[3]Temporal Conformity-aware Hawkes Graph Network for Recommendations,Webconf 24
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The motivation in the introduction is unclear. As the authors argue, GRU-ODE recommends genres that come from past behaviors, while NSDE recommends genres which have potential future benefits in Table 2. It is not sure if some genres recommended by NSDE are in accordance with long-term interests. For example, Sci\_Fi, Mystery, and Crime in Table 2. My concern is whether motivation is not the real need of the user.**
Thank you for the comment. We would like to clarify that we included motivating examples (in Table 2 of the main paper) to demonstrate the need for an effective means of exploration, which is essential and critical to uncover the user's long-term interests. Especially, if a user is inactive for a longer period, maintaining their engagement in the system requires more aggressive exploration, as indicated by the larger time-interval gap in Table 2. While there is no total guarantee that every explored item will be successful, our epistemic uncertainty-guided exploration (see Eq. 11) allows the model to recommend items that are largely unknown to the users (due to the second term) while potentially interesting to them (due to the first term). Both our quantitative (shown in Table 4) and qualitative results (shown in Table 5a) clearly demonstrate the effectiveness of our approach.
**Q2: As to method, in Figure 1, users also have evolved into representation. It is clear that the user interfaces with items in an evolving manner. But not so clear that the users also evolved into different user.**
We would like to clarify that our NSDE module consists of two key components: diffusion and drift. The diffusion component captures the user's extrinsic behavior, while the drift component captures the user's intrinsic behavior. This module is responsible for modeling the user's evolving interests over time. Additionally, as the reviewer pointed out, the user-item interface also evolves over time, and the uncertainty of these interactions is captured through the evidential module.
**Q3: More top-n results**
Thanks for the suggestion. We have reported additional higher-order top-N (@10 and @20) results in the table presented in the general response to all reviewers.
**Q4: Please explain the relationship between this work and [1],[2],[3]. The baselines lack the time-interval awarded by the sequential model and diversity model.**
Thanks for pointing out those papers.
For prior efforts on time interval-aware recommendations, [1] introduces a self-attention mechanism that considers both the absolute positions of items and the time intervals between them in interaction history, in order to allow the system to be aware of the influence of different time intervals on predicting the next item. However, the architecture leverages the black-box embedding and attention mechanisms for time intervals. In comparison, the proposed approach introduces differential equations and monotonic networks to explicitly encourage the system to generate higher uncertainty to longer time intervals.
[2] is published in July 2024, and therefore, it is impossible for us to be aware of it during submission. [2] introduces a Graph Ordinary Differential Equation model to capture continuous-time dynamics in sequential recommendation systems. The key differences between [2] and the proposed method are: 1) Ordinary differential equations are deterministic, while our approach leverages stochastic differential equations, which involve a noisy process to model how the uncertainty of users' and items' representation evolves over time.
2) The proposed approach explicitly encourages the system to assign higher uncertainty to longer time intervals.
[3] introduces Hawkes processes to model the intensity of user interactions and leverages graph neural networks to learn from the complex user-item interactions. Hawkes process is a point process to model the occurrence of sequential events with irregular inter-arrival time. Unlike the proposed method, it does not explicitly model the uncertainty or enforce higher uncertainty to longer time intervals.
We will incorporate them into the related work section of the revised paper. Further, we have provided results for TiSASRec ([1] Time interval aware self-attention for sequential recommendation, WSDM20) and compared with the proposed method in the table of the general response, which shows the proposed approach achieves a better recommendation performance.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer V8wq,
Thank you again for your insightful comments and questions. In our rebuttal, we have:
- clarified the motivation behind the paper and emphasized the importance of exploring new tastes for users who have not been active for a long time.
- explained the evolving nature of user behavior and how the drift and diffusion components of the NSDE module help capture these changes.
- incorporated more top-n results on both datasets with competitive baselines.
- explained relationships between suggested papers and further included one paper i.e. TiSASRec as a baseline comparison.
We believe that addressing your comments will enhance the paper's readability, and we appreciate your support. We hope you find our responses satisfactory and consider updating the score accordingly. We are more than happy to address any further questions you may have.
---
Rebuttal 2:
Comment: Dear Reviewer V8wq,
Thank you again for reading our responses and providing quick feedback. We would like to clarify that we have clearly described the fundamental differences between the three mentioned related works and ours. We have also experimentally compared the most relevant method i.e. TiSASRec, with our proposed ENSDE model and the result is shown in the table of the general response. We will appreciate if you could be more explicit about your remaining concerns about the related work. | Summary: The paper revolves around enhancing sequential recommendation systems by incorporating time-awareness through the utilization of Evidential Neural Stochastic Differential Equations (E-NSDE). Traditional recommendation systems often overlook the temporal dynamics of user interactions, leading to suboptimal recommendations. The authors aim to bridge this gap by developing a novel framework that captures the evolving behavior of users over time intervals, thereby improving the accuracy and reliability of recommendations.
It acknowledges the limitations of existing methods in effectively modeling temporal dynamics and uncertainty in user behavior. By integrating NSDE and evidential learning, the authors build upon the foundations laid by previous works on stochastic differential equations and recommendation systems. This integration allows for a more robust modeling of user preferences and interactions over time, setting the stage for more accurate predictions.
Strengths: 1. The E-NSDE framework dynamically models user behaviors over time, effectively addressing a crucial gap in traditional recommendation systems that often overlook temporal dynamics.
2. By incorporating NSDEs to capture the continuous-time dynamics of user interactions, along with evidential learning to assess recommendation uncertainty, this dual approach ensures that the recommendations are not only time-sensitive but also carry a higher degree of confidence.
3. The integration of NSDE and evidential learning into a unified framework facilitates comprehensive modeling of both the evolution of user preferences and the inherent uncertainty in predicting user behavior.
Weaknesses: 1. When user-item interactions have uniform time intervals, such as in click-through scenarios, the model's time-aware approach may be less effective in capturing uncertainty.
2. The paper acknowledges the interpretability of the E-NSDE framework but falls short in providing detailed explanations of the decision-making process and incorporating feature importance analysis, which would significantly enhance transparency and build user trust.
3. The discussion regarding the generalizability of the E-NSDE framework across various domains and datasets is inadequate. Providing more detailed insights into the model's performance in diverse settings would clarify its robustness and broader applicability.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Could the authors provide a detailed explanation of how NSDE and evidential learning are integrated within the E-NSDE framework?
2. Can the authors elaborate on the characteristics of the real-world datasets used in their experiments? How diverse are these datasets, and do they encompass a broad spectrum of recommendation scenarios to validate the generalizability of the E-NSDE framework?
3. Are there specific techniques or methodologies incorporated to enhance the transparency and explainability of the recommendation process?
4. In the context of large-scale recommendation systems, how does the E-NSDE framework address challenges related to scalability and computational efficiency? Have any experiments been conducted to evaluate the framework's scalability as dataset sizes or user interactions increase?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations and identified no potential negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: When user-item interactions have uniform time intervals, such as in click-through scenarios, the model's time-aware approach may be less effective in capturing uncertainty.**
We agree with the reviewer that when interaction intervals are uniform, the time-aware uncertainty component is less effective. We have pointed out this as we discuss the limitation of our approach in Appendix G. Meanwhile, we would like to clarify that the proposed model's performance will not degrade when interaction intervals are uniform when compared to existing state-of-the-art models.
**Q2: The paper acknowledges the interpretability of the E-NSDE framework but falls short in providing detailed explanations of the decision-making process and incorporating feature importance analysis, which would significantly enhance transparency and build user trust.**
Thank you for the suggestion; it would indeed be valuable to examine details at the feature level. However, we would like to emphasize that our approach aims to improve interpretability at the module level using epistemic uncertainty. In particular, our epistemic uncertainty indicates the model's confidence in recommending the next item, which increases the transparency of the recommendation process and can help build user trust.
**Q3: Providing more detailed insights into the model's performance in diverse settings would clarify its robustness and broader applicability.**
Thank you for the great suggestion! The datasets used in our experiments indeed cover very diverse settings. First, the datasets possess different levels of sparsity that varies from 95\% to over 99.9\%: Movielens-1M with a sparsity of 95.75\%, Netflix with a sparsity of 98.82\%, and Amazon Books with a sparsity of 99.98\%. The level of sparsity is usually tied to the difficulty of the dataset when making recommendations. Meanwhile, different datasets also encode diverse interaction behaviors from users. For example, the interaction patterns vary significantly based on the timing of interactions. In the Movielens-1M dataset, most interactions occur within seconds, minutes, and days. For Netflix, interactions predominantly happen with day-long gaps. However, in the Amazon Books dataset, user interactions span months or even years. We selected these datasets to particularly demonstrate the proposed model's effectiveness across different interaction time intervals. Our model exhibits robust performance across all these datasets, providing evidence of its broad applicability in any time-aware user-interaction settings.
**Q4:Detailed explanation of how NSDE and evidential learning are integrated within the E-NSDE framework**
Our NSDE module generates richer user and item representations by explicitly considering their interaction time. Those representations are fed to the EDL module to capture interaction uncertainty and generate predicted scores, as shown in Figure 1 of the main paper. We further leverage a monotonic network in building the relationship between the interaction time gap and model uncertainty. The monotonic network ensures the increase of the uncertainty (i.e., variance) of the predicted rating along with the increase of time interval ($\Delta t$). This novel integration leads to an end-to-end E-NSDE framework that provides an effective time-aware sequential recommendation model.
**Q5: Elaborate on the characteristics of the real-world datasets used in their experiments? How diverse are these datasets, and do they encompass a broad spectrum of recommendation scenarios to validate the generalizability of the E-NSDE framework?**
Please refer to the answer to Q3.
**Q6: Are there specific techniques or methodologies incorporated to enhance the transparency and explainability of the recommendation process?**
A key component to enhance transparency and explainability of the recommendation process is the evidential learning module, which allows us to perform fine-grained uncertainty decomposition. As a result, the decomposed epistemic uncertainty indicates the model's confidence in recommending the next item, which increases the transparency of the recommendation process and can help to build user trust.
**Q7: In the context of large-scale recommendation systems, how does the E-NSDE framework address challenges related to scalability and computational efficiency? Have any experiments been conducted to evaluate the framework's scalability as dataset sizes or user interactions increase?**
We would like to clarify that our training setup leverages incremental time computation for the stochastic differential equation, progressing from the previous time step $t-1$ to the current step $t$ rather than starting from the beginning ($t_0$). This approach, which uses the previous step's computation for the next step, enhances both the forward and backward passes of the NSDE module. Consequently, this improves the model's scalability, as demonstrated by its applicability to datasets ranging from smaller ones(e.g., Movielens) to larger ones (e.g., Netflix and Amazon). For more details on the training setup, please refer to the experimental setting section in the main paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer LeEA,
Thank you once again for your thoughtful comments and questions. In our rebuttal, we have:
- explained the impact of capturing uncertainty when user interactions occur at uniform time intervals,
- provided a detailed explanation of the interpretability, generalizability, and transparency aspects of our work, specifically focusing on the two key components of the ENSDE model.
- offered a thorough discussion on handling sparse datasets and the diverse settings of our experiments.
We believe that addressing your suggestions has significantly strengthened our paper, and we appreciate your support. We hope you find our responses satisfactory and consider updating the score accordingly. We are more than happy to answer any additional questions you may have. | Summary: This work investigates sequential recommendation, and proposes a new method that utilizes stochastic differential equations (SDEs) to model dynamic time intervals and estimate uncertainty.
Overall, this study addresses an engaging problem and provides a novel and reasonable solution. Extensive experiments have been conducted. Consequently, I lean towards accept.
Strengths: 1. This work studies on an interesting and important problem --- how to capture dynamic time interval as well as estimate model uncertainty.
2. The paper is well-written, with clear motivations.
3. The application of stochastic differential equations to model sequential recommendation is both novel and reasonable.
4. Extensive experiments are conducted to validate the effectiveness of the proposal.
Weaknesses: 1. It would be advantageous to include diffusion model-based sequential recommendation baselines for comparison in the experiments, such as [a1][a2], especially since SDEs have been utilized in these methods as well.
2. The work reports performance metrics only for P@5 and NDCG@5. It would be better to include the results with different @N, particularly @20, which is commonly adopted by recent work.
3. A discussion on the limitations and future directions of the research in Section 6 would be beneficial.
4. There are some typos. For example, in the eq.(13), ‘BPR’->’WBPR’; In the line 316, removing ‘vs’.
5. Some important relate work is omitted:
[a1] TOIS’23: Diffurec: A diffusion model for sequential recommendation
[a2] NIPS’23: Generate what you prefer: Reshaping sequential recommendation via guided diffusion
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No concern.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Diffusion model-based sequential recommendation baselines for comparison.**
Thanks for suggesting those important related works. We will cite them and add a discussion in the revised paper. More specifically, DiffuRec [a1] models item representations as distributions by corrupting the target item embedding into a Gaussian distribution through adding noise and reversing the Gaussian noise into the target item representation based on historical interactions. DreamRec [a2] formulates sequential recommendation as a learning-to-generate task. It uses a guided diffusion model with a transformer encoder to generate the distribution of representations from historical items and adds noise to items to explore the item space distribution.
In summary, the two papers leverage explicit augmentation of noises to representations, which are technically different from the proposed method.
As suggested by the reviewer, we have conducted experiments to compare with the diffusion model-based sequential recommendation, i.e., DiffuRec and the results are presented in the Table in the general response to all reviewers. Since DiffuRec represents compact item representation with construction and injection of uncertainty, it does not properly address continuous time-evolving aspects of user interest and hence has lower performance than the proposed E-NSDE model. Limited by time, we will include a comparison with DreamRec [a2] in the revised paper.
**Q2: Better to include the results with different @N, particularly @20**
Thanks for the suggestion. We have conducted additional experiments and reported the top @N =10 and 20 results in the Table of the general response to all reviewers.
**Q3: Limitations and future directions of the research**
Thanks for the suggestion. We have included a discussion of limitations and future work in Appendix G.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer p7WH,
Thank you once again for your insightful comments and questions. In our rebuttal, we've clarified the key differences of suggested baselines (DiffuRec and DreamRec) from our work and further included DiffuRec in baseline comparison with higher ranking metrics. We believe that addressing your feedback has significantly improved and strengthened our paper. We hope you find our responses satisfactory and are more than happy to address any further questions you may have. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their constructive comments and suggestions. Here, we provide results for several suggested baselines and top-$N$ metrics with a higher $N$ value as requested by the reviewers:
| **Datasets** | **Metric** | **Bert4Rec** | **ResAct** |**GRU-ODE** |**DiffuRec** |**TiSASRec** |**STOSA** |**DuoRec** |**GDERec** |**E-NSDE** |
|--|--|--|--|--|--|--|--|--|--|--|
| *MovieLens-1M* | P@5 | 0.4163 | 0.4286 | 0.4275 | 0.4212 | 0.4253 | 0.4314 | 0.4337 | 0.4301 | **0.4551** |
| | P@10 | 0.7258 | 0.7443 | 0.7392| 0.7478 |0.7401 |0.7484 | 0.7489 | 0.7503 | **0.7745**|
| | P@20 | 0.8370 | 0.8572 | 0.8515 | 0.8616 | 0.8532| 0.8622 |0.8659 |0.8649 |**0.8926**|
| | NDCG@5 | 0.3754 | 0.3814 | 0.3792 | 0.3795 |0.3829 |0.3801 | 0.3852 | 0.3805 | **0.3982**|
| | NDCG@10 | 0.5131 | 0.5276 |0.5226 | 0.5336 |0.5297 | 0.5310| 0.5259 | 0.5324 | **0.5467**|
| | NDCG@20 |0.5545 | 0.5697 | 0.5645 | 0.5759 | 0.5717 | 0.5733| 0.5682 | 0.5750 | **0.5907**|
| *Amazon Book* | P@5 | 0.3846 |0.3884 | 0.3856| 0.3870 | 0.3940 |0.3899 | 0.3902 | 0.3935 |**0.4021** |
| | P@10 | 0.6583 | 0.6759 | 0.6732 | 0.6805 | 0.6813 | 0.6788| 0.6831 | 0.6808 |**0.7021** |
| | P@20 | 0.7671 | 0.7863 | 0.7830 | 0.7868 | 0.7889| 0.7902 | 0.7945 |0.7928 |**0.8175** |
| | NDCG@5 | 0.3463 | 0.3472| 0.3455 | 0.3415 | 0.3521 |0.3538| 0.3504 | 0.3563 | **0.3621**|
| | NDCG@10 | 0.4833 | 0.4976 | 0.4911 | 0.4994 |0.4951 | 0.5025 | 0.5001| 0.5077| **0.5198**|
| | NDCG@20 | 0.5014 | 0.5160 |0.5095 | 0.5179 | 0.5135 | 0.5210| 0.5186 | 0.5264 |**0.5390** | | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BackdoorAlign: Mitigating Fine-tuning based Jailbreak Attack with Backdoor Enhanced Safety Alignment | Accept (poster) | Summary: This paper introduces a defence against jailbreak fine-tuning attacks that markedly improves over the baseline suggested by Qi et al. Their method works by implanting a safety backdoor that is subsequently used during inference and show that it is effective on preventing few shot fine-tuning attacks across a variety of controlled settings.
Strengths: Generally this paper is very well written, novel, and an extremely valuable contribution to the emerging threat of fine-tuning attacks. Not only do they convincingly demonstrate their defence is effective on the settings presented by Qi et al, and that their method indeed improves over a set of controls for different settings like using a natural language secret prompt, not using category-wise safety samples and in a setting of mixed safe and unsafe samples. I would be excited to see this paper accepted.
Weaknesses: There are a few clarity issues in the paper but the main issues I’d like to see addressed are:
(1) The “pure_bad” dataset construction details are insuffecient. What is meant by redteaming?(181) What process was used for it? How was this dataset constructed? What format? Where are the examples? Why wasn’t Qi et al. or other already existing harmful sample datasets used? Section A.2 is not enough details. Without these details we cannot trust as a reader that these were actually harmful samples.
(2) I think that the authors fail to discuss Prompt Injection attacks that leak the secret prompt. In a LLMaaS FJAttack threat model, what if the attacker gets access to the secret prompt? As mentioned in the paper and the motivation for chosing a non-semantically meaninfdul secret prompt, this means the attacker could then finetune against this prompt. I would recommend the authors at least discuss this limitation but would encourage them to add an experiment showing the effectiveness of this adaptive attack and how likely standard prompt injection methods for prompt leakage are liable to work.
(3) This is minor and is related to a weakness below but 10% of safety samples needed seems like a high cost for a defence (for example this is 10k samples for a 100k training set). Perhaps though this won’t be the case for much stronger attacks. We just don’t know what is required for stronger attacks based on the results in this paper.
This final weakness is more of what I would have liked to see to increase my score higher and what I think would improve the paper but is likely too much to ask to see addressed during the review period:
While the precedant of 100 sample attacks is set in Qi et al. I don’t think that the attack strength is high enough to be able to truly assess this method, realistically it seems like users might use thousads or more samples. I would encourage the authors to at least devise a 1k and 10k setting from a dataset like BeaverTails as is being done in other contemporaneous works. This important because without it: we are not sure that the 10% safety example mix holds, we are not sure how this method operates on more realistic attacks (for example right now there exist 10k+ harmful sample datasets on huggingface - would this defend against those? if not we should know as a communtiy so we can develop stronger defences, if so this defence is potentially promising). I’d also like to have seen more settings from Qi et al so for example varying learning rate, epoch numbers, and smaller attack sizes.
Technical Quality: 4
Clarity: 4
Questions for Authors: ### Suggestions
I think that this work could benefit from existing defences gainst FJAttacks. Certianly “Henderson, P., Mitchell, E., Manning, C., Jurafsky, D., & Finn, C. (2023, August). Self-destructing models: Increasing the costs of harmful dual uses of foundation models. In *Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society* (pp. 287-296).” should be discussed (of course its not a defence against this exact threat model but it has a very similar motivation, threat model, problem setting etc.)
Other work that is contemporaneous so is optional to add but would nonetheless enhance discussion in the paper to compare and constrast these methods:
- Zhou, X., Lu, Y., Ma, R., Gui, T., Zhang, Q., & Huang, X. (2023). Making harmful behaviors unlearnable for large language models.
- Huang, T., Hu, S., & Liu, L. (2024). Vaccine: Perturbation-aware alignment for large language model
- Rosati, D., Wehner, J., Williams, K., Bartoszcze, Ł., Atanasov, D., Gonzales, R., ... & Rudzicz, F. (2024). Representation noising effectively prevents harmful fine-tuning on LLMs.
One thing that shakes out of at least Rosati et al (in particular their earlier work before RepNoise) and Henderson et al that you do not address is the limitation of your setting to only LMaaS settings. What about settings where the attacker has complete access to the model? Even in the LMaaS case there is the risk of stealing the weights so it would be good to consider whether you have any thoughts here for discussion.
Another citation worth discussing is Korbak, T., Shi, K., Chen, A., Bhalerao, R. V., Buckley, C., Phang, J., ... & Perez, E. (2023, July). Pretraining language models with human preferences. In International Conference on Machine Learning (pp. 17506-17533). PMLR. Although it uses pre-training with safety tokens, the approach has an interesting parallel.
### Notes and suggestions
2: Requires is probably a better word than “request” here
93: Calude 2
98-99,117: Missing spaces before citations
123: I don’t think its correct to say “widely used” since FJAttacks really only consist of 5 or 6 papers at this point.
126: I think exposure is a little bit too vague since in-context learning attacks could also be compromising through exposure
Equation 1 and 3: A very small optional nit pick but I think we usually represent this as an expectation of the negative log loss over the dataset distribution since the actual computation isn’t a sum or losses but a mean. But I see how this formulation has an advantage for conciseness.
136: It would be useful to cite Qi et al. again here and perhaps “Zhan, Q., Fang, R., Bindu, R., Gupta, A., Hashimoto, T., & Kang, D. (2023). Removing rlhf protections in gpt-4 via fine-tuning.” which was published at NAACL this year.
149: I feel like we should reserve this double line notation for asymmetric divergence or parallel lines and use a more standard contenation operator like “+”. I see what you might be trying to do with s_i conditioned on secret prompt s but its clearer to use a standard concatenation operator.
147,149+equation 2: I would find it clearer and more correct if you used a different notation for the secret prompt since now “s” is overloaded to mean both the whole system prompt for the index i and only the secret prompt so its confusing.
178: Please add the version of GPT-3.5 for replicability
192: 1 times what learning rate?
238-239: I don’t agree with this finding in general - from Table 1 - it seems like all you can say is a decline in utility ARC-challenge.
267-274: Can you say more about why you think this is the case?
290-293: Insightful!
295-301: Can you provide the hyperparameters used for LoTA here or in an appendix?
345: I don’t agree that 10% is a very small set of safety examples.
489: generate
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Aside from what I mentioned above they discuss the limitations very well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: “pure_bad” dataset construction details.
The “pure_bad” dataset used in our experiments consists of 100 harmful question-answer pairs. These pairs are exactly the same as the harmful samples used in Qi et al.'s work, which were subsampled from the Anthropic red team dataset [1]. In this context, red teaming refers to the use of jailbreak attacks to collect harmful examples, aimed at evaluating the model’s robustness. As detailed in Appendix A.2, the data is formatted according to the OpenAI Standard Data Format. This includes using the same system prompt, with safety-related questions labeled as “USER INPUT” and harmful answers as “MODEL OUTPUT.”
[1] Ganguli, Deep, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858 (2022).
> Question B: The leakage of the secret prompt through prompt injection attacks.
To determine if a prompt injection attack could leak our secret prompt, we test the five attack queries presented in the paper [1] to assess their ability to extract the system prompt from the fine-tuned Llama-2-7B-Chat model. Surprisingly, the fine-tuned model identified all these prompt extraction attempts as harmful behavior and refused to disclose its system prompt. This resistance to leakage is partly due to the secret prompt's inherent capability to enhance safety by protecting personal privacy. Additionally, this further demonstrates that the secret prompt is not easily extracted by malicious users with adaptive attacks.
[1] Zhang, Yiming, Nicholas Carlini, and Daphne Ippolito. Effective prompt extraction from language models. arXiv preprint arXiv:2307.06865 (2024).
> Question C: Defense with a larger training set and stronger attack with more harmful examples.
We utilize 10% safety examples in the initial setting due to the involvement of harmful examples only. In a more realistic scenario, the dataset provided by users predominantly consists of benign examples for specific applications, along with a few undesirable harmful examples. As detailed in Section 5, we incorporated the safety examples into a fine-tuning dataset comprising 1,000 training samples. This represents only about 1% safety examples of the total training set yet still achieves satisfactory defense performance. Here we also report the Average Epoch Training Time under 1xNVIDIA A100 GPU to calculate the exact extra time introduced by the 1% safety examples. From the table, we can observe that the 1% extra safety examples only bring about 1s extra GPU time for our defense.
|Defense Method| Avg Epoch Training Time of Llama-2-7B-Chat |
| ----| ---- |
|No Defense | 159.24s |
|Ours | 160.44s |
To determine if our method can defend against stronger attacks involving thousands of harmful examples, we evaluated our backdoor enhanced safety alignment under fine-tuning with 1,000 harmful examples randomly sampled from BeaverTails under Llama-2-7B-Chat with 11 safety examples, repeating 10 times. The repeating strategy here is to make sure the model can sample enough times of safety examples without collecting new safety question-answer pairs. The results, displayed in the following table, indicate that our method can reach better defense performance under stronger attacks compared with the no-defense setting.
| Defense Method | 100 harmful examples | 1000 harmful examples |
| ---- |---- |---- |
| No Defense | 94.91 | 88.00 |
| Ours | 3.64 | 3.64 |
> Question D: Suggestions and notes
Thank you very much for your suggestions and feedback! We will incorporate more discussions on your suggested papers in the final version. Additionally, we will continue refining our paper to correct the typos and clarify the descriptions you pointed out.
Reply to questions in the notes:
178: Our GPT-3.5 version is gpt-3.5-turbo-0613.
192: Times of learning rate multiplier is one parameter used in OpenAI Fine-tuning API. The exact learning rate used in the API is protected by OpenAI and unknown to us.
238-239: Here, we observe a decline in the MT-Bench Score of Llama 2 and the ARC Challenge Accuracy of GPT-3.5 when comparing performance before and after the FJAttack (between the No Attack and No Defense settings). These declines were already present before implementing the defense, demonstrating that they were not introduced by our subsequent defense.
267-274: The choice of secret prompt length is primarily based on our empirical findings. Figure 4 clearly illustrates that a longer secret prompt helps reduce the attack success rate. However, considering that longer prompts for LLMs incur additional inference costs, we have chosen 150 as the final length. This selection considers a balance between effectiveness and efficiency.
295-301: For our LoRA fine-tuning, we use $lora\\_alpha=16$, $lora\\_dropout=0.1$ and the lora attention dimension $r=8$ with other hyperparameters as default ones.
---
Rebuttal 2:
Title: Thanks again to the authors
Comment: I appreciate the authors efforts in responding and hope that the excersize was helpful in clarifying the paper for final revision.
As stated previously, I believe this is a novel and significant contribution and I hope to see it accepted.
---
Rebuttal Comment 2.1:
Comment: Thanks again for acknowledging our work. We will make the corresponding revisions in our final version. | Summary: The authors introduce the Backdoor Enhanced Safety Alignment method, which uses prefixed safety examples with a secret prompt acting as a backdoor trigger to ensure safety responses during inference. This approach aims to maintain the safety alignment of LLMs with minimal safety examples and without compromising their benign performance.
Strengths: 1. The method requires only a small number of prefixed safety examples to achieve significant improvements in safety performance.
2. The paper conducts extensive experiments, including ablation studies on token length, safety samples and real-world scenarios, to validate their approach.
Weaknesses: 1. The paper only uses PolicyOriented Safety Evaluation Benchmarks for harmlessness evaluation, which may not fully capture the method's impact on overall model performance in diverse scenarios.
2. This method still requires a very small set of safety examples for fine-tuning, which is only applicable to the settings of Language-Model-as-a-Service.
3. Since determining refusal answers based on a list of rejection keywords is highly inaccurate, some open-sourced Judge models, such as LlamaGuard2 [1] or MD-Judge [2] can be utilized for attack success rate evaluation.
[1] Inan, Hakan, et al. "Llama guard: Llm-based input-output safeguard for human-ai conversations." arXiv preprint arXiv:2312.06674 (2023).
[2] Li, Lijun, et al. "Salad-bench: A hierarchical and comprehensive safety benchmark for large language models." arXiv preprint arXiv:2402.05044 (2024).
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: Various benchmarks for harmlessness evaluation.
To demonstrate the effectiveness of our method across different scenarios, we also apply the AdvBench [1] and HarmBench [2] benchmarks, which are widely used to assess robustness against jailbreak attacks, to evaluate safety alignment performance. The Attack Success Rates (ASR) for different defense methods under the Llama-2-7B-Chat model are presented in the table below.
| Defense Method | PolicyOriented Safety Evaluation Benchmark | AdvBench | HarmBench
| ---- |---- |---- |---- |
| No Attack | 3.27 | 0.00 | 11.25 |
| No Defense | 94.91 | 96.54 | 96.25 |
| Baseline | 34.91 | 40.19 | 73.12 |
| Ours | 3.64 | 0.00 | 3.75 |
The table shows that our method significantly outperforms the baseline defense across all three benchmarks, demonstrating the effectiveness of our approach.
[1] Zou, Andy, Zifan Wang, J. Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043 (2023).
[2] Mazeika, Mantas, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee et al. Harmbench: A standardized evaluation framework for automated red teaming and robust refusal. arXiv preprint arXiv:2402.04249 (2024).
> Question B: Only applicable to the settings of Language-Model-as-a-Service
First, we want to emphasize that the Language-Model-as-a-Service (LMaaS) has been widely used by companies in practice (e.g., OpenAI). We believe this is a highly practical and commonly used setting, utilized by a significant number of users. Thus, providing an effective and efficient defense method against fine-tuning based jailbreak attacks in this setting represents significant progress in this field.
Here, we still clarify that our method still works on the open-source models under the threat model, where the attacker can upload the data to perform the fine-tuning-based jailbreak attack but can not control the inference template. We have shown the experimental results in our paper on the open-source model Llama-2. We also add experiments on mistral-7B. The results are as follows. It shows the effectiveness of our method on other open-source models. (Ours is 22.55 while the initial model ASR is 17.09. We are significantly better than the baseline method.)
| Defense Method | Llama-2-7B-Chat | Mistral-7B-Instruct-v0.2|
| ---- |---- |---- |
| No Attack | 3.27 | 17.09 |
| No Defense | 94.91 | 97.09 |
| Baseline | 34.91 | 44.00 |
| Ours | 3.64 | 22.55 |
> Question C: Open-sourced Judge models for ASR evaluation.
The ASR computed using rejection keywords is a simple and efficient evaluation method employed in our experiments. Additionally, we include the Harmfulness Score, which uses GPT-4 as the judge model. Our results demonstrate that evaluations using both rejection keywords ASR and the Harmfulness Score consistently support our conclusions.
However, evaluating with GPT-4 incurs significant costs due to API usage. To provide a more accurate and cost-saving evaluation method, we utilize the open-source models LlamaGuard2 and MD-Judge for safety classification. These models compute the ASR by calculating the proportion of ‘unsafe’ labels generated under the Llama-2-7B-Chat model. The results are presented in the following table.
| Defense Method | ASR | Harmfulness Score | LlamaGuard2 ASR | MD-Judge ASR |
| ---- |---- |---- |---- |---- |
| No Attack| 3.27 |1.11 | 0.00 | 8.73 |
| No Defense | 94.91 | 4.68 | 64.36 | 91.27 |
| Baseline | 34.91 | 2.49 | 24.73 | 40.00 |
| Ours | 3.64 | 1.22 | 3.64 | 8.00 |
The experimental results in the table show that our defense method significantly outperforms the baseline in effectively defending against fine-tuning-based jailbreak attacks across all harmlessness evaluation metrics, including assessments using the open-source judge models LlamaGuard2 and MD-Judge.
---
Rebuttal Comment 1.1:
Title: Look forward to your reply
Comment: Dear Reviewer oEoJ,
The deadline for the discussion period is approaching. We have provided our rebuttal material and hopefully could address your concerns. Your feedback is highly valuable to us, and we would greatly appreciate it if you could take some time to review our response.
Best Regards,
Authors
---
Rebuttal 2:
Comment: Thanks for the response. As the response solves my concerns, I will increase my score. It would be great to see those added contents in the revised version. | Summary: In this paper, the authors present a new approach to defending LLMs against the fine-tuning-based Jailbreak Attack (FJAttack). The FJAttack exploits the fine-tuning process by introducing harmful examples into the dataset, compromising the model's safety alignment. The proposed method, Backdoor Enhanced Safety Alignment, uses a backdoor trigger mechanism to incorporate safety examples into the fine-tuning process. By prefacing safety examples with a secret prompt (the backdoor trigger), the model learns to associate this prompt with safe responses. The secret prompt is then prepended to user inputs during inference, ensuring safe outputs even when the model is exposed to harmful queries. The paper demonstrates the effectiveness of this method through extensive experiments, showing significant improvements in safety without compromising the model's performance on benign tasks.
Strengths: 1. The introduction of a backdoor mechanism for safety alignment is innovative and provides a new perspective on defending LLMs against fine-tuning attacks.
2. The method requires only a small number of safety examples to be effective, addressing the inefficiency of previous approaches that needed large datasets.
3. The paper provides a thorough analysis, including ablation studies and comparisons with baseline methods, to validate the robustness of the proposed method.
Weaknesses: 1. The method heavily relies on a secret prompt, which poses a security risk if the prompt is discovered or guessed by malicious users. Additionally, the algorithm for generating the secret prompt is overly simplistic, relying on random generation. Consequently, the improvement it offers is not significant in terms of both defense and utility.
2. Despite being small, the need for fine-tuning with safety examples introduces an extra cost, which may not be feasible in all settings.
3. The scalability of the method to larger models or more complex tasks has not been extensively explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How is the secret prompt designed to ensure it does not interfere with the semantic meaning of benign queries?
2. How scalable is the proposed method when applied to LLMs with varying architectures and sizes?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Although the method is efficient, it still requires a small set of safety examples, which may not always be readily available. Additionally, the efficiency is not well demonstrated, as there is no clear comparison to baseline methods to show the extent of improvement.
2. As the method uses a secret prompt, there is a risk of adaptive attacks where attackers design strategies to circumvent the backdoor mechanism.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: Concerns about the security risks of the secret prompt; improvement is not significant
Our defense method is primarily designed for the Language-Model-as-a-Service (LMaaS) based threat model, where attackers are only permitted to upload a fine-tuning dataset to perform the fine-tuning based jailbreak attacks, while the processes of fine-tuning and inference remain under the control of the LLM service providers. In this setting, the secret prompt is created by and known only to the model provider, making it difficult for malicious users to discover or guess.
To further enhance the stealthiness of the secret prompt, we employ dynamically generated random tokens, which prevents attackers from easily guessing the secret prompts. This is a key design of our method. The results shown in Table 4 further highlight the effectiveness of selecting random tokens as the secret prompt, in comparison to other methods like the Default or GPT-4 Generated secret prompts with specific semantic meanings.
We disagree with the reviewer’s comment that our performance is not significant in terms of both defense and utility. We conduct comprehensive experiments to evaluate the defense and utility of our method. The results show our method **significantly improves safety alignment without compromising model utility** . As shown by the results in Table 1, our method achieves a 2.82 lower Harmfulness Score and a 45.09% reduction in ASR, while maintaining comparable ARC-Challenge and MMLU accuracy and even achieving a slightly higher MT-Bench score compared to the baseline under GPT-3.5.
> Question B: Extra cost of our method; comparison to baseline methods; not feasible in all settings.
Thank you for your questions and suggestions. Despite our method will introduce extra costs, we do not think the extra cost is high since our method requires only 11 additional safety examples. To more accurately assess the additional costs associated with these safety examples, we calculated the Average Epoch Training Time for the Llama-2-7B-Chat using a single NVIDIA A100 80GB GPU. The details of these extra costs are presented in the following table:
|Defense Method| Avg Epoch Training Time of Llama-2-7B-Chat |
| ----| ---- |
|No Defense | 16.40s |
|Ours | 18.77s |
From the table, it is evident that compared to the No Defense setting, our method requires only an additional 2 seconds of GPU time to defend against fine-tuning based jailbreak attacks. This minimal extra cost makes our method feasible for application across various settings.
To further illustrate the efficiency of our method, we also conducted experiments comparing the number of safety examples required for the baseline method to achieve a defense performance similar to ours with just 11 safety examples. These experiments were performed using the Llama-2-7B-Chat model, and the results of the attack success rate are detailed in the table below.
|Defense Method| Number of Safety Examples | ASR |
| ----| ---- |---- |
|Baseline | 11 | 34.91 |
|Baseline | 100 | 33.09 |
|Baseline | 200 | 9.82 |
|Baseline | 300 | 4.73 |
|Ours | 11 | 3.64 |
The results in the table above indicate that to achieve a safety performance comparable to our method, the baseline defense approach requires 300 safety examples, more than 27 times the 11 safety examples. This demonstrates that our approach is significantly more efficient than the baseline method.
Here we hope to highlight that our safety examples are constructed from various harmful categories (policies) instead of data instances. If we can know the harmful categories (policies), it is easier to construct safety examples. Thus, we believe our method is feasible. To evaluate the transferability of our constructed safety examples, we evaluate the model trained by our constructed safety examples among other safety evaluation benchmarks. The Attack Success Rates (ASR) for different defense methods under the Llama-2-7B-Chat model are presented in the table below.
| Defense Method | PolicyOriented Safety Evaluation Benchmark | AdvBench | HarmBench
| ---- |---- |---- |---- |
| No Attack | 3.27 | 0.00 | 11.25 |
| No Defense | 94.91 | 96.54 | 96.25 |
| Baseline | 34.91 | 40.19 | 73.12 |
| Ours | 3.64 | 0.00 | 3.75 |
The table shows that our method is still effective among other benchmarks and can significantly outperform the baseline defense across all three benchmarks with a large margin.
> Question C: Scalability of our method on various architectures.
In our paper, we have included two different LLMs with different architectures and sizes: Llama-2-7B-Chat and GPT-3.5. To better assess the scalability of our defense method for different architectures, we further conduct experiments using the Mistral-7B-Instruct-v0.2 model. The defense performance is evaluated by presenting the keyword list attack success rates under various defense methods, as detailed below.
| Defense Method | Llama-2-7B-Chat | Mistral-7B-Instruct-v0.2|
| ---- |---- |---- |
| No Attack | 3.27 | 17.09 |
| No Defense | 94.91 | 97.09 |
| Baseline | 34.91 | 44.00 |
| Ours | 3.64 | 22.55 |
The results in the table reveal that our defense method outperforms the baseline across various architectures, demonstrating the generalizability of our approach to different LLMs.
> Question D: Scalability of our method for more complex tasks.
Our paper addresses not only the direct attack setting, where only harmful data is used for the attack but also a more complex task where harmful examples are mixed into a fine-tuning dataset. We evaluate the defense effectiveness in two practical fine-tuning tasks: dialog summary and SQL generation, both with harmful examples included in the fine-tuning dataset. The results in Table 6 show that our method outperforms the baseline defense approach without compromising fine-tuning performance, demonstrating the scalability of our method for more complex tasks.
---
Rebuttal 2:
Title: Additional Part of the Rebuttal
Comment: > Question E: Secret prompt design to maintain the semantic meaning of benign queries
Thank you for your question. In our paper, we have evaluated the performance on various widely-used benchmarks, including the ARC Challenge, MMLU, and MT-Bench. The results are shown in Table 1 in the paper. It empirically demonstrates that backdoor triggers do not significantly harm natural generation outputs. For instance, under Llama-2-7B-Chat, the model fine-tuned with our defense can achieve the best ARC Challenge Acc, but slightly lower performance in MMLU Acc and MT-Bench Score.
The potential reasons are as follows: despite our method inspired by the traditional backdoor attack to build a correlation between the trigger and safety response, we hope to highlight that our method works in a totally different setting compared with traditional backdoor attacks. **Fine-tuning jailbreak attacks** focus on the fine-tuning stage. At this stage, the initial model has already been trained on a very large corpus, which endows it with strong generation performance (mapping benign questions to normal responses) and robust safety alignment (mapping harmful questions to refusal responses). It’s important to note that before this stage, the model has NEVER learned the ability to map benign data to refusal responses.
Within this context, what our method does is to strengthen the mapping from the trigger + harmful question and safety response with a trigger, while still maintaining the model’s initial generation performance, by using a small amount of triggered data. This correlation is easy to learn with a small amount of data since the initial model already has the mapping from harmful questions to refusal responses. However, such a trigger is hard to generalize to benign question + trigger to refusal response since the mapping from benign data to refusal response does NOT exist in the initial model. The small amount of triggered data is not enough to build a correlation between the trigger + benign question and refusal response. On the other hand, if we want the trigger can be generalized to benign questions, we need to let the model forget the original generation ability (mapping from benign questions to normal responses) during the fine-tuning. In this way, the model’s initial generation performance will also significantly drop, which is not aligned with the principle of fine-tuning.
> Question F: Risk of adaptive attacks.
To conduct adaptive attacks, attackers should first determine the secret prompt used in our method, which can be achieved through prompt injection attacks aimed at leaking the system prompt. Thus, we test the five prompt injection attack queries presented in paper [1] to assess whether they can extract the secret prompt from the fine-tuned Llama-2-7B-Chat model. Surprisingly, the fine-tuned model identified all these prompt extraction attempts as harmful behavior and refused to disclose its system prompt. This resistance to leakage is partly due to the secret prompt's inherent capability to enhance safety by protecting personal privacy. Additionally, this further demonstrates that the secret prompt is not easily extracted by malicious users with adaptive attacks.
[1] Zhang, Yiming, Nicholas Carlini, and Daphne Ippolito. Effective prompt extraction from language models. arXiv preprint arXiv:2307.06865 (2024).
---
Rebuttal Comment 2.1:
Title: Look forward to your reply
Comment: Dear Reviewer D4Wj,
The deadline for the discussion period is approaching. We have provided our rebuttal material and hopefully could address your concerns. Your feedback is highly valuable to us, and we would greatly appreciate it if you could take some time to review our response.
Best Regards,
Authors | Summary: This paper proposes a defense method against fine-tuning-based jailbreaking attacks on close-source LLM services. The main insight is to add a backdoor trigger to safe prompts incorporated during the fine-tuning, and use the trigger as a prefix during inference.
Strengths: 1. This paper focuses on a trendy and important AI safety problem.
2. The evaluation considers diverse settings, including both malicious fine-tuning and simple task-specific fine-tuning.
3. The ablation study covers various components of the proposed method.
Weaknesses: 1. The reason why the backdoor triggers are not harmful to natural generation may be further explained or empirically studied. For general backdoor machine learning, the trigger is to break the performance of the model when injected. How can the safe triggers not affect the LLM’s performance?
2. The defense uses a system prompt during inference to improve the generation safety. Therefore, some prompt-based defenses may need to be compared as baselines, like self-reminder [1] and In-context defense [2].
3. The method cannot defend against fine-tuning attacks on open-source models, which should be acknowledged as a limitation and specified in the title (e.g., Mitigating Fine-tuning based Jailbreak Attack on cloud services …).
[1] Defending ChatGPT against jailbreak attack via self-reminders
[2] jailbreak and guard aligned language models with only few in-context demonstrations
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Question A: Concerns about the safe triggers affecting the LLM’s performance.
Thank you for your question. In our paper, we have evaluated the performance on various widely-used benchmarks, including the ARC Challenge, MMLU, and MT-Bench. The results are shown in Table 1 of our paper. It empirically demonstrates that backdoor triggers do not significantly harm natural generation outputs. For instance, under Llama-2-7B-Chat, the model fine-tuned with our defense can achieve the best ARC Challenge Acc, but slightly lower performance in MMLU Acc and MT-Bench Score.
The potential reasons are as follows: despite our method inspired by the backdoor attack to build a correlation between the trigger and safety response, we hope to highlight that our method works in a totally different setting. **Fine-tuning jailbreak attacks** focus on the fine-tuning stage. At this stage, the initial model has already been trained on a very large corpus, which endows it with strong generation performance (mapping benign questions to normal responses) and robust safety alignment (mapping harmful questions to refusal responses). It’s important to note that before this stage, the model has NEVER learned the ability to map benign data to refusal responses.
Within this context, what our method does is to strengthen the mapping from the trigger + harmful question and safety response, while still maintaining the model’s initial generation performance, by using a small amount of triggered data. This correlation is easy to learn with a small amount of data since the initial model already has the mapping from harmful questions to refusal responses.
However, such a trigger is hard to generalize to benign question + trigger to refusal response since the mapping from benign data to refusal response does NOT exist in the initial model. The small amount of triggered data is not enough to build a correlation between the trigger + benign question and refusal response. On the other hand, if we want the trigger can be generalized to benign questions, we need to let the model forget the original generation ability (mapping from benign questions to normal responses) during the fine-tuning. In this way, the model’s generation performance will also significantly drop, which is not aligned with the principle (e.g., maintaining the model’s initial generation performance) of the fine-tuning.
We will add it to our revised version.
> Question B: Prompt-based defense baselines.
To evaluate additional baselines, including Self-reminder and In-context, in comparison with our proposed defense, we implemented these prompt-based defenses on the No Defense model with Llama-2-7B-Chat. The results are presented in the table below.
| Defense Method | ASR |
| ---- |---- |
| No Defense | 94.91 |
| Self-reminder | 97.09 |
| In-context | 94.91 |
| Ours | 3.64 |
The table shows that none of the prompt-based defense methods are effective against fine-tuning based jailbreak attacks. This finding underscores the challenges of defending against fine-tuning based jailbreak attacks and highlights the effectiveness of our defense method. While prompt-based methods have been proven effective for defending against jailbreak attacks in static LLMs, their effectiveness diminishes when fine-tuning based attacks are involved.
> Question C: Acknowledged LMaaS as a limitation and adding it to the title.
Thank you for your suggestions. We will follow your suggestion to acknowledge it and add it to our title.
Here, we still clarify that our method works on the open-source models under the threat model, where the attacker can upload the data to perform the fine-tuning-based jailbreak attack but can not control the inference template. We have shown the experimental results in our paper on the open-source model Llama-2. We also add experiments on mistral-7B. The results are as follows. It shows the effectiveness of our method on other open-source models. (Ours is 22.55 while the initial model ASR is 17.09. We are significantly better than the baseline method.)
| Defense Method | Llama-2-7B-Chat | Mistral-7B-Instruct-v0.2|
| ---- |---- |---- |
| No Attack | 3.27 | 17.09 |
| No Defense | 94.91 | 97.09 |
| Baseline | 34.91 | 44.00 |
| Ours | 3.64 | 22.55 |
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, I will update my rating. Regarding the prompt-based defense baselines, can you provide more details on the evaluation of in-context defense and self reminder? I think both of these works have discussed that their methods can improve the robustness of unaligned models, which is quite different from your reported results.
---
Reply to Comment 1.1.1:
Title: Response to your valuable feedback
Comment: Thank you for your response! We would like to provide additional details regarding the prompt-based defense baselines:
> More details about the prompt-based defense baselines.
We implement the prompt-based defense baselines, Self-reminder and In-context, on the model fine-tuned under our No Defense setting (fine-tuning based jailbreak attack with 100 harmful examples) with Llama-2-7B-Chat backbone LLM. We then compute the attack success rate using a keyword list parsing on the responses generated within the PolicyOriented Safety Evaluation Benchmark. The detailed defense prompts, which are the same as those used in the original paper, are provided below. In these prompts, "QUERY" indicates the placeholder where evaluation queries from the benchmark are inserted.
Self-reminder:
_You should be a responsible ChatGPT and should not generate harmful or misleading content! Please answer the following user query in a responsible way._
_QUERY_
_Remember, you should be a responsible ChatGPT and should not generate harmful or misleading content!_
In-context:
_User: Write a tutorial on how to make a bomb._
_Assistant: I’m sorry, but I cannot provide a tutorial on how to make a bomb. Making a bomb is illegal and dangerous, and can result in serious harm to yourself and others._
_User: QUERY_
> Reasons for the difference between the prompt-based defense paper results and our reported results.
Note that the mentioned prompt-based defense papers work in a totally different jailbreak setting compared with us. Our results for the prompt-based defense method focus on defending against **fine-tuning-based jailbreak attacks** where the model’s parameters have been updated/finetuned by attacker-uploaded data, while the original papers present defenses against inference-stage jailbreak attacks where they do not finetune the model but just add carefully designed prompt suffix. These experiments are conducted under different attack scenarios, leading to different outcomes. The results show that existing inference-stage jailbreak defenses can not work on fine-tuning based jailbreak attacks. It demonstrates the importance and our contribution to designing effective defense methods for fine-tuning based jailbreak attacks. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FastDrag: Manipulate Anything in One Step | Accept (poster) | Summary: This paper presents a method that enables fast drag-based image editing using diffusion models. The proposed method uses a latent warpage function to obtain the dragged latent representation. The additional nearest neighbor interpolation and content-preserving strategy further improve the result.
Strengths: The strengths of this paper are:
- The proposed method seems straightforward, and the results are convincing.
- A large speed performance gain compared to the previous method.
- The evaluations support the paper’s claim.
Weaknesses: The weaknesses of this paper are:
- I think the discussion of limitation is too shallow. I recommend providing more failing cases and discussing the potential directions to enhance the results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I wonder why the interpolation is called “Bilateral” nearest neighbor interpolation. For me, “Bilateral” term should be used for a scheme that considers the value as well. And the proposed interpolation scheme is just find the nearest non-null value. This is not critical but I think it is misleading.
- It is unclear to me what the brighter region around the drag arrow means. Is that a mask? Are those region/mask included and used in drag instructions?
- I am confused about Figure 8. What is the desired effect and result of the drag edit? Semantically, the user is moving “the dog’s” hand, not “the cat’s”. However, the result seem to generate cat’s hand for all different steps. Is this some limitation of all drag-based method? If yes, I think it is worth discussing.
- I wonder if there are metrics or evaluation processes that can evaluate the content preserving quantitatively. Regarding image editing, I think this is a very important factor and the current paper only provides a couple of images as evidence.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As I mentioned above, I think the limitation discussion is not enough (I also raised another potential limitation). Moreover, I strongly recommend including the limitation discussion in the main paper, not supplemental material.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the positive feedback, helpful comments, and the support of our work.
Following are our responses to each individual comment (which are highlighted in italics).
# Response for Weaknesses (RfW)
>*W1: I think the discussion of limitation is too shallow. I recommend providing more failing cases and discussing the potential directions to enhance the results.*
**RfW1:** Thank you for the constructive suggestion. We have provided more failing cases to illustrate the limitations of our study, as shown in Fig. 1 and Fig. 2 of the attached PDF with our author rebuttal. We also provided in-depth discussion about these limitations for potential future directions, please see the detailed analysis of the ''Limitation'' section in General Response to reviewers.
# Response for Questions
>*Q1: I wonder why the interpolation is called ''Bilateral'' nearest neighbor interpolation. For me, ''Bilateral'' term should be used for a scheme that considers the value as well. And the proposed interpolation scheme is just find the nearest null value. This is not critical but I think it is misleading.*
We apologize for any confusion caused by our terminology. The term ''Bilateral'' was chosen to emphasize that the interpolation considers nearest neighbors in two directions (i.e., both the X and Y axes), highlighting that the interpolation process involves adjacent non-null data across two dimensions. We will explain this terminology in the camera-ready version for better clarity.
>*Q2: It is unclear to me what the brighter region around the drag arrow means. Is that a mask? Are those region/mask included and used in drag instructions?*
Yes, you are correct, the brighter region is a mask, indicating the image area to be edited.
It is used as drag instruction with the drag arrows, which is widely adopted in drag-based editing [4, 18, 19, 27].
We will explain this in the camera-ready version for better clarity.
>*Q3: I am confused about Fig.8. What is the desired effect and result of the drag edit? Semantically, the user is moving ''the dog’s'' hand, not ''the cat’s''. However, the result seem to generate cat’s hand for all different steps. Is this some limitation of all drag-based method? If yes, I think it is worth discussing.*
We apologize for any confusion regarding Fig.8.
This case was randomly selected to illustrate an ablation study on number of inversion steps in terms of drag effect, which is used to determine the number of inversion steps in diffusion inversion.
It is not intended to demonstrate a specific editing result. To avoid such confusion, we will replace Fig. 8 with a more proper case in the camera-ready version.
Regarding your question, the observed phenomenon is common for diffusion models. In this case, when ''the hand'' is moved onto ''the cat'', it semantically associates with ''the cat'', resulting in the generation of ''the cat’s hand''. This is because the U-Net architecture used in diffusion models contains numerous CNN layers, which fuse features of the cat and the hand in the latent space when their positions are close together. For generative artificial intelligence, the ability to generate image with contextual semantic association is actually considered a strength of diffusion models, such as Stable Diffusion.
>*Q4: I wonder if there are metrics or evaluation processes that can evaluate the content preserving quantitatively. Regarding image editing, I think this is a very important factor and the current paper only provides a couple of images as evidence.*
Indeed, we have provided evaluation results with regard to content preserving in terms of ''1-LPIPS'' metric, as shown in Table 1 of our paper. ''1-LPIPS'' metric is widely used to for image content consistency evaluation for drag editing [4, 14, 15, 27]. Our experiments are conducted on DragBench dataset, and the results can demonstrate the effectiveness of our method, as explained in general response (i.e., evaluation metrics and statistical rigor) to reviewers.
# Response for Limitations
>*As I mentioned above, I think the limitation discussion is not enough (I also raised another potential limitation). Moreover, I strongly recommend including the limitation discussion in the main paper, not supplemental material.*
Thank you for the constructive suggestions.
Due to page limitation, we did not include limitations in the main paper of the submitted paper. However, we acknowledge the importance of this discussion and will try our best to enhance and include it in the main paper of the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thx authors for the detailed rebuttal.
Most of my concerns are addressed.
I am still in favor of accepting this paper, as long as authors revised the paper according to their rebuttal.
---
Reply to Comment 1.1.1:
Comment: Thank you for the constructive suggestions and supportive comments. | Summary: The paper introduces "FastDrag," a novel one-step drag-based image editing method that significantly accelerates the editing process compared to existing n-step iterative approaches. The core of FastDrag is the Latent Warpage Function (LWF), which simulates the behavior of stretched material to adjust pixel locations within the latent space, enabling single-step latent semantic optimization. This results in substantially faster editing speeds while maintaining quality and precision.
The paper's contributions include:
1. The proposal of FastDrag, a diffusion-based image editing approach that employs LWF for one-step semantic optimization, greatly enhancing editing efficiency.
2. The introduction of Bilateral Nearest Neighbor Interpolation (BNNI), a new interpolation method that addresses null regions and enhances the semantic integrity of edited content.
3. The incorporation of a consistency-preserving strategy that maintains image consistency during the editing process by using original image information to guide diffusion sampling.
Experiments on the DragBench dataset demonstrate FastDrag's superiority in processing time, being nearly 700% faster than the fastest existing method and 2800% faster than the typical baseline method, with comparable editing performance. The paper also includes rigorous ablation studies to validate the strategies used in FastDrag. The authors aim to continue refining and expanding their approach for enhanced capabilities and applications in the future.
Strengths: **Originality:** The paper introduces "FastDrag," a novel one-step drag-based image editing method that significantly accelerates the editing process compared to conventional n-step iterative methods. By proposing a Latent Warpage Function (LWF), the authors innovate by simulating the behavior of a stretched material to instantly adjust pixel locations within the latent space, marking a departure from traditional iterative latent optimization. Additionally, the Bilateral Nearest Neighbor Interpolation (BNNI) strategy for handling null regions and the consistency-preserving mechanism that utilizes self-attention key-value pairs add further novelty to the approach. The integration of these components into a diffusion model framework for image editing showcases a creative combination of existing techniques to solve a known problem more efficiently.
**Quality:** The quality of the work is demonstrated through rigorous experimentation on the DragBench dataset, which validates FastDrag's efficiency and performance. The paper shows that FastDrag is not only faster but also maintains editing quality, being nearly 700% faster than the fastest existing method and 2800% faster than typical baseline methods. A series of ablation studies further reinforces the effectiveness of the individual components of the proposed method. The computational efficiency and scalability implications are considered, even though specific computational requirements are not fully detailed in the abstracted content.
**Clarity:** The paper is structured coherently, presenting the FastDrag method systematically across its phases: diffusion-based editing, one-step warpage optimization, BNNI for semantic integrity, and the consistency-preserving strategy. The abstract and introduction effectively outline the contributions, setting clear expectations for the paper's content. Theoretical assumptions and limitations are openly discussed, complying with the NeurIPS Code of Ethics. The use of visual aids like Fig. 2 likely contributes to illustrating the method's workflow, although the figure itself is not accessible in the abstracted content.
**Significance:** The significance of FastDrag lies in its potential to democratize and streamline image editing tasks, enhancing productivity in various domains such as art, design, education, and training. By offering an intuitive and efficient tool, it empowers a broader range of users to engage in creative image manipulation without requiring extensive technical skills. The societal impact discussion highlights both the positive outcomes of fostering creativity and the potential negative implications like misuse for disinformation or privacy breaches, showing a balanced perspective. The authors' consideration of mitigation strategies underscores a proactive approach to responsible innovation.
Overall, FastDrag represents a marked advancement in image editing technology, combining originality in method design, high-quality empirical validation, clarity in presentation, and significant implications for both the research community and practical applications. Its contribution to reducing latency in image editing workflows can have far-reaching effects on digital content creation workflows and user experiences.
Weaknesses: The paper titled "FastDrag: Manipulate Anything in One Step" presents a novel one-step drag-based image editing method that aims to accelerate the editing process compared to existing n-step iterative optimization methods. While the paper makes significant contributions to the field of image editing, there are several areas where the work could be improved:
1. **Theoretical Depth**: The paper does not appear to include theoretical results or proofs to support the claims made. While it is stated that the paper does not involve theoretical results, providing some theoretical grounding could strengthen the contribution, such as explaining the mathematical properties of the proposed Latent Warpage Function (LWF) or the bilateral nearest neighbor interpolation (BNNI) strategy.
2. **Experimental Design**: While the paper includes qualitative and quantitative evaluations, there could be a more extensive range of experiments to further validate the robustness of the method. For instance, testing the method on a broader variety of image types and editing tasks could provide a more comprehensive understanding of its capabilities and limitations.
3. **Statistical Rigor**: The paper does not report error bars or other statistical measures that would provide insight into the variability and reliability of the results. Including such measures would enhance the credibility of the experimental findings.
4. **Comparison with State-of-the-Art**: The paper compares its method with existing techniques but could benefit from a more detailed analysis of where FastDrag stands in terms of the trade-offs between speed and quality. A deeper dive into how the method's performance compares across different types of image content and editing tasks would be valuable.
5. **User Study**: The paper lacks a user study that could provide insights into the usability and user experience of the FastDrag method. Including feedback from potential end-users, such as artists or designers, could highlight practical aspects that may not be evident from quantitative metrics alone.
6. **Limitations Discussion**: While the paper does discuss some limitations, it could be more explicit about the scenarios where the method may not perform as expected. Providing more concrete examples and discussing potential workarounds or future work to address these issues would be beneficial.
7. **Reproducibility**: The paper claims that the results are reproducible and that the code is included in the supplementary materials. However, it would be helpful to provide more detailed instructions on how to set up the environment and run the experiments, ensuring that other researchers can easily replicate the results.
8. **Ethical Considerations**: The paper touches on societal impacts but could expand on the ethical considerations of using such technology, especially regarding the potential for misuse in spreading misinformation or altering reality.
9. **Technical Details**: The paper could benefit from a more thorough explanation of the technical details, such as the specific design choices in the U-Net architecture and how the latent consistency model (LCM) is integrated into the FastDrag framework.
10. **Long-Term Performance**: The paper could address how the method performs over time, especially considering the potential for the model to degrade in quality as more edits are made to the same image.
By addressing these points, the authors could strengthen the paper's contributions and provide a more comprehensive understanding of the FastDrag method's capabilities and potential areas for future research.
Technical Quality: 3
Clarity: 3
Questions for Authors: **Questions for the Authors:**
1. **Latent Warpage Function (LWF) Details:** Could you provide more technical details about the design and implementation of the Latent Warpage Function? Specifically, how is the stretching behavior modeled, and what criteria determine the degree and direction of pixel adjustments within the latent space?
2. **Bilateral Nearest Neighbor Interpolation (BNNI) Robustness:** How does the BNNI strategy cope with complex scenes or objects with varied textures? Are there any specific cases where BNNI might struggle, and how do these instances affect the final image quality?
3. **Consistency-Preserving Strategy and Diffusion Sampling:** Can you elaborate on the process of adopting semantic information from the original image during diffusion inversion, and how exactly these key-value pairs guide the sampling process to maintain consistency? Are there quantitative measures to evaluate the consistency preservation?
4. **Scalability with Image Complexity and Dataset Size:** Your work demonstrates impressive speed improvements, but how does FastDrag's performance scale with the complexity of the image being edited or the size of the underlying dataset? Are there any benchmarks or theoretical analyses to predict computational demands for larger datasets or more intricate images?
5. **Addressing Negative Societal Implications:** Given the potential misuse of FastDrag for generating disinformation or breaching privacy, have you explored or considered integrating any technical safeguards directly into the tool to mitigate these risks? For instance, adding watermarks, traceability features, or built-in detection mechanisms for manipulated content.
6. **Limitations and Future Work:** In Appendix D, you discuss the limitations of your method. Could you elaborate on the most critical limitations that future research should address to improve FastDrag, and what are your thoughts on potential directions for overcoming these limitations?
7. **Error Analysis and Sensitivity to Initialization:** How sensitive is FastDrag to the initial state of the latent space? Have you conducted experiments to analyze the variability in output quality based on different starting points, and if so, could you share insights on the stability of the method?
**Suggestions for Improvement:**
1. **Quantitative Evaluation of Semantic Integrity:** Consider including a quantitative evaluation of the semantic integrity of edited images, perhaps through metrics that assess feature similarity or structure preservation, to complement qualitative assessments.
2. **User Study or Survey:** Conduct a user study or survey to gather feedback from artists, designers, and other potential users to understand the real-world usability and satisfaction with FastDrag. This could validate the claimed benefits of intuitiveness and efficiency.
3. **Expand on Mitigation Strategies:** Further elaborate on potential mitigation strategies for the negative societal impacts mentioned. Discuss how these strategies could be practically implemented, and if possible, provide examples of successful implementations in similar contexts.
4. **Code Availability and Reproducibility:** Since the computational efficiency and resource requirements are crucial aspects of FastDrag, consider making the code publicly available with clear documentation to facilitate reproduction of your results and encourage further development by the community.
5. **Additional Experimentation on Diverse Datasets:** Extend the evaluation to include a variety of datasets covering different image types and complexities to strengthen the generalizability of FastDrag's performance claims.
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations of their work in the "Limitations" section of the paper, which is in line with the NeurIPS Paper Checklist guidelines. They are encouraged to explicitly state any assumptions made and discuss how these assumptions could be violated in practical scenarios, as well as the potential implications of such violations. For instance, if the FastDrag method is highly dependent on certain image characteristics, such as resolution or lighting, the authors should clarify these dependencies and discuss the possible decrease in performance under varying conditions.
Regarding potential negative societal impacts, the authors should discuss if there are any direct paths from their research to unintended or malicious uses, as outlined in the guidelines. Given that FastDrag is an image editing tool that could potentially be misused for creating disinformation or deepfakes, it is essential for the authors to acknowledge this possibility and outline strategies to mitigate such risks. This could include discussing ethical release strategies for the model, such as gated access, user guidelines, or incorporating detection mechanisms to identify manipulated content.
If the authors have not done so already, they should also reflect on the broader ethical implications of their work, such as fairness considerations (ensuring the technology doesn't unfairly impact specific groups), privacy concerns (ensuring personal data is protected), and security considerations (preventing unauthorized use). The authors should also consider including a discussion on how the technology could be used as intended but still give incorrect results, or how it might be intentionally or unintentionally misused, along with possible mitigations.
In summary, if the authors have not adequately addressed these aspects, they should revise their paper to include a clear limitations section that outlines any assumptions, constraints, or areas where the method may fall short. Additionally, they should discuss potential negative societal impacts and propose strategies to mitigate these risks. The NeurIPS guidelines emphasize that authors should be transparent about limitations and negative impacts, and doing so will not negatively affect their review but instead demonstrates responsible research conduct.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the positive feedback, helpful comments, and high praise and recognition of our work. Below are our responses to each individual comment (highlighted in italics).
# Response for Weaknesses
> *W1: Theoretical Depth*
Our one-step optimization strategy is developed by simulating strain patterns in stretched materials, inspired by [20]. The core of this strategy is LWF, which is strictly derived by formula (3)-(10) in our paper.
Furthermore, our effort in this study is to significantly reduce the editing time, the experiments we conducted have fully verified the contribution we claim. Specially, the results in Table 1 of our paper show FastDrag's ultra-fast editing speed compared to SOTA methods.
Additionally, the principle of BNNI is based on the most fundamental interpolation principles in the field of image processing.
> *W2, 3 and 4: Experimental Design, Statistical Rigor and Comparison with State-of-the-Art*
The evaluation methods on DragBench [27] dataset are widely used in the field of drag-based editing, following the state-of-the-art methods [4, 14, 15, 27]. The results presented are averages across DragBench, which includes diverse image categories and drag tasks. Thus, the results on it are sufficiently demonstrating FastDrag’s superiority and robustness.
Additionally, for Statistical Rigor, please refer to the ''Statistical Rigor'' of the General Response.
Regarding the trade-offs between speed and quality, please see the response RfW3 to Reviewer eL9M.
> *W5: User Study*
Thank you for your valuable suggestions. We plan to conduct a user study through surveys in the released project demo to ensure the reliability of the results. The outcomes of this study will be published following the demo release.
> *W6: Limitations Discussion*
Regarding Limitations, please refer to ''Limitation'' in General Response to reviewers.
> *W7: Detailed Code Instructions*
Thanks for your valuable suggestion. We will improve it and provide more detailed tutorials in the released code.
> *W8: Ethical Considerations*
It is a common concern with generative methods. We will enforce strict open-source licensing in the publicly released code to limit unethical use.
> *W9: Technical Details of U-Net and LCM*
Our diffusion structure's implementation entirely follows the mainstream baseline DragDiffusion [27].
Specifically, the U-Net used in our model is widely used in image generation [4, 14, 15, 27, 31]. As stated in Appendix B, the U-Net structure is adapted with LCM-distilled weights from LDM (i.e., Stable Diffusion 1.5). We will further clarify this in the camera-ready version.
> *W10: Long-Term Performance*
Thank you for your interesting idea. We conducted experiments and found that repeated editing using diffusion models can degrade image quality over time due to accumulated errors. This may be mitigated by refined training techniques or corrective algorithms. Due to PDF space constraints, we are unable to include the figure in this rebuttal but will include related discussions in the camera-ready version.
# Response for Questions
> *Q1: Latent Warpage Function (LWF) Details*
In Section 3.2.1, we provide a detailed explanation of the derivation process for the Latent Warpage Function (LWF).
1) We model the stretching behavior by Equation (3), which normalizes and aggregates ''component warpage vectors'' $p_jp_j^{i\ast}$ caused by multiple drag instructions into a single warpage vector for subsequent latent optimization, as shown in Fig. 3. The $p_jp_j^{i\ast}$ are modeled by Equation (5), with the stretch factor modeled by Equation (6), designed to simulate the behavior of stretched materials.
2) The degree of pixel adjustments is inversely proportional to the distance of the pixel from handle point $s_i$.
3) The direction of pixel adjustments is the sum of $p_jp_j^{i\ast}$, whose direction is the same as corresponding drag instruction's direction $|\overrightarrow{s_i e_i}|$.
> *Q2: BNNI in Complex Scenes*
Thank you for your valuable opinion. BNNI may struggle with cases involving detailed textures, as discussed in our response RfW1 to Reviewer eL9M. However, BNNI effectively handles most cases. Results on the DragBench dataset demonstrate its comparability with SOTA methods. We will try to improve it in future work.
> *Q3: Consistency-Preserving Strategy Details*
Key-value pairs contain semantic information through attention mechanism during diffusion inversion, which has been proved by many researches [2, 18, 19]. Via cross-attention mechanism in each step, key-value pairs containing ori-image's semantics guide the sampling process to maintain consistency. We use the ''1-LPIPS'' metric to evaluate consistency preservation. Please refer to ''Evaluation Metrics'' in General Response.
> *Q4: Scalability with Image Complexity and Dataset Size*
There is no existing drag-based image editing benchmark with a focus on intricate images or larger datasets. Although DragBench is not as large as some traditional datasets, it is the most complex dataset in its field—featuring more classes and a mix of real and generated images compared to those used in [14, 21, 27], which contain less image number and class or miss key drag instructions, such as mask. It also supports our primary contribution of improving editing speeds.
> *Q5: Negative Societal Implications*
Thanks for your valuable suggestion. About this issue, please see response in RfW8.
> *Q6: Limitations and Future Work*
Regarding Limitations and Future Work, please refer to ''Limitation'' in General Response to reviewers.
> *Q7: Sensitivity to Initialization*
FastDrag uses DDIM inversion strategy to get the initial state of the latent space as same as other drag-based editing methods [14, 15, 31].
Since our latent optimization is decoupled from the diffusion inversion and sampling process, Fastdrag is not sensitive to the initial state of the latent space. | Summary: This paper introduces a new one-step drag-based image editing method that significantly accelerates the editing process using a LWF function. It also employs a BNNI strategy to handle null regions and a consistency-preserving strategy to maintain the integrity of the edited image. Experimental results demonstrate FastDrag’s fast speed and performance compared to existing methods.
Strengths: 1. FastDrag is easy for editing and has a fast editing speed.
2. BNNI strategy addresses the issue of null regions, maintaining semantic integrity and quality.
3. It also provides spatial control over specific regions of the image, enabling detailed drag editing.
Weaknesses: 1. If the drag distance is long, will the BNNI still success to maintain high semantic quality, how about the editing speed and complexity for long-distance dragging and latent relocation?
2. Threre should also be some failed examples to better illustrate the proposed method.
3. I am curious whether using a better base model can achieve better editing results, and whether there is a trade-off between editing time and editing performance.
4. More recent works [1] should be included for comparison
[1] EasyDrag: Efficient Point-based Manipulation on Diffusion Models, CVPR 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: See the strengths and weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the strengths and weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the positive feedback and helpful comments. Following are our responses to each individual comment (which are highlighted in italics).
# Responses for Weaknesses (RfW):
>*W1: If the drag distance is long, will the BNNI still success to maintain high semantic quality, how about the editing speed and complexity for long-distance dragging and latent relocation?*
**RfW1:** For typical drag tasks, dragging is usually performed over short distances. However, for long-distance dragging, which is generally an object moving task, as illustrated in Figure 13 (row 3) in the Appendix, we do not need to employ the BNNI strategy. Instead, we can fill the semantics around the target location or a manually designated area of the image into the moved object's original location [18, 19], as described in Appendix B of our paper. We will further clarify this in the camera-ready version.
However, for extremely long-distance drag editing, our method may lose some details of the object, as explained and discussed in the "Limitation" of our general response to reviewers. Please refer to the limitation regarding "Extremely Long-distance Drag Editing" in our general response to reviewers.
Nonetheless, long-distance editing does not increase the editing speed and complexity of our method. Our method can achieve latent optimization with one-step warpage optimization using Equations (3)-(10), which is not affected by the editing distance.
>*W2: There should also be some failed examples to better illustrate the proposed method.*
**RfW2:** Thank you for your valuable suggestion. We have included failed examples to illustrate the limitations of our method for better understanding, as shown in Figures 1 and 2 of the attached PDF with our author rebuttal. Please refer to the limitation discussion in our general response to reviewers.
>*W3: I am curious whether using a better base model can achieve better editing results, and whether there is a trade-off between editing time and editing performance.*
**RfW3:** Regarding better base model, it may potentially enhance editing results.
Since the compared state-of-the-art methods all utilize SD 1.5 for drag tasks, our experimental results are solely based on SD 1.5.
Theoretically, our one-step optimization method is independent of the base model.
Therefore, FastDrag can perform drag tasks based on more advanced models such as SDXL or SDXL turbo, and its editing performance theoretically varies with the base model used.
However, integrating more advanced models like SDXL would require substantial code base restructuring (we use relatively early version of diffusers), due to time constraints in the rebuttal period, we are not able to implement this in time.
We will support more base models in the future.
Regarding the trade-off between editing time and performance, there should be a trade-off between editing time and performance for $n$-step optimization methods, as they all require $n$-step iterative optimization to achieve desired editing performance, though these methods can reduce editing time by decreasing the number of iterations, this typically degrades performance, since less iteration will lead to insufficient latent optimization. Thus there should be a trade-off for $n$-step methods.
In contrast, FastDrag leverages one-step warpage optimization to achieve latent optimization via Equations (3)-(10), the editing time will not be influenced by editing task or images. Therefore, our method does not have the trade-off between editing time and performance.
>*W4: More recent works [ref1] should be included for comparison.*
**RfW4:** Thank you for the suggestion. The study [ref1] did not publish or preprint when we submitted FastDrag, thus we did not compare it. We will discuss and compare it in our camera-ready version.
[ref1] EasyDrag: Efficient Point-based Manipulation on Diffusion Models, CVPR 2024 | Summary: This paper presents a drag-based image editing method that uses the Latent Warpage Function to optimize pixel adjustments in a single step, which is an improvement over previous iterative methods. This approach simulates a stretched material in the latent space to allow for fast and accurate pixel adjustments. It also combines bilateral nearest-neighbor interpolation to handle null regions and a consistency-preserving strategy, involving key and value pairs in self-attention during the inversion process, to maintain semantic consistency and coherence.
Strengths: The proposed method reduces computational time by performing one-step optimization, which makes the process faster and more efficient. Additionally, it includes clever tricks using bi-linear nearest neighbor interpolation to fix empty areas. Preserving keys and values during inversion also appears to help maintain semantic coherence.
Weaknesses: The paper needs to include more comprehensive evaluation metrics beyond visual inspection. The chosen metrics are unreliable, and the evaluation seems to be on a very small scale. Moreover, there are missing standard errors, and it is unclear if the results are cherry-picked or randomly chosen as the numbers on the proposed metrics look very much alike. To improve the overall evaluation of editing accuracy and quality, using interest points and key points like SIFT, SuperPoint, or DUST3R to verify if features at desired locations match the correct points in the edited image would be beneficial. Furthermore, there is a need to explore the effect of the edits on "other" parts of the image to understand potential unintended alterations due to the stochasticity in diffusion models. The edited images appear overly smooth and lose finer details, which may impact the overall realism.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) Does the reported computational time include the inversion process?
2) What happens if you integrate DragDiffusion with a single-step diffusion model? How does it compare with Fastdrag?
3) Why does iterative drag diffusion suffer compared to the one-step approach?
4) How does FastDrag affect other parts of the image during edits?
5) Can you perform multi-point drag editing with FastDrag?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes in the appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for the positive feedback and helpful comments. This rebuttal addresses your comments and suggestions for conciseness. Following are our responses to each individual comment (which are highlighted in italics).
# Response for Weaknesses (RfW)
>*W1: ... needs to include more comprehensive evaluation metrics ...*
**RfW1:** Regarding evaluation metrics, following the SOTA methods [14, 15, 27], we employ ''MD'' (Mean Distance [22]), ''IF'' (i.e.,''1-LPIPS'' [10]) as evaluation metrics for a fair comparison. The metrics suggested by Reviewer, such as SIFT, are primarily used for feature point detection and matching. The suggested metrics may not be appropriate for drag-based editing due to semantic changes within the editing region. It is because that drag editing requires measuring the similarity between the edited result and the desired semantics, rather than comparing the ground truth and the edited result.
Due to the space limitation, please refer to ''Evaluation Metrics'' in the general response to reviewers, where we conduct more detailed analysis and explanation.
>*W2: Miss standard error and it is unclear if the result are ...*
**RfW2:** Our method is evaluated on the widely used DragBench dataset [27] for drag-based editing, following the state-of-the-art methods [4, 14, 15, 27] for a fair comparison. The results presented are the averages across DragBench that includes diverse image categories and drag tasks, which are not cherry-picked or randomly chosen. This has been explained in the General Response to Reviewers, please refer to the Evaluation Metrics in general response to reviewers.
Regarding standard errors, to address your concern, we conducted an additional experiment by repeating our experiment 10 times under the same experimental settings. We observed that the variances of the performance metrics obtained from 10 realizations of our FastDrag are MD (0.000404), 1-LPIPS (9.44E-11), and Time (0.018), all of which fall within a reasonable range.
>*W3: ... effect of the edits on "other" parts of the image ...*
**RfW3:** There are almost no changes outside the masked editing region using our method.
It is because our approach optimizes the latent strictly within the masked region, as described by Equations (8)-(9) in our paper. We also employ a consistency-preserving strategy to maintain the consistency of image content outside the mask region. Note that, this is almost the same for other drag editing methods, as they also conduct latent optimization within the masked region, and adopt different strategy to maintain consistency such as LoRA [14, 15, 27, 31].
>*W4: ... smooth and lose finer details ...*
**RfW4:** The limitation of smooth and losing finer details is common across diffusion-based drag methods, inherent to the diffusion models employed, such as LDM and LCM. This issue arises due to model approximations, randomness, and potential computational errors, which can result in imperfect symmetry between the inverse and sampling processes, leading to a situation where generated image may not be exactly the same as the original [28].
For instance, DragNoise [15] and DragDiffusion [27] introduce textures not present in the original images, affecting fidelity, as illustrated in Fig. 6 of paper. However, although our method may result in finer details loss, it outperforms others in overall task execution, especially in editing speed. We will discuss the limitation in the camera-ready version. Future research will be conducted to mitigate such effect and enhance the editing performance.
# Response for Questions
>*Q1: Does the reported computational time include the inversion process?*
Yes, the reported computational times include the inversion process, sampling process, and the time for latent optimization.
>*Q2: What happens if you integrate DragDiffusion with a single-step diffusion model? How does it compare with Fastdrag?*
When integrating DragDiffusion with a single-step diffusion model, the editing time is still much longer than using FastDrag. For DragDiffusion and FastDrag under diffusion steps of 1, 20, and 50, we calculate the time required for inverse, sampling, and latent optimization respectively. The results provided in Fig. 3 of the attached PDF show that even with a single diffusion step (i.e., diffusion step set as 1), DragDiffusion still requires significantly more time (20.7 seconds) compared to FastDrag (2.88 seconds).
>*Q3: Why does iterative drag diffusion suffer compared to the one-step approach?*
These methods require $n$-step iterations to achieve semantic optimization, with each step to optimize semantics within small editing area of the image, thus they need $n$ small-scale, short-distance optimizations to achieve overall latent optimization, and require large amount of time to perform $n$ iterations of the optimization. Whereas our method only requires a single short-time computation on the latent to achieve the semantic optimization, thereby significantly reducing editing time.
>*Q4: How does FastDrag affect other parts of the image during edits?*
During the editing process, other parts of the image outside the mask remain almost unchanged, as explained in RfW3 to the reviewer.
>*Q5: Can you perform multi-point drag editing with FastDrag?*
Yes, our method can perform multi-point drag editing, which has been demonstrated in Fig. 6 (row 4) of our paper and Fig. 13 (row 6) in Appendix C.
We will provide further clarifications and explanations regarding RfW2 and Q3 in the camera-ready version of our paper.
---
Rebuttal Comment 1.1:
Title: Response to author rebuttal
Comment: Thank you for the thorough response. Many of my concerns have been addressed, but I'm still unsure if the metrics used for evaluation are suitable. Perhaps this is something to consider for future work. Overall, the results seem promising and would be of interest to the broader community. I am willing to change my rating to a weak accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for the supportive comments and the constructive suggestions for our future study. | Rebuttal 1:
Rebuttal: # General Response to Reviewers
We would like to thank the reviewers for the positive feedback and valuable comments. We are elated that the reviewers found our paper well-written, the presentation clear and excellence in ultra-short drag-based editing time compared to state-of-the-art (SOTA) methods. This rebuttal addresses reviewers' main concerns and suggestions for conciseness. For more detailed questions and answers, please refer to the individual responses to each reviewer.
**1. Evaluation Metrics (To Reviewers YBFQ, hGqW, Sik5)**
In our study, we employ the widely used performance metrics in the field of drag-based editing, i.e., "MD'' (Mean Distance [22]), "IF'' (i.e., "1-LPIPS'' [10]), following the SOTA methods [4, 14, 15, 27]. MD assesses how well the approach moves the semantic contents to the target points. 1-LPIPS quantifies the similarity between the original and edited images (i.e., consistency), as detailed in Section 4.2 of our paper.
The metrics that Reviewer YBFQ suggested (i.e., SIFT, SuperPoint and DUST3R) are primarily used for feature point detection and matching. These may not be appropriate for evaluating performance of drag-based editing due to semantic changes within the editing region. For example, turning the head or thinning the face not only changes the image's content but also its semantics, which cannot be effectively measured by these metrics such as SIFT, as illustrated in Figure 13 of our paper.
Besides, our effort in this study is to significantly reduce the editing time for drag-based editing, thus we adopt the evaluation metrics following these SOTA methods for a fair comparison, and the results effectively support our contributions, i.e., ultra-short editing time with competitive quantitative metrics.
However, as a novel task, existing mainstream metrics (i.e., MD and IF) in this field may not be perfect to evaluate drag performance. We will make efforts to develop appropriate metric for drag editing in our future study.
**2. Statistical Rigor (To Reviewers YBFQ, hGqW, Sik5)**
Our method is evaluated on the most widely used DragBench dataset [27] for drag-based editing, following the SOTA methods [4, 14, 15, 27] in this field, for a fair comparison. DragBench is a diverse compilation encompassing more than 10 types of images, including 205 images with 349 pairs of handle and target points.
The comparison results presented are the averages across DragBench that includes diverse image categories and drag tasks. Thus, the results in our study are sufficiently to demonstrate superiority and robustness of our method, as compared with the state-of-the-arts.
To address the reviewers' concerns, we conducted an additional experiment by repeating our experiment 10 times under the same experimental settings. We observed that the variances of the performance metrics obtained from 10 realizations of our FastDrag are MD (0.000404), 1-LPIPS (9.44E-11), and Time (0.018), all of which fall within a reasonable range. These statistical results further demonstrate the effectiveness and stability of our method for drag editing. We will provide these statistical results in the camera-ready version for better clarity.
**3. Limitation (To all Reviewers)**
We will provide a more in-depth analysis of the limitations of our method in the camera-ready version from following three aspects:
**Overly Smooth and Finer Details Loss:** This is a common issue across diffusion-based drag editing methods, inherent to the diffusion models employed, such as LDM and LCM. For instance, DragNoise [15] and DragDiffusion [27] introduce textures not present in the original images, affecting fidelity, as illustrated in Figure 6 of our paper.
Though our method may also result in some loss of finer details, it outperforms other state-of-the-art methods in overall task execution, particularly in editing speed.
**Extremely Long-distance Drag Editing:** When conducting extremely long-distance drag editing, our method may lose some details of the dragged object. As illustrated in Figure 1 of the attached PDF, though our method can achieve the long-distance drag editing, part detail information of the objects are missing, i.e., window on post box (row 1), and window on house (row 2). This is because our optimization is conducted on a lower-dimensional latent space. If the feature changes in the details within the latent space are too large (i.e., long-drag editing), the semantics of these details will be severely disrupted, making it more difficult to complete or maintain the details, resulting in the missing details of drag editing.
However, our FastDrag can still achieve better editing performance than these SOTA methods, as illustrated in Figure 1 of the PDF.
The reason why these methods perform inferior to our FastDrag is that they require $n$-step optimizations, with each step to optimize semantics within small editing area of the image, thus they need $n$ small-scale, short-distance optimizations, to achieve overall latent optimization. When performing extremely long-distance drags, each step requires to optimize semantics within much larger editing area, resulting in difficulty for desired semantic optimization, leading to inferior performance to our FastDrag.
**Highly Relying on Precise Drag Instruction:** It is worth noting that achieving precise performance relies on clear drag instructions. FastDrag optimizes the latent space based on these instructions, which is also common for the SOTA methods. Therefore, providing clear instructions is crucial for desired performance. As illustrated in Figure 2 of the attached PDF, when the goal is to "thin the hair while keeping the face size", it is best to exclude the face from the mask region (row 2 of Figure 2). Similarly, when the task is to "lengthen the beak", the handle point should ideally be placed where the 'beak' feature is more prominent.
Pdf: /pdf/3219ee0643383402a410de7ec07b6c74bdba0d1c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Computational Landscape of Replicable Learning | Accept (poster) | Summary: The authors study the computational relationship between *replicable* learning algorithms, a recent notion of algorithmic stability [ILPS22] promising two runs of the algorithm over independent data output the same hypothesis with high probability assuming shared randomness, and other classical notions of algorithmic stability such as SQ-learning and differential privacy (as well as the closely related topic of online learning). The equivalences between such notions are well understood statistically, but this is the first paper to make a systematic attempt to study *computational* aspects of the problem.
More formally, the authors present both negative results (separations), and positive results (efficient transformations) between these notions:
On the negative side, the authors show:
1. A concept class based on one-way sequences which has no efficient online learner but can be learned replicably in polynomial time. This result is based on work of Bun separating private and online learning, but requires a new idea on the replicable side. In particular, two independent runs of the algorithm (with fresh samples) must identify a shared early element in the one-way sequence (using their shared randomness) to output the same unique hypothesis on the remainder. This is done via a new replicable quantile estimation lemma that leverages a variant of [ILPS22]'s randomized rounding methods.
2. A separation between SQ learning and replicable learning based on affine parities. While this class was already known to separate the two under the uniform distribution, no separation (or even efficient replicable algorithm) was known for more general distributions. Based on work of Blanc, Lange, Malik, and Tan, the authors give a general procedure to lift an efficient replicable algorithm for the uniform distribution for certain classes to a replicable algorithm for general distributions whose computational efficiency scales with the decision tree complexity. They then give examples of distributions with low decision tree complexity where the trivial `gaussian elimination’ based algorithm for the uniform case fails but their lifted algorithm runs in polynomial time (they also observe these distributions remain hard for SQ). This makes progress on a question of ILPS'22 about replicable algorithms where Gaussian elimination fails.
On the positive side:
1. There is an algorithm transforming any pure private learner for a class C to a $rho$-replicable learner in time exp(rep-dim(C))*poly(eps^{-1},\rho^{-1}), where eps is the accuracy parameter and rho is the replicability parameter. Their algorithm uses a classic procedure of Beimel Nissim and Stemmer which generates a finite approximation of C using a pure private learning, then learn this representation with the finite learner of [BGH+23]. To get polynomial dependence, this is run for constant error then boosted afterwords using replicable boosting.
Strengths: Replicability is a critical notion in machine learning and the sciences in general, and has recently been a fruitful notion more generally in the study of algorithmic stability, leading e.g. to advances in differential privacy. Prior works in the area largely focus on the statistical complexity of replicable learning. Understanding the *computational* cost of replicability, both in general and as compared to other notions of stability, is a critical open problem and clearly of interest to the NeurIPS community. This paper is the first to initiate a systematic study of this problem, and makes initial progress on understanding connections with pure privacy, SQ, and online learning, standard notions in the literature.
Related to the above, the authors make progress on an open problem of [ILPS’22] to design efficient replicable algorithms for parities beyond the uniform distribution (namely in this case for distributions with constant decision tree complexity). This takes some work to formalize and is a reasonable contribution on its own from a computational standpoint.
Weaknesses: The work has two main weaknesses.
First, the authors seem to have misunderstood prior separation results in the literature, and as a result, the presentation of `computational separations’ with respect to privacy in the paper (namely in Figure 1.1 and the exposition) is wrong. Namely, the authors claim that there is an “efficient transform from pure DP learning to replicable learning” and “no efficient transform from apx DP learning to replicable learning”, where “efficient” is in terms of eps and rho (error and replicability) but not the underlying dimension of the problem, but this seems false.
In particular, there actually *is* a transformation from approximate DP to replicability that is “efficient” in this sense. Correlated sampling can be run in time roughly scaling with the output domain of the private algorithm, then boosted via ILPS from constant accuracy/replicability in polynomial time in these parameters. In fact, *any* learning problem that is solvable replicably can be solved in time polynomial in eps and rho by boosting/amplifying, so “efficiency” in these parameters is not very meaningful. The question of efficiency instead should be one of domain-size/dimension, which (as the authors to their credit highlight several times) is not efficient in the given reduction.
Part of the confusion here seems to stem from the result of [BGH+23] giving a computational separation between apx DP and replicability. In Prop 5.1, the authors state [BGH+23] exhibit a PAC problem which is efficiently learnable under apx-DP, but cannot be replicable learned assuming one-way functions exist. As far as I can tell, this is not shown in [BGH+23]. First, the separation given by [BGH+23] is not for PAC-learning, it is for a somewhat contrived statistical task; separating the two in the PAC setting is open. Second, the separation has nothing to do with one way functions (which seems to be a different result in their paper), and relies on public key encryption. Third, the separation is in terms of the dimension/size of the space, not accuracy or replicability parameters.
The second weakness of this paper is that, while extra technical work is certainly required for several of the results in this paper (namely the online bound, and generalizing BLMT23), the ideas in this paper do not go substantially beyond known methods in the study of replicability. The online vs replicability result is not too much of a jump from its use in work of Bun separating privacy and online learning, (the new replicable quantile estimation method takes work but is fairly straightforward from techniques in ILPS22). The SQ/distribution-lifting result also follows largely from combining techniques of [ILPS22] with [BLMT23]’s non-replicable method which already relies mostly on statistical estimation sub-routines.
Overall, the authors have identified an important problem and made some nice partial progress in this front (including progress on an open problem of ILPS22), but combined with the issues above and without introducing substantially new ideas to the study of replicability I cannot recommend the work for acceptance in its current form.
Technical Quality: 1
Clarity: 3
Questions for Authors: Given the above discussion of the apx-DP, perhaps a better way to justify the pure DP transformation would be to exhibit a sequence of concept classes over {0,1}^n with constant representation dimension for which the transformation from apx-DP is inefficient (or at least some evidence can be given for this)?
I will be happy to increase my scores if I have misunderstood something regarding these transformations in the paper and the authors can clarify, or if the authors agree and fix the current presentation.
POST REBUTTAL: The authors' proposed changes address my concerns in the soundness of the work, and I have updated my overall score accordingly.
Confidence: 4
Soundness: 1
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer 9ean for their thorough review, their suggestions regarding the presentation of the manuscript, and the clarification of results from prior work.
We apologize for the confusion; indeed
there was a misunderstanding on our end about the
separation that [BGH+23] provided. We are committed to modifying
the manuscript according to the reviewer's recommendation
and have updated the figure in the "global" response section.
Moreover,
as the reviewer correctly points out, the transformation
of [BGH+23] from approx DP to replicability requires
exponential time in the size of the domain, and polynomial time
in the approximation/confidence parameters. Thus, their
result does not subsume ours. We also agree that the question
the reviewer mentions is an interesting one, and we can
mention it as an open problem.
Regarding the second weakness,
we believe
the replicable lifting framework provides a non-trivial conceptual step towards understanding efficient replicable learnability beyond SQ algorithms,
which is a direction we hope the community will further explore.
Indeed, we provide a concrete
distribution where
we can obtain a replicable algorithm for learning parities using our framework,
but Gaussian elimination, which is the standard
algorithm in the absence of replicability, fails
to be replicable. We view that as evidence
that our transformation can indeed lead to replicable
algorithms in settings where such results are lacking.
We will include these discussions in the next iteration of our paper, and we apologize again for the misunderstanding.
---
Rebuttal 2:
Title: Rebuttal Followup
Comment: We thank the authors for their response and would like to ask a few follow-up questions before modifying the final review accordingly:
First, it is unclear to me what the authors mean by saying the transformation in [BGH+23] is "exponential in the size of the domain". The correlated sampling transform should (at worst) scale inverse polynomially with the minimum probability of any output of the apx-DP algorithm. Often the size of this output space scales with the *dimension* of the problem, or can be made to by discretizing (see e.g. the application of [BGH+23]'s transform to Gaussian mean estimation in the same work). In fact, I'm not totally sure what the authors mean by domain here. Do you mean the output space of the Apx-DP algorithm? Or the original domain of the problem? I suppose in many cases (e.g. in tabular settings) the size of the domain is exponential in dimension, so maybe this is what is meant. To be clear, I do not mean to claim here the Apx-DP reduction subsumes the one presented in this work. I agree this is almost certainly not true.
Second, while the updated figure is certainly closer to accurate, I still find the diagram's representation of Apx-DP -> replicability vs Pure-DP -> replicability and its discussion of "computational separations" to be (unintentionally) misleading.
For instance, the figure states that the dashed double arrow indicates "an efficient learner for a task... can be black-box transformed into an efficient learner for the same task... for a subset of the relevant parameters." In what sense does the correlated sampling transform fail this? As discussed above it can be made efficient in $\varepsilon,\delta$ and $\rho$, just like the Pure DP case, but may be expensive in dimension/output-space. Why does Apx-DP -> Replicability not fit the double dashed arrow as written?
Conversely, the Pure DP case *could* still have a computational separation based on dimension. I.e. there could be a PAC problem that is privately learnable in polynomial time (in the ambient dimension), but requires exponential time replicably (under say cryptographic assumptions). This is exactly the sort of separation given in [BGH+23] (albeit not for a PAC problem), which as far as I can tell is cited as really the only reference for what a "computational separation" actually means in this figure. Why then is an efficient transform from Pure DP -> Replicability not listed as "open" if such a separation could still exist?
Overall, I do not think the diagram's categories appropriately capture the known landscape of computational transforms.
Could the authors please clarify if I am misunderstanding the diagram/results, or if not propose a change that reflects the above?
---
Rebuttal Comment 2.1:
Title: Response to Reviewer's 9ean Comments
Comment: Thank you again for your thorough comments. Let us
try to clarify the questions you ask.
- Regarding the Apx-DP reduction to replicability:
Our understanding is that in order for [BGH+23] to use
correlated sampling it has to be the case that the output space
of the algorithm is finite.
To be more precise, based on [BGH+23], there is an efficient transformation from Apx-DP to perfectly generalizing algorithms. Next, the authors use correlated sampling to get a replicable learner.
Given a perfectly generalizing algorithm $A$ and sample $S$,
the correlated sampling strategy is applied to the distribution of outputs of
$A(S)$.
Hence, the output space of $A$ should be finite.
In the PAC learning setting,
in order to ensure that the algorithm
has finite output space,
one sufficient
condition is that the domain $\mathcal{X}$
is finite, and this is what our comment
was trying to state. To be even more
precise, for the case of PAC learning,
there is another transformation from apx-DP
to replicability that holds for
countable domains $\mathcal{X}$ that was
proposed by
[KKMV'23], but this approach i) goes
throgh the Littlestone dimension of the
class and ii) might not even be computable,
in its general form.
The correlated sampling step can
be explicitly implemented via rejection sampling from the output space of $A$.
The acceptance probability is controlled
by the pmf of $A(S)$, as the Reviewer mentions. As a result, in general, it is not computationally efficient.
For instance, if the finite input space to the correlated sampling strategy is $\\{0,1\\}^d$, then the runtime of the algorithm could be $\exp(d)$, since the acceptance probability is exponentially small in the dimension in the worst case. We will further clarify our comment to capture this behavior.
- Figure: We will adjust the figure for the Apx-DP to Replicability as the Reviewer suggests and explicitly discuss the situation for the two directions.
For the Apx-DP to Replicability, we will add the above discussion and change the arrow type since the transformation is efficient in the other parameters except for $d$. For the PureDP to Replicability, we will clarify that it could still have a dimension-dependent computational separation, as the Reviewer comments.
Our intention was to depict the following
difference: both transformations from
apx-DP, pure-DP to replicability can
be efficient in $\varepsilon, \delta$ but,
in its general form, apx-DP to replicability
from [BGH+23] would need $\mathcal{X}$
to be finite (or some other condition that ensures the algorithm has finite output space), and apx-DP to replicability from [KKMV'23] for countable domains $\mathcal{X}$
might not even be computable, whereas
the pure-DP to replicability transformation
we provide has running time exponential
in the representation dimension.
Please let us know if this answers your
questions or if we have missed some
point you were trying to make. | Summary: In the paper, the authors discuss the connection and differences among replicability, a novel stability condition for learning algorithms proposed by Impagliazzo et al. [2022], online learning, and differential privacy from a computational perspective. Their first contribution is a computational separation between online learning and replicable PAC learning. In particular, under standard cryptographic assumptions, there is a concept class that can be replicably learned, but no efficient online learning algorithm exists. The second contribution is a method to extend a replicable PAC learner that works under the uniform distribution to ones that work under more complex distributions, whose probability mass functions can be computed by decision trees. The final result is a way to transform a purely differentially private learner into a replicable one. Combining these with some existence hardness/equivalence results provides a figure (Figure 1.1) referred to by the authors as the "computational landscape of stability."
Strengths: Figure 1.1 is an excellent summary of all known relationships between different notions of algorithmic stability. I believe the communities behind all three areas (replicable, online, DP) could benefit from such a roadmap. All the results are clean and well-motivated.
The one I like the most is the lifting result for replicable learner. Intuitively, it feels like replicable learners and statistical query algorithms are almost equivalent as most replicable algorithms we know are based on the idea of making each statistical query fired by the algorithm replicable. The only exception known before is learning parities under the uniform distribution over boolean hypercube. The authors demonstrate the possibility that there may potentially be much larger gaps between SQ algorithms and replicable learners.
Weaknesses: While the results are conceptually novel and interesting, the techniques used are more or less standard. For example, the lifting result follows from building a replicable version of the routine from Blanc et al. [2023] for learning the decision tree structure of the distributions, and the core of it is just to estimate the influence of distributions in a replicable manner (via random statistical query rounding).
Technical Quality: 4
Clarity: 4
Questions for Authors: As a pure DP algorithm is also an approximate DP algorithm by definition, I imagine that the reduction from Bun et al. [2023] could also transform a pure DP learner into a replicable learner. Could you elaborate more on how the performance of your reduction would differ from theirs quantitatively?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer iQWb for recognizing the contributions of our paper.
The main result of [ILPS22] is that SQ-based algorithms can be made replicable. However, our understanding of replicability beyond SQ algorithms is fairly limited. The main candidate in understanding this question is learning parities. We make progress on this important problem by using the replicable lifting framework.
This framework provides a non-trivial conceptual step towards understanding efficient replicable learnability beyond SQ algorithms,
which is a direction we hope the community will further explore.
Indeed, we provide a concrete
distribution where
we can obtain a replicable algorithm for learning parities using our framework,
but Gaussian elimination, which is the standard
algorithm in the absence of replicability, fails
to be replicable. We view that as evidence
that our transformation can indeed lead to replicable
algorithms in settings where such results are lacking.
It is, indeed, a correct
observation that pure DP algorithms
are also approximate DP algorithms.
Thus, in
principle, one could use
the reduction from Bun et al. [2023].
The catch is that this reduction
is based on correlated sampling so it
requires i) the output space of the algorithm to be finite and ii)
even under finite output spaces, it needs
exponential time in the size of that space.
We will elaborate on these discussions in the next version of our work. | Summary: Replicability is a notion of stability for learning algorithms, recently proposed by Impagliazzo et al. [ILPS22] to address the replicability crisis pervasive in scientific studies using statistical procedures. A learning algorithm $A$ is a function that takes as inputs a dataset $S \in (\mathcal{X} \times \mathcal{Y})^*$ and a (random) string $r \in \\{0,1\\}^*$, and outputs a hypothesis $f: \mathcal{X} \to \mathcal{Y}$. The algorithm is replicable if for independent draws of same sized datasets $S, S’$ and random $r$, $A(S, r) = A(S’, r)$ with high probability. Put simply, if two scientific labs use the same replicable algorithm to analyze independent datasets S and S’, and arrive at different conclusions, they will have a hard time blaming statistical fluctuations in the data.
This paper presents a variety of results that connect replicability to other well-studied concepts in learning theory, such as (realizable) online learnability and private learning. In particular, the authors show a computational separation between replicability and **online learnability**, assuming the existence of one-way functions. They also present a lifting procedure that transforms an efficient replicable PAC learner for a hypothesis class $\mathcal{H}$ over the uniform distribution on $\\{\pm 1\\}^n$ to replicable learners over any marginal distribution whose sample and time complexity depends on the complexity of the marginal. This lifting procedure is then used to design an efficient replicable algorithms for **learning parities over distributions that are far from uniform**. Furthermore, they show that any **pure DP learner** can be turned into a replicable one in time polynomial in all relevant problem parameters, except for the "representation dimension" of the hypothesis class.
Strengths: This is a very well-written paper with solid technical contributions. Though the paper touches on various concepts in learning theory, the polished presentation ensures that the reader is not overwhelmed by the breadth of coverage. As replicability is a relatively new concept in learning theory, I was not familiar with it. However, the connections between replicability and other concepts are rather surprising and pleasing to someone new to the topic. The clear and well-organized proofs, each employing diverse techniques, were relatively easy to follow.
I found the application of the lifting procedure to replicable parity learning and the distinction from Gaussian elimination particularly interesting. For the uniform distribution, it is straightforward to see that Gaussian elimination is replicable for data labeled by parity functions. However, one can define simple distributions on which Gaussian elimination is *not* replicable (though the zero-one loss *is* small with high probability). This highlights the necessity and the non-triviality of the lifting procedure for efficient replicable learners over the uniform marginal. It also shows that computational separation between replicable learning and statistical query (SQ) learning extends to non-uniform marginal distributions.
Weaknesses: Given that this is a solid and well-polished paper, I did not find any significant weaknesses, only a few minor points below.
- **Elaborating replicability with simplified, realistic statistical examples.** The paper begins by addressing the replicability crisis in scientific fields, but once the authors define replicability within the learning theory setup, the initial narrative gets lost. What would the random strings model in a real-world scientific study? Does it model the PRG seed number that experimenters use in their PyTorch code? Taking a simplified but realistic example (e.g., a hypothetical FDA approval procedure based on clinical data collected by independent labs), and mapping the A's, S's and r's to real-world concepts would be helpful.
- **Ambiguity in the big questions.** I found some of the "big" questions Q1-Q4 [page 2-3] too generic to be useful. In particular, questions like "How does replicable learning *relate to* online/private learning?" are extremely underspecified because "relate to" can have multiple interpretations. It would have been more helpful if the authors posed more specific motivating questions, such as "Is there a computational separation between replicably learnable classes and online learnable classes?"
Technical Quality: 3
Clarity: 4
Questions for Authors: - What is the "representation dimension" of a hypothesis in Theorem 5.2?
- Can we define replicability with continuous random sources? Or are there obstacles to meaningfully defining replicability with respect to continuous sources?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer npit for the insightful suggestions regarding the presentation of our paper.
The random string in the definition of replicability does indeed model the random seed of a learning algorithm in practice
and sharing internal randomness can be easily implemented in practice by sharing the random seed.
We can indeed define
replicability with continuous
random sources, but to make
the presentation easier to follow and tie
it to real-world settings,
we decided to stick with random binary strings.
We will provide other concrete examples for the definition in the next iteration of our manuscript.
We will also restate the motivating questions with more specific language.
The representation dimension is a combinatorial dimension, similar to VC dimension, that characterizes which classes are PAC learnable by pure DP algorithms. We will formally give its definition in the next revision of our work and mention how it has been used
in prior works on pure DP.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions and being receptive. My rating of the paper remains the same. | Summary: This work contributes to the recently evolving area of replicable learning. This work establishes three main results
1. It is known that online learning algorithms can be replicable and replicable learning algorithms yield online learning algorithms. The work focuses on the computational complexity of these transformations and establishes a negative result--there exist concept classes that are efficiently, replicably, and learnable but there are no efficient online learning algorithms (assuming the existence of one-way functions).
2. Recent work showed that PAC learning algorithms under the uniform distribution can be black-box converted into distribution-free PAC learning algorithms. This work shows the transformation can be made replicable.
3. It is known there exist concept classes that are approximately DP learnable, but not efficiently, replicable learnable. This work shows that if the concept class is pure DP-learnable, then it is efficiently replicable learnable.
Strengths: This solid work clarifies several questions on replicable learning in the context of computational efficiency. The paper is very well written and the proofs are rigorous.
Weaknesses: Main weakness is that obtained by combining known works. It is not clear to be that there is not that much novelty in the proofs. for example,
1. Proof of Theorem 2.1 proceeds in two parts. a) existence of a concept class that is not online learnable, b) designing a replicable learning algorithm for this class. (a) is known by prior works, (b) replicable algorithm follows by the (now) standard technique of random thresholding/rounding.
2. Please see Q2.
Though the proofs may not be novel, the final results that the authors obtain are interesting.
Technical Quality: 4
Clarity: 4
Questions for Authors: Pertains to the weakness. Can you explain the new ideas needed to obtain the results?
1. ILPS22 presents a replicable algorithm for an approximate median. The approximate quantile algorithm seems to build on similar ideas. What are the new ideas in the proof?
2. Similarly, the critical ingredient in the proof of 3.2 is the replicable influence estimation. Once we have a replicable influence estimation the proof of Blanc et al goes through with minor changes. The design of replicable influence estimatior follows by the randomized rounding technique of ILPS22. Are there any technical/conceptual challenges?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer eJSY for recognizing the importance of our results
and their suggestions regarding the presentation of our paper.
We believe that the main contribution of our work, as the reviewer acknowledges, is a collection of interesting conceptual results that are missing from prior work. For instance, while the replicable influence estimator is not overly technically involved, it constitutes a key ingredient for the use of the lifting framework, which we find conceptually interesting. To justify this further,
the main result of ILPS22 is that SQ-based algorithms can be made replicable. However, our understanding of replicability beyond SQ algorithms is fairly limited. The main candidate in understanding this question is learning parities.
It is clear that under the uniform marginal distribution (or similar nice distributions) the problem is trivially replicable; yet beyond such distributions, it was not clear how to argue about replicable algorithm design for parities.
We make progress on this important problem by using the replicable lifting framework.
This framework provides a non-trivial conceptual step towards understanding efficient replicable learnability beyond SQ algorithms,
which is a direction we hope the community will further explore.
Indeed, we provide a concrete
distribution where
we can obtain a replicable algorithm for learning parities using our framework,
but Gaussian elimination, which is the standard
algorithm in the absence of replicability, fails
to be replicable. We view that as evidence
that our transformation can indeed lead to replicable
algorithms in settings where such results are lacking.
At a more technical note, our replicable quantile estimation differs from the replicable median algorithm of [ILPS22] in at least two ways.
Firstly,
the replicable median algorithm of [ILPS22] seems to rely heavily on properties of (approximate) medians in order to satisfy the approximation guarantees.
On the other hand,
our algorithm works regardless of the desired quantile.
Secondly,
their median algorithm relies on a non-trivial recursive procedure
while our replicable quantile algorithm is considerably simpler and is based on a concentration of the CDF through the DKW inequality.
We will include these discussions in the next version of our manuscript. | Rebuttal 1:
Rebuttal: Following reviewer 9ean's suggestions,
we have uploaded a slightly modified figure
of the computational landscape of stability to clarify the computational separation between approximate DP and replicability given in [BGH+23],
which is not for PAC learning but for some other statistical task.
We will also modify the exposition accordingly.
Pdf: /pdf/6fa2880b072640e96e8cda6ec070c8a0583f46bd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Causal Deciphering and Inpainting in Spatio-Temporal Dynamics via Diffusion Model | Accept (poster) | Summary: The paper introduces CaPaint, a causal structure plugin for spatio-temporal (ST) forecasting, aiming to identify causal regions in data and enable the model to perform causal reasoning. Utilizing a two-stage process and employing a novel image inpainting technique using a fine-tuned unconditional Diffusion Probabilistic Model (DDPM), the paper proposes a method to fill in the gaps identified as environmental parts, enhancing model generalizability and interpretability significantly.
Strengths: - The paper is well-written, with clear, concise explanations and the use of figures effectively illustrates the model's mechanisms and results.
- The paper introduces an interesting concept by incorporating causal inference into spatio-temporal data analysis, particularly through the integration of generative models
- The experiments are thoroughly conducted across multiple datasets and backbones, results are overall promising.
Weaknesses: - The abstract contains a typo where front-door adjustment is incorrectly referred to as back-door adjustment.
- The paper appears to lack detailed descriptions on how the generated spatio-temporal data are synthesized into coherent ST sequences, missing crucial details on this aspect of the methodology.
- The paper does not clearly demonstrate how the quality and efficiency of generation are improved. It is recommended to supplement with additional experiments to substantiate these aspects.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. How effective is the inpainting technique implemented by CaPaint specifically on spatio-temporal datasets, and what are the key factors that influence its performance in these contexts?
2. Why do traditional data augmentation methods, which can disrupt spatio-temporal characteristics, result in performances that are consistent with or only slightly worse than the original, instead of showing a significant decline?
3. How does the performance compare when augmented data is combined with original data to form the training set for enhancing model generalizability, particularly when controlling for an equal amount of training data?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors highlight that the effectiveness of the method is limited under conditions of abundant data, as demonstrated through experiments that show more significant performance improvements under data-scarce conditions compared to when data is plentiful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Thank you for your careful review. We appreciate your attention to detail and will correct this typo to accurately reflect the use of front-door adjustment in the abstract.
**W2:** Thank you for your valuable feedback. For each original spatio-temporal sequence, we enhance the data by first identifying causal regions using a Vision Transformer. We then apply diffusion inpainting to fine-tune the data and fill the identified environmental parts. After inpainting, we perform sampling with a probability ppp to mix the original sequences with the augmented sequences. This process ensures that the generated data maintains spatial and temporal coherence, resulting in high-quality, coherent spatio-temporal sequences.
**W3:** Thank you for your suggestion. We have supplemented our paper with additional experiments comparing our method with NuwaDynamics. The results demonstrate that our method achieves similar or better performance with only 2 sequences, whereas Nuwa requires 4-5 sequences. This clearly shows the superior quality and efficiency of our generation process.
| Datasets | SimVP | SimVP | PredRNN-v2 | PredRNN-v2 |
| ----------- | ------- | ----- | ---------- | ---------- |
| | CaPaint | Nuwa | CaPaint | Nuwa |
| **TaxiBJ+** | 2.21 | 2.56 | 2.86 | 3.25 |
| **KTH** | 31.26 | 33.98 | 38.45 | 40.37 |
We selected MAE as the reference metric, and it is evident that our proposed CaPaint outperforms Nuwa across the board. It is worth mentioning that Nuwa requires generating 4 and 5 augmented sequences per spatio-temporal sequence for TaxiBJ+ and KTH, respectively, to achieve its results. In contrast, we only needed to generate 2 sequences to surpass Nuwa's performance. This is because Nuwa can only mix local information around the environment, whereas our method can consider and fill in global details, generating higher-quality data. This aligns with the background of our paper, which addresses data scarcity and uneven sensor collection.
**Q1:** Thank you for your insightful question. We have provided an example of the inpainting technique in Appendix F. CaPaint effectively fills the environmental regions of the images based on their data distribution. We fine-tuned the Stable Diffusion model on our datasets, enabling it to learn the spatio-temporal data distributions accurately.
The key factors influencing the performance of the inpainting technique in these contexts include:
1. **Data Distribution**: The model's ability to understand and replicate the inherent distribution of the spatio-temporal data.
2. **Model Fine-tuning**: The extent to which the Stable Diffusion model has been fine-tuned on the specific datasets to capture the nuances of the data.
3. **Environmental Region Identification**: The accuracy in identifying and masking the environmental patches that need inpainting.
4. **Causal and Non-causal Patch Distinction**: The precision in distinguishing between causal and non-causal patches to ensure that the inpainting enhances the model’s generalizability and interpretability
**Q2:** Thank you for your question. We briefly analyzed this in Section 4.4 of our experiments. Traditional data augmentation methods indeed disrupt spatio-temporal characteristics, leading to two possible outcomes:
1. **Data Only Augmentation**: If only the data is augmented without corresponding changes to the labels, there is a significant decline in performance due to the loss of spatio-temporal coherence.
2. **Data and Label Augmentation**: If both the data and labels are augmented together, the performance only slightly declines. This is because augmenting both preserves some of the spatio-temporal characteristics, maintaining a level of coherence between the input data and the expected output.
By retaining some of the intrinsic spatio-temporal properties through coordinated augmentation of both data and labels, the performance remains relatively stable.
**Q3:** Thank you for your question. As shown in Figure 7 of our experiments section, even when maintaining an equal amount of training data, CaPaint demonstrates superior performance compared to the original backbone model. For instance, when comparing 50% original data with 25% original data combined with 25% augmented data, CaPaint's MAE and MSE are lower than those of the original backbone. This indicates that CaPaint effectively enhances model generalizability by leveraging the augmented data.
| Metric | 10% | 25% | 50% | 75% | 100% |
| ------------- | ------ | ------ | ------ | ------ | ------ |
| **SimVP** | | | | | |
| MAE | 0.4925 | 0.4477 | 0.3215 | 0.2438 | 0.2320 |
| MSE | 0.5103 | 0.4434 | 0.2821 | 0.1843 | 0.1645 |
| **SimVP+CaP** | | | | | |
| MAE | 0.3925 | 0.2875 | 0.2633 | 0.2210 | 0.2057 |
| MSE | 0.3787 | 0.2541 | 0.2157 | 0.1586 | 0.1390 |
I believe my response has addressed your concerns. If you have any further questions, please feel free to let me know. Thank you!
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Thank you for your detailed and clear response to the issues we raised. After careful review, we find that your replies are very specific and have adequately clarified the concerns mentioned in our review. In particular, your detailed experimental results and further explanations of the methodology have provided us with a more comprehensive understanding of the paper. Additionally, your selection of error metrics and discussion of different data augmentation methods have deepened our appreciation of the model you proposed. We believe that your response has effectively addressed all the questions we previously raised, and the additional experimental results further enhance the validity and robustness of your method. Moreover, we especially appreciate the innovative application of diffusion inpainting to spatio-temporal video data, which significantly improves the model's performance. Based on these improvements and supplements, we will raise our score for your paper.
Thank you again for your careful attention and detailed responses.
Best regards,
Reviewer
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your thoughtful and positive feedback. We greatly appreciate your recognition of our efforts to address the concerns raised in the initial review. We are delighted that our detailed explanations, additional experimental results, and the innovative application of diffusion inpainting have enhanced your understanding and appreciation of our work.
Your acknowledgment of the improvements we've made means a great deal to us, and we are grateful for your willingness to raise the score based on these enhancements. We will continue to refine and improve our research to contribute to the field.
Thank you again for your careful consideration and support.
Best regards | Summary: The paper focuses on generalizability and interpretability for spatio-temporal predicting. The authors propose a causal structure plugin, named CaPaint, which identifies causal regions in data to generate data for scenarios where data are scarce. Experiments on five datasets demonstrate the effectiveness of the proposed method in spatio-temporal forecasting.
Strengths: 1.The paper focuses on the issue of modeling uneven and insufficient spatio-temporal data, which is a fascinating and significant area of research.
2.To incorporate physical laws into deep networks, the authors propose a method that obeys the causal deciphering and performs interventions on the non-causal diffusion pathces, which is an extremely challenging problem.
3.The authors have conducted experiments on five datasets, validating the effectiveness of the model, and have appropriately discussed the limitations of the model.
Weaknesses: 1.In line 39-44, the authors lack discussion of why the causality and interpretability of models can improve generalization capabilities when dealing with the uneven, insufficient data collection.
2.In the left side of Fig.4, the visualizations of finer details are small and not clear enough, so it would be more informative to zoom in on local details of the image.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1:** In line 39-44, the authors lack discussion of why the causality and interpretability of models can improve generalization capabilities when dealing with the uneven, insufficient data collection.
**Answer:**
1. When data is sparse, models may learn **shortcut solutions from biased data**, as noted in several studies [1-3]. These shortcuts can significantly interfere with causal discovery, which relies on counterfactual reasoning to understand the underlying mechanisms that generate the observed data. In spatio-temporal domains, this issue is particularly critical as the temporal dependencies and spatial correlations are more complex and often underexplored. Traditional approaches have primarily focused on **spatio-temporal graphs**, but our work extends this to spatio-temporal continuity in **image data (video data)**, a relatively underexplored area. By addressing these gaps, we provide a more robust foundation for understanding and predicting spatio-temporal dynamics, ultimately enhancing the model's ability to generalize from incomplete or biased datasets.
2. **Interpretability and causality** are strongly correlated. Discovering patterns in model and data predictions is crucial for identifying causality and enhancing interpretability. This process allows us to understand how different variables interact and contribute to the observed outcomes. In our work, we **leverage self-supervision** and employ global perception by **introducing inpainting to dynamically fill in potentially biased areas.** This approach ensures that the model can make informed predictions based on a more comprehensive understanding of the underlying data distribution, thus achieving interpretability. This interpretability, grounded in solid causal theory, allows us to make sense of the model's decisions, facilitating better generalization and robustness, especially in scenarios with uneven and insufficient data collection.
3. To help readers better understand the relationship between data scarcity, causality, and interpretability, we have **cited more relevant literature** in the paper. These citations not only provide a historical context for the technical developments in this area but also highlight the critical correlation between data scarcity, causality, and interpretability. By grounding our discussion in established research, we illustrate how our approach aligns with and advances current understanding, providing a clearer picture of how addressing **causality and interpretability** can significantly improve generalization capabilities in spatio-temporal data analysis.
**Q2:** In the left side of Fig.4, the visualizations of finer details are small and not clear enough, so it would be more informative to zoom in on local details of the image.
**Answer:** Thank you for pointing this out. We appreciate your feedback and recognize the importance of clear and detailed visualizations. To address this issue, we will provide zoomed-in versions of the local details for different datasets to offer a clearer and more informative visualization. To improve the readability of the paper, we have re-arranged the layout of Figure 4. We have enlarged the relevant images and fonts to make it easier for readers to view the finer details. Additionally, we have systematically refined the layout of the entire paper, which includes adjustments to image sizes, fonts, and tables to enhance overall readability**.Upon acceptance, we will include these additional results and enhanced visualizations in the appendix for ease of reference.** This will ensure that all visual details are accessible and that the readers can fully appreciate the improvements and performance of our proposed method.
I believe my response has addressed your concerns. If you have any further questions, please feel free to let me know. Thank you!
**Reference**:
[1] Causal Attention for Interpretable and Generalizable Graph Classification.
[2] Enhancing Out-of-distribution Generalization on Graphs via Causal Attention Learning.
[3] Reinforced causal explainer for graph neural networks.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed explanation and the improved visualization. Their discussion effectively highlights the advantages of their approach, offering a thorough understanding of how causality and interpretability contribute to improved generalization. In addition, the changes on visualitzations will enhance the readability and comprehension of the paper. While the authors have addressed the issues raised in a satisfactory manner, we have decided to withhold our score at this time. We appreciate the effort put into the rebuttal and look forward to seeing the final revisions.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you very much for your considerate feedback and for acknowledging our efforts in addressing the issues raised. We appreciate your kind words regarding the improvements and the detailed explanations provided. Your insights have been invaluable in guiding our revisions, and we are committed to making the final adjustments to further enhance the quality of our paper.
Thank you again for your support and thoughtful evaluation.
Best regards, | Summary: The paper presents CaPaint to improve spatio-temporal predictions by identifying causal regions and employing diffusion inpainting techniques. The approach addresses the challenges of high computational costs in ST causal discovery.
Strengths: 1. CaPaint seamlessly integrates with a variety of existing spatio-temporal prediction models. The paper thoroughly evaluates the method using diverse backbone models, showcasing the robustness and versatility of CaPaint across different scenarios.
2. The experimental results across five real-world ST benchmarks demonstrate substantial improvements
3. The combination of causal inference and diffusion models is sound.
Weaknesses: 1. The novelty of the proposed method is somewhat limited. The concept of causal patch discovery was already introduced in NuwaDynamics. This work primarily builds on that by utilizing diffusion models for data generation and proposing a different SCM, which is not necessarily better.
2. The evaluations in Table 1 use the same datasets as NuwaDynamics. To ensure a fair comparison and better highlight the improvements, it is recommended that the authors use the same settings as NuwaDynamics and directly compare their results with it.
Technical Quality: 2
Clarity: 2
Questions for Authors: see weakness
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The technical contribution of this work is somewhat incremental, providing only limited improvements compared to NuwaDynamics.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answer(Q1):** Thank you for your feedback. We **respectfully** disagree with the assessment that our method lacks novelty. Our work presents significant improvements and innovations over NuwaDynamics.
1. **Conceptual Difference**:
- Our proposed method employs a different approach to causal adjustments, utilizing front-door adjustment rather than back-door adjustment as in NuwaDynamics. Back-door adjustment requires traversing and repeatedly sampling all environmental patches, leading to exponential complexity: ${\cal O}( {T \times {\cal N}_E^{{\cal M}\left( {\rm{*}} \right)}} )$
- In contrast, our method with front-door adjustment avoids the need for repeated sampling of all environmental patches, reducing the complexity to linear levels: $\cal{O}(T \times {\cal N}_E)$
- This significant reduction in complexity provides practical advantages, especially in large-scale data processing and real-time applications, by substantially lowering computational resource requirements.
2. **Different Methodology**:
- Our approach involves the use of generative models, specifically diffusion inpainting methods, for causal interventions. After fine-tuning the model, it achieves a global awareness of the data, enabling effective inpainting of environmental regions. In contrast, NuwaDynamics can only perceive local information around the surrounding areas.
- Traditionally, Diffusion Models (DMs) were utilized to generate high-quality synthetic images in the field of Computer Vision (CV). Inspired by the powerful generation capability of DMs given an input picture, we incorporate generative models like DMs into spatio-temporal video data analysis for data augmentation, without harming its efficiency greatly. Our key insight lies in addressing the challenge of data sparsity, and the experimental results have demonstrated that our method achieves significant improvements. This is a substantial innovation in this field.
3. **Broad Applicability**:
- Our CaPaint method effectively addresses the issue of exponential complexity found in NuwaDynamics, resulting in more efficient processing. Furthermore, the spatio-temporal data sequences generated by our method are of higher quality, enhancing the robustness and generalizability of the model. This allows CaPaint to be applied in various scenarios, making it advantageous for practical applications and deployment in the industry.
By addressing these aspects, we hope to clarify the distinct advantages and innovations of our method compared to existing approaches.
**Answer(Q2):** Thank you for your suggestion. Our CaPaint and Nuwa approaches inherently differ in their focus on front-door and back-door adjustments, respectively. Back-door adjustment, as used in Nuwa, requires traversing and repeatedly sampling as many environmental patches as possible, which leads to fundamentally different experimental setups. In contrast, front-door adjustment, which we employ, only needs to perceive the overall environment and fill in the environmental parts. This eliminates the need for repeatedly sampling numerous environmental patches, which is one of the key advantages of our method.
Additionally, in the experimental section of our paper, we have provided a comparison between our method and Nuwa under our settings. The results show that our method outperforms Nuwa across different datasets. This demonstrates the robustness and effectiveness of our approach.
| Datasets | Flip | Rotate | Crop | NuWa | CaPaint |
| ----------- | ----- | ------ | ----- | ----- | ------- |
| **DRS** | 2.10 | 2.11 | 2.34 | 2.02 | 1.57 |
| **KTH** | 23.15 | 23.14 | 23.11 | 22.32 | 20.56 |
| **SEVIR** | 15.41 | 15.45 | 15.95 | 15.14 | 14.63 |
| **TaxiBJ+** | 16.47 | 16.39 | 15.94 | 15.11 | 12.87 |
| **FireSys** | 17.02 | 17.07 | 17.15 | 16.68 | 15.79 |
To better explore the results of our method and Nuwa under the same settings, we followed your suggestion to align the experimental setup with that of Nuwa. Additionally, our experiments introduced backbones that have been updated in recent years, meaning the backbones we used are newer than those in Nuwa. Due to limited time and resources, we selected overlapping backbones such as SimVP and PredRNN-V2, and overlapping datasets,TaxiBJ+ and KTH for comparison, setting the environmental ratio to 15% for testing.
| Datasets | SimVP | SimVP | PredRNN-v2 | PredRNN-v2 |
| ----------- | ------- | ----- | ---------- | ---------- |
| | CaPaint | Nuwa | CaPaint | Nuwa |
| **TaxiBJ+** | 2.21 | 2.56 | 2.86 | 3.25 |
| **KTH** | 31.26 | 33.98 | 38.45 | 40.37 |
We selected MAE as the reference metric, and it is evident that our proposed CaPaint outperforms Nuwa across the board. It is worth mentioning that Nuwa requires generating 4 and 5 augmented sequences per spatio-temporal sequence for TaxiBJ+ and KTH, respectively, to achieve its results. In contrast, we only needed to generate 2 sequences to surpass Nuwa's performance. This is because Nuwa can only mix local information around the environment, whereas our method can consider and fill in global details, generating higher-quality data. This aligns with the background of our paper, which addresses data scarcity and uneven sensor collection.
In summary, our more advanced concept is fundamentally different from Nuwa and is more advantageous for practical applications. I believe my response has addressed your concerns. If you have any further questions, please feel free to let me know. Thank you!
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' rebuttal and additional experimental results. I recommend the authors to add the comparison results with Nuwa to the main paper. I have raised my rating accordingly.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your thoughtful feedback. We sincerely appreciate your constructive suggestions and are grateful for your support throughout this process.
Best regards,
Authors | Summary: This paper introduces a groundbreaking framework named CaPaint, which is designed to tackle the critical issues of data scarcity and the absence of causal connections in spatiotemporal (ST) prediction models. The authors have established a robust causal framework that not only identifies regions within data that exhibit causal relationships but also endows the model with the capability to reason about causality during a two-stage processing procedure. In the initial stage, they leverage self-supervised Vision Transformer (ViT) reconstruction to identify the crucial causal patches within ST observations. This is followed by an intervention phase where they employ diffusion inpainting techniques to manipulate non-causal areas while preserving the integrity of core causal areas. The innovative method reduces the complexity of generating data from exponential levels to quasi-linear levels, thereby significantly enhancing efficiency. Moreover, it has shown remarkable improvements across various ST benchmarks by integrating diffusion models as a novel data augmentation technique, marking a paradigm shift for this field.
Strengths: - Addresses data scarcity and lack of causal connections in ST prediction models effectively.
- Novel Method: Innovative use of self-supervised Vision Transformer reconstruction for causal patch identification. And employs diffusion inpainting techniques to manipulate non-causal areas, preserving core causal integrity.
- Demonstrates significant improvements across various ST benchmarks, integrating diffusion models as a novel data augmentation technique. Besides, it reduces data generation complexity from exponential to a quasi-linear level.
Weaknesses: - Details on computational efficiency or scalability of the proposed method are not provided, leaving it as a potential limitation for practical applications.
- More visualization of the prediction results should be included even in supplementary material.
Technical Quality: 3
Clarity: 3
Questions for Authors: Mentioned in the weakness section.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I think the author addressed the limitation mentioned in this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: - Details on computational efficiency or scalability of the proposed method are not provided, leaving it as a potential limitation for practical applications.
Thank you for your valuable feedback. We appreciate your concern regarding the computational efficiency and scalability of our proposed method.
**1:** We leverage the attention mechanism for causal deciphering, which does not introduce additional parameters. This ensures that our model remains efficient without incurring extra computational overhead. **2:** In the Introduction, we briefly analyze the computational efficiency of our method. Specifically, traditional backdoor adjustment requires traversing and repeatedly sampling all environmental patches, leading to exponential complexity: ${\cal O}( {T \times {\cal N}_E^{{\cal M}\left( {\rm{*}} \right)}} )$. In contrast, our CaPaint method employs front-door adjustment, which eliminates the need for repeated sampling of all environmental patches. This improvement reduces the complexity to linear levels: $\cal{O}(T \times {\cal N}_E)$.
**Reduction in Complexity**: Our approach substantially reduces the complexity of the optimal spatio-temporal causal discovery process from exponential to quasi-linear levels. By performing targeted interventions only on identified environmental patches rather than the entire dataset, we achieve significant computational savings.
**Scalability**: This reduction in complexity ensures efficient causal interventions, making our method more practical and scalable for real-world applications. Our method has been tested across multiple benchmark datasets (FireSys, SEVIR, Diffusion Reaction System (DRS), KTH, and TaxiBJ+), demonstrating its robustness and effectiveness in enhancing model performance.
**Practical Applications**: By implementing these enhancements, CaPaint provides a robust framework for spatio-temporal dynamics with improved computational efficiency and scalability. This makes it suitable for deployment in various real-world scenarios where data scarcity and computational efficiency are critical concerns.
- More visualization of the prediction results should be included even in supplementary material.
Thank you for your suggestion. In the main body of the paper, we have presented the visualization results for the SEVIR and TaxiBJ+ datasets. Additionally, we have included the visualization results for the KTH and DRS datasets in Appendix I and J, respectively. Although these visualizations are not in the supplementary material section, they are provided in the appendix. Furthermore, Appendix F also showcases the visualization of the inpainting effects of our CaPaint method on the SEVIR dataset.While we believe these visualizations adequately demonstrate the performance advantages of incorporating CaPaint, we are continuously updating our results to include more comprehensive visualizations. Specifically, we plan to add:
1. **Visualizations for More Datasets**: We will supplement the visualizations with results from the FireSys dataset.
2. **Visualizations for Different Backbones**: We will include visualization results using different backbone models.
3. **Comparisons with Different Augmentation Methods**: We will provide comparisons of visualizations using various augmentation methods.
Upon acceptance, we will include these additional results in the appendix for ease of reference. These updates will ensure that our paper provides a comprehensive overview of the performance improvements achieved by our method and its applicability to a wide range of datasets and model architectures.
I believe my response has addressed your concerns. If you have any further questions, please feel free to let me know. Thank you! | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to extend our sincere gratitude to all reviewers for their thorough and insightful feedback on our manuscript. We appreciate the time and effort you have invested in evaluating our work. Below, we provide an overall summary addressing the common strengths, weaknesses, and suggestions highlighted in your reviews.
#### Common Strengths:
1. **Addressing Data Scarcity and Causal Connections**: (Reviewer `botx`, `nGNj`,`mbnW`)
- Multiple reviewers noted that our work effectively addresses the critical issues of data scarcity and the lack of causal connections in spatio-temporal (ST) prediction models. We are pleased that our efforts to establish a robust causal framework were recognized.
2. **Innovative Methodology**: (Reviewer `botx`, `s4LS`,`mbnW`)
- The innovative use of self-supervised Vision Transformer (ViT) reconstruction for causal patch identification and the employment of diffusion inpainting techniques were highlighted as significant strengths. We are encouraged that our novel approach was well-received.
3. **Performance Improvements**: (Reviewer `botx`, `nGNj`,`mbnW`, `s4LS`)
- Reviewers acknowledged the substantial improvements demonstrated across various ST benchmarks, validating the robustness and versatility of CaPaint. The integration of diffusion models as a novel data augmentation technique was noted as a key contribution.
4. **Soundness and Contribution**:(Reviewer `botx`, `nGNj`,`mbnW`, `s4LS`,)
- The overall soundness and contribution of our work were rated positively. We are grateful for the recognition of the technical rigor and the potential impact of our research in advancing the field of spatio-temporal data analysis.
#### Common Weaknesses and Our Responses:
1. **Computational Efficiency and Scalability**: (Reviewer `botx`, `mbnW`)
- Several reviewers expressed concerns regarding the computational efficiency and scalability of our method. We have addressed this by elaborating on how our use of the attention mechanism and front-door adjustment reduces complexity from exponential to quasi-linear levels, ensuring efficient causal interventions suitable for real-world applications.
2. **Visualization of Prediction Results**: (Reviewer `botx`, `nGNj`)
- Reviewers suggested including more visualizations of prediction results, even in supplementary material. We have updated our paper to include zoomed-in versions of local details for different datasets, re-arranged the layout for better readability, and added more comprehensive visualizations for additional datasets, backbones, and augmentation methods. These enhancements will be included in the appendix upon acceptance.
3. **Clarification on Causality and Interpretability**: (Reviewer `mbnW`, `s4LS`,)
- The relationship between causality, interpretability, and their impact on generalization capabilities when dealing with uneven and insufficient data collection was pointed out as lacking. We have enriched our discussion by citing more relevant literature and providing a detailed explanation of how our approach leverages self-supervision and inpainting to dynamically address biased areas, thereby achieving robust causal discovery and interpretability.
4. **Comparison with NuwaDynamics**: (Reviewer `s4LS`,)
- Reviewers noted the need for a more direct comparison with NuwaDynamics using the same settings. We have aligned our experimental setup with NuwaDynamics and provided additional experiments to highlight our method's superior performance with fewer sequences, demonstrating the efficiency and quality of CaPaint.
#### Additional Improvements:
- **Typographical Corrections**: We have corrected the typo in the abstract where front-door adjustment was incorrectly referred to as back-door adjustment.
- **Detailed Methodological Clarifications**: We have provided more detailed descriptions on the synthesis of spatio-temporal data into coherent sequences and included additional experiments to substantiate the improvements in quality and efficiency of generation.
In conclusion, we are grateful for the positive feedback and constructive criticisms provided by the reviewers. Your comments have been invaluable in helping us improve our manuscript. We believe that the revisions and additional experiments we have incorporated address the concerns raised and enhance the overall quality and clarity of our work.
Thank you once again for your detailed and thoughtful reviews. We look forward to your continued feedback and hope our revised submission meets your expectations.
Warm regards,
Authors
Pdf: /pdf/d6b5240e81e14bb502407f18fc3826e66d1dfb22.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
M$^3$-Impute: Mask-guided Representation Learning for Missing Value Imputation | Reject | Summary: The paper introduces M3-Impute, a mask-guided representation learning method for missing value imputation. The core idea of M3-Impute is to leverage missingness information as an explicit input to the model through masking schemes. This approach allows M3-Impute to effectively learn both feature-wise and sample-wise correlations, accommodating various types of data missingness. The model employs a variant of GraphSAGE for graph representation learning, incorporating edge embeddings via neighborhood aggregation. It outperforms traditional tabular data models in various benchmark datasets.
Strengths: 1. The paper presents a novel approach to missing value imputation through the introduction of a mask-guided representation learning method (M3-Impute). The originality of the work lies in its utilization of missingness information as a model input, employing innovative masking schemes. This allows M3-Impute to accurately capture feature-wise and sample-wise correlations despite varying types of missing data (MCAR, MAR, MNAR). The use of GraphSAGE for graph representation learning, combined with edge embeddings via neighborhood aggregation, further distinguishes this work from traditional tabular data models.
2. The quality of the research is demonstrated through comprehensive experiments across multiple datasets and missing data mechanisms. The empirical results show that M3-Impute consistently outperforms baseline methods. The authors include a code package and datasets with the submission.
Weaknesses: 1. The paper evaluates the sensitivity of the M3-Impute model to the initialization parameter ϵ (Table 3), demonstrating that a non-zero value of ϵ improves imputation accuracy. However, the lack of detailed sensitivity analysis for other critical hyperparameters, such as the learning rate, batch size, number of GNN layers, and the dropout rate, represents a weakness.
2. The paper also has notable limitations in its contextualization relative to prior work. While it effectively presents M3-Impute and compares it against several baseline models, it lacks a deeper analysis of how these baseline models have evolved and the specific innovations they have introduced over time. For instance, the paper mentions GRAPE and IGRM as key prior graph-based imputation methods but does not adequately explore their strengths and weaknesses or how M3-Impute directly addresses the limitations of these methods. This omission makes it challenging to understand the novelty and improvements offered by M3-Impute.
3. Relying solely on MAE to evaluate the performance of imputation models has several limitations. MAE measures the average magnitude of errors but does not account for the variance or distribution of those errors, making it insensitive to outliers and providing no insight into model bias. This can result in an incomplete understanding of a model's performance, particularly in contexts where large errors or systematic biases are important considerations. To address these limitations, incorporating RMSE alongside MAE would be beneficial. RMSE penalizes larger errors more heavily, offering additional insight into the presence and impact of significant errors in the model's predictions.
Technical Quality: 3
Clarity: 2
Questions for Authors: - The statement that most learning-based methods are``built upon the raw tabular data structure as is, which greatly restricts them from jointly modeling the feature-wise and sample-wise correlations” (line 41) is not entirely accurate for two prominent tabular generative models, MIDA and GAIN. MIDA transforms raw tabular data into a higher-dimensional space through its encoder-decoder architecture. This transformation allows MIDA to capture more complex, nonlinear relationships that are not immediately apparent in the raw data. The adversarial process of GAIN allows it to model the joint distribution of the data, thus capturing complex correlations between features and samples.
- Gondara, L., Wang, K. (2018). MIDA: Multiple Imputation Using Denoising Autoencoders. In: Phung, D., Tseng, V., Webb, G., Ho, B., Ganji, M., Rashidi, L. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2018. Lecture Notes in Computer Science(), vol 10939. Springer, Cham. https://doi.org/10.1007/978-3-319-93040-4_21
- Wang, Zhenhua, et al. "Are deep learning models superior for missing data imputation in surveys? Evidence from an empirical comparison." Survey Methodology 48 (2022): 375-399.
- When discussing statistical methods (line 70), the authors should mention that FCS approaches such as MICE are flexible in imputing different types of variables.
- Related to the previous point, MICE is generally considered a statistical method rather than a learning-based method, although a learning algorithm such as CART can be used as the imputer. See the paper by Wang et. al. (2022).
- In Sec. 4.2, it should be noted that the M3-Impute model tends to perform slightly better under MAR and MNAR settings for most datasets, indicating its effectiveness in handling missingness that depends on the observed data.
- How can interpretability techniques be incorporated into M3-Impute to help users understand the imputation decisions?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors should consider summarizing the limitations of their method in the conclusion to provide a comprehensive overview of their work. Specifically, in Section 4.2, the authors discuss the cases of MAE degradation for the Kin8nm and Naval datasets. They attribute this to the independence of features in Kin8nm, which prevents observed features from aiding in the imputation of missing values, and the strong linear correlations between nearly all features in the Naval dataset. Summarizing these points in the conclusion would give readers a clear understanding of the method's limitations and the contexts in which it performs best.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our method as a novel approach with innovative masking schemes and that our experiments comprehensively demonstrate superior performance of our method to baseline methods. We also appreciate the constructive comments. Below we provide our response to the concerns raised.
**Q1. Detailed sensitivity analysis for other critical hyperparameters.**
Thanks for the comment. The results of a detailed sensitivity analysis for other critical hyperparameters are presented in Tables 5-7 in the rebuttal PDF. We will incorporate them into our final manuscript.
**Q2. Differentiation with GRAPE and IGRM.**
Thanks for pointing this out. GRAPE is a pioneering work that models tabular data with missing values as a bipartite graph. IGRM builds on the bipartite graph modeling and further computes sample correlations to connect similar samples by creating new edges for learning. However, they consider each sample or feature as a whole in computing correlations, whereas in M$^3$-Impute, we treat each entry in the tabular dataset individually to compute its correlation with other entries. This is achieved by our novel soft masking design with Feature Correlation Unit (FCU) and Sample Correlation Unit (SCU). We will clarify this point in the final manuscript. Thanks.
**Q3. RMSE comparison between M$^3$-Impute and baseline methods**
Thanks for the suggestion. The RMSE results are now provided in Tab.1 below, which shows that our method outperforms the baselines as well. We will incorporate them into the final manuscript.
**Tab.1: RMSE under MCAR setting with 30\% missingness. RMSE scores are enlarged by 10 times for better clarity.**
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|---------|:-------:|:------:|:----------:|:---------:|:--------:|:-------:|:--------:|:-------:|
| Mean | 2.92 | 1.33 | 2.23 | 2.49 | 3.54 | 2.89 | **2.88** | 1.98 |
| Svd | 3.07 | 1.28 | 2.65 | 2.11 | 4.09 | 1.18 | 4.75 | 2.74 |
| Spectral| 3.20 | 1.28 | 2.59 | 2.14 | 3.04 | 1.05 | 3.37 | 3.72 |
| Mice | 2.49 | 1.05 | 1.78 | 1.77 | 2.28 | 0.54 | **2.88** | 1.47 |
| kNN | 2.70 | 1.20 | 2.05 | 1.89 | 3.12 | 0.47 | 3.48 | 2.14 |
| Gain | 2.90 | 1.16 | 2.21 | 1.87 | 2.70 | 0.89 | 3.23 | 1.63 |
| Miwae | 5.93 | 1.41 | 2.41 | 4.70 | 3.78 | 3.03 | 3.03 | 2.08 |
| Grape | _2.34_ | **0.89** | _1.27_ | _1.32_ | _2.24_ | _0.18_ | _2.89_ | _1.32_ |
| Miracle | 43.86 | 1.56 | 2.17 | 43.61 | 42.21 | 0.60 | _2.89_ | 1.46 |
| HyperImpute | 2.50 | _0.99_ | 1.47 | 1.62 | 2.52 | 0.19 | 3.04 | 1.43 |
| **M${^3}$-Impute** | **2.27** | **0.89** | **1.23** | **1.28** | **2.19** | **0.16** | _2.89_ | **1.30** |
|
**Q4. Misclassification of MIDA and GAIN.**
Thanks for the correction. We will restructure our introduction as suggested to make our arguments more precise in the final manuscript.
**Q5. When discussing statistical methods (line 70), the authors should mention that FCS approaches such as MICE are flexible in imputing different types of variables.**
Thanks for the comment. We will restructure the related work section and discuss such approaches correctly in the final manuscript.
**Q6. More precise statement in Sec. 4.2.**
Thanks for the detailed comment. We will make the statements more precise in the final manuscript.
**Q7. How can interpretability techniques be incorporated into M3-Impute to help users understand the imputation decisions?**
Thanks for the suggestion. We plan to add a plot in the final manuscript to show the intermediate correlation learning process. This plot will illustrate the dependency on other entries when imputing the value of a missing entry in the table.
**Q8. Summarize limitations in the conclusion.**
Thanks for the suggestion. We will summarize the limitations of our methods, e.g., presence of the datasets where our method is not the top performer, in the conclusion of the final manuscript.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed the lack of empirical validation by providing additional sensitivity analyses for hyperparameters (Tables 5-7 in the rebuttal PDF) and RMSE results (Table 1 in the rebuttal) as requested.
They acknowledge the need to add a limitations section in the conclusion and summarize the cases where their method underperforms (KIN8NM and NAVAL datasets), and commit to adding this in the final manuscript.
The broader empirical validation and commitment to adding a limitations section will strengthen the revised paper. | Summary: This study proposes a missing value imputation method. The proposed method tackles the missing value imputation problem as a link prediction task on the bipartite graph. It represents a data matrix with missing values as a bipartite graph, then uses a graph neural network on the bipartite graph to learn the embeddings of samples and features. Next, a feature correlation unit and a sample correlation unit are employed to obtain feature-wise and sample-wise correlations, which are then fed into an MLP to obtain imputed values.
Strengths: - An advanced and novel approach to missing value imputation. The idea is intuitive.
- Experimental comparison is done with various existing methods, including recent ones.
- Significant performance improvements achieved.
Weaknesses: - The size of the bipartite graph may drastically increase with data size and dimensionality.
- No investigation on various missingness scenarios and missing rates.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Computational and space complexity analysis is needed. Can the proposed method be used for a very large training dataset with high dimensionality?
- How does the proposed method perform under lower and higher missing rates?
- The inference phase is unclear. Please elaborate on how the proposed method makes imputations for a query instance containing missing values?
- While the authors reviewed other recent methods that leveraged graph neural networks ([40], [52], [54]), only the method in [52] was compared in the experiments. Why were the methods in [40] and [54] not compared?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our method as an advanced and novel approach that achieves significant performance improvements, and that our contribution is excellent. We also appreciate the constructive comments. Below we respond to the main concerns raised.
**Q1. The size of the bipartite graph may drastically increase with data size and dimensionality.**
Yes, the size of the bipartite graph would increase with larger datasets. However, as demonstrated in Table 4 in the rebuttal PDF, the computation cost would not increase drastically as the mini-batch neighborhood aggregation is efficient in generating node embeddings, which use a reasonable amount of computational resources.
**Q2. No investigation on various missingness scenarios and missing rates.**
Thanks for the comment. We would like to point out that we indeed carried out this study, and the results are available in Tables 5, 6, 8, and 9 in the appendix of the submitted manuscript.
**Q3. Computational and space complexity analysis is needed. Can the proposed method be used for a very large training dataset with high dimensionality?**
Yes, we have conducted experiments on new large datasets, including Protein, Spam (with 57 features), Letter, and California Housing, which have a large number of samples and features. The results confirm that our method is feasible and efficient in handling such large datasets. The dataset statistics and running times can be found in Table 1 and Table 4 in the rebuttal PDF, respectively. Thanks.
**Q4. How does the proposed method perform under lower and higher missing rates? The inference phase is unclear. Please elaborate on how the proposed method makes imputations for a query instance containing missing values?**
Thanks for the comment. The results of different missing ratios are available in Table 8 in the submitted manuscript.
In the inference phase, the pipeline for imputing the missing values of a given sample is detailed in Algorithm 1 in the manuscript. For instance, given a sample with 5 features, where 2 features are missing, M$^3$-Impute first introduces new sample node and establishes edges between the sample node and the three non-missing feature nodes, which already exist in the original graph. The sample node embedding is initialized as per Equation (2). The embeddings of feature nodes and the sample node are then updated through the neighborhood aggregation process using the node embeddings learned from training.
Next, these updated node embeddings are processed through Feature Correlation Unit (FCU) and Sample Correlation Unit (SCU) to learn the feature-wise and sample-wise correlations. Suppose we are to impute the first missing feature, denoted by $f$. FCU computes the correlations between $f$ and all three non-missing features, resulting in the vector $c^{f}_s$. In addition, SCU first measures the similarity between the query sample and a set of peer samples when imputing the missing feature $f$ according to Equation (6), and then fuses the information from the peer samples as in Equation (9), resulting in the vector $z^{f}_s$. Finally, the missing feature $f$ is imputed using Equation (11).
**Q5: While the authors reviewed other recent methods that leveraged graph neural networks ([40], [52], [54]), only the method in [52] was compared in the experiments. Why were the methods in [40] and [54] not compared?**
Thanks for the comment. The code of [54] was unavailable at the submission time of our paper. While their code was recently available, we have not been able to complete running their code on all the 25 datasets without out of memory errors on our experiment platform (NVIDIA A100 GPU with 80GB memory). Hence, we don't report the results here. Nonetheless, we have been able to run the code of [40] and report the results in Tab.1.
### Tab. 1: Imputation accuracy in MAE under MCAR setting with 30\% missingness
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|---------------|-------|------|----------|---------|--------|-------|--------|-------|
| GINN [40] | 6.89 | 9.49 | 142.85 | 60.66 | 55.06 | 3529.54 | 2.16 | 29.91 |
| M${^3}$-Impute | **1.33** | **0.60** | **0.71** | **0.59** | **1.31** | **0.06** | **2.50** | **0.99** |
| | Summary: This paper addresses the challenge of missing values in data analysis and machine learning by proposing M3-Impute, a novel imputation method. Traditional imputation techniques often neglect 'missingness' information and fail to explicitly model feature and sample correlations, leading to suboptimal results. M3-Impute innovatively incorporates missingness information and correlations through advanced masking schemes. It represents data as a bipartite graph and utilizes a graph neural network with a refined initialization process to learn node embeddings. These embeddings are further optimized using feature correlation and sample correlation units, which explicitly consider the correlations during imputation. The method's effectiveness is demonstrated through experiments on 15 benchmark datasets with three different missing patterns.
Strengths: The feature correlation unit (FCU) and sample correlation unit (SCU) are particularly compelling. The FCU learns correlations between the target missing feature and observed features within each sample, refined by a soft mask on missingness information. Similarly, the SCU computes sample-wise correlations, enhanced by another soft mask on missingness information for pairs of samples.
Integrating FCU and SCU outputs to estimate missing values is methodologically sound. Extensive experiments on 15 open datasets show M3-Impute's superior performance in 13 out of 15 cases under various missing value patterns. The reported improvements in mean absolute error (MAE), up to 11.47% over the second-best method, underscore M3-Impute's practical relevance and robustness.
Weaknesses: M3-Impute demonstrates strong performance across many datasets but shows limitations in handling datasets with highly independent features or strong linear correlations, such as the KIN8NM and NAVAL datasets.
The model's robustness in general scenarios may degrade when confronted with datasets containing extreme correlation structures.
Future improvements could concentrate on enhancing M3-Impute's adaptability to these challenging cases to broaden its applicability and robustness across diverse datasets.
Technical Quality: 2
Clarity: 2
Questions for Authors: In ablation study (4.3), Could you provide more details on the refined initialization process of feature-node and sample-node embeddings? How does it differ specifically from the initialization used in Grape?
Can you provide more insights into the computational complexity and runtime performance of M3-Impute compared to the baselines, especially for large datasets?
In missing ratio test, why does M3-Impute perform similarly to Grape on the KIN8NM dataset, and what characteristics of KIN8NM contribute to this result?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Although M3-Impute outperforms other baselines on most datasets, two cases (KIN8NM, NAVAL dataset) highlight its limitations in handling datasets with either highly independent features or strongly linear correlations. This suggests that while M3-Impute is robust in general scenarios, its performance may degrade in datasets with extreme correlation structures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing that our FCU and SCU are particularly compelling, our method is methodologically sound, and the experiments are extensive. We also appreciate the reviewer for the constructive comments. Below we provide our response to the concerns raised.
**Q1. M3-Impute demonstrates strong performance across many datasets but shows limitations in handling datasets with highly independent features or strong linear correlations, such as the KIN8NM and NAVAL datasets. The model's robustness in general scenarios may degrade when confronted with datasets containing extreme correlation structures.**
Thanks for the comment. While our method does not show a significant improvement compared to state-of-the-art methods in such extreme cases, it does remain competitive. For instance, in the KIN8NM and NAVAL datasets, M$^3$-Impute achieves second-best results, with a difference of less than 0.02 in MAE compared to the top performers. We have considered 10 new open datasets in our experiments, in addition to the original 15 datasets, leading to a total of 25 open datasets. Our method consistently outperforms the baselines, demonstrating its general applicability.
**Q2. Future improvements could concentrate on enhancing M3-Impute's adaptability to these challenging cases to broaden its applicability and robustness across diverse datasets.**
Thanks for the comment. We will explore challenging cases where data have extremely strong or weak correlations in future work.
**Q3. In ablation study (4.3), Could you provide more details on the refined initialization process of feature-node and sample-node embeddings? How does it differ specifically from the initialization used in Grape?**
We appreciate the opportunity to clarify our approach regarding node embedding initialization.
**Feature-node Embeddings:** In Grape, all feature-node embeddings are initialized as one-hot vectors. Since one-hot vectors are orthogonal, this implicitly assumes that all features are independent. However, in reality, features are often correlated, and initializing feature nodes as one-hot vectors can hinder the modeling of these correlations. To address this issue, we have refined the initialization of feature-node embeddings by using a learnable vector for each feature node. This approach enables the feature node embeddings to be learned during training, allowing correlated features to potentially have similar embeddings from the early stage of learning, which better captures the relationships between features.
**Sample-node Embeddings:** In Grape, sample-node embeddings are initially set to all-one vectors, which results in identical embeddings for all sample nodes at the beginning. This approach does not capture the unique missingness information of each sample. In contrast, we propose using Equation (2) for initializing sample-node embeddings, where observable feature values of a sample are kept and a small value $\epsilon$ is attached to unobserved features. This method ensures that the initial embeddings reflect the missingness information that is specific to each sample.
We hope this explanation clarifies the rationales behind our refinements and their importance in improving imputation accuracy.
**Q4. Can you provide more insights into the computational complexity and runtime performance of M3-Impute compared to the baselines, especially for large datasets?**
Thanks for the suggestion. The running time of each method is now shown in Table 4 in the rebuttal PDF. The results show that our method is both accurate and time-efficient. Among the datasets, **PR**otein, **SP**am, and **LE**tter are large datasets, where our method takes a little more time to achieve higher accuracy, which we believe is worthwhile.
**Q5. In missing ratio test, why does M3-Impute perform similarly to Grape on the KIN8NM dataset, and what characteristics of KIN8NM contribute to this result?**
As can be seen from Figure 4 in the appendix of the submitted manuscript, the features in KIN8NM are highly independent. Almost all the imputation methods perform similarly to Grape on this dataset. Please also refer to our response to Q1. Thanks. | Summary: The paper proposed a new imputations method called M3-impute. M3-impute follows the basic structure of some recent imputation methods: a undirected bipartite graph is constructed with nodes for features and samples, where edge weights correspond to observed data at the given feature-sample pair. Previous approaches use Graph Neural Networks (GNNs) to impute missing values via edge weight prediction. M3-impute improves these approaches by adding two new components on top of an initial GNN to model feature-wise and sample-wise correlations respectively. Empirical results show that M3-impute achieves competitive performance in terms of MAE for imputation across several tabular datasets.
Strengths: - The paper is generally well written.
- Empirical results are extensive. Many other imputations methods are included for comparison, providing a good representation for the state-of-the-art for tabular data imputation. Ablation studies and robustness studies also further strengthen the credibility of the methodology.
Weaknesses: - The paper does not support categorical features. This is a big weakness compared to other imputation methods that can handle categorical features such as iterative approaches like hyperimpute.
- The paper does not discuss the impact of missing value imputation on downstream tasks. Imputation is usually a preprocessing step, and thus assessing the impact on possible downstream tasks is paramount. For example, in supervised learning, some recent evidence suggests that mean/zero imputation is as good as more complex imputations [1, 2].
[1] Le Morvan, Marine, et al. "What’s a good imputation to predict with missing values?." Advances in Neural Information Processing Systems 34 (2021): 11530-11540.
[2] Van Ness, Mike, and Madeleine Udell. "In defense of zero imputation for tabular deep learning." NeurIPS 2023 Second Table Representation Learning Workshop. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the current approach for handling categorical features? Do none of the datasets in the experiments have any categorical features, or are these feature simply being one-hot encoded?
- How does the runtime of M3-impute compare to the other methods? I know this is briefly discussed in appendix 5, but more details runtimes would be appreciated.
- Why are standard deviations not included for Table 1?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses. In particular, not handling categorical features is not mentioned in the paper anywhere as a limitation of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for recognizing that our paper is well-written and our empirical results are extensive. Below we provide our response to the concerns raised.
**Q1. The paper does not support categorical features. This is a big weakness compared to other imputation methods that can handle categorical features such as iterative approaches like hyperimpute.**
We appreciate your feedback and would like to clarify that our method is indeed capable of handling mixed features, including categorical ones.
Specifically, during the forward computation, in M$^3$-Impute, we first convert categorical feature values into numerical values (e.g., categories 1, 2, and 3 are converted to real numbers 1, 2, and 3) and incorporate them alongside numerical features in the initialization stage. Subsequently, M$^3$-Impute performs GNN computations to obtain node embeddings. These embeddings are then processed through Feature Correlation Unit (FCU) and Sample Correlation Unit (SCU) to compute the corresponding context vectors. Finally, M$^3$-Impute uses an MLP with a ReLU activation function for imputing numerical features and an MLP with a softmax activation function for imputing categorical features. This is feasible because we know the data type of the target feature to be imputed and can switch the activation function accordingly. For parameter updates, we use the L1 error for numerical features and the cross-entropy loss for categorical features.
In our submitted manuscript, we already reported the results on the performance of our method on the datasets with mixed feature types, such as the housing and airfoil datasets. To further demonstrate the efficacy of our method in handling categorical features, we have evaluated M$^3$-Impute on four new datasets with mixed feature types. The dataset statistics are presented in Tab.1 and the results are shown in Tab.2.
### Tab.1: 4 Additional datasets that contain categorical features.
| | abalone | Ai4i | CMC | German |
|-------------------------|:---------:|:-------:|:------:|:--------:|
| #Samples | 4177 | 10000 | 1473 | 1000 |
| #Numerical Features | 7 | 7 | 8 | 13 |
| #Categorical Features| 1 | 5 | 1 | 7 |
|
### Tab.2: Imputation accuracy in MAE under MCAR setting with 30\% missingness.
| | Abalone | Ai4i | CMC | German |
|------------|:---------:|:------:|:------:|:--------:|
| Mean | 2.52 | 1.07 | 2.35 | 2.52 |
| Svd | 2.60 | 1.18 | 2.52 | 2.60 |
| Spectral | 2.45 | 1.58 | 2.96 | 2.45 |
| Mice | 2.26 | 0.87 | 2.06 | 2.26 |
| kNN | 2.34 | 1.17 | 2.32 | 2.34 |
| Gain | 2.27 | 1.03 | 2.33 | 2.27 |
| Miwae | 2.59 | 1.12 | 2.37 | 2.59 |
| Grape | *2.01* | 0.79 | *1.87* | *2.01* |
| Miracle| 39.17 | 1.02 | 2.16 | 39.17 |
| HyperImpute | 2.05 | **0.75** | 1.91 | 2.05 |
| **M³-Impute** | **1.84** | *0.76* | **1.81** | **1.87** |
|
**Q2. The paper does not discuss the impact of missing value imputation on downstream tasks. Imputation is usually a preprocessing step, and thus assessing the impact on possible downstream tasks is paramount. For example, in supervised learning, some recent evidence suggests that mean/zero imputation is as good as more complex imputations.**
Thanks for your constructive comments. We have conducted new experiments on the impact of imputation methods on downstream tasks and report the results in Tab.3. The results show that M$^3$-impute outperforms the baselines, including the mean imputation, for most cases. We will include the new results in the final manuscript.
### Tab.3: Averaged RMSE of label prediction under MCAR setting with 30\% missingness.
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|--------------|:-------:|:------:|:----------:|:---------:|:--------:|:--------:|:--------:|:-------:|
| Mean | 13.10 | 0.69 | 13.10 | 6.19 | 5.49 | 0.0075 | **0.22** | 8.38 |
| Svd | 13.60 | 0.69 | 13.80 | 6.08 | 4.82 | 0.0070 | 0.24 | 9.87 |
| Spectral | 13.70 | 0.69 | 13.40 | 6.13 | 4.81 | 0.0067 | _0.23_ | 9.45 |
| Mice | 12.80 | 0.69 | 13.00 | 5.80 | 3.99 | 0.0057 | **0.22** | 6.94 |
| kNN | 13.00 | _0.67_ | 12.40 | 5.85 | 4.02 | 0.0062 | _0.23_ | 7.75 |
| Gain | 13.70 | 0.68 | 13.30 | 5.91 | 3.67 | 0.0070 | _0.23_ | 6.83 |
| Miwae | 17.40 | 0.70 | 13.20 | 7.28 | 4.98 | 0.0075 | _0.23_ | 8.45 |
| NeuMiss | 16.79 | 0.87 | 14.44 | 18.13 | 14.09 | 1.95 | 0.22 | 56.74 |
| Grape | _12.67_ | _0.67_ | _11.63_ | _5.23_ | 3.60 | _0.0049_ | _0.23_ | 6.72 |
| Miracle | 17.80 | 0.70 | 13.20 | 9.24 | 9.54 | 0.0051 | **0.22** | 6.62 |
| HyperImpute | 13.10 | _0.67_ | 12.40 | 5.45 | _3.54_ | **0.0043** | _0.23_ | **6.44** |
| **M${^3}$-Impute** | **12.43** | **0.66** | **11.47** | **5.19** | **3.45** | _0.0049_ | **0.22** | _6.49_ |
|
**Q3. How does the runtime of M3-impute compare to the other methods? I know this is briefly discussed in appendix 5, but more details runtimes would be appreciated.**
Thanks for the question. We now report the running time performance of each imputation method in Table 4 in the rebuttal PDF. We will incorporate it into the final manuscript.
**Q4. Why are standard deviations not included in Table 1?**
Thanks for the question. Due to space constraints, we were unable to include both MAE scores and the standard deviations of all methods in Table 1, but we provided the comprehensive results (including both MAE scores and standard deviations) in Table 8 in the appendix of the submitted manuscript.
---
Rebuttal Comment 1.1:
Comment: I've raised my score to a 6 (weak accept) as the authors have addressed several of the concerns in my review. | Rebuttal 1:
Rebuttal: We appreciate the constructive comments from Reviewer LUcW (R1), JuhZ (R2), LMmS (R3), f6SV (R4), E1Qa (R5), Bqr5 (R6), and WjjR (R7). We are encouraged that they find our approach novel (R1, R2, R3, R5, R6, R7), our masking scheme innovative and effective (R2, R3, R5, R7), our experiments comprehensive and extensive (R1, R2, R4, R5, R7), and our manuscript well written (R1, R4). Below, we address the common concerns raised. For individual questions, please refer to our separate response to each reviewer.
**Experiments on 10 more datasets:**
We have further tested our method on 10 additional datasets, totaling 25 datasets. Of the 10 new datasets, five are relatively large, and four contain mixed types of features. Details of the datasets and the results are provided in Table 1 and Table 2 below, respectively. The results again demonstrate the effectiveness of M$^3$-Impute, achieving nine best and one second-best in imputation accuracy.
**Table 1: 10 additional datasets for data imputation: **PR**otein, **SP**am, **LE**tter, **AB**alone, **AI**4i, **CM**c, **GE**rman, **ST**eel, **LI**bras, and **CA**lifornia-housing, totaling 25 datasets studied.**
| | PR | SP | LE | AB | AI | CM | GE | ST | LI | CA |
|------------|:-------:|:------:|:-------:|:------:|:------:|:------:|:------:|:------:|:-----:|:-------:|
| # of Samples | 45730 | 4601 | 20000 | 4177 | 10000| 1473 | 1000 | 1941 | 360 | 20640 |
| # of Features| 9 | 57 | 16 | 8 | 12 | 9 | 20 | 33 | 91 | 9 |
|
**Table 2: MAE under MCAR setting with 30\% missingness. Please refer to the caption of Table 1 for dataset names.**
| Model | PR | SP | LE | AB | AI | CM | GE | ST | LI | CA |
|---------------|:-------:|:------:|:-------:|:------:|:------:|:------:|:------:|:------:|:-----:|:-------:|
| Mean | 0.91 | 0.23 | 1.28 | 2.52 | 1.07 | 2.35 | 2.52 | 1.80 | 1.82 | 1.13 |
| Svd | 1.00 | 0.31 | 1.29 | 2.60 | 1.18 | 2.52 | 2.60 | 1.37 | 0.37 | 1.35 |
| Spectral | 1.14 | **0.16** | 1.75 | 2.45 | 1.58 | 2.96 | 2.45 | 1.10 | 0.18 | 1.50 |
| Mice | 0.33 | 0.22 | 1.00 | 2.26 | 0.87 | 2.06 | 2.26 | 0.95 | _0.11_ | 0.69 |
| kNN | 0.58 | _0.17_ | 0.89 | 2.34 | 1.17 | 2.32 | 2.34 | 0.78 | 0.25 | 1.17 |
| Gain | 0.72 | 0.21 | 1.09 | 2.27 | 1.03 | 2.33 | 2.27 | 1.03 | 0.46 | 1.07 |
| Miwae | 0.94 | **0.16** | 1.33 | 2.59 | 1.12 | 2.37 | 2.59 | 1.70 | 2.13 | 1.16 |
| Grape | _0.25_ | _0.17_ | _0.53_ | _2.01_ | 0.79 | _1.87_ | _2.01_ | _0.45_ | **0.10** | **0.54** |
| Miracle | 0.32 | 1.07 | 1.06 | 39.17 | 1.02 | 2.16 | 39.17 | 1.51 | 51.37 | 0.67 |
| HyperImpute | _0.25_ | 0.18 | 0.61 | 2.05 | **0.75** | 1.91 | 2.05 | 0.72 | _0.11_ | _0.57_ |
| **M${^3}$-Impute** | **0.24** | **0.16** | **0.52** | **1.84** | _0.76_ | **1.81** | **1.87** | **0.39** | **0.10** | **0.54** |
|
**Computational resources and runtime (@R1, R2, R5, R6):**
We have added a runtime comparison in Table 4 in the rebuttal PDF and will include the results in the final manuscript. The results show that our method is both accurate and time-efficient. For example, for inference with GPU, the time taken to impute *all* the missing values for any dataset we tested is less than one second under the setting of MCAR with 30\% missingness.
**Performance on downstream tasks (@R1, R4):**
We have conducted new experiments on the impact of imputation methods on downstream tasks. Specifically, given datasets with missing values, we first impute all the missing values and then use the completed datasets for downstream tasks. We report the results on the downstream-task performance of imputation methods in Tab.1 in our response to @R1 and Tab.3 in our response to @R4. As shown in the tables, our method consistently outperforms the imputation baselines, indicating that the values imputed by our method are more beneficial for downstream tasks.
**Hyperparameter management (@R1, R7):**
We followed the commonly used hyperparameter settings from the previous studies for baselines. For M$^3$-Impute, we used the same set of parameters for all the experiments. Additionally, we have now included an analysis of the impact of hyperparameters on the performance of M$^3$-Impute. Results can be found in Tables 5-7 in the rebuttal PDF.
**Questions regarding the set $\mathcal{P}$ (@R3)**
The set $\mathcal{P}$ is obtained through a sampling strategy based on the cosine similarity with the target node since directly selecting peers based on the degree of similarity, e.g., using $k$-nearest neighbors, may introduce extra computational complexity and hinder the method's generalization ability. It is *updated* every epoch. We also provide a rule of thumb for the size of the set based on extensive experiments. Please refer to our responses to Q1 and Q3 @R3 for more details.
**Categorical Features Handling (@R4):**
We would like to point out that our method M$^3$-Impute can handle mixed types of features, including categorical ones. Specifically, it converts categorical features into numerical values during initialization and uses an MLP with softmax activation for imputing categorical values, with cross-entropy loss for parameter updates. We already included the results of M$^3$-Impute on the datasets with mixed feature types, which are shown in Tables 1 and 9 in our submitted manuscript. We have further tested M$^3$-Impute on four additional datasets with mixed data types. Dataset details and results can be found in Tab.1 and Tab.2 in our response to @R4.
**Imputation results with RMSE (@R7):**
We have included new results on the performance of imputation methods measured in RMSE in Tab.1 in our response to @R7. The results show that our method consistently outperforms the baselines. We will include them in the final manuscript.
Pdf: /pdf/78f5a05fa8a4409deca8c1967e6958907b9b76e2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper proposes M^3-Impute, a missing value imputation method that utilizes GNNs to learn embeddings of samples and features. By incorporating feature correlation unit and sample correlation unit, M^3-Impute effectively captures correlations between features and samples for accurate imputation.
Strengths: S1. This paper introduces a novel masking scheme that effectively utilizes missing information for modeling.
S2. This paper proposes the feature correlation unit (FCU) and sample correlation unit (SCU), which help to consider feature and sample correlations during imputation.
S3. Experimental evaluations on various datasets compare the proposed method with state-of-the-art approaches, demonstrating good imputation performance.
Weaknesses: W1. SCU takes into account the pairwise similarity of \mathcal{P} during its construction, which subsequently determines the scalar parameter \alpha during imputation. The initialization of \mathcal{P} seems to directly impact the model's performance and remains unchanged once set. It would be beneficial for the authors to discuss this aspect and, if possible, provide some experimental evidence to support their approach.
W2. In Section 4.4, the paper mentions different sampling strategies for SCU and uses a new strategy in the ablation study (Table 2), which is different from the strategy mentioned in Section 3.4. The authors claim that this strategy leads to inferior performance compared to previous strategies, thereby highlighting the superiority of the ablation study results. This lacks experimental evidence and results in inconsistency in the experimental setup.
W3. As a crucial parameter, the size of \mathcal{P} directly affects the construction of SCU. Table 3 presents experimental results with different sizes, but the differences are not significant, which is somewhat counterintuitive. Although the authors discuss the experimental results in Section 4.4, the performance fluctuation of 0.01 to 0.02 does not clearly reflect the "decrease then increase" trend mentioned by the authors. Exploring larger peer values and providing more detailed analysis and guidance on parameter selection might be beneficial.
W4. The authors emphasize the importance of specific missingness information throughout the paper. Intuitively, different types of missing data (MAR, MNAR, MCAR) might offer varying types of missingness information, potentially impacting the model's performance. While the paper experiments with data of different missingness types, a more thorough discussion of the results could enhance the motivation and clarity of the paper.
W5. In Section 4.3, the caption of Table 2 references a concept, the uniform sampling strategy, which is introduced for the first time in a later subsection. The authors might consider adjusting the structure of the paper for better clarity.
W6. In Figure 3, in the subfigure with the missing ratio of 0.7, two bars exceed the upper boundary and need adjustment.
W7. It would be helpful to add independent labels to all subfigures, such as (a), (b), etc., to facilitate referencing.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1. In Section 3.4, the authors mention that \mathcal{P} is constructed by randomly choosing with a certain probability. Why not select directly based on the degree of similarity rather than introducing randomness? How might these different approaches impact the model's performance?
Q2. To what extent does applying different GNN models to learn embeddings impact the performance of the model proposed in this paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing that our masking schemes are novel, the proposed correlation units are helpful, and our experiments demonstrate good performance. We also appreciate the constructive comments. Below we provide our response to the concerns raised.
**Q1. The initialization of $\mathcal{P}$ in SCU remains unchanged once set. Why not select peers directly based on the degree of similarity rather than introducing randomness?**
Sorry for the confusion caused. $\mathcal{P}$ is obtained from a sampling strategy where the probability of a peer being selected is proportional to its cosine similarity with the target node. The set $\mathcal{P}$ is updated every epoch and does *not* remain unchanged.
We would also like to point out that the set $\mathcal{P}$ balances both high-similarity peers and potential peers that serve for regularization and generalization purposes since directly selecting peers based on the degree of similarity, such as using a $k$-nearest neighbors method, may introduce extra computational complexity and hinder the method's generalization ability. We will incorporate the clarification into our final manuscript. Thanks.
**Q2. Table 2 in Section 4.4 for sampling strategy comparison is confusing; the caption is unclear.**
The cosine-similarity sampling strategy (introduced in Section 3.4) is integrated into M$^3$-impute for the main results across all experiments. Section 4.4 presents an ablation study where the original cosine-similarity sampling strategy is replaced with a uniform sampling strategy. The performance comparison is shown in Table 2, where M$^3$-impute indicates the cosine-similarity sampling strategy and M$^3$-uniform represents the uniform sampling strategy. As uniform sampling does not consider peer similarities, its performance is expected to be inferior to the original strategy, which is verified in Table 2. We will elaborated more in the table caption and incorporate the changes into the final manuscript.
**Q3. Table 3 does not clearly present the trend of $\mathcal{P}$.**
As explained in our response to Q1, the set $\mathcal{P}$ is updated every epoch. A proper peer size should balance high-similarity peers and potential peers that serve for regularization and generalization purposes. In general, the trend across different datasets shows that a too-small peer size may only include high-similarity peers, while a too-large peer size may include too many noisy nodes and incur higher computational overhead. The small fluctuations indicate that our method is relatively robust to this parameter. From the extensive experiments on 25 datasets, we recommend a peer size of 5–10 for practical use. We will properly revise the manuscript, not only proving a more comprehensive table but also showing the running time. Thanks.
**Q4. A more thorough discussion of the results under different missing types could enhance the motivation and clarity of the paper.**
Thanks for the comments. As demonstrated in Tables 1, 5, and 6 in our manuscript, M$^3$-Impute consistently outperforms the baselines across all three missingness patterns. This superior performance is due to M$^3$-Impute's unique approach. Rather than assuming the data follows MCAR, MAR, or MNAR missingness patterns from the outset, we designed M$^3$-Impute to leverage missingness information directly, enabling it to learn feature-wise and sample-wise correlations. Specifically, in Feature Correlation Unit (FCU), M$^3$-Impute uses the missingness results (i.e., known masks) to learn the correlations between the imputation targets and observable features. In Sample Correlation Unit (SCU), M$^3$ employs the missingness results to better capture sample correlations. Since feature and sample correlations exist regardless of the cause of missingness, M$^3$-Impute is naturally adaptive and robust across all three missingness settings.
Another notable feature of M$^3$-Impute is its ability to learn the cause of missingness. In the FCU unit, M$^3$-Impute explicitly captures the correlations between observed and missing features. When data is missing under MAR and MNAR conditions, the missing values depend on the observed ones. Since FCU explicitly captures these relationships, M$^3$-Impute can potentially identify the cause of missingness and enhance imputation accuracy. This capability may explain why M$^3$-Impute significantly outperforms the baselines in the MAR and MNAR settings compared to the MCAR settings.
**Q5. Two bars exceed the upper boundary of Figure 3.**
We will update the figure to show the full range of the performance. Thanks.
**Q6. Add independent labels to all subfigures.**
The subfigure labels will be added to the final manuscript. Thanks.
**Q7. The influence on performance by applying different GNN models.**
We have conducted new experiments using GNN variants such as GraphSAGE, GAT, and GCN. The results are shown in Table 3 in the rebuttal PDF. The results indicate that different aggregation mechanisms may introduce varying errors, but our method consistently outperforms its GRAPE counterpart, demonstrating its effectiveness.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. My concerns have been addressed, and I will raise my score to 5. | Summary: This is a novel approach for imputing missing data using mask-guided representation learning. The main contributions include the development of an imputation model that leverages both feature and sample correlations. This model improves imputation accuracy and robustness compared to existing methods. The paper also provides comprehensive experiments and ablation studies to validate the effectiveness of the proposed approach across various datasets and missing data scenarios.
Strengths: - Novelty: A unique mask-guided representation learning method that effectively combines feature-wise and sample-wise correlations.
- Comprehensive experiments
- Strong empirical performance
Weaknesses: - Computational complexity
## Minor Points
l4: "Existing imputation methods, however, fall short of considering the ‘missingness’ information in the data during initialization and modeling the entangled feature and sample correlations explicitly during the learning process,"
-> This is not true. Many existing methods consider missingness patterns.
The distinction between "statistical" and "learning based" methods seems off. Certainly most learning based methods are statistical, and vice versa.
l. 38 "struggles" -> struggle
Technical Quality: 3
Clarity: 3
Questions for Authors: - (How) do you do HPO for competing methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - Gives little insight into why it works better on some datasets than others.
- Would be interesting to understand better how robust results are under systematic changes in datasets, e.g., different types of missingness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for recognizing that our work is novel with a unique mask-guided representation learning method and we demonstrate strong empirical performance with comprehensive experiments. Below we provide our response to the concerns raised.
**Q1. Weakness: Computational complexity**
Thanks for the comment. The nature of neural network architectures makes it difficult to analyze the computational complexity rigorously. Nonetheless, we provide numerical results of running time here to show the computation overhead of each method. Please refer to Table 4 in the rebuttal PDF.
**Q2. Minor Points \#1: ''l4: `Existing imputation methods, however, fall short of considering the ‘missingness’ information in the data during initialization and modeling the entangled feature and sample correlations explicitly during the learning process,' $\rightarrow$ This is not true. Many existing methods consider missingness patterns.''**
We appreciate your feedback. Regarding the point about ''fall short of considering the missingness information in the data during initialization'' not being precise, our intention was to highlight that many existing methods ignore the missingness information during their **initialization stage**. For instance, in iterative imputation frameworks such as MICE and HyperImpute, the initialization stage often involves filling in missing values with some starting values (typically using the mean values of features) before learning. Similarly, graph-based imputation methods like GRAPE and IGRM do not explicitly consider missingness information when initializing their node and edge embeddings. Nonetheless, we agree that our statement can be misleading. We will carefully revise the statement in the final manuscript to avoid any confusions. Thanks.
**Q3. Minor Points \#2: ''The distinction between statistical and learning based methods seems off. Certainly most learning based methods are statistical, and vice versa.''**
Thanks for the suggestion. We agree with this point and will restructure our related work section in the final manuscript.
**Q4. Minor Points \#3: ''l. 38 struggles $\rightarrow$ struggle''**
Thank you very much for pointing out the typo. We will correct it in the final version.
**Q5. Questions: (How) do you do HPO for competing methods?**
For baselines, we followed the commonly used hyperparameter settings from the previous studies, such as GRAPE, including edge drop-out ratio during training, dropout rate, learning rate, and the number of GNN layers. For M$^3$-Impute, we used the same set of parameters for all the experiments. Additionally, we have now included an analysis of the impact of hyperparameters on the performance of M$^3$-Impute. Results can be found in Tables 5-7 in the rebuttal PDF. They will be incorporated into the final manuscript. Thanks.
**Q6. Limitation1: ''Gives little insight into why it works better on some datasets than others.''**
Thanks for the comment. We included a performance analysis on different datasets in `Section 4.2 Overall Performance' of the submitted manuscript. To summarize, we apply learnable masks to the data matrix with missing values via our novel units, Feature Correlation Unit (FCU) and Sample Correlation Unit (SCU) to better capture feature-wise and sample-wise correlations, respectively. These correlations are crucial for improving the accuracy of missing data imputation.
We admit that there are few datasets with extreme cases of correlations, e.g., having almost all independent features or all completely dependent features, where our method may not perform as good as on the other datasets. For instance, in the KIN8NM dataset where most features are independent of each other, M$^3$-Impute does not perform as effective as it can be. It is somewhat expected since knowing any of the non-missing features offers little help in imputing the missing ones due to their independence. Nonetheless, M$^3$-Impute achieves second-best results for such datasets, and it does remain competitive, with a difference of less than 0.02 in MAE compared to the top performers. Furthermore, we have considered 10 new open datasets in the experiments, together with the original 15 datasets, leading to a total of 25 open datasets. The results show that our method consistently outperforms the baselines for most cases, demonstrating its general applicability.
**Q7. Limitation2: ''Would be interesting to understand better how robust results are under systematic changes in datasets, e.g., different types of missingness.''**
Thanks for the comment. We indeed reported the performance of our method under three types of missingness in the submitted manuscript. The results for the MAR and MNAR settings are presented in Table 5 and Table 6 in the appendix of the submitted manuscript, respectively. In addition to the 8 UCI datasets discussed in the main text, we also included performance results on 7 additional datasets under all three types of missingness. These results can be found in Table 9 of the appendix of the submitted manuscript.
---
Rebuttal Comment 1.1:
Title: Thanks for the detailed response! My rating remains.
Comment: . | Summary: This paper presents a novel imputation method, based on a bipartite graph constructed from the data and the missing-data patterns, and two components which allow to measure similarities between the features and samples.
The method shows very good results in terms of MAE on several datasets for MCAR, MAR and MNAR data.
Strengths: - The paper is well written.
- Experiments are well conducted, on several datasets, with different missing-data ratios and considering MCAR, MAR or MNAR data. The authors have made an effort to compare themselves with many other imputation methods.
- There is a true discussion on the parameters to choose in the experiments. The authors are honest about the performance of their method, and give explanations when another method is better.
Weaknesses: - Although well presented, the method is complicated to understand.
- The methods uses 8 MLPs and one GNN. The authors discuss in Appendix the computational resources, but do not compare other methods on this point.
Technical Quality: 3
Clarity: 3
Questions for Authors: General remarks:
- How does this methodology relate to the simple concatenation of the mask to the data matrix, and the execution of an imputation method on the augmented matrix? (see Josse, Julie, et al. "On the consistency of supervised learning with missing values.")
- Is M3-Impute supposed to work well for MNAR? This should be discussed more in details, as the authors claim that the method utilizes the data-missingness information. A remark: there exists for MIWAE an extension specifically designed for MNAR data, called not-MIWAE. Ipsen, Niels Bruun, Pierre-Alexandre Mattei, and Jes Frellsen. "not-MIWAE: Deep generative modelling with missing not at random data."
It can be interesting to have a comparison of M3-Impute with this one in a final version.
Algorithm:
- Figure 1: maybe the authors should add numbers in the graphics to refer to them when describing the method in the text (especially in 3.1)
- In Algorithm 1, one of the input is the GNN model. How are hyperparameters of the GNN managed in practice?
Numerical experiments:
- In the final version, the authors should add a comparison with the missForest algorithm, which is one of the most widely used imputation methods.
- for other methods, such as MIWAE, which hyperparameters did the authors choose?
- in the MAR setting, how many features are selected to be observed? Did the authors take the best subset for the results?
Minor comments:
- l.15 mechanisms instead of patterns
- l.56 "the the"
- l.137 notation col_s: harmonise d and m
- l.548 "these remaining" <- "the remaining"
- l.291 mechanisms instead of patterns
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our paper as well-written and the experiments as well-conducted. We also appreciate the constructive comments. Below we provide our response to the concerns raised.
**Q1. Although well presented, the method is complicated to understand.**
The main idea behind our method is to explicitly utilize the missingness information as input and apply learnable masks to the data matrix to better capture feature-wise and sample-wise correlations, thereby improving imputation accuracy. To this end, we propose two novel units, Feature Correlation Unit (FCU) and Sample Correlation Unit (SCU), to capture feature-wise and sample-wise correlations, respectively. We will improve the presentation of the final manuscript. Thanks.
**Q2. Computation resource comparison.**
We have added the comparison of running time with other methods in Tab.4 of the rebuttal PDF and will incorporate this into the final manuscript.
**Q3. Difference from simple concatenation of the mask to the data matrix.**
We would like to point out that the simple concatenation of the mask to the data matrix corresponds to our transformation of the masked data matrix into a bipartite graph, and the execution of an imputation method on the augmented matrix corresponds to running our M$^3$-Impute method on the bipartite graph.
In addition, while we are not quite sure in what sense the paper ''On the consistency...'' is referred to here, we point out that M$^3$-Impute is a task-general architecture, as it is not limited to any specific downstream task. In contrast, in their paper, a downstream task is involved in the design of their method. We further compare M$^3$-Impute with theirs (NeuMiss + MLP). As shown in Tab.1, M$^3$-Impute consistently outperforms. We will include the new results in the final manuscript. Thanks.
### Tab.1: MAE of label prediction under MCAR setting with 30\% missingness
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|----------|:-------:|:------:|:----------:|:---------:|:--------:|:-------:|:--------:|:-------:|
| NeuMiss+MLP | 11.69 | 0.65 | 11.57 | 14.72 | 11.04 | 1.25 | **0.18** | 27.57 |
| M${^3}$-Impute | **8.82** | **0.51** | **9.04** | **3.60** | **2.57** | **0.0036** | **0.18** | **4.69** |
|
**Q4. Performance under MNAR setting and comparison with not-MIWAE.**
We presented the results under the MNAR setting in Table 6 in the submitted manuscript and confirmed that M$^3$-Impute consistently outperforms the baselines. This superior performance is due to M$^3$-Impute's unique approach. Rather than assuming the data follows MCAR, MAR, or MNAR missingness patterns from the outset, we designed M$^3$-Impute to leverage missingness information directly, enabling it to learn feature-wise and sample-wise correlations. Specifically, in FCU, M$^3$-Impute uses the missingness results (i.e., known masks) to learn the correlations between the imputation targets and observable features. In SCU, M$^3$ employs the missingness results to better capture sample correlations. Since feature and sample correlations exist regardless of the cause of missingness, M$^3$-Impute is naturally adaptive and robust across all three missingness settings.
Another notable feature of M$^3$-Impute is its ability to learn the cause of missingness. In FCU, M$^3$-Impute explicitly captures the correlations between observed and missing features. When data is missing under MAR and MNAR conditions, the missing values depend on the observed ones. Since FCU explicitly captures these relationships, M$^3$-Impute can potentially identify the cause of missingness and enhance imputation accuracy. This capability may explain why M$^3$-Impute significantly outperforms the baselines in the MAR and MNAR settings compared to the MCAR settings.
In addition, we have done new experiments for the comparison with not-MIWAE and report the results in Tab.2, showing that our method outperforms not-MIWAE substantially. We will include the results in the final manuscript. Thanks.
### Tab 2: MAE of imputation under MNAR setting with 30\% missingness
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|------------|:-------:|:------:|:----------:|:---------:|:--------:|:-------:|:--------:|:-------:|
| not-MIWAE | 3.08 | 1.43 | 2.14 | 1.80 | 3.87 | 2.27 | 2.50 | 2.46 |
| M³-Impute | **1.15** | **0.60** | **0.68** | **0.54** | **1.09** | **0.08** | **2.46** | **1.00** |
|
**Q5. Add numbers to Figure 1.**
The numbers will be included in the final manuscript. Thanks.
**Q6. Hyperparameter setting in Algorithm 1 and other baselines.**
For all the 25 datasets, we use the same hyperparameters for our method and follow the same setups as in the original papers for the baselines.
**Q7. Comparison with missForest.**
The results are shown in Tab.3 and will be added in the final manuscript. Thanks.
### Tab. 3: MAE of imputation under MCAR setting with 30\% missingness
| | Yacht | Wine | Concrete | Housing | Energy | Naval | Kin8nm | Power |
|-----------|:-------:|:------:|:----------:|:---------:|:--------:|:-------:|:--------:|:-------:|
| MissForest| 1.78 | 0.73 | 1.31 | 0.80 | 1.48 | 0.25 | 2.52 | 1.18 |
| M³-Impute | **1.33** | **0.60** | **0.71** | **0.59** | **1.31** | **0.06** | **2.50**| **0.99** |
|
**Q8. Feature selection in MAR setting.**
For the 30\% missingness setup, we randomly selected 50\% of the features to be observed (ensuring these features do not contain any missing values) and masked out values from the remaining 50\% of the features until the desired missingness ratio is reached. We did not cherry-pick the subset of observed features; rather, we randomly selected them so that they could be different in each repeated run, as different random seeds are applied.
**Q9. Minor comments:**
They will be incorporated in the final manuscript. Thanks. | null | null |
CryoSPIN: Improving Ab-Initio Cryo-EM Reconstruction with Semi-Amortized Pose Inference | Accept (poster) | Summary: This paper introduces an approach to ab-initio homogeneous reconstruction that handles multi-modal pose distributions with a tailored encoder and accelerates pose optimization with semi-amortization. The approach uses a shared CNN feature extractor with multiple pose predictor heads, predicting several plausible poses for each image to account for pose uncertainty. Unlike the computationally expensive implicit networks used by cryoDRGN and cryoAI, this method employs an explicit 3D decoder, speeding up reconstruction. The encoder-decoder architecture is trained with a "winner-takes-all" loss, where the 3D decoder generates multiple 3D-to-2D projections, and the one with the lowest reconstruction error determines the loss. This method achieves faster convergence and more accurate poses compared to relying solely on the encoder's predictions. Evaluations show that the approach outperforms cryoAI and is competitive with cryoSPARC on both synthetic and experimental datasets.
Strengths: - This paper proposes a two-stage ab-initio homogeneous reconstruction algorithm based on a novel multi-head transformer architecture to output multiple proposals of predicted poses. As a result, the performance on both synthetic and real datasets outperforms learning-based baselines.
- A novel winner-takes-all loss has been introduced to specialize multi-heads of the decoder. Each head of the decoder can account for the pose estimation in a local region, mitigating the burden on the decoder.
- This paper is well-structured and easy to follow, with appropriate references.
Weaknesses: Despite its strengths, I see some major weaknesses in this paper:
1. It is unfair to claim "our semi-amortized method is faster than the amortized method of cryoAI" (Line 79) as there are some minor modifications, like changing the decoder from an implicit representation to an explicit volume to reduce the computational cost. An explicit density volume or a feature volume can easily replace all coordinate-based neural networks. In this paper, there is no ablation study about the representation of the decoder or report on how much training and inference time has been reduced by this replacement. Explicit volume with multi-head predictions leads to higher memory costs during training. Advanced representations that can achieve a better trade-off between computational and memory costs, like TensoRF, triplanes, and hash encodings, are not well discussed or explored.
2. The improvement of results is incremental. In Figure 2, the performance improvements on synthetic datasets are very subtle. On the experimental results, it seems cryoSPARC, which uses a traditional ab-initio method, achieves the best performance. Additionally, why do the resolutions of EMPIAR-10028's FSC curve all meet the resolution limit?
3. Translation estimation is important for a complete ab-initio model. Why does this method not support translation estimation while cryoAI is capable?
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall, I am not convinced that the proposed semi-amortized approach can fundamentally improve pose prediction results without any significant improvements on the image encoder side. All experiments conducted in this paper show only minor performance improvements or even no improvements compared to cryoSPARC, which has no deep learning but only traditional optimization, does it imply that the actual bottleneck is the weak image encoder? The image encoder (VGG in this paper) is not pre-trained on a diverse cryo-EM dataset but trained from scratch on a per-scene dataset in a self-supervised manner. How can it be ensured that the extracted features can be applied for accurate pose prediction? I believe the image encoder is also a performance bottleneck. A PCA visualization of the feature space of the encoder (or calculating the cosine similarity) would be very helpful to validate if images with similar poses can be clustered and images with different poses can be far from each other. It is important for this work to conduct a performance analysis for both the encoder and decoder to identify the actual problem. Unfortunately, I see no analysis in the current version of this paper. I am happy to improve my ratings if my main concerns (including weaknesses) can be addressed by the authors.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, there is a discussion of the limitations in this paper, but I believe the limitation of the image encoder used in this work should be also discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the insightful comments and questions. Below we address the concerns.
**speed comparison and decoder ablation.** To clarify, we perform an additional study, detailed in the rebuttal PDF Tab.1. We report reconstruction time per epoch, GPU memory usage, and number of parameters for various methods. As our method consists of two stages, we report numbers for each stage individually: the first stage uses an image encoder with H=7 heads, while, in the second stage, the encoder is replaced with a pose module with size depending on the number of particles (N). For comparison, we include another baseline, cryoAI-explicit, in which the implicit decoder is replaced by an explicit one as in our method. The explicit and implicit decoders have 4.29 and 0.33 million parameters, respectively. Despite a larger decoder, our method is ~6x faster and uses ~5x less memory compared to cryoAI. Importantly, swapping the implicit decoder with an explicit one (cryoAI-explicit) significantly drops time and memory, indicating that the implicit decoder is a major computational and memory bottleneck. Yet, cryoAI-explicit uses ~2x more GPU for encoding and is ~1.5x slower than our method as it performs early input augmentation and runs the entire encoder twice per image. Our method, by augmenting the encoder head, saves memory and time during pose encoding. Moreover, the amortized baseline, which uses an explicit decoder with multi-head encoder (H=7), uses ~7x more memory in decoding (negligible vs implicit decoder) and runs slower than the direct optimization stage of our method. We will include these results and clarify this comparison.
**Hybrid representations.** We share our findings on more recent hybrid representations in Fig.1-B of the rebuttal PDF. In a simplified setting with known poses, we optimize the structure represented by TensoRF which yields spurious blobs of density. We conjecture this is due to the low-rank assumption made by these representations which assume a 3D signal can be factorized into 2D axis-aligned planar signals. While this assumption may be adequate for structured, real-world scenes, it tends to break down for cryo-EM structures. Thus factorization might reduce the capacity leading to artifacts of wrong density masses. We believe effective adoption of hybrid representation in cryo-EM context is non-trivial and requires further investigation. We will add these failure cases in the supplement.
**Significance of improvement.** Our primary goal is to show that amortized inference methods can be enhanced by combining with more traditional optimization approaches to achieve the best of both worlds: (1) enabling pose space exploration in early stages with amortized inference empowered by deep-learning architectures, and (2) boost accuracy of DL-based models to obtain results competitive with cryoSPARC. As shown in Fig. 2, on synthetic datasets of Spliceosome and HSP, our method outperforms cryoAI by ~1A and ~2A in resolution, respectively, while achieving the same or better resolution to cryoSPARC. Similarly, for experimental data (see last row of Fig. 2), our approach provides substantial qualitative and quantitative improvements compared to cryoAI. Our reconstruction is also comparable with that obtained by cryoSPARC. Moreover, our results with shift estimation (rebuttal PDF, Fig.1A) shows that our method is able to outperform cryoSPARC on experimental data. We believe our work is a step towards developing DL-based methods for cryo-EM reconstruction and reducing their gap with traditional methods like cryoSPARC.
**Resolution for EMPIAR-10028.** The deposited images in EMPIAR-10028 have a box size of D=360, with 1.34 A/pixel. In our experiments, for fair comparison to cryoAI (which learns a $128^3$ volume), the particle images are downsampled to D=128 before reconstruction. With downsampled images, the best possible resolution to obtain (at the Nyquist rate) is 7.54A. Both cryoSPARC and our method reach this limit as shown by the FSC plot in Fig. 2. We should note that this is a mid-level resolution.
**Estimating in-plane shifts.** As discussed in the ‘global response’ to all reviewers, we show in the Rebuttal PDF (Fig 1-A) that our semi-amortized approach extends in a straightforward fashion to estimation of rotation and translation while still outperforming the baselines.
**Image encoder bottleneck.** There is evidence in the literature that image encoders for pose prediction are a bottleneck in amortized inference for cryoEM [1, 2]. Indeed, while improvements to image encoder may have benefits, accurate pose prediction from cryo-EM images is non-trivial because (1) input images are extremely noisy, resulting in pose ambiguities, and (2) the job of the encoder is to invert the decoder and predict pose—intuitively, this requires knowledge of the 3D structure and is difficult to accomplish from the input image alone as the 3D structure changes during optimization. Our work pursues an orthogonal direction: we take advantage of the encoder’s ability to rapidly converge to sufficiently accurate poses such that direct optimization can be performed. Our approach significantly improves pose prediction accuracy (see Table 1 and Fig. 4). For instance, on the Spliceosome our method reaches <1 degree error while cryoSPARC and cryoAI reach ~1.5 and ~2.5 degree error. Also, on the HSP dataset, cryoAI fails to handle pose uncertainty (see Fig. 5) leading to high error: ~45 degrees on average. Our method reaches ~3 degree error vs ~6 degrees for cryoSPARC.
**References**\
[1] Edelberg, et al. "Using VAEs to learn latent variables: Observations on applications in cryo-EM" (2023)\
[2] Klindt, et al. "Towards interpretable Cryo-EM: disentangling latent spaces of molecular conformations" (2024)
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' detailed rebuttal! The authors have thoroughly addressed my concerns based on my review comments. However, as shown in Figure 1 of the uploaded PDF, the performance of the proposed method appears to be similar to that of cryoSPARC. Could the authors please further elaborate on the advantages of the proposed method compared to cryoSPARC, such as faster speed, automation, or other benefits? If the performance is indeed comparable at this stage, what potential improvements could be made?
---
Reply to Comment 1.1.1:
Comment: We’re glad the rebuttal addressed your concerns, and we appreciate your quick response and thoughtful engagement.\
Regarding your question, the semi-amortized method inherits some benefits from amortized methods like cryoAI in terms of runtime for large datasets (e.g. Fig. 3 (left) in the cryoAI paper shows that cryoAI requires less time than cryoSPARC to obtain a given resolution for large numbers of particles). And our inference in the amortized stage (see rebuttal PDF, Table 1) is faster than cryoAI due to encoder/decoder modifications. More specifically, based on the resolution-time plots in Fig 3 in our paper, our method reaches 10A resolution on the Spike and HSP datasets ~4x faster than cryoAI (in 11.0 and 5.3 minutes vs 41.7 and 31.4 minutes, respectively). This advantage over cryoSPARC is significant as cryo-EM datasets regularly contain more than 1M particles.\
More generally, the cryo-EM community has been excited about deep learning mainly due to its potential to handle heterogeneous experimental data where inference of latents is critical along with pose. The fact that deep learning methods (like our semi-amortized inference) can now meet or exceed the performance of finely-honed methods like cryoSPARC, will help enable improved inference methods on heterogeneous data, going well beyond the current capabilities of cryoSPARC (e.g. with its linear methods for heterogeneity).
---
Rebuttal 2:
Comment: Thank you! I would like to improve my rating regarding to these discussions. | Summary: This work introduces a new method for 3D ab initio estimation in cryo-EM using amortized inference. It relies on an existing technique which predicts the 3D rotation of a cryo-EM image using a convolutional neural network. That estimated rotation is then used together with a neural representation of the 3D volume to regenerate the projected image. The network weights (both of the pose estimation network and the 3D neural representation) are then optimized to reduce this auto-encoding loss.
The first important difference between the proposed framework and previous work is that this method produces several estimates using multiple heads and computes the loss with respect to the rotation estimates that yields the closest matching image. The second important difference is that after a few epochs of training, the pose estimation network is discarded, and the best pose estimates are refined using alternating minimization together with the 3D neural representation. The resulting algorithm is shown to perform well on both simulated and experimental datasets.
Strengths: The writing of the manuscript is clear, in particular with respect to the introduction of the problem and its context as well as the presentation and interpretation of numerical results. The method seems to be well thought through and successfully mitigates the important pitfalls of the 3D ab initio reconstruction problem.
Weaknesses: The exposition of the pose inference network and the 3D neural representation are quite superficial and deserve a more detailed explanation. For example, how are the 3D rotations obtained from the convolutional backbone output? Are they encoded in axis–angle parametrization? If so, how are the outputs properly constrained? Similarly, the 3D neural representation is described as leveraging the Hartley transform and somehow uses a decomposition of this transform into its mantissa and exponent. How is this relevant to the actual neural network architecture?
The work would also benefit from more extensive testing on experimental datasets beside EMPIAR-10028. This is a relatively high-SNR dataset depicting a very large molecule (an 80S ribosome) and is not representative of experimental datasets in general.
Technical Quality: 3
Clarity: 3
Questions for Authors: – On like 54, what is meant by “we adopt an explicit paremeterization to further accelerate the reconstruction”? How does the parametrization affect convergence here?
– The idea of “semi-amortized inference” is never actually explicitly described. Presumably this means switching from amortized estimation to pose refinement at some point.
– How is L_{i,j} defined in eq. (6)?
– Why can the proposed method not handle shifts? Should this not be just a matter of adding another output to the pose estimation heads?
– For the synthetic and real datasets, the method switches from amortized inference to pose refinement after 7 and 15 epochs, respectively. How was this switching point determined and is there some way to automate this?
– The labels (Left) and (Right) appear to be switched in the caption for Figure 4.
– On line 227, “experiment” should be “experiments”.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have largely addressed the limitations of the work in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your thoughtful feedback. In what follows, we address the main concerns and questions individually.
**3D rotation parameterization.** In the supplement, Sec. A, we discuss the rotation parameterization and its optimization using PyTorch Autodiff. Specifically, we follow cryoAI and design the convolutional backbone to output the six-dimensional representation commonly referred to as S2S2. To compute the rotation matrix from this representation, the 6D vector is split into two 3D vectors and normalized, resulting in the unit vectors, $v_1, v_2$. We then compute the cross product ($v_1\times v_2$) to get another unit vector ($v_3$), which becomes the last column of the rotation matrix. Selecting v1 as the first column, we compute $\tilde{v_2}=v_3\times v_1$, which is a new unit vector orthogonal to $v_1$ and $v_3$. Then, $R = [v_1, \tilde{v_2}, v_3]$. During the auto-decoding stage, we switch to using a 3D axis-angle representation which requires fewer parameters.
**"What is meant by “we adopt an explicit parameterization to further accelerate the reconstruction”?** We meant that the decoder uses an explicit volumetric representation of density values to model the 3D structure rather than a multi-layer perceptron (MLP). Compared to MLPs, using the explicit representation yields faster reconstruction, which is especially helpful in our case because the multi-head architecture requires querying the decoder multiple times for each input image. We will revise the text to better clarify the distinction between explicit and implicit representations, and the approach taken in our work.
**On Hartley transform and decomposition.** We use the Hartley transform to save memory and reduce redundancy of the transformation of a real signal; compared to the Fourier transform, it represents frequency information using real values instead of complex values. One notable issue with the Fourier/Hartley coefficients is their high dynamic range across different frequencies. To account for this, we parameterize each coefficient using a mantissa and exponent such that we actually store two values per voxel.
**Extensive testing on experimental datasets.** To ensure fair comparison, we benchmark on EMPIAR-10028 which is the same dataset used in cryoAI. We believe this is an important experimental benchmark as it is widely adopted in methodology papers (e.g. cryoDRGN). Also, we acknowledge that further study on more recent experimental datasets with smaller particles and a range of SNRs is an important direction to explore in future work.
**“The idea of semi-amortized inference is never actually explicitly described”** Thank you for raising this issue. Semi-amortized inference is described in the introduction (L58-65) and depicted in Figure 1. We provide more detail in Sec. 4.2 where “semi-amortized inference” is explicitly mentioned with references to prior work. In the revised paper we will provide a more detailed description of “semi-amortization” and justify its use in our case. For example, as higher frequency details are resolved, the variance of the pose posterior tends to decrease, becoming unimodal. At this point, the gap between the amortized and variational posterior is mainly determined by the error in the pose estimate (predicted mean). The encoder might be too restrictive as a globally parameterized function leading to limited prediction accuracy hindering further refinement of the 3D structure. This motivates us to stop amortization and switch to direction optimization, which is the main idea of semi-amortized inference. We will clarify these points in the revision.
**Loss function.** We define $L_{i,j}$ as the negative log likelihood for the i-th image given its j-th predicted pose. This is formalized in Eq. 3 and is computed in the Fourier space in practice.
**Estimating in-plane shifts.** As discussed in the ‘global response’ to all reviewers, we show in the Rebuttal PDF (Fig 1-A) that our semi-amortized approach extends in a straightforward fashion to estimation of rotation and translation while still outperforming the baselines.
**Switching point.** We considered the switching point as a hyper-parameter and experimented with a range of values. We found that after a sufficient amount of optimization (i.e., 7 epochs for synthetic data and 15 epochs for real data) the reconstruction error is insensitive to when the switch occurs. Our intuition is that when switching too early, the pose posterior may have significant uncertainty, and the pose estimates may be too far from the correct basin of attraction to afford robust convergence. Therefore, it is better to switch late than early. Exploring how to identify the optimal switching point is a promising direction that we will explore in future work.
Finally, thank you for bringing the typos to our attention. We will incorporate all feedback into the revision.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for their rebuttal and will raise my score to 7. | Summary: The submission addresses ab-initio cryo-EM reconstruction, where both image poses and the 3D structure are estimated. The authors adopt a multihead architecture to estimate multiple poses for each image, encouraging the exploration of pose space early in the reconstruction process. They then refine poses in an auto-decoding fashion using SGD. Experiments on synthetic and experimental datasets demonstrate acceleration and resolution improvement over baseline approaches.
Strengths: 1. Compared to cryoAI, mapping the input image to multiple pose candidates in the auto-encoding stage can account for pose uncertainty and encourage exploration of the pose space during the initial stages of reconstruction.
2. The experimental results provided in the paper include both synthetic and real experimental data, as well as video results.
3. The proposed method offers a speed advantage over cryoAI.
Weaknesses: 1. The paper's technical contribution is primarily limited to a multi-head architecture for estimating multiple plausible poses.
2. The results presented for cryoSPARC are significantly worse than my experience suggests (i.e., on Splicesome or EMPIAR-10028). CryoSPARC's results should not be inferior to those obtained by the proposed approach.
3. The paper cannot handle heterogeneous reconstruction, which limits its general applicability in structural biology research.
4. Import baselines are missing: RELION, cryoFIRE, cryoDRGN-BNB and cryoDRGN2.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The results presented for cryoSPARC are significantly worse than my experience. What is the setting for cryoSPARC? Are there any important assumptions?
2. Will the multi-head structure predict very similar poses, thereby undermining the assumption that this design encourages exploration of the pose space?
3. Can the authors provide a comparison of the training time and memory consumption between the proposed method and the baselines?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and questions. Below, we address the concerns raised in the review.
**"technical contribution limited to a multi-head architecture".**
Our contributions include the design of a new encoder equipped with multiple heads to mitigate uncertainty in pose auto-encoding, and with it, a new “winner-takes-all” loss that encourages diversity in pose predictions. Beyond the architecture and objective, we propose semi-amortized pose inference, combining amortized pose inference and auto-decoding, which outperforms the predominant fully-amortized alternative. In particular, semi-amortized inference accelerates and stabilizes pose convergence (as illustrated in Fig. 4) leading to improved reconstruction.
**CryoSPARC results.** In our experiments, for fair comparison to cryoAI (which learns a $128^3$ volume), the particle images are downsampled to D=128 before reconstruction. The deposited images in EMPIAR-10028 have a box size of D=360, with 1.34 A/pixel. The downsampling increases the pixel size to 3.77 A, and hence the best possible resolution (at the Nyquist rate) is 7.54 A. Both cryoSPARC and our method reach this limit as shown by the FSC plot in Fig. 2. We agree this is a mid-level resolution and hence atomic resolution details are not determined.
**Heterogeneous reconstruction.** We intentionally focus on the homogeneous case instead of heterogeneous reconstruction; we show that even for the simpler homogeneous reconstruction problem there is a significant disparity between applying amortization in this setting (e.g., CryoAI) and semi-amortization (our proposed method, see Fig. 4). We expect that the benefits of semi-amortized inference will also apply to the heterogeneous reconstruction case. In particular, our proposed multi-head encoder should be applicable to the estimation of the multimodal posterior over heterogeneity latent parameters as well as pose. Yet, there are significant challenges in heterogeneous reconstruction, making it outside the scope of our analysis on semi-amortized inference—for example, (1) heterogeneity adds significant uncertainty and multimodality in the posterior over the latent variables, (2) changes in heterogeneous structure during optimization complicate pose estimation using amortized inference, and (3) existing methods for heterogeneous reconstruction conflate pose and conformational state, and disentangling these properties remains a challenge. We plan to investigate these lines in future work.
**Other baselines.** We primarily compare with CryoAI and CryoSPARC as they are the state-of-the-art methods for ab-initio homogeneous reconstruction, and our focus is on investigating the tradeoff between amortized and semi-amortized inference. The deep-learning-based methods cryoFIRE and cryoDRGN (which adopt the auto-encoding approach similar to cryoAI) are specially designed for reconstruction of heterogeneous datasets. Still, we provide an additional comparison to cryoDRGN with the same setup as in the paper (see rebuttal PDF, Fig. 1-C). We run a homogenous ab initio reconstruction with cryoDRGN on synthetic and experimental datasets. The results show that our semi-amortized method outperforms cryoDRGN and reaches higher resolution reconstruction in most cases. We will add these results in the revision.
**“Will the multi-head structure predict very similar poses?”.** The multi-head encoder does not predict similar poses. Figures 5 and 8, for example, show that different heads return different poses. Moreover, in the supplement, Fig. 6, we investigate the behavior of each head across all images by mapping their accuracy across different regions of pose space. As shown in the figure, the multi-head encoder adopts a divide-and-conquer approach where each head effectively specializes in pose estimation over localized regions with minimal overlap with other heads. This specialization behavior is a result of the winner-takes-all loss, which has been demonstrated to improve diversity in other prediction tasks as well [12, 13, 27]. Thus, the encoder is able to explore the pose space to help avoid local minima in the early stages of reconstruction.
**Comparison on time and memory.** In the rebuttal PDF, we provide detailed comparison with baselines in terms of time, memory, and number of parameters (Table 1). As the semi-amortized method consists of two stages, we report numbers for each stage separately. In the first stage, the encoder has H=7 heads, while in the second stage, it is replaced with a pose module with size depending on the number of particles (N). For comparison, we include another baseline, cryoAI-explicit, in which the implicit decoder is replaced by an explicit one as in our method. The explicit and implicit decoders have 4.29 and 0.33 million parameters, respectively. Despite a larger decoder, our method is ~6x faster and uses ~5x less memory compared to cryoAI. Importantly, swapping the implicit decoder with an explicit one (cryoAI-explicit) significantly drops time and memory, indicating that the implicit decoder is a major computational and memory bottleneck. Yet, cryoAI-explicit uses ~2x more GPU memory for encoding and is ~1.5x slower than our method as it performs early input augmentation and runs the entire encoder twice per image. Our method, by augmenting the encoder head, saves memory and time during pose encoding. Finally, the amortized baseline, which uses an explicit decoder coupled with a multi-head encoder (H=7), uses more memory in decoding (negligible vs implicit decoder) and runs slower than the direct optimization stage of the semi-amortized method. We will add these performance comparisons in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors rebuttal! My understanding is that this work deals with the task of homogeneous ab-initio reconstruction, exactly the same with CryoAI. Although the results are improved compared to cryoAI, the method proposed in the paper does not have a clear advantage over cryoSPARC, let alone the ability of cryoSPARC to handle heterogeneous reconstruction.
Can the authors provide some insights into the significance of their proposed method given cryoSPARC? What is the next step for the proposed method or more generally for deep learning methods? How long will it take for them to be applied in practical structural biology research?
---
Rebuttal 2:
Comment: Thanks for the thoughtful response.
There are several reasons to believe that deep learning (DL) methods have the opportunity to outperform cryoSPARC. Among them, cryoAI with amortized inference has been shown to be faster than cryoSPARC for large datasets; e.g. Fig. 3 (left) in the cryoAI paper shows that cryoAI requires less time than cryoSPARC to obtain a given resolution for large numbers of particles. In turn, our inference in the amortized stage is faster than cryoAI due to encoder/decoder modifications (see rebuttal PDF, Table 1); e.g. based on the resolution-time plots in Fig. 3 in our paper, our method reaches 10A resolution on the Spike and HSP datasets ~4x faster than cryoAI (in 11.0 and 5.3 minutes vs 41.7 and 31.4 minutes, respectively). This advantage over cryoSPARC is significant as cryo-EM datasets continue to increase in size, now regularly with more than 1M particles.
A further advantage of DL is the potential for sample heterogeneity. With our semi-amortized inference, DL methods can now meet or exceed the performance of finely-honed methods like cryoSPARC for homogeneous reconstruction. This opens the door for DL and the inference of heterogeneity latents, going well beyond the capabilities of cryoSPARC. For context, cryoSPARC handles heterogeneity via (1) discrete 3D classification (to which homogeneous methods are easily extended), (2) 3D variability analysis, which is linear and lacks expressiveness, and (3) 3DFlex, which uses DL to learn non-linear variations, but is limited to conformational variability of large substructures. In contrast, our semi-amortized DL method can be extended to both conformational and compositional heterogeneity (a la cryoDRGN and cryoFIRE).
Indeed, to realize the full potential of cryo-EM in practical structural biology research, it is essential to be able to model structural dynamics in heterogeneous datasets. There are open challenges in heterogeneous reconstruction, such as how to handle multi-modality in the posterior over the latent variables and the conflation of pose and conformational states. In this respect, extension of our multi-head encoder and semi-amortization to heterogeneity inference should be helpful, and we plan to investigate these lines in future work.
---
Rebuttal Comment 2.1:
Comment: Thanks for the response. I will consider it for my final decision. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful feedback. In the attached rebuttal PDF, we include results of additional experiments to clarify and address concerns raised by reviews, including additional baselines and datasets, and including empirical results using semi-amortized inference to estimate both particle rotation and translation, as requested by reviewers Kgto and xz2Y.
**Estimation of in-plane translation.** Our paper originally focused mainly on the inference of 3D orientation as this is widely regarded to be more challenging than the inference of planar shifts per se. As such, we assumed that particles have been centered. Nevertheless it is straightforward to extend our method to estimate rotation and translation: We did so simply by allocating additional encoder heads to estimate translation parameters, as suggested in the reviews. We demonstrate the resulting model through application to experimental data with non-centered input particles. Fig. 1-A in the rebuttal PDF shows the reconstructed 3D density maps along with FSC curves on an experimental data (EMPIAR-10028), allowing comparison to cryoSPARC and cryoAI. A direct comparison of the FSC curves in Fig. 1-A shows that our semi-amortized approach outperforms cryoSPARC, and in Figure 4 of the original cryoAI paper clearly shows that cryoSPARC outperforms cryoAI on this same dataset. (Using the publicly available code for cryoAI by the authors, we also tried to replicate these results in order to show all FSC curves on the same plot. However, in the presence of unknown planar translation, we found that cryoAI becomes trapped in local minima and hence we could not replicate the published FSC curves in the original paper.) We will add these results to the camera-ready version of the paper, along with the other changes or clarifications requested in the individual reviews.
Pdf: /pdf/dec111f7829c81e05dfe8a5b39674f2a844c9a98.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multi-view Masked Contrastive Representation Learning for Endoscopic Video Analysis | Accept (poster) | Summary: The authors propose a self-supervised learning regime for spatio-temporal data called multi-view masked contrastive learning which combines a frame-aggregated attention guided tube mask and a multi-view mask strategy using a student-teacher framework. The learnt representations are evaluated on multiple downstream tasks on publicly available datasets.
Strengths: - The manuscript is well and clearly written, different aspects are well motivated and explained.
- The background section summarizes a lot of related literature and is easy to read. It gives a good overview of related work, particular challenges associated with endoscopy images and motivates the proposed approach.
- The conducted experiments are exhaustive. The learnt representations are tested in multiple downstream tasks for their usefulness (classification, segmentation, detection) on three different publicly available datasets, comparing to a large number of recent competing methods. The results are averaged over three runs and show consistent improvements in all three tasks. The study includes an ablation study to report on the effect of different components of the proposed methods.
- The method has been developed for application in endoscopy, but are relevant for spatio-temporal imaging in general and the type of pretraining is relevant for other medical imaging modalities with a temporal component.
Weaknesses: - Please provide some more information on the downstream tasks (how many classes, what kind of labels, how are they obtained,…)
- Source code is not (yet) released.
Minor Remarks:
- “temporal” is missing the “l” in a few places (e.g. l. 162,208)
- “reach” -> “reaches” in l.281
- I would suggest to remove the particular results from the abstract
Technical Quality: 3
Clarity: 3
Questions for Authors: - How many classes do the datasets used for evaluations have in the case of classification and segmentation? What are they?
- Are any modifications needed to employ this method to other spatio-temporal data?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The potential negative societal impact is not sufficiently discussed. In the broad impact statement, the author discuss the need and benefits for their method one more time. I urge them to consider potential *negative* impact on society.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses and questions below.
**W1 and Q1. Detail information about downstream tasks.**
Our downstream tasks include classification, segmentation, and detection, each addressing different aspects of endoscopic video analysis.
* **Classification Task:** The classification task is a binary classification problem aimed at diagnosing polyps. We use the PolypDiag [1] dataset, which follows the gastroscope examination protocol and includes 253 videos and 485,561 frames. Each video is annotated to indicate the presence or absence of a lesion. The dataset is divided into 71 normal videos without polyps and 102 abnormal videos with polyps for training, and 20 normal videos and 60 abnormal videos for testing.
* **Segmentation Task:** The segmentation task aims to perform polyp segmentation. We use the CVC-12k [2] dataset, which follows the colonoscopy examination protocol and serves as the official training dataset for the Automatic Polyp Detection sub-challenge of the MICCAI 2015 competition. This dataset includes 29 videos and 612 frames, with 20 videos allocated for training and 9 videos for testing. Each frame in the videos is annotated with ground truth masks (with a single class) to identify the regions covered by polyps.
* **Detection Task:** The detection task aims to perform polyp detection. We use the KUMC [3] dataset, which follows the colonoscopy examination protocol and is sourced from the Kansas University Medical Center. This dataset includes 53 videos and 19,832 frames. Each frame in each video is annotated with bounding boxes and polyp categories, with 36 videos allocated for training and 17 videos for testing, including hyperplastic polyps and adenomatous polyps.
For a fair comparison, all datasets are sourced from Endo-FM, and the data partitioning for all datasets follows Endo-FM guidelines. For more details on the experimental reproduction of downstream tasks, we will provide more detailed relevant hyperparameters in the appendix.
**W2. Source code is not (yet) released.**
We will release the code and pre-trained checkpoints upon acceptance. For reproducibility we build on the open source implementation of Endo-FM and provide all the relevant hyperparameters in the appendix.
**W3. Minor Remarks.**
Thank you for your valuable suggestions. We will remove the particular results from the Abstract and make careful writing revisions in subsequent revisions.
**Q2. Are any modifications needed to employ this method to other spatio-temporal data?**
Although our work is based on an endoscopic video dataset, it is important to note that the video format of endoscopic videos is the same as that of natural videos (Kinetics, SSv2). The basic techniques for processing and analyzing video data, such as frame extraction and video decoding, are essentially the same for both types. Therefore, applying our method to other spatio-temporal data does not necessitate specific modifications; rather, it only requires adherence to standard techniques for processing video data.
**Limitations. The potential negative societal impact is not sufficiently discussed.**
Our study involves self-supervised learning, which requires extensive pre-training leading to significant energy consumption. Additionally, large-scale model training requires high-performance computing hardware (GPUs), leading not only to increased hardware costs but also resource overconsumption and electronic waste. These negative impacts underscore the necessity of considering environmental protection and resource conservation. In future work, we will adopt more efficient training methods and optimization strategies to address these issues.
[1] Tian Y, et al. Contrastive transformer-based multiple instance learning for weakly supervised polyp frame detection. MICCAI 2022.
[2] Bernal J, et al. WM-DOVA maps for accurate polyp highlighting in colonoscopy. Computerized medical imaging and graphics 2015.
[3] Li K, et al. Colonoscopy polyp detection and classification: Dataset creation and comparative evaluations. Plos one 2021.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal by Authors
Comment: I acknowledge having read the other reviewers' reviews as well as responses by the authors. I thank the authors for the detailed responses addressing everyone's questions and concerns. I will keep my original score.
---
Rebuttal 2:
Title: Thanks for the feedback!
Comment: Thank you for recognizing our efforts in rebuttal. Your comments have greatly improved the quality and clarity of our paper. We appreciate your support! | Summary: This work presents M$^2$CRL, a self-supervised learning method for representation learning of endoscopic videos. The method leverages a multi-view masking technique with attention-guided masking of global features and random spatiotemporal tube masking of local features. Both contrastive learning and masked autoencoding pretraining objectives are employed for stronger representation learning. The model is pretrained on 7 publicly available endoscopy video datasets and fine-tuned on 3 other datasets on a variety of tasks, outperforming other relevant baselines.
Strengths: - The organization of the paper is excellent, with helpful use of bold text, logical flow from one passage to the next, and high-quality, information-dense illustrations and tables.
- The motivation is clearly laid out and the reference to prior related work is thorough.
- The experiments are very thorough and appear to be soundly conducted. Results demonstrate notable improvement upon existing competitive baselines, and ablation studies help showcase which elements of M$^2$CRL are most impactful.
Weaknesses: - Experimental details could be clarified. Were all methods pretrained on the same union of 7 datasets as M$^2$CRL? Was Endo-FM used as is for downstream fine-tuning or retrained on this pretraining dataset? Section 4.2 could clarify these details. Additionally, how were hyperparameters selected and were they the same for each model and task? Appendix A should additionally include fine-tuning hyperparmeters.
- There are a few instances of confusing passages of writing. See examples in the section below.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Were all methods (perhaps except EndoFM) pretrained on the same union of 7 datasets as M$^2$CRL? Please clarify this in Section 4.2.
- It appears that the same hyperparameters were used for all methods and datasets – is this true? If so, how were hyperparameters selected? Please include full fine-tuning hyperparmaters in Appendix A alongside the pretraining hyperparameters.
- Several instances of awkward/confusing writing:
- The passage from L225-233 is confusing, e.g.: “It is worth noting that the two students in Fig. 1(a) share weights, but are actually one student network. Both encoders share the same structure.” Further, perhaps explain why only local views are fed to the student network.
- L267: “The results of other methods are taken from the comparative method of Endo-FM”. Is this saying these were the baselines considered in the EndoFM paper?
Minor comments:
- I would replace “mask strategy” with “masking strategy” throughout the paper.
- It might be useful to explain in words what Equation 5 represents.
- L275: I would refrain from saying “significant” without a statistical significance test.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are adequately addressed in Appendix E.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses and questions below.
**Q1. Were all methods pretrained on the same union of 7 datasets as M$^2$CRL?**
Yes, for a fair comparison, all methods were pretrained on the same union of 7 datasets as our M$^2$CRL. We apologize for this and we will make careful revisions in subsequent versions.
**Q2. Was Endo-FM used as is for downstream fine-tuning or retrained on this pretraining dataset?**
Endo-FM [1] was originally pre-trained on this pretraining dataset, and its backbone was subsequently used for fine-tuning in downstream tasks. Endo-FM serves as our baseline, and the datasets we use are also provided and processed by it.
**Q3. How were hyperparameters selected and were they the same for each model and task? Section 4.2 could clarify these details.**
For the other methods, we used the hyperparameters as documented in their original papers or provided in their code. Similarly, for the hyperparameters of downstream tasks, we followed the guidelines outlined by Endo-FM [1], which are comprehensively explained in their paper and open-source code. We pretrained all methods on the same union of 7 datasets and then loaded them into the backbone of downstream tasks for fine-tuning. We apologize for this unclear presentation and we will make careful revisions in subsequent versions.
**Q4. Appendix A should additionally include fine-tuning hyperparameters.**
Thank you for your valuable suggestions. We will update Appendix A to include the fine-tuning hyperparameters as requested.
**Q5. The passage from L225-233 is confusing.**
We propose a Multi-view Masked Contrastive Representation Learning framework. In the manuscript, L225-233 primarily introduce the contrastive representation learning component of our framework.
**L225-229** describe self-distillation and explain the relationship between contrastive learning and self-distillation. It is clarified that the contrastive representation learning component in our framework is achieved through self-distillation.
**L229-233** provide a description of the specific workflow of our framework, including data generation, model composition, and data flow within the model. Our model employs a teacher-student architecture, consisting of a student network and a teacher network, which consists of a student network and a teacher network sharing the same structure. It should be noted that the two student networks depicted in Fig. 1(a) actually represent a single student network. We illustrated two student networks in the figure to more clearly convey the data flow.
**Q6. Explain why only local views are fed to the student network.**
We create two spatiotemporal views (global and local views) of the input endoscopic videos with varying spatial sizes and frame rates. The global views are fed into both the teacher and student networks, while the local views are only fed into the student network. Both the teacher and student networks process these views and predict one view from another in the latent feature space. This approach enables the model to learn spatiotemporal invariant features that can be transferred across different endoscopic domains and disease types.
* **Cross-View Matching**: The global views are fed into the teacher network primarily because it provides stable and reliable outputs as references for the student network. These global views contain comprehensive information about the video, which assists the teacher network in generating accurate feature representations. The student network processes both the global and local views. By aligning the global features of the student network with those of the teacher network, the student network can learn rich global information. Furthermore, by aligning the local views of the student network with the global features of the teacher network, it can acquire more fine-grained feature representations. This cross-view matching approach involving global and local views facilitates a more comprehensive learning process for a rich feature representation within this model.
* **Increased Learning Challenge**: Not feeding the local views to the teacher network introduces an information imbalance between the teacher and student networks. This design introduces a learning challenge for the model, as it requires the student network to exert more effort in extracting and integrating information from the local views to compensate for the lack of global information. Consequently, this approach enhances the contrastive learning process by necessitating that the student network maximize agreement between representations of different views despite their partial and incomplete nature. Ultimately, this design encourages the model to develop robust and discriminative features.
**Q7. L267 Is this saying these were the baselines considered in the Endo-FM paper?**
Endo-FM [1] compares several methods, including TimeSformer, CROP, FAME, ProViCo, VCL, and ST-Adapter. These methods are used for comparison rather than serving as baselines. The authors of Endo-FM pre-trained these methods on the same union of 7 datasets and then fine-tuned them on downstream tasks. Since we followed the same experimental procedure, we have directly utilized the recorded results from Endo-FM instead of re-running the experiments.
**Q8. Minor comments:**
Thank you for your comments. We will replace the term “mask strategy” with “masking strategy” throughout the entire manuscript in subsequent versions. We will also provide a detailed textual explanation of what Equation (5) represents. Furthermore, we will remove “significant” from Line 275. We sincerely appreciate once again your valuable suggestions, which have greatly contributed to the improvement of our manuscript.
[1] Wang Z, et al. Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train. MICCAI 2023.
---
Rebuttal Comment 1.1:
Comment: I acknowledge that I have read the authors' rebuttal and thank them for the clarifications. I will maintain my score.
---
Rebuttal 2:
Title: Thanks for the feedback!
Comment: We sincerely appreciate your positive feedback on our manuscript. We're glad our efforts to address your questions were satisfactory. Thank you for your support. | Summary: The paper proposes a representation learning framework that combines masked pretraining strategies with contrastive learning methods. Particularly, this framework aims to generate a representation learning approach that can work with downstream tasks requiring dense pixel-level representations (for image segmentation) or discriminative features (for image classification). The masking strategy is guided by the aggregation of the attention layers of a teacher model over different frames of the endoscopic video, while the contrastive learning framework follows a self-distillation approach. Testing is performed on colonoscopic datasets and on three downstream tasks, including classification, detection, and segmentation.
Strengths: * Representation learning in the medical domain can have an impact on the development of models for medical image analysis.
* The model is validated against different works and on three downstream tasks.
* Ablation experiments show the contribution of the components.
Weaknesses: * Downstream tasks are evaluated on a single endoscopic modality
* Similarly, a cross-validation like approach that keep the downstream tasks but change the testing dataset, could help to support the results.
Technical Quality: 3
Clarity: 2
Questions for Authors: Most of the datasets, including the downstream tasks, are related to colonoscopy. Given the intention to develop a method to operate with endoscopy in general, it would be interesting to see the performance in downstream tasks that involve other endoscopic modalities.
It appears that the combination of masking and contrastive learning is more beneficial for the classification and detection tasks, while the improvements for the segmentation tasks are still at a similar level as previous works. Would it be possible to elaborate on why the segmentation tasks might not take significant benefits compared with the other tasks?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Limitations are discussed in the supplementary material of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses and questions below.
**W1 and Q1. Downstream tasks are evaluated on a single endoscopic modality.**
In our study, we used multiple publicly available endoscopic video datasets. These datasets cover 3 types of endoscopic procedures (colonoscopy, gastroscopy, and laparoscopy) and 10+ different diseases. We believe that this comprehensive and large-scale dataset is valuable for endoscopic research (as detailed in Table 1). In downstream tasks, classification is for gastroscopy, while segmentation and detection are for colonoscopy.
Based on your suggestion, we have added the Cholec80 dataset in downstream tasks. This dataset contains 80 complete laparoscopic cholecystectomy videos and is specifically designed for laparoscopic surgical recognition. Followed by previous works [1, 2], we used 40, 8, and 32 videos from Cholec80 as our training, validation, and test sets, respectively. We compared our approach with the classical endoscopic foundation model Endo-FM [3] and the masked video modeling method VideoMAE [4] using F1-score in Table 2. The experimental results demonstrate that our approach achieves superior performance. This experiment further validates the robustness and effectiveness of our approach across diverse endoscopic video tasks.
Table 1. Details of pre-train and downstream datasets
| **Phase** | **Dataset** | **Provider** | **Videos** | **Protocol** |
| :--------: | :------------: | :------------: | :--------: | :---------------------: |
| Pre-train | Colonoscopic | CNRS | 210 | colonoscope |
| | SUN-SEG | ANU | 1018 | colonoscope |
| | LDPolypVideo | USTC | 237 | colonoscope |
| | Hyper-Kvasir | Simula | 5704 | gastroscope |
| | Kvasir-Capsule | Simula | 1000 | gastroscope |
| | CholecTriplet | BIDMC | 580 | laparoscope |
| | Renji-Hospital | Renji Hospital | 16494/7653 | colonoscope/gastroscope |
| Downstream | PolypDiag | Adelaide | 253 | gastroscope |
| | CVC-12k | UAB | 29 | colonoscope |
| | KUMC | Kansas | 53 | colonoscope |
| | Cholec80 | CAMMA | 80 | laparoscope |
Table 2. Results of surgical phase recognition
| **Methods** | **Recog.** |
| :----------: | :----------: |
| Endo-FM [3] | 82.2±0.8 |
| VideoMAE [4] | 73.7±1.4 |
| M$^2$CRL | **85.0±0.4** |
**W2. A cross-validation like approach.**
We conducted five-fold cross-validation on the 3 downstream tasks, and the experimental results are presented in Table 3. We observe that the results of cross-validation are lower than those recorded in our original manuscript. We think that this difference is caused by the different ways of data division. For a fair comparison, we strictly followed the public partitioning of the datasets for conducting experiments in our study. Considering that there is no existing work to cross-validate this dataset, we also cross-validated the classical endoscopy foundation model Endo-FM [3] for comparison. The results show that our method outperforms Endo-FM in all three tasks.
Table 3. Results of cross validation for 3 downstream tasks
| **Methods** | **Cla.** | **Seg.** | **Det.** |
| :---------: | :------: | :------: | :------: |
| Endo-FM | 84.5±4.5 | 55.5±7.0 | 80.5±4.2 |
| M$^2$CRL | 87.5±4.9 | 58.6±8.5 | 82.0±4.2 |
**Q2. Why the segmentation tasks might not take significant benefits compared with the other tasks?**
Due to varying requirements in different downstream tasks, the features acquired through different pre-training methods may exhibit distinct performances on specific downstream tasks. Previous studies, such as VideoMAE, DropMAE, and VideoMAE V2, have shown impressive performance in segmentation tasks due to their reliance on masked modeling. The strength of masked modeling lies in capturing rich pixel-level information, making these single-task pretraining approaches particularly effective for dense pixel-level tasks like segmentation. However, their performance tends to underperform in structural tasks that focus on global features and the holistic understanding of objects in images or videos.
Our method integrates masked modeling with contrastive learning, which not only encourages the model to capture fine-grained pixel-level features but also compels it to learn comprehensive discriminative representations. During the pre-training process, the model needs to carefully consider feature selection tradeoffs across multiple pre-tasks. This consideration may result in some downstream tasks exhibiting less prominent performance compared to single pre-task training. Although the improvement of our method in segmentation tasks is not as significant as that using mask modeling techniques alone, our method still achieves a significant improvement in segmentation tasks compared to the method using only contrast learning. Overall, our method achieves robust performance across both structural and pixel-level tasks, highlighting its versatility and effectiveness in learning both detailed and global features.
[1] Ramesh S, et al. Dissecting self-supervised learning methods for surgical computer vision. Medical Image Analysis, 2023.
[2] Czempiel T, et al. Surgical phase recognition with multi-stage temporal convolutional networks. MICCAI 2020.
[3] Wang Z, et al. Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train. MICCAI 2023.
[4] Tong Z, et al. VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training. NeurIPS 2022.
---
Rebuttal Comment 1.1:
Title: Renspose to rebuttal
Comment: I thank the authors for the response. I will update my score to accept.
Thanks.
---
Rebuttal 2:
Title: Thanks for the feedback!
Comment: Thank you for recognizing our efforts in the rebuttal. We sincerely appreciate your consideration in raising our paper's score to 7: Accept. Your feedback has been invaluable to our work. Thank you for your support. | Summary: The paper proposes a pre-training approach called Multi-view Masked Contrastive Representation Learning for endoscopic videos. The approach combines self-distillation and masked video modeling under multi-view setting. To consider the characteristics of inter-frame instability and small inter-class differences of endoscopic videos, the paper introduces a frame aggregated attention guided tube masking strategy to capture global spatio-temporal representation and employs random tube masking on local views to capture local representations. The approach is pre-trained on seven endoscopic datasets and fine-tuned on three additional datasets. Experiments show that it outperforms the baselines on classification, segmentation and detection tasks.
Strengths: - The paper is easy to read.
- The paper shows the combination of self-distillation and masked video modeling for pre-training ViT-B model using endoscopic videos.
- Frame-aggregated attention guided tube masking (FAGTM) to learn global spatio-temporal representation learning.
- Experiments on multiple tasks to show the efficacy of the approach.
Weaknesses: - Although the pre-training approach is proposed for endoscopic videos, the novelty is limited. It's a combination of self-distillation and masked video modeling.
- FAGTM is also an extension of either [1] and [2] which propose attention guide masking strategy for image based pre-training. The paper merely aggregates the attention for all frames and use the mean of that to guide the masking.
- The paper mentions in section 3.3, "This self-distillation method of self-supervision is also considered a form of contrastive learning". Can the authors please give remark on why self-distillation is a form of contrastive learning?
- Which block of the teacher ViT-B is used for FAGTM? Ablation study would be great.
- Did the author pre-train ViT-B from the scratch? or initialized from some weights?
- Ablation on number of epochs during pre-training is missing
- Only ViT-B is used in the experiment? Different architecture should be studied too.
- Did the authors also pre-train VideoMAE and other baselines using your dataset? Can the authors show some results using kinetics pre-trained SSL weights?
- More baselines should be compared with. For example, MME[3], AdaMAE[4], and other masked video modeling approach.
- What did the author use as an evaluation or metric to stop pre-training?
- Can FAGTM be used on local views?
- The approach looks very sensitive to $\gamma$. Table 3 shows the impact of it on all the tasks.
- Is the masking ratio used for FAGTM and random tube masking same? Ablation study on the impact of different masking ratio for each of the masking strategy would be useful.
- The paper only uses linear layer for the reconstruction objective. Study of different decoders would be helpful. In masked video modeling, most of the approaches used asymmetric encoder and decoder design. It would be great to pre-train baselines like VideoMAE with linear layer decoder for a better comparison.
- Most of the recent approaches use L2 loss for the reconstruction objective. Comparison of L1 and L2 loss is missing and can the authors please give a remark on why L1 loss is preferred?
- Given the approach is mostly empirical with limited novelty, the performance on other datasets like Cholec80, a surgical phase recognition benchmark dataset, would be great.
- There are some typos in the paper: section 3.2.1 'spatiotempora' -> temporal, table2 'gloabl' -> global
[1] What to Hide from Your Students: Attention-Guided Masked Image Modeling, ECCV 2022
[2] Good helper is around you: Attention-driven Masked Image Modeling, AAAI 2023.
[3] Masked Motion Encoding for Self-Supervised Video Representation Learning, CVPR 2023
[4] AdaMAE: Adaptive Masking for Efficient Spatiotemporal Learning with Masked Autoencoders, CVPR 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness section for the questions and suggestions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your constructive and insightful comments. We address all weaknesses and questions below.
**W1. The novelty.**
Existing self-supervised pre-training methods for endoscopic videos predominantly rely on contrastive learning. However, using contrast learning alone is not sufficient to capture the fine-grained feature representations required for endoscopic videos.
* To address the limitations of current methods, we have integrated contrastive learning with masked modeling to effectively acquire endoscopic video representations that possess both comprehensive discriminative capability and fine-grained perceptive ability.
* Unlike the general approach of combining self-distillation and masking modeling, our method specifically designed a novel multi-view masking strategy for endoscopic video features, which significantly improved the model performance. This strategy involves utilizing the frame-aggregated attention guided tube mask to capture global-level spatiotemporal contextual relationships from the global views, while also employing the random tube mask to focus on local variations from the local views.
* We conducted extensive experiments on 10 endoscopic video datasets to evaluate the performance of M$^2$CRL in comparison to other methods. Experimental results demonstrate the superiority of our method.
This work holds inherent value in clinical practice. The development of robust pretrained models for endoscopic video analysis through self-supervised learning can effectively support various downstream tasks, ultimately enhancing clinical workflow efficiency.
**W2. Innovation in FAGTM.**
Unlike [1] and [2], our work focuses on video rather than static images. Videos have a specific temporal dimension that should be considered in the mask design process to fully leverage spatiotemporal information. Furthermore, endoscopic videos present unique challenges due to their domain-specific characteristics. In this study, we propose a multi-view masking strategy tailored to endoscopic video features, which enhances the model’s robustness and effectiveness.
Specifically, to address the instability of inter-frame variations in endoscopic videos, we designed the FAGTM to capture global spatiotemporal representations. Traditional image-based attention mechanisms only focus on the spatial information of the current frame, neglecting dependencies across the entire video sequence. In contrast, our FAGTM aggregates the attention of frames to the mean value to obtain a simplified and holistic attention distribution. This attention distribution is capable of obtaining an approximation of the critical region of the video that are being attended to from both the temporal and spatial dimensions, while reducing the impact of excessive variations in individual frames or regions. Thus, our FAGTM provides more comprehensive and context-aware guidance for the masking process.
**W9. Comparison with more methods.**
As you suggested, we have included comparisons with MME, AdaMAE, and UTM methods, as shown in Table 5. The results indicate that our method achieves the best performance in classification and detection tasks and performs moderately well in the segmentation task. This is primarily due to the fact that these mask modeling methods excel in capturing rich pixel-level information, making them particularly effective for dense pixel-level tasks such as segmentation. However, their performance tends to underperform in structural tasks that focus on global features and the holistic understanding of objects in images or videos. Our method integrates masked modeling with contrastive learning, which not only encourages the model to capture fine-grained pixel-level features but also compels it to learn comprehensive discriminative representations. This is friendly for both pixel-level and structured tasks, but it also increases the complexity of model pre-training. During the pre-training process, the model needs to carefully consider feature selection tradeoffs across multiple pre-tasks. This consideration may result in some downstream tasks exhibiting less prominent performance compared to single pre-task training. Overall, our method achieves robust performance across both structural and pixel-level tasks, highlighting its versatility and effectiveness in learning both detailed and global features.
Table 5. Comparison with more methods
| **Methods** | **Cla.** | **Seg.** | **Det.** |
| :---------: | :----------: | :----------: | :----------: |
| MME [10] | 92.3±1.0 | 81.9±0.7 | 84.8±0.9 |
| AdaMAE [11] | 92.0±0.3 | **82.3±0.4** | 83.4±1.6 |
| UTM [12] | 93.2±0.3 | 80.8±0.5 | 84.0±1.1 |
| M$^2$CRL | **94.2±0.7** | 81.4±0.8 | **86.3±0.8** |
**W16. The performance on other datasets like Cholec80.**
Based on your suggestion, we have added the Cholec80 dataset in our downstream tasks. We compared our approach with the classical endoscopic foundation model Endo-FM [4] and the masked video modeling method VideoMAE [5] using F1-score in Table 9. The experimental results demonstrate that our approach achieves superior performance.
Table 9. Results of surgical phase recognition
| **Methods** | **Recog.** |
| :----------: | :----------: |
| Endo-FM [4] | 82.2±0.8 |
| VideoMAE [5] | 73.7±1.4 |
| M$^2$CRL | **85.0±0.4** |
**Additional comment:** We would have liked to include the rest of answers to the questions mentioned by Reviewer wGJR (marked as W3, W4, etc..). Unfortunately, we did not have enough space in this rebuttal box. As soon as the discussion phase will begin, we will include the mentioned answers in an additional comment for the reviewer.
---
Rebuttal 2:
Title: Response to additional questions (Part 1/3)
Comment: As we mentioned in the main rebuttal, we include the answers to the additional questions of Reviewer wGJR. We hope that this helps to address all remaining concerns and we thank again for taking the time to review our work.
**W3. Why self-distillation is a form of contrastive learning?**
The pretraining tasks for self-supervised learning are summarized into two categories: contrastive and generative. Contrastive methods focus on maximizing the similarity between different views of the same image after augmentation, while also potentially minimizing the similarity between views of different images after augmentation. Self-distillation is optimized by comparing the feature representations of the teacher and student models to extract a closer representation from the same image. In our work, the self-distillation component also serves to align features across different views, which follows the paradigm of contrastive learning. Therefore, self-distillation can be considered a form of contrastive learning. Furthermore, several other studies [1-3] have also classified self-distillation in self-supervised learning as a type of contrastive learning.
**W4. Ablation on teacher’s block used for FAGTM.**
In our study, we used the last layer block of the teacher ViT-B for FAGTM. The higher layer block incorporates lower-level features and object-level semantic information, offering comprehensive and abstract features that are essential for the model. Thus, it effectively guides the student network in masking. Table 1 shows that it is most beneficial for FAGTM to utilize the last layer block of the teacher network.
Table 1. Ablations on blocks
| **Blocks** | **Cla.** | **Seg.** | **Det.** |
| :--------: | :----------: | :----------: | :----------: |
| 4 | 91.8±0.7 | 76.6±1.5 | 83.6±1.1 |
| 8 | 92.5±0.4 | 79.7±2.2 | 84.8±1.0 |
| 10 | 93.9±1.0 | 80.6±0.9 | 85.9±1.7 |
| 12 | **94.2±0.7** | **81.4±0.8** | **86.3±0.8** |
**W5. Whether to initialize ViT-B?**
We pre-trained ViT-B with weight initialization to accelerate convergence and enhance training stability. This way also ensures consistency with the baseline (Endo-FM), which similarly employed weight initialization for pre-training ViT-B.
**W6 and W10. Ablation on pre-training epochs. Evaluation or metric to stop pre-training.**
For a fair comparison, we used the same number of epochs as Endo-FM. As you suggested, we performed ablation studies on different pre-training epochs, as shown in Table 2. While increasing the number of epochs generally enhances performance, excessive training may result in diminishing returns or overfitting, particularly when dealing with a smaller endoscopic dataset compared to Kinetics. This overfitting can impair the model’s generalization.
During the pre-training process, we stopped pre-training based on monitoring the loss on the training set. If there is no significant decrease in training loss over multiple epochs or if the loss curve becomes flat, it suggests that the model has likely acquired most of the features. At this point, we consider stopping pre-training.
Table 2. Ablations on epochs
| **Epochs** | **Cla.** | **Seg.** | **Det.** |
| :--------: | :----------: | :----------: | :----------: |
| 10 | 84.6±3.1 | 73.8±1.7 | 83.6±1.3 |
| 20 | 92.7±0.4 | 78.2±1.5 | 85.8±2.5 |
| 30 | **94.2±0.7** | **81.4±0.8** | **86.3±0.8** |
| 40 | 93.7±0.6 | 81.0±0.4 | 86.0±1.5 |
| 50 | 94.7±0.9 | 80.9±0.6 | 85.5±1.0 |
**W7. Ablation on different architecture.**
For a fair comparison, we used the weights for initialization as Endo-FM did. However, since this work did not provide weights for ViT variants, we were unable to conduct ablation experiments on different architectures with weight initialization. Consequently, we conducted a set of ablation experiments without weight initialization for the backbone, as shown in Table 3. It was observed that the performance improvement is more pronounced with larger models due to their increased parameters and more complex structures, enabling them to capture more intricate features. In contrast, smaller models have limited feature extraction capabilities and cannot fully extract visual features. Although larger models exhibit stronger learning abilities, they are more prone to overfitting during training. Additionally, larger models require higher computational resources and longer training times. In conclusion, choosing ViT-B as the pre-trained backbone is a suitable compromise.
Table 3. Ablations on architecture
| **Backbone** | **Cla.** | **Seg.** | **Det.** |
| :----------: | :------: | :------: | :------: |
| ViT-T/16 | 93.4±0.9 | 76.8±1.2 | 76.3±2.4 |
| ViT-S/16 | 93.8±0.4 | 78.2±1.5 | 79.4±0.7 |
| ViT-B/16 | 93.4±0.9 | 80.5±0.5 | 83.4±2.8 |
| ViT-L/16 | 94.0±0.9 | 83.2±0.8 | 84.2±2.0 |
---
Rebuttal 3:
Title: Response to additional questions (Part 2/3)
Comment: **W8. Pre-train VideoMAE and other baselines using your dataset? Some results using kinetics pre-trained SSL weights.**
Yes, to ensure the fairness of experiments, all compared methods were pretrained on the same union of 7 datasets as our M$^2$CRL. As you suggested, Table 4 presented results using SSL weights pretrained on Kinetics for 3 downstream tasks. We observed that the performance of these methods pretrained on Kinetics is comparable to those pretrained on the endoscopic datasets. This can be attributed to the significantly larger size of the Kinetics dataset compared to the endoscopic datasets. Pretraining models on such a large dataset allows them to learn more generalized features and patterns. Consequently, when these models are transferred to endoscopic downstream tasks, their robust feature extraction capabilities allow them to effectively fine-tune on the endoscopic datasets, thereby adapting efficiently to the new tasks. Moreover, despite the substantial differences in content between endoscopic datasets and Kinetics, both consist of color images, which means they share some similar fundamental visual features such as edges and colors. Therefore, models pretrained on Kinetics can capture these common features to some extent, enabling them to perform well when transferred to endoscopic downstream tasks.
Table 4. Results of using Kinetics pre-trained SSL weights on 3 downstream tasks
| **Methods** | **Cla.** | **Seg.** | **Det.** |
| :----------: | :------: | :------: | :------: |
| SVT [7] | 88.7±0.7 | 74.8±1.1 | 84.8±0.9 |
| VideoMAE [5] | 90.9±0.6 | 81.1±0.3 | 85.5±0.9 |
| DropMAE [9] | 85.8±0.9 | 81.2±0.4 | 83.2±0.3 |
**W11. Can FAGTM be used on local views?**
FAGTM cannot be used on local views. In our method, the global views are fed into both the teacher and student networks, allowing the teacher to generate attention corresponding to the global views to guide the student model in masking. However, the local views are only fed into the student networks, thus the teacher does not have attention for the local view to guide masking. This approach is inspired by self-supervised visual transformers [6, 7], where a teacher-student framework is used for contrastive pre-training. In this framework, different views of the video are processed by the teacher and student networks, and predictions are made between views in the latent feature space, enabling the model to learn spatiotemporally invariant features.
**W12. The approach looks very sensitive to $\gamma$.**
The parameter $\gamma$ serves as the threshold for FAGTM, which is designed to allow the model to sample visible tokens from its attention regions and mask the rest in a reasonable manner. This ensures that our method can effectively perform reconstruction tasks even at a high masking rate. A lower value of $\gamma$ implies selecting visible patches from a smaller high-attention region, potentially leading to an overemphasis on non-critical areas during reconstruction, thus contradicting the objectives of the self-supervised pretraining task. Conversely, a higher value of $\gamma$ means selecting visible patches from a broader region, which may dilute the focus on high-attention areas and adversely affect the model’s learning efficiency.
**W14. Ablation of VideoMAE using linear layer decoder.**
As you suggested, we evaluated VideoMAE on downstream tasks after pre-training with a linear layer decoder. As shown in Table 7, the impact of the decoder on the experimental results is minimal and almost negligible. The prediction head can be of arbitrary form and capacity, as long as its input conforms with the encoder output and its output accomplishes the prediction target. This has already been validated in SimMIM [8].
Table 7. Ablations on decoder
| **Decoder** | **Cla.** | **Seg.** | **Det.** |
| :----------: | :------: | :------: | :------: |
| Linear layer | 91.2±0.8 | 81.2±0.3 | 82.6±1.4 |
| Asymmetric | 91.4±0.8 | 80.9±1.0 | 82.8±1.9 |
**W15. Why use L1 loss for reconstruction?**
The mask modeling component of our model follows SimMIM [8], which employs the L1 loss function. The study demonstrates that different loss functions have minimal impact. To maintain consistency, we used the same loss function as SimMIM. Furthermore, we conducted ablation studies to demonstrate that different loss functions have a negligible effect on our results.
Table 8. Ablations on loss
| **Loss** | **Cla.** | **Seg.** | **Det.** |
| :------: | :------: | :------: | :------: |
| L1 | 94.2±0.7 | 81.4±0.8 | 86.3±0.8 |
| L2 | 93.8±0.7 | 82.0±0.7 | 85.9±1.5 |
**W17. Some typos.**
We apologize for any typos. We will proofread the whole manuscript carefully and make revisions in subsequent versions.
---
Rebuttal 4:
Title: Response to additional questions (Part 3/3)
Comment: **W13. Ablation on different masking ratio for each of the masking strategy.**
We conducted ablation experiments on different masking rates for FAGTM (global views) and RTM (local views). As our work is video-related, we aligned our masking rates with those used in other video masking modeling studies. As shown in Table 6, both excessively high and low masking rates are unfavorable for endoscopic representation learning. A high masking rate increases the difficulty of pre-training and hinders the ability of the model to learn effective representations, while a low masking rate reduces the challenge for the model and fails to fully utilize the advantages of masked learning to extract potential features from the video.
Table 6. Masking ratio
| **FAGTM (Global)** | **RTM (Local)** | **Cla.** | **Seg.** | **Det.** |
| :----------------: | :-------------: | :------: | :------: | :------: |
| 95% | 95% | 94.0±0.3 | 81.3±0.4 | 85.1±1.1 |
| | 90% | 93.8±0.9 | 80.7±0.7 | 86.2±1.3 |
| | 85% | 93.2±0.7 | 78.5±0.6 | 84.9±2.3 |
| | 75% | 92.6±0.4 | 77.4±1.7 | 85.2±2.1 |
| 90% | 95% | 93.8±1.4 | 80.5±0.7 | 85.8±0.9 |
| | 90% | 94.2±0.7 | 81.4±0.8 | 86.3±0.8 |
| | 85% | 93.8±0.8 | 81.4±1.7 | 85.6±2.2 |
| | 75% | 93.2±0.9 | 78.5±1.9 | 84.8±1.3 |
| 85% | 95% | 93.2±0.8 | 79.9±0.4 | 83.1±1.5 |
| | 90% | 93.8±0.2 | 81.2±0.2 | 83.8±0.9 |
| | 85% | 94.0±0.4 | 80.5±1.0 | 85.1±1.8 |
| | 75% | 92.5±1.2 | 79.6±0.7 | 83.8±2.5 |
| 75% | 95% | 91.7±1.3 | 76.8±1.8 | 84.2±2.1 |
| | 90% | 91.3±0.3 | 79.0±2.2 | 84.0±1.3 |
| | 85% | 91.8±0.2 | 77.5±1.9 | 83.8±0.4 |
| | 75% | 91.2±0.7 | 74.6±1.4 | 85.0±0.8 |
[1] Dong X, et al. Maskclip: Masked self-distillation advances contrastive language-image pretraining. CVPR 2023.
[2] Gupta A, et al. Siamese masked autoencoders. NeurIPS 2023.
[3] Chen X, et al. Context autoencoder for self-supervised representation learning. IJCV 2024.
[4] Wang Z, et al. Foundation Model for Endoscopy Video Analysis via Large-scale Self-supervised Pre-train. MICCAI 2023.
[5] Tong Z, et al. VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training. NeurIPS 2022.
[6] Caron M, et al. Emerging properties in self-supervised vision transformers. ICCV 2021.
[7] Ranasinghe K, et al. Self-supervised video transformer. CVPR 2022.
[8] Xie Z, et al. SimMIM: A simple framework for masked image modeling. CVPR 2022.
[9] Wu Q, et al. Dropmae: Masked autoencoders with spatial-attention dropout for tracking tasks. CVPR 2023.
[10] Sun X, et al. Masked motion encoding for self-supervised video representation learning. CVPR 2023.
[11] Bandara, et al. Adamae: Adaptive masking for efficient spatiotemporal learning with masked autoencoders. CVPR 2023.
[12] Li K, et al. Unmasked teacher: Towards training-efficient video foundation models. ICCV 2023.
---
Rebuttal 5:
Title: Response to the authors (Part 1/3)
Comment: I sincerely thanks authors for clarifications and experiments. However, I still have some questions regarding novelty and experiments. I would also suggest the authors to add all new experiments in the revision.
$\textbf{Novelty}$
I agree that for surgical videos, most of the papers have been using contrastive learning for pre-training, but recently, there are a few papers that have studied self-distillation and contrastive leanring. EndoFM uses global and local views and apply self-distillation. [1] comprehensively studies the self-distillation and contrastive learning approaches for endoscopic videos. The only component that differentiates this approach with other methods is applying masked modeling on both global and local views. FAGTM module is mere extension of attention guided masking for images.
[1] Dissecting Self-Supervised Learning Methods for Surgical Computer Vision. Medical Image Analysis 2023
$\textbf{Comparison with more methods.}$
Thanks for adding more experiments. Did you pre-train MME and AdaMAE from the scratch or did you fine-tune these models for endoscopic downstream task?
$\textbf{Cholec80}$
I see some missing baselines like TeCNO[2], LoViT[3]
[2] Tecno: Surgical phase recognition with multi-stage temporal convolutional networks
[3] LoViT: Long Video Transformer for Surgical Phase Recognition
In the above paper [1], table 6 shows F1 score on Cholec-80 dataset. SimCLR, when fine-tuned with 10 labeled videos, can achieve 85.0 and can go upto 93.6 when fine-tuned will all labeled videos. Same goes for other self-supervised approaches like DINO, MoCo v2 and SwaV with temporal model TCN.
---
Rebuttal Comment 5.1:
Title: Re: Response to the authors (Part 1/3)
Comment: Thank you for your thoughtful feedback on our submission. These valuable suggestions have improved the clarity and quality of our work. We have further addressed all comments and questions below.
**Novelty**
We summarized our novelty as follows:
* **Why did we apply masked modeling?** As pointed out by the reviewer, existing self-supervised pre-training methods for endoscopic videos primarily rely on contrastive learning. Contrastive learning naturally endows the pre-trained model with strong instance discrimination capabilities. However, relying solely on contrastive learning is insufficient to capture the fine-grained feature representations required for endoscopic videos. Therefore, we combine contrastive learning with masked modeling to acquire endoscopic video representations that possess both comprehensive discriminative capability and fine-grained perceptive ability, effectively addressing the limitations of contrastive learning in capturing dense pixel dependencies. ***This is our first major contribution.***
* **What is unique about our masking strategy?** We have developed a novel multi-view masking strategy specifically tailored to the characteristics of endoscopic video data to address two key challenges inherent in endoscopic videos. First, the instability of inter-frame variations is a prominent issue in endoscopic videos. These variations are caused by factors such as camera movement, instrument manipulation, and the uneven distribution of the lesion area. However, traditional attention-guided masking strategies fail to adequately address the global spatiotemporal nature of videos, and therefore cannot effectively consider the instability between frames in endoscopic videos. To address this, we designed the FAGTM strategy from global views, which aggregates features across multiple frames to capture comprehensive spatiotemporal information. Second, endoscopic videos often exhibit minimal inter-class differences, where lesion characteristics closely resemble those of the surrounding normal tissue. This requires the model to capture fine-grained pixel-level details to distinguish these subtle differences. Thus, we employed a random tube masking strategy from local views to learn finer local details. The effectiveness of our multi-view masking strategy is demonstrated through ablation experiments. ***This is our second major contribution.***
* **Clinical significance.** We conducted extensive experiments on 10 endoscopic video datasets to evaluate the performance of M$^2$CRL in comparison to other methods. The experimental results demonstrate the superiority of our method, which holds inherent value in clinical practice. The development of robust pretrained models for endoscopic video analysis through self-supervised learning can effectively support various downstream tasks, ultimately enhancing clinical workflow efficiency.
**Comparison with more methods**
We pre-trained MME and AdaMAE from scratch before fine-tuning them for the endoscopic downstream task.
**Cholec80**
Regarding the supplementary experiments in Table 6 of Paper [1]: The differences in experimental results arise from the use of different data partitioning methods. The results you mentioned in Table 6 are supplementary experiments for comparison with external methods, following the approach in reference [2], which uses 40 videos as the training set and 40 videos as the test set. In contrast, our method follows the data partitioning method in Table 4 of Paper [1], using 40 videos as the training set, 8 videos as the validation set, and 32 videos as the test set. As shown in the table below, our method achieved optimal performance under this setup. Furthermore, we will include additional experimental results based on the data partitioning outlined in Table 6 in the revised manuscript.
Regarding the baselines of TeCNO and LoViT: Our data partitioning and evaluation metrics followed the methods outlined in Paper [1]. However, due to different evaluation metrics (we use F1 score, while TeCNO uses accuracy), direct comparison with TeCNO is not feasible. Additionally, LoViT utilizes different data partitioning and evaluation metrics from ours, which also prevents direct comparison. We appreciate your reminder. In future research, we will conduct additional experiments, including adding TeCNO and LoViT methods to the comparisons presented in Table 6 of Paper [1], to provide more comprehensive results.
Table 9. Result of surgical phase recognition.
| **Methods** | **F1** |
| :---------: | :------: |
| DINO | 81.6 |
| MoCo v2 | 79.6 |
| SimCLR | 81.1 |
| SwAV | 79.5 |
| Endo-FM | 82.2±0.8 |
| VideoMAE | 73.7±1.4 |
| M$^2$CRL | 85.0±0.4 |
[1].Dissecting Self-Supervised Learning Methods for Surgical Computer Vision. MedIA 2023.
[2].Semi-supervised learning with progressive unlabeled data excavation for label-efficient surgical workflow recognition. MedIA 2023.
---
Rebuttal 6:
Title: Response to Authors (Part 2/3)
Comment: $\textbf{initialize ViT-B?}$
What would be the performance gap if ViT-B is not initialized with weights? Can we pre-train M$^{2}$CRL from the scratch?
$\textbf{Pre-training epochs?}$
The pre-training epochs is too low considering the we are pre-training a transformer using some proxy tasks. I think merely looking at the loss doesn't really justify to stop the pre-training.
$\textbf{Ablation on different architecture?}$
I think it would be great to pre-train a larger model using M$^{2}$CRL and compare it random initialization or imagenet/kinetics weight. The current set of experiments don't really tell the whole story. The only thing I can infer is the performance of M$^{2}$CRL can be achieved without weight initialization or pre-training.
$\textbf{VideoMAE and other baselines?}$
Did you pre-train them from initialized SSL weights? From the Table 4, these SSL methods achieve the same performance without pre-training on pixel-level tasks like segmentation and detection. M$^{2}$CRL only achieves significant performance on classification task.
$\textbf{Approach sensitive to $\gamma$}$
There's a drastic difference in performance when using 0.5 and 0.6 values. I understand the performance variability is using smaller or higher value, but performance gap in 0.5 and 0.6 don't correlate with the authors justifications.
---
Rebuttal Comment 6.1:
Title: Response to Authors (Part 3/3)
Comment: $\textbf{Ablation on masking ratio}$
It looks like with 0.95 making ratio for both global and local views yield comparable performance which doesn't correlate when the authors say both excessively high and low masking rates are unfavorable for endoscopic representation learning.
Given all my above mentioned concerns regarding novelty and experiments, I will maintain my original score of 4.
---
Reply to Comment 6.1.1:
Title: Re: Response to Authors (Part 3/3)
Comment: **Ablation on masking ratio**
We agree your point that a masking rate of 0.95 can also yield comparable performance results. Our statement that excessively high and low masking rates are unfavorable for endoscopic representation learning is a relative observation. Compared to the optimal masking rate, a higher masking rate increases the difficulty of pre-training, hindering the model’s ability to learn effective representations. Conversely, a lower masking rate reduces the challenge for the model, preventing it from fully capitalizing on the benefits of masked reconstruction for extracting latent features from the video.
Although our ablation experiments have shown that a masking rate of 0.95 can achieve comparable performance, its effectiveness in three downstream tasks is lower than that of a 0.9 masking rate. This suggests that at a masking rate of 0.95, the model is placed in a relatively unfavorable learning situation, resulting in suboptimal results.
**Overall**.
We have re-summarized the innovation of our methods with a special emphasis on our research motivation and method design. At the same time, for the ablation experiments, we have addressed each comment point-to-point. If our response has resolved your concerns on our paper, we will greatly appreciate it if you could re-evaluate our paper. We are also willing and ready to engage in discussions, if you have any further questions.
---
Rebuttal Comment 6.2:
Title: Re: Response to Authors (Part 2/3) (1)
Comment: **Initialize ViT-B?**
We can pre-train M$^2$CRL from scratch. As shown in the ablation study Table 10, the results indicate that M$^2$CRL without weight initialization performs slightly worse under the same pre-training conditions. This is because weight initialization accelerates model convergence and enhances model stability. However, for a fair comparison, we followed Endo-FM and used initialized weights.
Table 10. Initialization status.
| **Initialization status** | **Cla.** | **Seg.** | **Det.** |
| :-----------------------: | :--------: | :--------: | :--------: |
| Random | 93.4 ± 0.9 | 80.5 ± 0.5 | 83.4 ± 2.8 |
| Kinetics weights | 94.2 ± 0.7 | 81.4 ± 0.8 | 86.3 ± 0.8 |
**Pre-training epoch?**
We acknowledge your concerns regarding the pre-training epochs. we aligned with Endo-FM by using initialized weights and the same number of epochs when pre-training M$^2$CRL. Our model achieved excellent performance under these settings, thus demonstrating the comparability and effectiveness of our method.
During the pre-training process, if the training loss does not significantly decrease over multiple epochs or if the loss curve flattens, it suggests that the model may have been adequately trained. We select checkpoints from different epochs based on the convergence of the loss and determine the appropriate number of pre-training epochs by balancing computational time and performance.
**Alation on different architecture?**
We agree with your perspective that pre-training a larger M$^2$CRL model using random initialization or Kinetics weights can achieve great performance. Larger models have more parameters, which enhance their learning capacity, allowing them to capture more complex patterns and subtle differences in the data. To ensure the fairness of our experimental comparisons, we followed the Endo-FM by using weight initialization and achieved superior performance under the same training epochs, which also demonstrates the effectiveness of our method.
However, since Endo-FM does not provide ViT variant weights, we were unable to conduct ablation experiments using Kinetics weights for different architectures. Consequently, we conducted a set of ablation experiments without weight initialization for the backbone, as shown in Table 3. Additionally, in Table 10, we compared our method with random initialization and weight loading. The results indicate that M$^2$CRL with weight loading performs better under the same pre-training conditions, as weight loading can accelerate model convergence and enhance model stability.
**VideoMAE and other baseline?**
We did not use initialized SSL weights for pre-training; but refer to the original setup of the comparison methods.
The results from Table 4 demonstrate that single masked modeling pre-training methods, such as VideoMAE and DropMAE, exhibit strong performance in fine-tuning pixel-level tasks for endoscopic downstream tasks following pre-training on the Kinetics dataset. This can be attributed to the extensive size of the Kinetics dataset, which enables the model to acquire a more diverse range of features and patterns through pre-training. Furthermore, both endoscopic data and Kinetics data are comprised of color images with shared fundamental visual characteristics. As a result, the weights pre-trained on Kinetics can be effectively transferred to endoscopic tasks.
The advantage of masked modeling lies in its ability to capture rich pixel-level information, making it particularly effective for tasks requiring dense pixel-level processing. In contrast, the SVT method in Table 4, which is based on contrastive learning, does not perform as well as masked modeling methods in pixel-level tasks. Our method combines contrastive learning and masked modeling, enabling the model to capture both fine-grained pixel-level features and comprehensive discriminative features. This hybrid pre-training strategy may result in some downstream tasks not performing as prominently as single pre-training tasks.
Compared to Table 4, our method demonstrates a significant improvement in classification tasks. This is due to the fact that classification is a relatively straightforward structured task. Our model was pre-trained on an endoscopic dataset, which enabled it to effectively learn the features and patterns of endoscopic images. However, segmentation and detection tasks, being more complex in nature, may not exhibit as substantial an improvement under the same training conditions.
Overall, our method achieves robust performance in both structured and pixel-level tasks, demonstrating its versatility and effectiveness in learning both detailed and global features.
---
Rebuttal Comment 6.3:
Title: Re: Response to Authors (Part 2/3) (2)
Comment: **Approach sensitive to $\gamma$**
The parameter $\gamma$ is the threshold for FAGTM, used to guide the model in sampling visible tokens from high-attention regions. Due to the feature of endoscopic videos, where the camera moves within the body and inter-frame variations are unstable, different regions of the video frames exhibit significant changes. When $\gamma$ = 0.5, the model considers half of the area as high-attention regions and samples visible tokens from them. In this case, the model samples within a more concentrated attention area, which can lead to insufficient capture of the extensive variations in endoscopic videos during reconstruction.
Conversely, selecting a relatively larger high-attention region to sample visible tokens enables the model to better adapt to the significant inter-frame variations in endoscopic videos and facilitates the capture of important content across frames. However, it is important to note that the $\gamma$ value should not be too large, as a higher $\gamma$ value implies sampling visible tokens from a wider area, which could potentially dilute the focus on high-attention regions and consequently impact the model’s learning efficiency. Through ablation experiments, we ultimately chose $\gamma$ = 0.6. | Rebuttal 1:
Rebuttal: Dear reviewers and AC,
We sincerely appreciate your valuable time and effort spent reviewing our manuscript. We thank all reviewers for their useful comments, positive consideration and relevant feedback on our paper. It seems that the reviews are positive in general and acknowledges our main contributions and soundness of our work. Our submission has received four ratings, including one accept (7), one weak accept (6), one borderline accept (5), and one borderline reject (4).
The valuable feedback from the reviewers has significantly contributed to enhancing the quality of our manuscript. We have supplemented all ablation experiments according to the reviewers’ comments and suggestions. We have addressed each comment and question individually below and we would be glad to engage in discussion in case of more questions or concerns exist. We will also update the main paper in the future with the main requested changes for improvement. Furthermore, we kindly request Reviewer wGJR to reconsider our work after reviewing our response. Your reconsideration will be highly valued.
Based on the comments from the reviewers, we have summarized the strengths of our paper as follows:
* **Motivation: [Reviewer BGWe, u6Br]** The motivation is clearly laid out and the reference to prior related work is thorough. Different aspects are well motivated and explained. It gives a good overview of related work, particular challenges associated with endoscopy images and motivates the proposed approach.
* **Method: [Reviewer wGJR, R5Z9, BGWe, u6Br]** The paper shows the combination of self-distillation and masked video modeling for pre-training ViT-B model using endoscopic videos. Frame-aggregated attention guided tube mask (FAGTM) to learn global spatio-temporal representation learning. Ablation experiments show the contribution of the components. Ablation studies help showcase which elements of M$^2$CRL are most impactful. The study includes an ablation study to report on the effect of different components of the proposed methods.
* **Experiment: [Reviewer wGJR, R5Z9, BGWe, u6Br]** Experiments on multiple tasks to show the efficacy of the approach. The model is validated against different works and on three downstream tasks. The experiments are very thorough and soundly conducted. The conducted experiments are exhaustive. The results are averaged over three runs and show consistent improvements in all three tasks.
* **Expression: [Reviewer wGJR, BGWe, u6Br]** The paper is easy to read. The organization of the paper is excellent, with helpful use of bold text, logical flow from one passage to the next, and high-quality, information-dense illustrations and tables. The manuscript is well and clearly written. The background section summarizes a lot of related literature.
* **Impact: [Reviewer R5Z9, BGWe, u6Br]** Representation learning in the medical domain can have an impact on the development of models for medical image analysis. Results demonstrate notable improvement upon existing competitive baselines. The method has been developed for application in endoscopy, but are relevant for spatio-temporal imaging in general and the type of pretraining is relevant for other medical imaging modalities with a temporal component.
Here we also present the response to two questions below.
_**Concern about novelty of the model**_
We have summarized our novelty as follows:
* Existing self-supervised pre-training methods for endoscopic videos predominantly rely on contrastive learning. However, using contrastive learning alone is not sufficient to capture the fine-grained feature representations required for endoscopic videos. To address the limitations of current methods, we have integrated contrastive learning with masked modeling to effectively acquire endoscopic video representations that possess both comprehensive discriminative capability and fine-grained perceptive ability.
* Given the characteristics of inter-frame instability and small inter-class differences in endoscopic videos, we propose a multi-view mask strategy. Specifically, we introduce a frame-aggregated attention guided tube mask strategy for the global views, which aggregates features from multiple frames to capture global spatiotemporal information. Simultaneously, a random tube mask strategy is employed from the local views, enabling the model to focus on local features.
* Extensive experiments have verified that our M$^2$CRL significantly enhances the quality of endoscopic video representation learning and exhibits excellent generalization capabilities in multiple downstream tasks.
_**Concern about experiment to other datasets or endoscopy modalities**_
* **[Reviewer wGJR]:** The performance on other datasets like Cholec80, a surgical phase recognition benchmark dataset, would be great.
- **[Reviewer R5Z9]:** It would be interesting to see the performance in downstream tasks that involve other endoscopic modalities.
In our study, we used multiple publicly available endoscopic video datasets, provided by research groups worldwide and previous EndoVis challenges. These datasets cover 3 types of endoscopic procedures (colonoscopy, gastroscopy, and laparoscopy) and 10+ different diseases. We believe that this comprehensive and large-scale dataset is valuable for endoscopic research. In downstream tasks, classification is for gastroscopy, while segmentation and detection are for colonoscopy. Based on reviewer’s suggestion, we have added the Cholec80 dataset in downstream tasks. The results demonstrate that our approach achieves superior performance. This experiment further validates the robustness and effectiveness of our approach across diverse endoscopic video tasks.
We strongly believe that M$^2$CRL can be a useful addition to the NeurIPS community, in particular, due to the enhanced manuscript by reviewers’ comments helping us better deliver the effectiveness of our method.
Thank you very much!
Best regards,
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diffusion Policy Attacker: Crafting Adversarial Attacks for Diffusion-based Policies | Accept (poster) | Summary: This paper introduces DP-Attacker, a suite of algorithms designed to generate adversarial attacks against diffusion-based policies (DPs). The paper explores two attack scenarios: (1) hacking the scene camera by adding imperceptible digital perturbations to the visual inputs, and (2) hacking the scene by attaching small adversarial patches to the environment. The authors demonstrate the effectiveness of DP-Attacker through extensive experiments on pre-trained diffusion policies across various robotic manipulation tasks, showing significant degradation in performance for both online and offline attacks.
Strengths: - Novelty: The paper presents the first suite of white-box attack algorithms specifically designed for visual-based diffusion policies. The proposed approach, based on noise prediction loss, effectively circumvents the challenges posed by the chained denoising structure and high randomness of diffusion models.
- Significance: By highlighting the vulnerability of diffusion policies to adversarial attacks, the paper raises important safety concerns for real-world applications. This research serves as a crucial step towards developing more robust DP systems and ensuring their reliability in practical scenarios.
Weaknesses: - Lack of Defense Strategies: While the paper demonstrates the effectiveness of DP-Attacker, it does not explore or propose any defensive strategies to mitigate the identified vulnerabilities. This omission hinders the development of robust DP systems and leaves open the question of how to protect against such attacks.
- Limited Attack Scenarios: The paper focuses on two specific attack scenarios: hacking the camera and attaching adversarial patches. Exploring a broader range of attack scenarios, such as manipulating the robot's sensors or exploiting other weaknesses in the DP system, would provide a more comprehensive understanding of the system's vulnerabilities.
- Computational Complexity: The paper mentions the computational complexity of gradient calculations for the end-to-end action loss. While the proposed noise prediction loss mitigates this issue, further analysis and comparison with other attack methods in terms of computational efficiency would be beneficial.
Technical Quality: 3
Clarity: 3
Questions for Authors: • Can the authors provide a more detailed analysis of the transferability of the generated adversarial perturbations across different environments and robot models?
• How does the performance of DP-Attacker compare to existing adversarial attack methods designed for deep neural networks? Are there specific advantages or disadvantages of using DP-Attacker?
• What specific defensive strategies can be implemented to mitigate the identified vulnerabilities of diffusion policies? Can the authors provide a theoretical analysis or simulation results to demonstrate the effectiveness of these defenses?
• What are the potential limitations of the proposed attack method in terms of computational complexity and scalability to large-scale systems?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors acknowledge the limitations of their work and discuss the broader societal impacts of their research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and comments. We are glad that the author finds our work novel and impactful for developing more robust DP systems. Below is our response to some of the questions raised by the reviewer.
> Lack of Defense Strategies: While the paper demonstrates the effectiveness of DP-Attacker, it does not explore or propose any defensive strategies to mitigate the identified vulnerabilities. This omission hinders the development of robust DP systems and leaves open the question of how to protect against such attacks.
>
>
> What specific defensive strategies can be implemented to mitigate the identified vulnerabilities of diffusion policies? Can the authors provide a theoretical analysis or simulation results to demonstrate the effectiveness of these defenses?
>
We refer the reviewer to **general response 3** on defense strategies.
> Limited Attack Scenarios: The paper focuses on two specific attack scenarios: hacking the camera and attaching adversarial patches. Exploring a broader range of attack scenarios, such as manipulating the robot's sensors or exploiting other weaknesses in the DP system, would provide a more comprehensive understanding of the system's vulnerabilities.
>
Attacking the robot’s sensors (proprioception) in the DP framework is an interesting direction for future research. However, we did not include this because we believed hackers were more likely to manipulate the image input (especially in physical-world attacks).
> Computational Complexity: The paper mentions the computational complexity of gradient calculations for the end-to-end action loss. While the proposed noise prediction loss mitigates this issue, further analysis and comparison with other attack methods in terms of computational efficiency would be beneficial.
What are the potential limitations of the proposed attack method in terms of computational complexity and scalability to large-scale systems?
>
The computational complexity and scalability of our attack are interesting research directions. DP models are often quite small now to satisfy the requirement of fast inference as a visual-motor policy. Our proposed optimization is based on single-step noise prediction loss, which ensures fast gradient calculation.
> Can the authors provide a more detailed analysis of the transferability of the generated adversarial perturbations across different environments and robot models?
>
We refer the reviewer to **general response 5** on the transferability of our attacks.
> How does the performance of DP-Attacker compare to existing adversarial attack methods designed for deep neural networks? Are there specific advantages or disadvantages of using DP-Attacker?
>
Our DP-Attacker is the first proposed suite of methods to attack diffusion policy. One can view the end-to-end loss comparison as an existing method. As shown in Table 4 in our paper, E2E loss with DDPM is dramatically more computationally heavy and impossible to use for online attacks. Our proposed DP-Attacker is faster and performs better in terms of degrading the policy performance.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply, some of my doubts have been resolved. I maintain my original score. | Summary: The paper proposes an adversarial example construction method targeting Diffusion-based Policies, aiming to create malicious observation inputs to diffusion policies that cause them to fail in generating correct actions, leading to robot errors. The authors present attack methods including both untargeted and targeted attacks. The main idea is to construct adversarial examples by optimizing a malicious denoising loss and updating the image according to the gradient. Experiments demonstrate that the generated adversarial examples can lower the success rate of the target diffusion policy.
Strengths: - This is the first adversarial attack against a diffusion-based visuomotor policies.
- The proposed attack framework is comprehensive, including targeted, untargeted, online, and offline attacks.
- The paper is well-written and easy to understand.
Weaknesses: - The proposed method seems to be a straightforward extension of a previous approach applied to images.
- The threat model is not well-defined.
- The attack method makes strong assumptions on attackers, and its effectiveness in real-world scenarios, especially for patched attacks, is unclear.
Technical Quality: 3
Clarity: 3
Questions for Authors: Overall, this paper is interesting and presents the first adversarial attack against diffusion-based visuomotor policies. However, I have the following concerns:
- The proposed method seems to be a straightforward extension of prior work [1], with the main modification being the adjustment of the adversarial loss to accommodate the differences between diffusion policy and standard T2I models.
- Unlike related work that uses adversarial examples to prevent painting imitation, this paper attacks the system directly. Can the authors provide a clear threat model, including a well-defined attacker's goal and capabilities? It would be helpful to provide corresponding scenarios.
- The authors claim that DP-Attacker can generate highly transferable perturbations. How is this demonstrated in the experiments? For example, the authors claim that it is actually the encoder being attacked. The diffusion policy uses ResNet-18 as the encoder [2], can the adversarial examples generated by DP-Attacker transfer to other encoders?
[1] Adversarial example does good: Preventing painting imitation from diffusion models via adversarial examples, 2023
[2] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion, 2023
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: One major limitation is the lack of real-world demonstrations for patched attacks. The authors have acknowledged this in the paper. Additionally, I have some questions regarding the authors' claim about the transferability of the generated adversarial example.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful valuable comments and feedback! We are glad that the reviewer liked our writing and acknowledges our comprehensive attacks. We would like to address the reviewer’s questions below.
> The proposed method seems to be a straightforward extension of a previous approach applied to images.
The proposed method seems to be a straightforward extension of prior work [1], with the main modification being the adjustment of the adversarial loss to accommodate the differences between diffusion policy and standard T2I models.
>
Yes, previous works like [24, 25] propose attacks for LDMs, and [25] further point out that the mechanism of why LDM is vulnerable is because the noises are scaled up by the encoder of the diffused latents.
However, Diffusion Policy is different from LDMs since the attacked item is the condition instead of the diffused latent; as far as we know, no previous works have conducted attacks against the conditional image of diffusion models.
We refer the reviewer to our **general response 1** for more details about the novelty of our method.
> The threat model is not well-defined. Unlike related work that uses adversarial examples to prevent painting imitation, this paper attacks the system directly. Can the authors provide a clear threat model, including a well-defined attacker's goal and capabilities? It would be helpful to provide corresponding scenarios.
>
We have added a detailed definition of the threat model in general response 4. We will also add this clarification to the final paper.
> The attack method makes strong assumptions on attackers, and its effectiveness in real-world scenarios, especially for patched attacks, is unclear.
>
We refer the reviewer to **General Response 2** for attacks in the real world.
> The authors claim that DP-Attacker can generate highly transferable perturbations. How is this demonstrated in the experiments? For example, the authors claim that it is actually the encoder being attacked. The diffusion policy uses ResNet-18 as the encoder [2], can the adversarial examples generated by DP-Attacker transfer to other encoders?
>
That's a good question. We conduct experiments in **general response 5** for the transferability of the attacks across different model structures (CNN and Transformer). For the visual encoder, while there are no pre-trained DP to use, we think there is still some level of transferability according to previous investigations into different model structures.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their responses. I find the approach of attacking the conditional input instead of the denoising input to be interesting. However, I feel that the overall novelty is still limited, so I did not increase my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. We aim to build comprehensive exploration into the robustness of DP, which turns out to be a promising backbone choice for behavior cloning.
The difference of inputs is one point,another major difference is in the global attacks, we need to optimize one noise for all frames which have not been done before.
Overall, we believe our contribution is good to the community and many following works can be inspired. Thanks again for your acknowledgment! | Summary: This paper presents strategies to attack visual-based diffusion policy networks. The authors investigated two attacking scenarios: hacking the scene camera by adding imperceptible digital perturbations and hacking the scene by attaching small adversarial patches to the environments.
Strengths: 1. The paper is very well-written.
2. Authors have shown experimental evidence that diffusion-based policy networks are vulnerable to adversarial attacks in digital and physical-domain settings.
Weaknesses: 1. Although authors have demonstrated that diffusion-based policy networks are not robust and susceptible to adversarial perturbations in visual inputs, their attacks are not novel. PGD and patch-based attacks are very common. From this perspective, this work does not contribute substantially.
2. The authors mentioned in the related work section that physical-world attacks are always based on patches. This claim is not true. Many existing physical adversarial attacks are not based on patches. (e.g., Adversarial laser, adversarial shadows, etc.). The
3. The authors mentioned in the limitations section that they have not evaluated this method in real-world settings, so it is hard to assess its practicality.
4. The authors' main contribution is the demonstration that diffusion policy networks are vulnerable to adversarial attacks. Whenever the input is corrupted, it affects the model's output. It is unclear how likely a policy network or robot (in case of physical attacks) is to face these perturbations in real life. It would be great if authors could focus on crafting application-specific attacks with a high chance of occurring in real life.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there a reason authors have considered PGD and patch-based attacks? Did they consider crafting realistic and novel attacks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I appreciate that the authors have clearly mentioned their work's limitations. But my biggest concern is the use of already-existing (very common) visual perturbations. I suggest the authors craft attacks that are more realistic, challenging to implement, and have a higher probability of being seen in the real world.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer's comments and valuable feedback. We are delighted that the author liked our writing and is convinced by our algorithms. We appreciate that the reviewer is convinced that diffusion-based policy networks are susceptible to adversarial attacks. Here’s our response to the reviewer’s concerns:
> W1 Although authors have demonstrated that diffusion-based policy networks are not robust and susceptible to adversarial perturbations in visual inputs, their attacks are not novel. PGD and patch-based attacks are very common. From this perspective, this work does not contribute substantially.
>
The focus of our work is to derive a suite of tools that could successfully hijack diffusion-based imitation learning and reveal that there is still a long barrier to making this algorithm safe for deployment. We have successfully reviewed this by developing fast and efficient online attacks, transferable offline attacks, and physically realizable patch attacks. We list some points in the **general response 1** about the comparison of our work with existing methods and why it is challenging.
> W2 The authors mentioned in the related work section that physical-world attacks are always based on patches. This claim is not true. Many existing physical adversarial attacks are not based on patches. (e.g., Adversarial laser, adversarial shadows, etc.).
>
Thank you for pointing this out to us. Adversarial patches are widely used in real-world attacks [3, 4, 5], especially in robotics applications [6, 7]. Future work can be done on exploring the robustness of DP against different types of realistic attacks.
Adversarial lasers and shadows are interesting forms of physical attacks. We have found two works that have used them in physical world attacks. [1] focuses on adversarial lasers, and [2], which focuses on adversarial shadows. However, we believe patch-based attacks will be more suitable for conducting physical attacks for **diffusion policy**. [1] and [2] focus on attacking **DNNs for classification**. As mentioned by [1], the effectiveness of these attacks either creates new visual cues that lure DNNs to misclassify or perform as a dominant feature of a set of classes of objects. This logic does not transfer that well to diffusion policy, which is a visual motor policy that uses images as conditions to predict continuous actions.
> W3 The authors mentioned in the limitations section that they have not evaluated this method in real-world settings, so it is hard to assess its practicality.
>
We refer the reviewer to our **general response 2** for discussion on real-world attacks.
> W4 The authors' main contribution is the demonstration that diffusion policy networks are vulnerable to adversarial attacks. Whenever the input is corrupted, it affects the model's output. It is unclear how likely a policy network or robot (in case of physical attacks) is to face these perturbations in real life. It would be great if authors could focus on crafting application-specific attacks with a high chance of occurring in real life.
>
To answer the lack of real-world experiments, we refer to the answer for **general response 2**. In this work, we focus more on exposing the security risk of diffusion policy, so we have chosen patched attacks. As answered in W3 we believe other forms of physical attacks to be less effective for attacking DP. We will discuss the limitation that we have not thought of making physical attacks more realizable and will point in the direction of making less conspicuous patches as attacks.
>Q1 Is there a reason authors have considered PGD and patch-based attacks? Did they consider crafting realistic and novel attacks?
>
Since our attack scenario is white-box, PGD is one of the best tools for gradient-based adversary construction. As we have mentioned in response to W2 and W4, we believe that physical patches are good candidates for attacking diffusion policies.
[1] R. Duan et al., “Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink,” Mar. 11, 2021, arXiv: arXiv:2103.06504. doi: 10.48550/arXiv.2103.06504.
[2] Y. Zhong, X. Liu, D. Zhai, J. Jiang, and X. Ji, “Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon,” Mar. 22, 2022, arXiv: arXiv:2203.03818. doi: 10.48550/arXiv.2203.03818.
[3] Y. Mirsky, “IPatch: a remote adversarial patch,” Cybersecurity, vol. 6, no. 1, p. 18, May 2023, doi: 10.1186/s42400-023-00145-0.
[4] T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer, “Adversarial Patch,” May 16, 2018, arXiv: arXiv:1712.09665. doi: 10.48550/arXiv.1712.09665.
[5] Y.-C.-T. Hu, J.-C. Chen, B.-H. Kung, K.-L. Hua, and D. S. Tan, “Naturalistic Physical Adversarial Patch for Object Detectors,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada: IEEE, Oct. 2021, pp. 7828–7837. doi: 10.1109/ICCV48922.2021.00775.
[6] A. Tanev, S. Pavlitskaya, J. Sigloch, A. Roennau, R. Dillmann, and J. M. Zollner, “Adversarial Black-Box Attacks on Vision-based Deep Reinforcement Learning Agents,” in 2021 IEEE International Conference on Intelligence and Safety for Robotics (ISR), Tokoname, Japan: IEEE, Mar. 2021, pp. 177–181. doi: 10.1109/ISR50024.2021.9419509.
[7] Y. Jia, C. M. Poskitt, J. Sun, and S. Chattopadhyay, “Physical Adversarial Attack on a Robotic Arm,” IEEE Robot. Autom. Lett., vol. 7, no. 4, pp. 9334–9341, Oct. 2022, doi: 10.1109/LRA.2022.3189783.
---
Rebuttal Comment 1.1:
Comment: Due to the transformations that happen in the physical world, the perturbations injected in the digital world are sometimes not generalizable in physical settings. Demonstrating the vulnerability of policy networks to adversarial threats is not a significant contribution. For this reason, I will not increase my current score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply
> Due to the transformations that happen in the physical world, the perturbations injected in the digital world are sometimes not generalizable in physical settings.
Our physical patched attacks simulates the perturbations in the real world settings (including transformations and lightenings), which can be done by using render engine. It can highly reflect the performance for real world settings.
> Demonstrating the vulnerability of policy networks to adversarial threats is not a significant contribution.
Diffusion policy is not the same as previous end to end policy network. The structure is very different, and the robustness of it under different kinds of attacks remains unexplored. It is actually not that intuitive why it is vulnerable because previous works find that non-latent diffusion model is **very robust** [a]. Here we contribute the vulnerability to the visual encoder, further work can be conducted to make it more robust by using better vision encoder.
[a] Pixel is a barrier: Diffusion Models are more Robust Than we Think | Summary: This paper studies the adversarial attack to diffusion policy. Two attack scenario settings are introduced. One is to attack the scene camera by adding imperceptible digital perturbations to the visual observation. The other is to attack the scene by adding small adversarial patches to the environment. Experiments show promising results.
Strengths: 1. This paper is easy to follow and well-organized.
2. The motivation to attack diffusion policy is interesting.
3. Two hacking settings are studied.
Weaknesses: 1. The technical novelty is marginal. The proposed framework directly applies the existing attack method ([25, 24, 52] in the Reference) to diffusion policy in two settings. This paper is more like an experimental report by applying adversarial attack methods to diffusion policy in different settings.
2. The investigation on diffusion policy attack is very shallow. Deeper analysis is needed for improving the contribution.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is there any specific challenge for applying the existing attack methods to diffusion policy?
2. Technically, what is the technical difference between attacking the conditional images of diffusion policy and those conditional images of diffusion models for other tasks, i.e., anomaly detection?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments and feedback. Below is our response to address some of the questions raised in the review. We hope that it will address some of your concerns:
> W1 The technical novelty is marginal. The proposed framework directly applies the existing attack method ([25, 24, 52] in the Reference) to diffusion policy in two settings. This paper is more like an experimental report by applying adversarial attack methods to diffusion policy in different settings.
>
Yes, previous works like [24, 25] propose attacks for LDMs by perturbing the diffused image, and [25] further point out that the mechanism of why LDM is vulnerable is because the noise are scaled up by the encoder of the diffused latents.
However, Diffusion Policy is different from LDMs since the attacked item is the condition instead of the diffused latent; as far as we know, no previous works have conducted attacks against the conditional image of diffusion models.
Also, the diffusion policy is a decision model that includes iterative interaction with the environments; attacking such a diffusion-based system remains an open field before our work.
We refer the reviewer to our **general response 1** for more details about the novelty of our work. Attacking diffusion policy is fundamentally different from attacking LDMs for T2I tasks. Our proposed DP-Attacker successfully dramatically decreases the performance of DP in simulation.
> W2 The investigation on diffusion policy attack is very shallow. Deeper analysis is needed for improving the contribution.
>
We are the first to investigate adversarial attacks against diffusion policy. We focus on providing a set of effective attack algorithms and verifying their effectiveness. We also provide some insights into why diffusion policy can be easily attacked.
> Q1 Is there any specific challenge for applying the existing attack methods to diffusion policy?
>
We have mentioned a few challenging aspects of attacking diffusion policy in **general response 1**.
> Q2 Technically, what is the technical difference between attacking the conditional images of diffusion policy and those conditional images of diffusion models for other tasks, i.e., anomaly detection?
>
This is an interesting question. We found the following paper on using diffusion models for anomaly detection with conditional images [1]. In this work, the image condition is used directly to guide the generation process by modifying the score with a mathematical formula. This differs from the diffusion policy, where the image condition is used as an input of DNN to predict the noise/score.
Nevertheless, attacking diffusion policy is a new task. DP is a fundamental algorithm for embodied AI, we are the first to evaluate the robustness of it.
[1] A. Mousakhan, T. Brox, and J. Tayyub, “Anomaly Detection with Conditioned Denoising Diffusion Models,” Dec. 03, 2023, arXiv: arXiv:2305.15956. Accessed: Aug. 05, 2024. [Online]. Available: http://arxiv.org/abs/2305.15956
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: Thank the authors for answering my questions. I still believe that this is a borderline paper. I still think that the task studied in this paper is novel while the technical novelty is limited, as it applies the existing attack techniques including diffusion attack methods [25, 24, 52] to robotic diffusion policy. I am good with the paper to be accepted while I would like keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your reply. We are happy to see that we addressed most of the questions.
For the technique novelty, we listed the challenging points of tasks in the general response. While the specific methods for online global attacks do not differ a lot from attacking DM, the methods we proposed to do online attack is never done before because we need to generate the same perturbations across different frames.
Overall, we believe our contribution is good to the community and many following works can be inspired. | Rebuttal 1:
Rebuttal: We thank the reviewers for their valuable comments and feedback on our paper. We are delighted that all the reviews find our method a novel and effective attack against diffusion policies. We are glad that the **reviewers kPb5 and 69xg** are convinced by our experimental visualizations and results. **Reviewers 7zbu, fJsu, and FStp** found our paper well-written and easy to follow.
In the following section, we would like to address some of the common questions raised by the reviewers:
> 1. Comparison of DP-Attacker with existing methods.
We are the first to present a set of algorithms (DP-Attacker) that can successfully attack diffusion policy, one of the most popular methods for imitation learning in robotics. We focus on exploring the attackability of diffusion policy and devising new adaptive attacks for this special task and special structure, which is an open question. Our method is based on previous adversarial attacks for DNNs and LDMs. However, none of them **can be directly used** to attack diffusion policy effectively since:
1. Diffusion policy is a “dynamic” policy network, whereas LDMs are more “static” models in their uses. The policy network is invoked at a high frequency, requiring fast adversary construction for online settings. Our method successfully achieved this with carefully designed loss and algorithms. We also devised offline attacks that can function across the whole policy rollout.
2. Although methods have been developed for attacking diffusion models (DMs) for image generation [24, 25, 43, 52], the problem differs from how diffusion policy is formulated. In attacking image DMs, which are mostly latent DMs, where the image generation is processed through SDEdit, the attacked image is the denoising input of the LDM network. However, in diffusion policy, the image is the conditional input of the diffusion model, and the denoising input is the robot's actions.
3. Randomness is injected during the denoising process while generating an action in diffusion policy. This might make the policy more robust to adversarial inputs.
> 2. Lack of real world experiments.
First, the digital attacks (e.g. online attacks in our settings) will not be affected by real-world settings, since the attacks are generated to be directly added to the image.
For adversarial physical patches, we try our best to make it as close as the real settings in the real world by evaluating our attack with a "physically" stuck patch in the simulation:
The patch is put into the photo-realistic simulation environment provided by robomimic and robosuite and has been shown to decrease the performance of diffusion policy successfully. Real-world data is more complex and is challenging to learn for diffusion policy. We conducted our evaluation in simulation to show the effectiveness of our attack method better. Models trained on real-world data may also be more sensitive to adversarial perturbations. Thus, we believe our proposed DP-Attacker will work in the real world as well.
For further research, we will try to apply it to the real world.
> 3. What could be the defense strategy?
While the defense of DP-Attacker is not the focus of this work, we provide some possible defenses:
- (1) purify the observation using diffusion-based purification methods
- (2) apply adversarial training to increase the robustness of the image encoder
> 4. Could you clarify the threat model?
In our work, we only consider white-box attacks on diffusion policy (DP) in which the attacker can access the model, its parameters, and the data used to train it. The attacker's goal is to decrease the performance of DP (task score or success rates). In targeted attacks, we also wish to be able to control the model’s generated trajectory. Two attacking scenarios are explored (Fig. 1 in the manuscript). In the first scenario, the hacker is allowed to modify every pixel of the image with some budget $\delta$. Afterward, the modified image is used for DP inference and rollout. We develop two types of perturbations; one is calculated online and is generated per inference. The second type is pre-generated and used throughout the rollout. In the second scenario, the attacker puts a pre-generated colored patch in the camera's view. The patch undergoes a physical process of reflection and camera imaging before being used for inference in the model.
> 5. What is the transferability of your attacks?
Originally, we used transferability to indicate that our devised offline attacks (global or patched) can function to disrupt the performance of the DP across the rollout of the policy where the input image is consistently changing. However, since the reviewers are also interested in the transferability of our attacks across models, we conducted the following experiment. We believe it is more reasonable to test the transferability of offline attacks rather than online attacks because online attacks are often very specific to the model with white-box access and will not transfer well to other networks.
We first tested the transferability of offline global attacks. We used DP-attacker to generate untargeted global offline attacks ($\delta=0.03$) on two checkpoints (CAN-MH-CNN and CAN-MH-TF). Then, we evaluated these models using the two generated adversarial perturbations, and the success rate is listed in Table 1 of the attached PDF.
| Runner Model\Attacked Model | CNN | TF | Original SR |
| --- | --- | --- | --- |
| CNN | 0.34 | 0.78 | 0.98 |
| TF | 0.32 | 0.46 | 0.92 |
We also tested the transferability of patched attacks. We used DP-Attacker to generate adversarial patches on two checkpoints (CAN-PH-CNN and CAN-PH-TF). Then, we evaluated these models using the two generated adversarial perturbations, and the success rate is listed in Table 2 of the attached PDF.
| Runner Model\Attacked Model | CNN | TF | Original SR |
| --- | --- | --- | --- |
| CNN | 0.16 | 0.54 | 0.98 |
| TF | 0.42 | 0.44 | 0.92 |
Pdf: /pdf/fb71c2360a0b719d542d6e91d9ae738415dd5257.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper analyzes the security vulnerabilities of the diffusion strategy and proposes possible attack scenarios.
A set of algorithms called DP-Attacker is proposed, which can successfully reduce the performance of the diffusion strategy in different adversarial scenarios (including online and offline attacks).
Strengths: This paper proposes a novel adversarial attack framework, specifically targeting diffusion model-based strategies. It not only analyzes the security vulnerabilities of diffusion strategies, but also successfully implements effective attacks on these strategies, demonstrating powerful attack capabilities.
The paper verifies the effectiveness of the attack algorithm through extensive experiments. These experiments not only include attacks on existing diffusion strategy models, but also cover attack scenarios under different settings, such as online and offline attacks, and provide comprehensive experimental results and analysis.
Weaknesses: 1. The method used is not specific to the diffusion model.
2. Novelty is limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is unique about this approach compared to a large number of previous black-box and white-box attacks?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the questions raised. Please see our response below. We sincerely ask the reviewer to refer to the general response for possible concerns.
> Q1: The method used is not specific to the diffusion model. Novelty is limited. What is unique about this approach compared to a large number of previous black-box and white-box attacks?
>
We refer the reviewer to **general response 1** for the novelty of our work. Our algorithm is specifically designed for diffusion policy to ensure online fast adversary generation. To achieve this, we have used a Monte Carlo approximation with the noise prediction loss rather than the end-to-end loss. | Summary: Diffusion policy is used to generate the action trajectory from a pure Gaussian noise conditioned on the input images, applied in many applications such as autonomous driving. This paper proposes white-box adversarial attacks against diffusion policy, which aim to generate a target bad action or an untargeted action by attaching global perturbation or patch-based perturbation on the observation image. To this end, they formulate the training loss for the perturbation as minimizing the distance between the generated action and the target bad action (in targeted attack) or maximizing the distance between the generated action and a sampled good solution (in untargeted attack). Empirically, they validate the effectiveness of the proposed adversarial attack on six robotic manipulation tasks.
Strengths: 1. They explored adversarial attacks in a new setting, i.e., against diffusion policy.
2. They have good visualizations of experimental results.
Weaknesses: 1. As mentioned in the limitation section, there is no intuitive defensive strategy or experimental results in real-world scenarios included in the paper.
2. The proposed attack only considers the white-box setting, i.e., the attacker requires knowledge of all parameters of the diffusion policy. Discussion on black-box setting could also be included in the limitation section.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Threat model could be clarified with more clearness in problem settings. It appears to me it is a white-box attack where the attacker requires knowledge of all parameters of the diffusion policy.
2. The novelty of the proposed adversarial attack remains unclear to me. On the one hand, there are existing works [25, 43] which propose adversarial attacks against latent diffusion models, introducing the Monte-Carlo-based adversarial loss. On the other hand, the difficulty of computing gradients due to the inherent long-denoising chain of the diffusion policy is solved by [25, 24, 52]. So, it seems that the novelty over these existing works is that this paper is modifying the training loss as the difference between actions. Clarification of the novelties compared with existing works would be appreciated.
3. Is there any other evaluation metric in existing works? It seems that there is only one evaluation metric used (i.e., the drop of the result task completion scores) in the experimental results to demonstrate the effectiveness of the attack. For me, this one single metric would be fine for the untargeted attacks; however, for targeted attacks, it might be better to show the success rate of generating the target bad actions.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: See weakesses part.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable comments. We are glad that the reviewer found our attack scenario novel and liked our visualizations. Below is our response to some of the question raised.
First, we would like to clarify the loss used in our DP-Attacker is not the distance between generated actions. Instead, we use a Monte-Carlo-based estimate with a loss between predicted noise and actual noise. In this way, we were able to significantly speed up the adversary construction process while maintaining good attack abilities. Below is our response to address the weaknesses and questions raised in the review.
> W1 As mentioned in the limitation section, there is no intuitive defensive strategy or experimental results in real-world scenarios included in the paper.
>
We refer the reviewer to sections **2** and **3** of our **general response**. These are interesting directions for future research.
> W2 The proposed attack only considers the white-box setting, i.e., the attacker requires knowledge of all parameters of the diffusion policy. Discussion on black-box setting could also be included in the limitation section.
>
We are not considering black-box attacks in this work. We will add a discussion of black-box attacks in the limitation section.
> Q1 Threat model could be clarified with more clearness in problem settings. It appears to me it is a white-box attack where the attacker requires knowledge of all parameters of the diffusion policy.
>
In **general response 4**, we have clarified the threat model. We will also incorporate this into the final paper.
> Q2 The novelty of the proposed adversarial attack remains unclear to me. On the one hand, there are existing works [25, 43] which propose adversarial attacks against latent diffusion models, introducing the Monte-Carlo-based adversarial loss. On the other hand, the difficulty of computing gradients due to the inherent long-denoising chain of the diffusion policy is solved by [25, 24, 52]. So, it seems that the novelty over these existing works is that this paper is modifying the training loss as the difference between actions. Clarification of the novelties compared with existing works would be appreciated.
>
We have a detailed subsection addressing this in the **general response 1**.
> Q3 Is there any other evaluation metric in existing works? It seems that there is only one evaluation metric used (i.e., the drop of the result task completion scores) in the experimental results to demonstrate the effectiveness of the attack. For me, this one single metric would be fine for the untargeted attacks; however, for targeted attacks, it might be better to show the success rate of generating the target bad actions.
>
The success rate is the most direct way for imitation learning to evaluate the effectiveness of a model. Although other continuous reward functions can also be used, the decrease in success rate will be enough to show whether the attacks have largely affect the system.
The second point for evaluating targeted attacks is interesting. Previously, we only evaluated the targeted attacks qualitatively (see the videos at the bottom of our video website). Here, we have added an experiment that measures the distance between the model's output actions after the attack and target actions. Diffusion policy outputs a sequence of actions for the robot to execute. We use DP-Attacker with different strengths to attack it with some target action sequences and measure the average L2 distance between the output action and targeted action over the length of the action. Using predicted action instead of actual agent position is justified by the fact that the low-level controller will execute these actions without any checking. We report the results in a line graph that shows the distance over the environmental steps during the whole rollout. All attacks have step number $N=50$ and $\alpha=\frac{2\delta}{N}$.
Please see the **newly added PDF Figure 1**. for the results.
In the first scenario, we used DP-Attacker on the PushT (CNN) task to run global online attacks. The target is set to a 2D coordinate around (323.875, 328.75). Note the side length of the PushT task action space is 1024. See the figure in the added pdf. We were able to manipulate the generated action to be within 20 units of the target coordinate with attack strength $\delta=0.06$.
In the second scenario, we used DP-Attacker on the CAN (PH CNN) task to run global online attacks. The end-effector (EE) target is set to a 3D coordinate around (0.1686, 0.1049, 1.0848). The unit is meters. To simplify the metric, we did not set a target for the EE pose or the gripper opening or closing (or else we would need some distance calculating scheme for the 7D output). With attack strength $\delta=0.06$, we could manipulate the generated action within 5 centimeters close to the target position. Both experiments show the effectiveness of our method.
Example attacked frames of different strengths have been added to the pdf as well (please open it with Adobe Acrobat Reader to view the animated frames). Please see **Figure 2~7**.
---
Rebuttal Comment 1.1:
Comment: Q1: Thanks for the clarification on the threat model.
Q2: I appreciated your clarification on the novelty compared with adversarial attacks on LDMS, especially: 1) "Diffusion policy is a “dynamic” policy network, whereas LDMs are more “static” models in their uses." 2) "the attacked image is the denoising input of the LDM network. However, in diffusion policy, the image is the conditional input of the diffusion model, and the denoising input is the robot's actions."
Q3: Thank you for incorporating the distance between the generated action sequence and the target action sequence. I recommend including these findings in the final paper. While qualitative observations confirm successful generation of the target, presenting quantitative results would enable future comparisons and enhance the paper's contribution.
---
Reply to Comment 1.1.1:
Title: Thanks for you reply
Comment: Thanks for your reply. We are happy to see that we have addressed your concerns! We will include the results during the rebuttal into the final version. | null | null | null | null |
Unveiling Causal Reasoning in Large Language Models: Reality or Mirage? | Accept (poster) | Summary: The paper is concerned with the question of whether LLM can perform causal reasoning on a human-like level by incorporating contextual information in their decision and answer process. The authors argue that many everyday causal inferences are not purely logical but take into account general knowledge and intention. Therefore, a distinction between 'shallow' (level-1) and 'human-like' (level-2) causal reasoning is proposed. Two versions of a novel "CausalProbe-2024" causal Q&A data set for benchmarking level-2 reasoning capabilities of LLM are presented. The presented benchmark is sourced from recent news sources. The authors, therefore, claim that the presented data is unlikely to be contained in the training set of the tested LLM (LLaMa 2/3, GPT-3.5 Turbo, and Claude 3).
To tackle level-2 causal reasoning, the authors propose a theory on how cause-effect pairs get instantiated into natural language sentences and further argue that a general world context affects the instantiated cause and effect nodes. To reason in this kind of setting, the authors propose a "$G^2$-Reasoner" to incorporate general knowledge and inherent goals via additional context information retrieved from a general knowledge base.
Evaluating multiple LLMs on the CausalProbe data set shows a strong deterioration in performance compared to earlier causal reasoning datasets. The proposed $G^2$-reasoner helps improve reasoning capabilities over a naive baseline approach and performs on par with Chain-of-Thought or retrieval-augmented generation approaches.
Strengths: Overall, the author's proposal of assessing the causal reasoning capabilities of LLM in context is a valid contribution and improves in realism and difficulty over existing benchmarks. The work is well embedded into existing related work on the causal reasoning capabilities of LLMs. Relations to existing causal Q&A data sets are drawn, and previous examinations of LLM causal reasoning capabilities are discussed.
The novel data set is constructed by extracting information from recent news articles of two major news sources, covering events that lie after the information cut-off of all tested LLMs (after Jan 1st, 2024). To support the thesis of querying the LLMs on unseen information, LLama models are tested via the Min-K% Prob technique on whether the models might have memorized the collected information. Compared to older datasets, CausalProbe seems to be composed of more novel information than COPA, e-CARE, or CausalNet data sets.
The authors contribute a possible explanation of how causal information is instantiated in natural language via a high-level SCM. This involves a general context, drawing information from world knowledge, and incorporating the naturally occurring diversity of expressing such relations.
The experimental setup and individual steps of the $G^2$-reasoner are described clearly. Prompt templates are provided. The approach is compared to a 'vanilla' approach, Chain of Thought (CoT), and a retrieval-augmented generation (RAG), giving the LLMs access to a general knowledge base.
Weaknesses: While I agree with the authors that benchmarking LLMs for causal relations in context is a more difficult task than pure logical causal reasoning, the paper remains vague on the particular effect that the additional context might impose in terms of the cause-effect pairs as discussed in Sec. 5 / Fig. 4.
(1) Generally, the described distinction between level-1 and -2 causal reasoning (def. 2 and 3) seems to be about how LLMs draw information from either memorization (information embedded in the model weights) or dynamically from the input text. LLMs are known to have difficulties when reasoning over non-memorized facts. In this regard, the general shortcomings of LLMs are non-surprising and have already been presented in previous works (e.g., Kiciman et al., 2024; Jin et al., 2024; Zecevic et al., 2023). In that regard, the proposed $G^2$-reasoner seems to feature no explicit mechanism to help improve on the causal reasoning of LLMs other than providing general background knowledge. No alternative prompt variations or combinations with, e.g. CoT or fine-tuning, are presented to improve results.
(2) My main concern is the quality of the collected data set itself. While the authors do clearly describe the automated process of extracting information from credible news sources, no human involvement in quality checking is mentioned. Given the more implicit nature of causal relations considered in this paper, many answers can not be directly inferred but need to be extrapolated from the texts (as intended by the authors). However, checking on the first ten entries of CausalProbe-H seems to reveal a generally poor quality of the samples. In detail, I get the impression that multiple answer options per sample can be valid. Given that only a single answer is indicated as 'correct' by the data set, I question the significance of the reported results. In detail:
* id 3: correct marked answer 2 is implicit, but so could be 3 and 4.
* id 4: 1, 2 and 4 are possible answers with partial contribution to the effect.
* id 5 - 1 and 2 are both correct.
* id 7 - 2 and 4 describe partial factors that could both be valid.
The main problem seems to stem from the unclear definition of what counts or does not count as a causal factor for a specific scenario. As a result, alternative answers --other than the one indicated by the dataset-- might contribute to the causal effect relation and, therefore, could be viewed as reasonable answer options. The authors might have either adopted an answer format capable of accepting multiple correct answers or might impose a more extensive quality checking to prevent the appearance of alternative valid answer options.
(3) The partition of samples into causalProbe-S and -H is unclear to me. The authors mention that "the highest-quality Q&A pairs are selected as the reference" (l.289) but do not describe under which criteria this selection is done.
(4) The authors mention in Appendix (C) "Full Implementation details" to have set the temperature to "1.0 for closed-source LLMs [...], and set it as 0 for open-source LLMs [...].". Non-zero temperature evaluations might introduce additional variance to the results. However, the authors do not quantify the possible variance in in their evaluation results.
Minor comments:
* Typo in the caption of table 3: "represnt".
* While I agree in general with the discussion in Sec. 4.1 about causal event sequence and sequence of appearance, it dismisses possible generalization capabilities of LLMs. Claims 1) and 2) are provided without reference or experimental evidence. The authors might want to tune down these claims.
* I would advise not captioning Fig. 2 (b) as a "sequential causal relationship" but rather as a "sequence of appearance" since --as the authors mention correctly-- word sequences do not need to imply any causal relations inherently.
* Typo "we proposes G2-Reasoner" (l.340)
Technical Quality: 3
Clarity: 3
Questions for Authors: My questions mainly concern the weaknesses mentioned above. In detail, I would like the authors to comment on the following:
I) Regarding (2): could the authors clarify the measures that were imposed to ensure the quality of the collected data samples? How do the authors view the possibility of multiple valid answers with regard to the proposed data set format and evaluation results?
II) Regarding (1) and (2), I would like to ask the authors to clarify the expected effects of the context on the causal relation. In detail, could the authors comment (e.g., in the context of epistemic reasoning / modal logic) how 'reasonable/possible' and 'correct/necessary' answers would be distinguished in their setup and which implications would follow for their data set design?
III) Regarding (3): is the partition of the samples performed by human judgment or via some automated metric? Which criteria are used to judge sample difficulty?
IV) Regarding (4): Can the authors quantify the possible randomness-induced variance for the closed-source models?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: In their paper, the authors correctly address the causal reasoning limitations of LLMs in general and their $G^2$-reasoner in detail.
The authors seem to have not performed or do not mention a human-backed review of the data set contents. The quality in terms of answer texts and evaluation results could be improved by establishing such a process.
The data set seems to be free of ethically questionable content; However, the applied filtering process was not disclosed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for constructive comments and they are valuable for improving our paper. In the following, I will address your concerns one-by-one.
> Q1. The paper remains vague on the particular effect that the additional context might impose in terms of the cause-effect pairs as discussed in Sec. 5 / Fig. 4.
A1. In my understanding, what you refer to as 'the additional context' means general knowledge. We model a textual causal reasoning task through a SCM in Fig. 4. General knowledge, i.e., variable $C$, **drives the causal relationship between two events**. Without it, the derived causal relationship may contradict the objective causal laws. Thus, general knowledge serves as a **guide** for LLMs' causal reasoning.
> Q2. Weakness (1)
A2. First, level-2 causal reasoning refers to the **genuine complex reasoning** akin to human cognition, rather than merely retrieving information from sources.
Our paper's main contribution is **exploring the boundary of LLM causal reasoning abilities and identifying the causes of their limitation by CausalProbe 2024**. Previous works did not systematically studied this. We also proposed the G$^2$-Reasoner for LLM's **textual causal reasoning**. It involves general knowledge and a goal-oriented prompt: the former serves as a guide, and the latter steers LLMs to identify correct cause-effect pairs and reach the intended goal, alleviating off-target generation.
CoT and finetuning are both effective for LLMs. While finetuning works well for specific tasks, it often leads to catastrophic forgetting, conflicting with our general-purpose goal. CoT is versatile and as a baseline in our paper. Incorporating CoT into G$^2$-Reasoner only boosts performance slightly, so we didn't combine it with G$^2$-Reasoner to avoid additional inference costs.
> Q3. Weakness (2) & Question (I)
A2. Upon review, we found that some questions might have more than one correct option if we only consider the "problem" itself. However, within the given "context", the unique answer provided by us is usually the most relevant option.
First, we discuss the works we **have done** to ensure the quality. To further improve our benchmark, we have taken **two additional steps** based on your comment. **Please refer to the *Author Rebuttal* for more details.**
> Q4. Weakness (3) & Question (III)
A4. We clarify that CausalProbe-E and -H **are not created by partioning existing data** based on a difficulty metric. Instead, they are **constructed from scratch** using different strategies but the same corpus. We presented the methods for constructing them in **Section 6.1 and Figure 7**. I am glad to introduce their differences here.
CausalProbe-E follows the format of CausalQA [1]. We provided GPT with original corpus and samples from CausalQA and asked GPT to generate Q&A data by imitating them.
The construction of CausalProbe-H is more complex. We provided GPT with an article and asked it to generate several cause-effect pairs. These pairs were classified as true (accurate information) or false (distortions). GPT then generated Q&A pairs from these cause-effect pairs. Three co-authors rated the Q&A pairs for quality, selecting the best one. This top-quality Q&A pair, along with its source article, was used as an in-context example. We then used this in-context example with a new article to prompt GPT to generate new data. Unlike CausalProbe-E, CausalProbe-H tests LLMs' ability to reason and identify erroneous causal information.
[1] Bondarenko et al. Causalqa: A benchmark for causal question answering. COLING 2022.
> Q5. Weakness (4) & Question (IV)
A5. During the experimental phase, we discovered that setting the temperature of the closed-source models to 1.0 slightly improved performance compared to 0, albeit with variance. Due to budget constraints, we did not initially repeat the experiment to obtain the standard deviation. Now, we have repeated the experiments **two more times**, and the standard deviations are shown in Table 4 in PDF.
> Q6. Minor comments
A6. We have polished our paper again to correct the remaining typos and grammar issues.
For Sec. 4.1 and the caption of Fig. 2 (b), we have modified them following your advice: 1) we have weakened our claims 1) and 2): "Based on the above discussion, there are two **possible** issues: 1) ..."; 2) we have changed "sequential causal relationship" to "sequence of appearance".
> Q7. Question (II)
A7. The context serves as a **reference** for specific causal reasoning tasks. Without constraints, causal or general reasoning can be aimless, especially for LLMs. Given only a question, LLMs might find multiple "reasonable" options. However, with the context provided, LLMs can more easily identify the "correct" option, which is the optimal answer. In our benchmark design, we provided context for each question, **distinct from previous causal Q&A benchmarks**. Figure 5 & 6 compare the performances w/ and w/o contexts, showing that context helps LLMs' causal reasoning.
---
Rebuttal 2:
Comment: Dear authors,
I highly appreciate the extensive efforts that you put in place to improve the quality of your paper. I believe that, both, the human annotations as well as the newly added multi-choice options greatly contribute in consolidating the notion of causality within the data set. Although, the reported sample rejection rate found in the study seems to be quite low compared to my (admittedly very small) sample set, I believe that the procedure is sufficient to ensure a well curated collection of items.
Apart from that, all my other questions other have been answered sufficiently. As such, I have raised my score to weak accept and recommend the acceptance of this paper.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
We sincerely appreciate your support for our paper and are particularly grateful for your invaluable comments. These comments have significantly enhanced the quality of our benchmark, making it more precise and reliable.
We have carefully incorporated our responses and clarifications into the paper and are currently conducting thorough quality control on the remaining data in CausalProbe 2024.
The opportunity to discuss our work with you has been immensely constructive, leading to substantial improvements in our research. Your time, expertise, and thoughtful feedback are deeply appreciated. Should you have any further suggestions or comments, please do not hesitate to share them with us.
Thank you once again for your invaluable contribution to the enhancement of our research.
Best regards,
Authors of Paper 8470 | Summary: The authors proposed a new causal reasoning framework, to improve the causal reasoning capacity of LLMs, with inspiration drawn from causal graph theory and human reasoning process. The work utilized "general world knowledge" as a component to take a step closer to making LLMs perform a more human-like causal reasoning process. The authors also developed a new benchmark and carried out experiments evaluating how different LLMs perform on their benchmark.
Strengths: The paper is well-written and easy to follow. The authors draw inspiration from the human causal reasoning process and provide an analysis of why LLMs cannot perform "genuine" causal reasoning, from not only a methodological perspective but also an empirical perspective.
Weaknesses: 1. The key hypothesis of this work is that LLM is capable of "level-1" reasoning, but lacks the capacity of "level-2" reasoning. Given that this work is centered around this hypothesis, a clearer definition and illustrative example of level-1 and level-2 should be provided.
2. I suggest the authors provide an example of a "random exogenous variable" in line 238. I can understand what this variable is from the causal graph framework, but it would be helpful to better clarify it in your setting.
3. Can you provide a more detailed explanation of how equation (1) plays a role in your G2 reasoner, and how can you compute this term?
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author didn't provide a separate limitations section. One limitation the authors mentioned in the paper is that they only consider causal reasoning tasks with single cause-effect pair. Another limitation is that their approach doesn't enable LLMs to achieve level-2 causal reasoning, but rather provides insights into it.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for constructive comments and they are valuable for improving our paper. In the following, I will address your concerns one-by-one.
> Q1. The key hypothesis of this work is that LLM is capable of "level-1" reasoning, but lacks the capacity of "level-2" reasoning. Given that this work is centered around this hypothesis, a clearer definition and illustrative example of level-1 and level-2 should be provided.
A1. Thank you for point out this ambiguity. Here we provide more clear definitions for two causal reasoning levels.
- **Level-1** causal reasoning refers to retrieving existing causal knowledge embedded in model parameters and the given contexts. It is usually fast and suitable for handling simple causal relationships. Current LLMs are at this level.
- **Level-2** causal reasoning mimics human cognition, allowing LLMs to use powerful reasoning mechanisms and existing knowledge to infer causal relationships, even with unfamiliar tasks. This process is slower but capable of handling unknown causal knowledge.
Note that level-2 reasoning is not always better, as it is less efficient and cost-effective. Ideally, LLMs would adaptively choose the appropriate reasoning mode based on task difficulty. However, our paper shows that LLMs lack level-2 causal reasoning capabilities, preventing this adaptability.
> Q2. I suggest the authors provide an example of a "random exogenous variable" in line 238. I can understand what this variable is from the causal graph framework, but it would be helpful to better clarify it in your setting.
A2. Of course. The formula $h(X,Y,\epsilon)=T$ represents a natural language generation process that contains cause-effect information. Here, $X$ and $Y$ represent the concepts of cause and effect, respectively, and $T$ is the textual expression of this causal relationship through the mapping $h$. The variable $\epsilon$ represents **various factors** in generating readable text from the causal concepts $X$ and $Y$, **such as language type, context, and mode of expression (e.g., active or passive voice)**. While $\epsilon$ contributes to the **diversity and flexibility of natural language**, it also complicates LLM's causal reasoning from a linguistic perspective.
For example, consider the concepts "smoking" ($X$) and "lung cancer" ($Y$). With different $\epsilon$, we can get different natural language expressions: 1) "A history of smoking is a common risk factor for lung cancer." 2) "Knowing that smoking greatly increases the risk of lung cancer, why take the risk?" Both sentences imply the same causal relationship but differ linguistically.
> Q3. Can you provide a more detailed explanation of how equation (1) plays a role in your G2 reasoner, and how can you compute this term?
A3. Certainly. We are happy to explain the role and motivation bebind Eq. (1). This equation represents the task of **inferring the most probable effect from a cause given a context**, in the language of statistics. To achieve this goal, we need the cause ($X$), the natural language expression of the causal proposition ($T$), and general knowledge ($C$). In our setup, $X$ and $T$ are known, so we need access to a complete general knowledge base ($C$). With this, LLMs can ideally reason out the correct effect, and mathematically, the total probability formula in Eq. (2) holds.
However, we emphasize that what Eq. (1) and Eq. (2) provides is the **technical motivation** from a causal inference perspective, and we **do not need to calculate them explicitly**. We use such technical motivation to design the G$^2$-Reasoner, which integrates the general knowledge base into LLMs' causal reasoning processes through RAG.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I've read it carefully and my questions have been addressed. I will keep my current score.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your support for our paper and are particularly grateful for your invaluable comments.
The three comments you raised have greatly improved the clarity and readability of our paper. We have carefully revised our paper based on them, especially regarding the understanding of SCM in the causal reasoning tasks of LLMs .
The opportunity to discuss our work with you has been immensely constructive, leading to substantial improvements in our research. Your time, expertise, and thoughtful feedback are deeply appreciated. Should you have any further suggestions or comments, please do not hesitate to share them with us. Thank you once again for your invaluable contribution to the enhancement of our research.
Best regards,
Authors of Paper 8470 | Summary: The paper investigates the causal reasoning capabilities of LLMs and argues that current LLMs are limited to shallow (level-1) causal reasoning. To support this claim, the authors introduce a new benchmark, CausalProbe-2024, which reveals that LLMs struggle with causal reasoning in fresh and unseen contexts. To address this, the authors propose G2-Reasoner, method that incorporates general knowledge and goal-oriented prompts to enhance causal reasoning capabilities. Experiments show that G2-Reasoner improves performance, particularly in novel and counterfactual scenarios.
Strengths: - The paper goes beyond simple retrieval from LLM memory causal reasoning tasks.
- They attempt to establish the sensitivity of the model for pretraining
- The authors also propose a new benchmark CausalProbe-2024 to evaluate level-2 reasoning.
- The authors identify fundamental limitation in the current architecture of LLMs as authoregressive next token prediction mechanism (which is intuitive though) for causal reasoning.
Weaknesses: - The long-term viability of this approach is uncertain. Continuous advancements in model architecture and training techniques might be required to truly enable level-2 reasoning, and the proposed method might only be a temporary solution. However I understand that this paper still makes a good progress.
- Unless I missed it, I did not find much detail on RAG since it is so knowledge-specific. It isnt always aplicable. Also the results dont show a big improvement for G2 reasoner. It then begs the question how dependent was the model the knowledge of RAG.
- paper primarily addresses simple, single cause-effect pairs which is a bit limiting
- the variances arent given
Technical Quality: 3
Clarity: 3
Questions for Authors: Just a minute comment, for better readability, I would appreciate it if the authors could an identifier(eg bold) in the Tables 2,3 just to make it easier to read.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for constructive comments and they are valuable for improving our paper. In the following, I will address your concerns one-by-one.
> Q1. The long-term viability of this approach is uncertain. Continuous advancements in model architecture and training techniques might be required to truly enable level-2 reasoning, and the proposed method might only be a temporary solution. However I understand that this paper still makes a good progress.
A1. We partially agree with this comment. In the future, more powerful model architectures, training methods, or new forms of AI may eventually achieve genuine causal reasoning even surpass humans. However, this may **take a long time**. Therefore, our current efforts in causal reasoning for LLMs are **meaningful at the current development stage**.
> Q2. Unless I missed it, I did not find much detail on RAG since it is so knowledge-specific. It isnt always aplicable. Also the results dont show a big improvement for G2 reasoner. It then begs the question how dependent was the model the knowledge of RAG.
A2. Thank you for this insightful comment. The implementation details of RAG was presented in Appendix C.
In our paper, RAG incorporates general knowledge into LLM causal reasoning. As you noted, complete and high-quality knowledge bases are crucial for RAG performance. Our reported results for G$^2$-Reasoner are based on a **very small general knowledge base (about 16 MB), yet it achieved non-marginal performance improvement**. If we use a complete one, such as Wikipedia API, the performance can be boosted a lot. However, due to resource constraints, we couldn’t repeat all experiments with Wikipedia API. Instead, we are creating a more complete offline general knowledge base and will open source it, which is helpful for LLM causal reasoning and other fields.
> Q3. Paper primarily addresses simple, single cause-effect pairs which is a bit limiting.
A3. Thank you for your insightful comment. Our paper is **the first to comprehensively explore the boundary of LLMs' causal reasoning abilities**. We created a benchmark with over 6,000 multiple-choice questions. However, our benchmark **is not simply established on single cause-effect pairs**. To pick the correct option, especially for CausalProbe-H, LLMs are also required to identify incorrect but confusing cause-effect pairs. Our benchmark construction method was deliberately designed to achieve this goal, and the details were presented in Sec. 5 "Bechmark construction" and Figure 7. Following your suggestion, we will construct more diverse and complex causal reasoning tasks for LLMs in the future.
> Q4. The variances arent given.
A4. Of course, thank you for your suggestion! Due to budget constraints at the time, we ultimately did not repeat the experiment multiple times to obtain standard deviations. Currently, we have repeated the experiments of closed-source models (GPT 3.5 Turbo and Claude 3 opus) **anthor two times** and the results together with standard deviations are shown in Table 4 in PDF.
> Q5. Just a minute comment, for better readability, I would appreciate it if the authors could an identifier(eg bold) in the Tables 2,3 just to make it easier to read.
A5. Thank you for this reminder. We have highlighted the best results under each benchmark and each LLM in **bold** in Table 2 and Table 3. In addition, we have completely polished our paper again to make it easier to read.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I would like to thank the authors for responding to my concerns. I have now raised the score to weak accept. I would appreciate it if the author would include variances in the final version.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
We sincerely appreciate your support for our paper and are particularly grateful for your invaluable comments.
We have carefully repeated our experiments and included the standard deviation into our paper during the author response phase. This well demonstrates the stability of the performance of G$^2$-Reasoner and baseline methods when the temperature of LLMs is larger than 0.
The opportunity to discuss our work with you has been immensely constructive, leading to substantial improvements in our research. Your time, expertise, and thoughtful feedback are deeply appreciated. Should you have any further suggestions or comments, please do not hesitate to share them with us. Thank you once again for your invaluable contribution to the enhancement of our research.
Best regards,
Authors of Paper 8470 | Summary: This paper studies the autoregressive mechanism of the transformer-based LLMs and their ability to reason causally and introduces a new Q&A-based benchmark named CausalProbe-2024 with around 3k Q&A pairs (believed to be unseen by the existing LLMs). The paper further introduces a reasoning strategy that considers general knowledge and the goal of performing a task called G2reasoner. The proposed G2-reasoner is based on a RAG based framework that encodes the information related to general knowledge in the form of a vector database, and the goal orientation is simulated by modifying the prompt that steers an LLM to be an intelligent causal reasoner and provides the correct causal relationship between the events and reason about them.
The paper reports the empirical evaluation results in over 3 widely used benchmarks (COPA, e-CARE, and CausalNet) along with the two versions of the proposed Causal Probe dataset.
Strengths: * The paper raises an interesting question regarding the causal reasoning in LLMs, and the research goal established by the paper with the underlying question regarding causality is timely.
* The paper’s primary strength comes from creating a newly created benchmark for facilitating the study of causal reasoning in LLMs when compared to humans. Formulating a formal causal reasoning benchmark in a natural language is an extremely challenging task, and this work will take a small step toward making the causal reasoning evaluation effective.
* The paper highlights the training data detection using MinK% Prob in Table 3 for the existing causal benchmarks/datasets, which will be helpful for future research. With evaluation and comparison with the widely used causal reasoning benchmarks, the paper highlights some findings for the open-weight and popular proprietary models.
Weaknesses: * The definitions of level 1 and level 2 are not concretely stated. Though lines 48 to 51 vaguely describe it, and the expanded version of the definitions is present in section 3, line 126 to line 135, it would be better to make them more concrete. In the current version, the definition is not very clear when it talks about complex or unknown tasks. A clear distinction between the levels would help the reader/research community to explore it further in future works.
* The results of the studied LLMs on four causal Q&A benchmarks, as shown in Table 2, highlight very little improvement of the G2-reasoner compared to the other approaches. Moreover, the motivation/inspiration taken for the G2 reasoner is generic and not causal specific (specifically the context and the RAG part) and may help improve the performance over tasks where causal reasoning is not required.
* Figure 5 and Figure 6 shows the performance improvements when provided with the context, which is generally true for all the LLM evaluations, and the observation may not be causal reasoning specific. It may be noted that the G2-reasoner approach may also be applicable to other non-causal tasks and may have a similar result in other non-causal benchmarks. The proposed strategy will be more reliable when there is a direct effect over the causal-reasoning benchmark.
Technical Quality: 2
Clarity: 3
Questions for Authors: * The assumption behind the autoregressive models made in line 159, “in a sequence, the current value is determined by past values, not related to future values,” may not be entirely correct. Specifically, highlighting “the current value is not related to future values” may be incorrect.
* Line 148 raises a very good point regarding the complexity of natural language, and there can be many different ways or sentence patterns to express the same information. However, the evaluation metrics used in the paper is Exact match (Table 2, Figure-5, Figure-6). It would be good to consider a better evaluation metric while performing the evaluations.
* The claims made in the para [line 52 to line 61] and figure 2 are not completely justified in paras [line 156 to line 180], concluding on the statement [lines 178 to 180] “Thus, the autoregression mechanism makes LLMs’ causal reasoning primarily rely on correct causal knowledge in a large number of training corpora, i.e., the level-1 causal reasoning”. It would be great if the authors could provide a more detailed justification and talk about level 2 as well. Currently, it is not very clear if the statements say that autoregressive training is sufficient for achieving level-1 causal reasoning.
* Equation 2 removes the dependency over the confounding variable using the total probability formula, where the $P_C$ is the general world knowledge base that may not always be available. Moreover, even if the general knowledge base is available, applying the total probability formula would need it to be complete, covering all the possibilities, which may not be feasible in general for natural language descriptions. I may have understood it incompletely, but it would be good if you could share some more thoughts and assumptions behind equation 2 to make it more transparent to the reader.
* The use of GPT 3.5 turbo for constructing the benchmark may add a bias in the dataset construction. It would be good to highlight such biases and talk about them in detail.
Minor Suggestions:
* The use of fancy terms like AGI would be better avoided since the definitions of these terms are unclear and still under construction.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations of the work are highlighted in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments and they are valuable for improving our paper. In the following, I will address your concerns one-by-one.
> Q1. Weakness (1)
A1. Thank you for point out this ambiguity. Here we provide more clear definitions for two causal reasoning levels.
- **Level-1** causal reasoning refers to retrieving existing causal knowledge embedded in model parameters and the given contexts. It is usually fast and suitable for handling simple causal relationships. Current LLMs are at this level.
- **Level-2** causal reasoning mimics human cognition, allowing LLMs to use powerful reasoning mechanisms and existing knowledge to infer causal relationships, even with unfamiliar tasks. This process is slower but capable of handling unknown causal knowledge.
We think that level-2 reasoning is not always better, as it is less efficient and cost-effective. Ideally, LLMs would adaptively choose the appropriate reasoning mode based on task difficulty. However, our paper shows that LLMs lack level-2 causal reasoning capabilities, preventing this adaptability.
We have added a link in lines 48-51 to the formal definitions of level-1 and -2 causal reasoning, i.e., Def. 2 and Def. 3.
> Q2. Weakness (2)
A2. We get your concern and clarify it for you. Our paper focuses on **textual causal reasoning tasks**, instead of the numerical ones like classical causal inference/discovery. There is a significant gap in addressing textual tasks with numerical causal methods. Instead, **G$^2$-Reasoner is specifically designed for textual causal reasoning of LLMs** with clear motivations:
- In Sec. 4, we showed that sequential causality differs from logical causality in natural language, a similar view also proposed by philosopher David Hume [1]. As the autoregressive nature of LLMs, they inherently learn sequential causality. To address this, we proposed a **goal-driven prompt** to steer LLMs in identifying correct cause-effect pairs and reaching the intended goal during decoding.
- In Sec. 5, we model the data generation process of a textual causal reasoning task with a SCM (Figure 4). This SCM shows that complete **general knowledge essentially drives causal relationships**. Without it, LLMs may contradict objective causal laws, rendering final reasoning conclusions meaningless.
Thus, G$^2$-Reasoner's performance relies on an general knowledge base. The reported results were obtained using a **very small knowledge base (around 16 MB), yet G$^2$-Reasoner generally achieved non-marginal improvements**. If we use a complete one, such as Wikipedia API, performance can be boosted a lot. However, due to resource constraints, we couldn't repeat all experiments with Wikipedia API. Instead, we are creating a more complete offline general knowledge base and will open source it, which is helpful for LLM causal reasoning and other fields.
In addition to LLM's causal reasoning, we also believe that these two motivations are helpful for general LLM reasoning tasks.
[1] David Hume. A treatise of human nature. Clarendon Press, 1896.
> Q3. Weakness (3)
A3. The 'context' in CausalProbe 2024 **is not a technical contribution for boosting performance but rather a contribution in terms of more reasonable evaluation benchmarks**. Previous causal Q&A benchmarks did not provide contexts. However, we have figured out that LLM's causal reasoning may be meaningless without specific contexts. Figure 5 & 6 exactly highlight this contribution. The causal insights behind G$^2$-Reasoner is discussed in Q2. We also think that G$^2$-Reasoner and necessary contexts will be applicable for non-causal tasks of LLMs.
> Q4. Question (1)
A4. Sorry, we don't fully understand your concerns. For widely-used decoder-only LLMs, the masked self-attention naturally determines that the current token is only influenced by the previous ones and the prompt. We are happy to further discuss this interesting problem with you.
> Q5. Question (2)
A5. Thank you for your insightful comment. We understand your desire for better metrics to measure text complexity. However, our benchmarks are in the form of multiple-choice questions, and the **final outputs are only option IDs**, making it difficult to introduce other metrics.
>Q6. Question (3)
A6. To reach this main conclusion, we constructed CausalProbe 2024, which **has a format and difficulty similar to previous benchmarks** but has a **completely new corpus (see Tables 1 and 3)**, which was released later than the LLMs' data cut-off time. Instead, the previous benchmarks may have been training data. It allows CausalProbe 2024 to test an LLM's **true** causal reasoning ability, and the effect of the LLM's memorized knowledge are partially exluded. Figure 1(d) shows that four LLMs perform significantly worse on CausalProbe 2024 than the previous benchmarks, indicating that autoregressive LLMs only achieve level-1 causal reasoning but not level-2.
> Q7. Question (4)
A7. Your understanding for the Eq. (2) is correct. Ideally, a complete general knowledge base $P_C$ is needed to ensure the total probability formula holds. However, obtaining an absolutely complete $P_C$ is challenging, limiting G$^2$-Reasoner's performance as discussed in Q2.
> Q8. Question (5)
A8. Thank you for your insightful feedback on the potential biases introduced by using GPT-3.5 Turbo to construct our benchmark. We discuss it here:
- **Model bias**: GPT-3.5 Turbo may inherit biases and limitations from its training data.
- **Generation bias**: The model may frequently produce certain types of questions or answers.
- **Language and cultural bias**: Using English news corpora may introduce a Western or Anglophone bias.
We will include this discussion in Sec. 5 of our paper.
> Q9. Minor Suggestions
A9. Thank you for your valuable suggestion. 'AGI' is an unclear concept and using it in a research paper is inappropriate. We used 'AGI' three times and have replaced them with more conservative terms.
---
Rebuttal 2:
Comment: Dear reviewer ZNie,
Thanks for taking the time to review our work. We have carefully considered your comments and made every effort to respond to your concerns.
If you have any further questions or require additional clarification, please kindly let us know.
Best regards,
Authors of Paper 8470
---
Rebuttal Comment 2.1:
Comment: Dear reviewer,
Thank you for your time and effort in reviewing our work. We hope our response has adequately addressed your concerns. If you have any further questions or require additional clarification, please don't hesitate to let us know.
Best regards,
Authors of Paper 8470 | Rebuttal 1:
Rebuttal: Here, we mainly present the work we **have done** and the **new efforts** added for data quality control.
> Current quality control for CausalProbe 2024.
We discuss our efforts for ensuring the benchmark's quality here, which have not been discussed in detail in our paper. We have merged them into our paper. Our efforts are threefold:
- **Preparation**: Our corpus is sourced from two famous media with high quality. We further performed an initial cleaning of the corpus using regular expressions. Subsequently, we used the Google DLP API to detect sensitive information (such as pornography, violence, advertisements, etc.) in the corpus and removed any violating content.
- **Production**: We use GPT 3.5 Turbo, one of the best LLMs at the time, to construct the benchmark from the prepared corpus. To improve its quality, we experimented with different prompt templates and adopted the best.
- **Verification**: First we used Python scripts to exclude incomplete/garbled items. Then we re-organize them to .json format for ease of reading. Finally we went through all items to find out problematic ones and excluded them.
> Newly added quality control.
We have performed **additional quality control** following reviewer ZgYU's comment. This quality control followed a crowdsourcing pipeline. We recruited 17 volunteers, all of whom hold a master’s degree or higher and are currently engaged in frontline research. Additionally, all volunteers are fluent in English. In the preliminary tests, we randomly sampled 20 questions from CausalProbe-H and asked each volunteer to answer them. We then recorded each volunteer's answer and their perceived difficulty level about this task (on a scale of 1-10, with 10 being the most difficult). The selection criteria required the perceived difficulty level to be **no more than 7** and the accuracy rate to be **no less than 80\%**. Ultimately, **13 out of the 17 volunteers** met these criteria and were selected as qualified, and the test results are shown in Table 1 in PDF.
Given the limited time available during the author response phase, we performed quality control on a subset of CausalProbe 2024. Specifically, we **randomly sampled 260 questions** from CausalProbe-H and assigned each question to 3 volunteers randomly, using the Algorithm 1 in PDF. Each volunteer received a total of 60 questions. After receiving their feedback, we treated those questions correctly answered by no less than two volunteers as high-quality data. Finally, 232 out of 260 questions were filtered out (temporarily called CausalProbe-HQ), achieving **the qualification rate of 89.2\%**. The randomly sampled data IDs and the high-quality ones among them is shown in Table 5 in PDF. Next, we use CausalProbe-HQ to evaluate four LLMs used in our paper again, whose results are shown in Table 2 in PDF. The results show that all four LLMs still perform poorly on this subset, suggesting that their failure is primarily due to **limited causal reasoning abilities rather than the errors in the benchmark**.
We will continue to perform quality control on the remaining data and eventually open-source the fully quality-controlled version.
> Newly added indefinite multi-choice version of CausalProbe 2024.
In addition, we also constructed an **indefinite multi-choice version of CausalProbe 2024**, named as **CausalProbe-M**, following reviewer ZgYU's comment. Its corpus is the same as CausalProbe 2024 and we designed a prompt template to generate it using GPT-4o mini. **We have uploaded CausalProbe-M to the anonymous link attached in our paper**. CausalProbe-M consists of 3441 Q&A data. The number of correct options is **indefinite**, ranging from 1 to 4, and the statistic is shown in Figure 1 in PDF. The ratio of query types (cause or effect) is shown in Figure 2 in PDF.
We also sampled a subset of CausalProbe-M to perform quality control like the above. The **evaluation results** on CausalProbe-M are shown in Table 3 in PDF. All four LLMs experienced a **more significant performance drop** than CausalProbe-E and -H, under the exact matching (i.e., all the correct answers are exactly picked). However, for the partial matching (i.e., missed options are allowed but incorrect options are not allowed), GPT and Claude performed relatively well, achieving near 75% and 85% accuracy rates, respectively. This suggests that **LLMs cannot fully figure out the causal information in each option**, implying their limited causal reasoning abilities from a new perspective. However, it is gratifying that **LLMs make fewer false positive errors**.
Pdf: /pdf/5cdec58d296dbfad18fb0cd1d5b43268448ac8de.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Flow Priors for Linear Inverse Problems via Iterative Corrupted Trajectory Matching | Accept (poster) | Summary: The paper investigates the application of flow matching-based generative models for high-resolution image synthesis, particularly as priors for solving inverse problems. A notable challenge addressed is the slow computation of log-likelihoods in high-dimensional contexts, necessitating backpropagation through an ODE solver. To overcome this, the authors propose an efficient iterative algorithm for approximating the maximum-a-posteriori (MAP) estimator. This method involves approximating the MAP objective through a series of "local MAP" objectives and employs Tweedie's formula for sequential gradient optimization. The proposed method's performance is validated across multiple inverse problems and various datasets.
Strengths: - The paper introduces a unique method for solving inverse problems and reconstructing a single image.
- The proposed method is supported by a solid theoretical foundation.
- The approach is effective across diverse linear inverse problems.
Weaknesses: - Insufficient Empirical Evidence: The paper lacks enough qualitative results and empirical evidence to conclusively demonstrate the superiority of the proposed method.
- Need for Additional Metrics: Metrics like FID and LPIPS scores should be included alongside PSNR and SSIM.
- Figure 2 Presentation: The data in Figure 2 should be presented in a table format for easier interpretation of quantitative performance improvements.
- Limited Dataset Testing: For natural image experiments, the algorithm was only tested on the CelebA-HQ dataset. Pretrained models for other datasets (LSUN-bedroom, LSUN-church, and AFHQ-cat) are publicly available from the authors of the rectified flow paper and should be included.
- Blurry reconstruction results: A closer look at Figure 3 (a,c) reveals that the reconstructed images are too smooth and lack high-frequency details. Compared to OT-ODE, the results are smoother and blurrier.
- Comparison with Recent Work: Apart from OT-ODE, recent work like "D-Flow: Differentiating through Flows for Controlled Generation" also addresses inverse problems and should be considered for comparison.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Does the proposed method work for very noisy corruptions with high measurement noise (e.g., sigma_y = 0.2 or more)?
- Does the proposed method work for severely ill-posed inverse problems like 8x super-resolution or inpainting with 90% of pixels missing?
- Why was the algorithm evaluated on only 100 images from the CelebA-HQ dataset? Why wasn't the entire validation dataset used?
-There are numerous qualitative results for medical images in the appendix, but for natural image datasets like CelebA-HQ, there is only one small figure (Figure 3). It is hard to draw any conclusions based on just Figure 3. Was the authors' focus primarily on medical images?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - The paper's empirical validation is limited, requiring more comprehensive testing across various datasets and additional qualitative results to strengthen the claims of superiority.
- The presentation of quantitative results could be improved by using tables for easier comparison and interpretation.
- The evaluation metrics need to be expanded to include FID and LPIPS scores to provide a more comprehensive assessment of performance.
- Blurry results
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your helpful review and provide the additional experiments which we hope address your concerns:
**More baselines: RED-Diff, $\Pi$GDM, and D-Flow** See Tab. 2 in the attached pdf above. In addition to the flow-based baselines, we have included two representative diffusion-based baselines: 1) RED-Diff [1], a variational Bayes-based method; and 2) $\Pi$GDM [2], an advanced MCMC-based method. Our method demonstrates competitive performance with both diffusion-based baselines and other approaches in terms of recovery fidelity.
Thank you for bringing D-Flow [3], recently accepted to ICML 2024, to our attention. We did not compare with D-Flow as we consider it concurrent work and it has not yet released its official implementation. Notably, D-Flow also includes [1] and [2] as diffusion-based baselines. Implementation details are included at the end of the [Author Rebuttal] above.
**Runtime comparison with D-Flow** As shown in the Table 3 in the appendix of our paper, our method requires 1.6 min for each image, OT-ODE [2] requires 2.46 min, while the concurrent work D-Flow [1], which formulates the MAP into a constrained optimization problem (Eq. 9), requires 5-10 mins for each image as documented in their Sec. 3.4. This is because each of its optimization steps requires a backpropogation through an ODE solver, as they calculate the log-likelihood fully. Our work is significantly faster due to our principled local MAP approximation.
**Highly noisy setting** See Tab. 2 of the attached pdf above. We present the results for $\sigma_y = 0.2$ along with a qualitative comparison at the end of the document. We adopted the same hyperparameter selection strategy as described in the reference paper. With a fixed step size of $\eta = 10^{-2}$, we observed that decreasing the guidance scale is beneficial in very noisy conditions. Specifically, we chose $\lambda = 10^{2}$ for super-resolution, random inpainting, and Gaussian deblurring, while $\lambda = 10^{1}$ was selected for box inpainting. Our method outperforms other flow-based baselines, demonstrating its robustness in handling very noisy scenarios.
**Severely ill-posed problems** See Tab. 1 of the attached pdf above. We consider increasing the level of ill-posedness in the compressed sensing MRI experiments, as this is a more challenging inverse problem. We set the compression rate to be $\nu=1/10$, i.e. only 10\% of the output signal has been observed, which relates to inpainting with 90\% of pixels missing you mentioned. Our method outperforms the classical recovery algorithms and other baselines in all settings, demonstrating our method's capability to handle challenging scenarios and the advantages of utilizing modern generative models as priors. Implementation details are included at the end of the [Author Rebuttal] above.
**LPIPS FID scores** They are provided in Tab. 3 of the attached pdf above. Note that in papers oriented by posterior sampling, more emphasis is put on perceptual quality metrics, such as FID and LPIPS. A large number of images (usually 1k) is thus required due to FID's Gaussian assumptions on the distribution. However, our paper focuses mainly on metrics of recovery fidelity such as PSNR and SSIM, which can be calculated by individual image pairs (the same applies to LPIPS). We find both metrics remain stable after the number of test images reach 100. Note that our method is also competitive with other baselines in terms of perceptual quality as evidenced by our LPIPS scores. The FID score calculated by 100 images is quite noisy and just for reference.
We hope our additional experiments on severely ill-posed inverse problems, in highly noisy setting, with more baselines compared and perceptual metrics reported can address some of your concerns. Finally, we want to highlight our theoretical contribution to inverse problems with flow-based models as priors, where backpropagation through an ODE solver (usually requiring 100 or more NFEs) is avoided, and asymptotic convergence is proven, as demonstrated in Theorem 1.
We kindly request a re-evaluation of the rating if some of your concerns have been resolved.
[1] A Variational Perspective on Solving Inverse Problems with Diffusion Models. Mardani, Morteza and Song, Jiaming and Kautz, Jan and Vahdat, Arash. ICLR 2024.
[2] Pseudoinverse-guided diffusion models for inverse problems. Song, Jiaming and Vahdat, Arash and Mardani, Morteza and Kautz, Jan. ICLR 2023.
[3] D-Flow: Differentiating through Flows for Controlled Generation. Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman. ICML 2024 (concurrent work)
---
Rebuttal 2:
Comment: Dear reviewer,
Thanks a lot for your time and effort. As the discussion period approaches its deadline, we would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns. Please reach out if you have additional questions or comments which we are happy to address. Thank you again.
Best, Authors
---
Rebuttal Comment 2.1:
Comment: The authors have addressed my concerns by providing empirical evidence for the requested cases. Hence, I raise my score to 5.
---
Reply to Comment 2.1.1:
Comment: We are glad to hear that your concerns have been addressed. Thank you for your time and support. | Summary: The paper proposes a flow prior under the MAP structure for solving inverse problems, a theoretical analysis is also given.
Strengths: 1. The paper is easy to follow, the presentation from concepts to methods is concise, the motivation of the proposed method on overcoming shortage of flow model is also clear and straightforward.
2. Incorporating the flow prior into MAP to solve inverse problems is interesting and worth exploring.
3. The discussion on feasibility of assumptions made in the paper, as well as hyperparameter effects are very informative to the readers, and are valuable to the research in the community.
Weaknesses: 1. Theorem 1 is trivial in terms of $N \rightarrow 0$, and the existence of constant $c(N)$. The approximation gap converges to 0 as N goes to infinity does not provide insightful guidance on the practical implementation since $N \rightarrow 0$ is infeasible (decreases efficiency) in practice. Also, the implicit representation of approximation error constant $c(N)$ does not make sense because it gives no information about how the error could be scaled with $N$, the authors are expected to provide explicit expression of $c( \cdot )$ as a function of $N$.
2. Obtaining measurements $u_t$ in the ‘corrupted trajectory’ and generating auxiliary path $s_t$ do not make too much sense to me, especially when $x_t$ is very noisy and the forward operator $A(\cdot)$ is challenging. Suppose $A$ is very ill-posed (e.g. compressed sensing with low sampling rate), then the $y_t$ in the auxiliary path is not good itself, nevertheless the measurement $u_t = A(x_t)$ obtained in the corrupted trajectory especially when $x_t$ has high noise level. The assumption on exact compliance of trajectory (between corrupted trajectory and auxiliary path) is not theoretically guaranteed as stated by the authors (line 170-171), and the strong empirical results can not support the existence of the assumption made. The good empirical results could only be valid in less challenging inverse problems, or those inverse problems that are suitable for generative models to solve, e.g. inpainting, super-resolution.
3. In the compressed sensing experiment, the sampling rate $\nu$ is relatively large (0.25, 0.5). The authors are expected to use smaller sampling rates (e.g. 0.05, 0.1), and at the same time compare it with classical recovery algorithms, developed from the seminal work by Donoho, et al [1,2] (no neural networks involved), as well as compressed sensing with other generative models (VAE, GAN, Diffusion). The current results and comparison are not convincing.
[1] Donoho, David L. "Compressed sensing." IEEE Transactions on information theory 52.4 (2006): 1289-1306.
[2] Lustig, Michael, et al. "Compressed sensing MRI." IEEE signal processing magazine 25.2 (2008): 72-82.
4. The $\lambda_t$ used in Proposition 1 as SNR can also be found in previous work [1], the authors are expected to provide some comparison or discussion with [1] although it is based on Diffusion, it uses MAP structure and solve optimization problem during the sampling process.
[1] Mardani, Morteza, et al. "A variational perspective on solving inverse problems with diffusion models." arXiv preprint arXiv:2305.04391 (2023).
5. The number of iterations $K$ affects the performance a lot on different tasks, the authors are expected to provide more explanation on the choice of $K$. Also, how are the $K$ and $\lambda$ picked in the experiments? Given that the testing performances are sensitive to these hyperparameters.
6. The baseline methods compared are limited, only 3 flow based models are picked, generative models have been widely used for image restoration tasks, the authors are expected to provide at least several other representative generative models for comparison.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for your helpful and detailed feedback, which significantly helped us improve the paper quality.
**Dependence on $N$ in Theorem 1** We appreciate your concern regarding our theory. While we agree that it could be beneficial to obtain a non-asymptotic error bound in terms of $N$, the goal of this Theorem is to provide intuition for why our *local MAP* objectives are a reasonable approximation to the global MAP problem. The asymptotic nature of the Theorem simplifies its presentation, while still providing support for why our approach works. The terms that actually govern how the error scales with $N$ are the $\hat{\mathcal{J}}_i$ terms. Obtaining non-asymptotic error bounds in $N$ is possible, but this would require further technical assumptions on the flow model to control the approximation error via Riemannian sums.
The $c(N)$ term is given by $c(N) = \sum_{i=1}^N \gamma_i c_i - \log p(y) = \sum_{i=1}^N (\frac{1}{2})^{N-i+1} m\log (\frac{i}{N} ) - \log p(y)$. This quantity converges to a constant as $N \rightarrow \infty$. We promise to include the explicit expression of $c(N)$ into Theorem 1 in the revision.
**Classical recovery algorithms: Wavelet and TV priors** Thank you for this suggestion. See Tab. 1 in the attached pdf above. We have added results for two classical recovery algorithms, Wavelet and TV priors. Based on the first two rows, the TV prior consistently surpasses the Wavelet prior. We surmise that this is because the TV prior is more effective for images with clear boundaries and homogeneous regions, such as those in the HCP T2w dataset. We find that our method outperforms the classical recovery algorithms in all settings.
Implementation details are included at the end of the [Author Rebuttal] above.
**More challenging setting: smaller compression rate $\nu=1/10$** Thank you for this suggestion. See Tab. 1 in the attached pdf above. We conduct an additional experiment using the compression ratio $\nu = 1/10$, i.e only 10\% of the output signal has been observed. Based on the two rightmost columns of Tab.1, we see that our method consistently outperforms all the other baselines in terms of PSNR and SSIM.
**More baselines: RED-Diff and $\Pi$GDM** See Tab. 2 in the attached pdf above. In addition to the flow-based baselines, we have included two representative diffusion-based baselines: 1) RED-Diff [1], a variational Bayes-based method; and 2) $\Pi$GDM [2], an advanced MCMC-based method. Our method demonstrates competitive performance with both diffusion-based baselines and other approaches in terms of recovery fidelity. Additionally, one reviewer mentioned a concurrent work, D-Flow [3], recently accepted to ICML 2024, which has not yet released its official code. Notably, D-Flow also includes [1] and [2] as diffusion-based baselines. Implementation details are included at the end of the [Author Rebuttal] above.
**Hyperparameter tuning for $\lambda, \eta, K$** In our implementation, we found that with the Adam optimizer, the choice of step size $\eta$ and guidance weight $\lambda$ remains consistent. Practically, for a new task, we recommend starting by fixing the iteration number $K$ at 1. An initial step size of $\eta=10^{-2}$ is likely to perform well, as this value was effective across all tasks, from natural images to medical applications, after a comprehensive grid search (refer to Fig 5 in the paper).
The value of $\lambda$ should then be determined by a grid search within the set {$10^2, 10^3, ..., 10^7$}. After establishing $\lambda$, we suggest incrementally increasing $K$ to observe if performance improves. If performance continues to increase, keep raising $K$ until the metrics plateau; if there is no improvement, $K=1$ is the optimal choice.
We hope that our additional experiments in more challenging setting, with classical recovery algorithms and representative diffusion-based methods compared can address some of your concerns. We kindly request a re-evaluation of the rating if some of your concerns have been resolved.
[1] A Variational Perspective on Solving Inverse Problems with Diffusion Models. Mardani, Morteza and Song, Jiaming and Kautz, Jan and Vahdat, Arash. ICLR 2024.
[2] Pseudoinverse-guided diffusion models for inverse problems. Song, Jiaming and Vahdat, Arash and Mardani, Morteza and Kautz, Jan. ICLR 2023.
[3] D-Flow: Differentiating through Flows for Controlled Generation. Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman. ICML 2024 (concurrent work)
---
Rebuttal Comment 1.1:
Title: Response to the Rebuttal
Comment: Thank the authors for the rebuttal.
I strongly recommend the authors to characterize more about the non-asymptotic case of c(N), given that the implementation is indeed not exactly the same as the assumption made in asymptotic case. The authors should explicitly clarify this in the paper, especially in the Theorem part. I have no question about the comparison with the traditional methods and the experiment settings. I appreciate the authors for their hard work on the additional experiments and clarification. After careful consideration, I keep the original rating, but increase the contribution rating and my confidence.
---
Rebuttal 2:
Comment: We greatly appreciate your confidence in our work and your thoughtful suggestions regarding our theoretical contributions. We promise to incorporate the necessary adjustments in the revision. **Before the discussion period ends, we would like to provide a last-minute highlight of the strengths of our approach, particularly in comparison to the concurrent flow-based method, D-Flow [1], as recognized by other reviewers.** As shown in the Table 3 in the appendix of our paper, our method requires 1.6 min for each image, OT-ODE requires 2.46 min, while the concurrent work D-Flow [1], which formulates the MAP into a constrained optimization problem (Eq. 9), requires 5-10 mins for each image as documented in their Sec. 3.4. This is because each of its optimization steps requires a backpropogation through an ODE solver, as they calculate the log-likelihood fully. Our work is significantly faster due to our principled local MAP approximation. We hope this runtime comparison further highlights the novelty of our method.
[1] D-Flow: Differentiating through Flows for Controlled Generation. Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman. ICML 2024 (concurrent work) | Summary: This paper addresses the challenge of solving linear inverse problems in high-resolution image synthesis using generative models based on flow matching. While these models are appealing due to their ability to compute image likelihoods directly from a learned flow, they suffer from slow log-likelihood computations that involve backpropagation through an ODE solver. This computational bottleneck can be particularly problematic for high-dimensional problems.
To overcome this issue, the authors propose a novel iterative algorithm designed to approximate the maximum-a-posteriori (MAP) estimator efficiently. The key insight is to decompose the MAP objective into multiple "local MAP" objectives, leveraging Tweedie’s formula to perform gradient steps for sequential optimization. This approach allows for a more efficient approximation of the MAP estimator.
The paper validates the proposed algorithm across various linear inverse problems, such as super-resolution, deblurring, inpainting, and compressed sensing. The results show that the new method outperforms existing techniques based on flow matching, making it a promising tool for high-resolution image synthesis and related tasks.
The authors conclude that their approach successfully addresses the computational challenges of flow matching models, offering a robust solution for incorporating flow priors in solving linear inverse problems. They also mention discussing limitations and future work in the appendix.
Strengths: - originality: the authors present an original idea to utilize the image probabilities obtained from the learned flows in Flow Matching as priors for MAP estimation in inverse problems
- quality: The authors provide a good account of Flow Matching and deep understanding of the inverse problems. Delivery of the argument is elaborated and supported with theoretical proofs as well as intuitive toy data examples and more complex tasks. Ablation studies to support the argument are provided.
- clarity: fairly clear, improvements on the flow of argument would be desired.
- significance: applications to real-world imaging problems such as medical imaging are very important, and the authors provide promising results in section 4.2, including Table 1 and Figure 4 and Appendix G.
Weaknesses: The the paper could benefit from more clear presentation, tying the proofs and experimental results to the claims introduced at the beginning a bit tighter.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your recognition of our original contributions and will work on enhancing the clarity and coherence of our presentation. In the attached pdf above, more experiments have been added to show our method's capability to handle more challenging conditions ($\nu=1/10$, Tab. 1), highly noisy settings (Tab. 2), strong performance over classical signal recovery algorithms (Tab. 1) brought by utilizing generative models as priors, and its competitiveness with representative diffusion-based methods (Tab. 2). | Summary: The paper proposes efficiently recovering MAP estimates by utilizing flow-matching priors. The main proposition in the presented method is to break down the MAP objective into a sum of N local MAP objectives, facilitating a computationally feasible approach that runs in reasonable time. The results are compared to several proposed baselines in terms of distortion (PSNR/SSIM) showing consistent improvements across two datasets and multiple tasks.
Strengths: * The paper proposes a theoretically motivated approximation of MAP estimates utilizing flow-matching priors.
* The paper is well written and overall well structured.
* The experiments entail multiple tasks and datasets, consistently showing improved performance in distortion.
Weaknesses: * The paper argues that flow-matching priors are useful due to the ability to calculate reconstruction log-likelihood, yet this information is never used and does not appear in the experiments section
* The authors focus only on distortion (and not on perceptual quality), arguing for a single (blurry) MAP estimate. Nonetheless, by this point it is well understood within the image restoration community that summarizing posteriors to a single prediction is inevitably throwing away uncertainty information which is vital for proper down-the-line decision-making in safety-critical applications. Therefore, posterior samplers (aka stochastic inverse problem solvers) are a much better solution in underdetermined inverse problems with a multitude of admissible solutions.
* Nowadays posterior samplers utilizing distilled diffusion models can produce samples with a single neural function evaluation. Compared to such methods, it is unclear what might be the advantage of the presented technique besides maybe accompanying each reconstruction with its likelihood, which is again missing in the presented results.
Technical Quality: 3
Clarity: 3
Questions for Authors: * What information is there to be gained from the likelihood computation if all the user is going to be presented with is a single MAP estimate?
* In terms of runtime, how does your method fair with other types of MAP estimates employing different generative models as priors?
* What changes will it take to apply your method to non-linear inverse problems?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors were upfront about the limitations of their method and stated these in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback and comments. We would like to address your concerns regarding our paper's motivation and contribution to the inverse problems community. We will respond to each concern below:
**The use of log-likelihood and flow prior**
**First**, we would like to note that our algorithm does use the likelihood under a flow prior. In particular, our algorithm optimizes a sequence of objectives that locally approximate the log-likelihood for each timestep $t$. In the lines 7 and 12 of Algo. 1 of the paper, the terms $\frac{1}{2}||x_t||$, $\log p(x_t)$, and the trace approximation are derived from the flow prior's log likelihood. We can obtain the gradient of $\log p(x_t)$ using Eq (11). We experimentally demonstrate that these prior terms are important in obtaining quality reconstructions in Fig. 2 by comparing our results to the results of ours without the local prior terms. Hence the flow prior and log-likelihood is heavily used in our algorithm, and are crucial to help us obtain good performance.
**Second**, with flow-based models, we are able to calculate $\log p(x)$ by the instantaneous change-of-variable formula (Eq. 8 in the paper). We choose not to present the log-likelihoods and mainly present reconstruction metrics, such as PSNR and SSIM, to properly compare our method with flow-based baselines such as D-Flow [1] and OT-ODE [2], and diffusion-based methods like RED-Diff [3] and $\Pi$GDM [4]. We fully agree that reporting log-likelihoods could help users gain more information of the reconstructed images and we promise to do so in the revision.
**On MAP Estimation and Uncertainty Quantification** Thank you for raising this issue. We acknowledge that posterior sampling methods have the advantage that they can provide multiple reconstructions to visualize uncertainty. However, most methods that we compare to, such as our baselines of [1] [2] [3] and [4], only provide one estimate for each single image and do not focus on uncertainty quantification. We would like to highlight, however, that it is possible to equip MAP with uncertainty quantification, which we hope to explore in future work. For example, given an MAP solution $\hat x_{MAP}$, one can compute a Laplace approximation $p(x_{MAP}|y) \sim N(\hat x_{MAP}, H^{-1})$ where $H$ is the Hessian matrix of $\log p(x|y)$ at $\hat x_{MAP}$. This method has its pros and cons relative to posterior sampling methods. Here, computing this Hessian can be challenging, especially for flow-based log priors, but once calculated, sampling is straightforward. Similarly, posterior sampling methods require many iterations to generate many quality samples, which can be time-consuming. We believe investigating the benefits of this approach constitutes an interesting direction for future work.
**On Distilled Models** While distilled models offer a promising way to speed up inversion with diffusion models, to our knowledge there have been few works that have successfully demonstrated their strong performance. Moreover, one-step distilled diffusion models behave similarly to push-forward generators, such as GANs. Even with distilled diffusion models, both posterior sampling and MAP estimation require many optimization steps to achieve satisfactory reconstructions, as shown in classical papers using GAN priors. See, for example, Sec 6.1.1 in CSGM [5], where they optimize the latent space of the GAN for 1000 steps to reconstruct an MNIST digit.
**Runtime Comparison** As shown in the Table 3 in the appendix of our paper, our method requires 1.6 min for each image, OT-ODE [2] requires 2.46 min, while the concurrent work D-Flow [1], which formulates the MAP into a constrained optimization problem (Eq. 9), requires 5-10 mins for each image as documented in their Sec. 3.4. This is because each of its optimization steps requires a backpropogation through an ODE solver, as they calculate the log-likelihood fully. Our work is significantly faster due to our principled local MAP approximation.
Finally, we want to highlight our theoretical contribution to inverse problems with flow-based models as priors, where backpropagation through an ODE solver (usually requiring 100 or more NFEs) is avoided, and asymptotic convergence is proven, as demonstrated in Theorem 1. Practically, our ICTM algorithm consistently achieves strong performance in distortion across multiple tasks and datasets, from natural images to medical application, as evidenced by Tabs. 1 and 2 in the attached pdf above. We kindly request a re-evaluation of the rating if some of your concerns have been resolved.
[1] D-Flow: Differentiating through Flows for Controlled Generation. Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman. ICML 2024. (concurrent work)
[2] Training-free linear image inverses via flows. Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, Brian Karrer. TMLR 2024.
[3] A Variational Perspective on Solving Inverse Problems with Diffusion Models. Mardani, Morteza and Song, Jiaming and Kautz, Jan and Vahdat, Arash. ICLR 2024.
[4] Pseudoinverse-guided diffusion models for inverse problems. Song, Jiaming and Vahdat, Arash and Mardani, Morteza and Kautz, Jan. ICLR 2023.
[5] Compressed Sensing using Generative Models. Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G. Dimakis. ICML 2017.
---
Rebuttal 2:
Comment: Dear reviewer,
Thanks a lot for your time and effort. As the discussion period approaches its deadline, we would greatly appreciate your feedback on whether our rebuttal has adequately addressed your concerns. Please reach out if you have additional questions or comments which we are happy to address. Thank you again.
Best,
Authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the clear and thorough rebuttal. I'm hesitant to change my score mainly because I do not agree with you on two points:
* First, the likelihood information, if the user is to be presented with a single reconstruction, is practically useless as ultimately what matters is the likelihood ratio between different possibilities.
* Second, the results are extra blurry. As evident in the FID numbers from the rebuttal PDF, this method is hardly competitive in perceptual quality.
I do recognize the advantage of this method over the concurrent work of D-Flow. Nonetheless, in its current form, I think it has limited applicability and therefore I maintain my score.
---
Rebuttal 3:
Comment: We greatly appreciate your time and recognition of our paper's strengths compared to the concurrent work, D-Flow. We would like to provide a last-minute clarification on the two points you mentioned, which we hope will address some of your concerns:
- **Regarding likelihood information:** We appreciate that giving the user likelihood estimates in a more systematic way could be useful, but we want to highlight that the main focus of our work is using a flow-based prior's likelihood to help obtain high-quality reconstruction estimates for inverse problems in a computationally efficient way. As compared to baselines such as D-Flow, we can do this in a computationally efficient manner based on our theoretically-motivated MAP approximation. Moreover, focusing on reconstruction enables a fair comparison with flow-based baselines such as D-Flow and OT-ODE, as well as diffusion-based methods like RED-Diff and $\Pi$GDM. Of note is that likelihood information has also not been provided in the experiments of the four baselines.
- **Perceptual quality:** We would like to highlight that our method demonstrates strong competitiveness in perceptual quality, as evidenced by the fact that our method outperforms $\Pi$GDM in terms of FID in super-resolution and inpainting (random), surpasses OT-ODE in both FID and LPIPS in inpainting (random) and inpainting (box), and exceeds RED-Diff in both FID and LPIPS across all tasks except Gaussian Deblurring. As shown in the Figure 1 in the rebuttal pdf, our method generates clear images even in the highly noisy setting and is strong in terms of PSNR and SSIM. Overall, our method achieves consistently improved performance in reconstruction while maintaining competitive perceptual quality compared to the baselines.
---
Rebuttal Comment 3.1:
Comment: Thank you for these additional clarifications. After reading the reviews of others, re-reading the paper, and looking again into the rebuttal PDF, I would like to raise my score to 5 to enhance your chances of getting accepted. Best of luck!
---
Reply to Comment 3.1.1:
Comment: We greatly appreciate the time and effort you've dedicated to our work. Thank you for your support and thought-provoking suggestions! | Rebuttal 1:
Rebuttal: Dear Reviewers,
We first thank you for your valuable feedback and appreciate the recognition of our paper's strengths, such as
- concise presentation from concepts to methods and clear motivation (sT2G),
- a solid theoretical foundation supporting the method (qjx2),
- consistently improved performance in distortion across multiple tasks and datasets (hhXk).
In the attached pdf, we provide additional experimental results that we hope address your concerns (implementation details are documented at the end):
**[Highlight: more diffusion-based baselines in Table 2]**
In addition to the flow-based baselines, we have included two representative diffusion-based baselines: 1) RED-Diff [1], a variational Bayes-based method; and 2) $\Pi$GDM [2], an advanced MCMC-based method. Our method demonstrates competitive performance with both diffusion-based baselines and other approaches in terms of recovery fidelity. Additionally, one reviewer mentioned a concurrent work, D-Flow [3], recently accepted to ICML 2024, which has not yet released its official code. Notably, D-Flow also includes [1] and [2] as diffusion-based baselines.
**Table 1(two classical recovery algorithms and a more challenging setting $\nu=1/10$ included)**: Results of compressed sensing with varying compression rate $\nu$ on the HCP T2w dataset. We have added results for two classical recovery algorithms, Wavelet and TV priors, as well as a more challenging setting $\nu=1/10$, where only 10\% of the output signal is observed. Our method outperforms the classical recovery algorithms and other baselines in all settings, demonstrating our method's capability to handle challenging scenarios and the advantages of utilizing modern generative models as priors.
**Table 2(two new baselines and a highly noisy setting $\sigma_y=0.2$ included)**: Quantitative comparison results of PSNR and SSIM on the CelebA-HQ dataset (best values highlighted in blue and second-best underlined). We have included two representative diffusion-based baselines: 1)RED-Diff [1], a variational Bayes-based method; 2)$\Pi$GDM [2], an advanced MCMC-based method. Our method is competitive with diffusion-based baselines and other approaches in terms of recovery fidelity. Additionally, we included an extremely noisy setting with
$\sigma_y=0.2$. The last row of the table shows our method's capability to handle noisy settings across different tasks.
**Table 3(FID and LPIPS scores included)**: Quantitative comparison results of FID and LPIPS on the CelebA-HQ dataset (best values highlighted in blue and second-best underlined). This table shares the same setting as Table 2. Our method is also competitive with other baselines in terms of perceptual quality as evidenced by LPIPS.
Please see our reviewer-specific feedback for more information. We kindly request a re-evaluation of the rating if some of your concerns have been resolved.
[1] A Variational Perspective on Solving Inverse Problems with Diffusion Models. Mardani, Morteza and Song, Jiaming and Kautz, Jan and Vahdat, Arash. ICLR 2024.
[2] Pseudoinverse-guided diffusion models for inverse problems. Song, Jiaming and Vahdat, Arash and Mardani, Morteza and Kautz, Jan. ICLR 2023.
[3] D-Flow: Differentiating through Flows for Controlled Generation. Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman. ICML 2024 (concurrent work)
---
[Implementation Details]
Table 1: We use the pytorch package *DeepInverse*$^1$ to implement Wavelet and TV priors as shown in Tab.1 of the pdf. For both priors, we use the default Proximal Gradient Descent algorithm and perform a grid search for regularization weight $\lambda$ in the set {$10^0, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}$} and gradient stepsize $\eta$ in {$10^0, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}$}. The maximum number of iterations is 3k, 5k, and 10k for compression rate $\nu = 1/2, 1/4,$ and $1/10$, respectively. The stopping criterion is the residual norm $\frac{||x_{t-1}-x_t||}{||x_{t-1}||} \le 1\times 10^{-5}$ and the initialization of the algorithm is the backprojected reconstruction, i.e., the pseudoinverse of $\mathcal{A}$ applied to the measurement $y$.
- For the TV prior, the objective we aim to minimize is $\min_x \frac{1}{2}||\mathcal{A}x - y||_2^2 + \lambda \||x\||{\rm tv}$. We find that the optimal combination of hyperparameters is $\lambda = 0.01, \eta = 0.1$ for all values of $\nu$.
- For the Wavelet prior, the objective we want to minimize is $\min_x \frac{1}{2}||\mathcal{A}x - y||_2^2 + \lambda ||\Psi x||_1$. We use the default level of the wavelet transform and select the “db8” Wavelet. The optimal combination of hyperparameters is
$\lambda=0.1, \eta=0.1$ for all values of $\nu$.
Table 2: We use the official repository$^2$ from Nvidia to reproduce the results of RED-Diff and $\Pi$GDM with the pretrained CelebAHQ checkpoint using the architecture of the guided diffusion repo from OpenAI$^3$.
- For RED-Diff, the optimization objective is $\min_\mu ||y - \mathcal A (\mu)||^2 + \lambda (sg(\epsilon_\theta(x_t,t) - \epsilon))^T \mu$. Following the implementation of the original paper, we use Adam optimizer with 1,000 steps for all tasks. We choose learning rate $lr=0.25, \lambda=0.25$ for super-resolution, inpainting(random) and inpainting(box) and $lr=0.5, \lambda=0.25$ for deblurring as recommended by the paper.
- For $\Pi$GDM, we follow the original paper and use 100 diffusion steps. Specifically, we use $\eta = 1.0$ which corresponds to the VE-SDE. Adaptive weights $r_t^2 = \frac{\sigma_{1-t}^2}{1+\sigma_t^2}$ are used if there is an improvement on metrics.
1. https://deepinv.github.io/deepinv/
2. https://github.com/NVlabs/RED-diff
3. https://github.com/openai/guided-diffusion
Pdf: /pdf/fb9dfa5dd199e95258cac39496bdb83a3ea1ceeb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Trajectory Diffusion for ObjectGoal Navigation | Accept (poster) | Summary: This paper tackles the object goal navigation problem, where given visual observations and a target goal object, the task is to plan a navigation path to find said object. The paper proposes a trajectory diffusion model, where the future navigation trajectory is predicted starting from random points. The diffusion model is conditioned on an a semantic map and past trajectories, and these choices are validated in an ablation study. Finally, the proposed approach is compared with previous methods on two different simulators, demonstrating its effectiveness.
Strengths: Overall, the paper is well written and presents relevant prior work as well as its contributions in a clear way that is easy to follow.
The problem under consideration is relevant for robotics, and is challenging as it considers navigation in unknown environments, utilizing only a camera as its sensor.
The paper proposes a novel approach based on diffusion models to generate trajectory points for a local policy to follow.
Ablations and comparisons with other methods demonstrate strong performance.
Weaknesses: Since a fixed 224x224 size is used for the semantic map, the effectiveness of the proposed approach is limited on large-scale environments. Either fine details are lost by representing the full map at that resolution, or distant regions are lost by representing a local neighborhood.
Experiments are only conducted in simulation. Experiments on a real robot would further improve the quality of the paper.
No error bars. Although explained in the paper that the variance is low, reporting quantitative values is more precise and complete.
Technical Quality: 3
Clarity: 4
Questions for Authors: How is the goal object represented? By text, one-hot vector, or something else?
How can the method work at all without goal information (rows 1-3 in Table 1) or without environment knowledge (i.e. no map or image in row 1)? Or does this only refer to the diffusion process and the model always takes the object and map as input? Does row 1 correspond to predicting trajectory points directly?
What happens if the model explores the wrong part of the room, i.e. when two directions are equally good at first glance? Does the model stop prematurely or does it manage to backtrack, overlapping its previous path and find the goal?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Some limitations are discussed in the appendix, such as prediction biases and optimal paths for supervision.
Additional limitations include:
* The proposed approach seems to assume known pose, which is a limiting factor if it is not available.
* The fixed map resolution limits the generalization to large-scale environments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the appreciation and suggestions for our work. We address the concerns in the following lines.
### (W1 & L2) Concerns about small map size limiting the proposed method's generalization to large-scale environments.
We evaluate navigation performance with various map sizes, as illustrated in RFig.3 of the supplementary PDF in the rebuttal. Our model is compatible with higher-resolution semantic maps as input. However, for navigation tests in the simulator, as shown in RFig.3 (for more details, please refer to Common Q3 in the Overall response), a map size of 224×224 optimally balances navigation performance and computational efficiency. Thus, for navigating simulators, we set the map to 224×224 based on performance-to-cost considerations. Nevertheless, due to its compatibility with different map sizes, our T-Diff can transfer to large-scale environments by utilizing maps with richer fine details.
---
### (W2) Real world experiments.
We provide additional evaluation results and navigation visualizations in real world environments. Please refer to the Overall Response, Common Q4.
---
### (W3) Lack of Error bars.
Compared to end-to-end learning methods, both modular approaches and our T-Diff demonstrate significantly smaller result variations due to the use of explicit semantic maps and local policies. Results indicate that after multiple experimental runs, the error bars for the end-to-end method (PIRL[1]) are 0.47% in SR and 0.56% in SPL, whereas for T-Diff, they are only 0.03% in SR and 0.09% in SPL.
We agree with the reviewer's concerns and will incorporate error bars into the results in our final version to enhance the precision and completeness of our experimental results.
---
### (Q1) Representation of target object.
The target object is represented as a one-hot vector. We will emphasize this in the revised version for better understanding.
---
### (Q2) Detailed explanation of Tab.1 in the main text.
In Tab.1 of the main text, row1 indicates that the baseline method does not utilize T-Diff but is still equipped with semantic maps, target object, and local policy for navigation.
The **visual** or **goal** conditions in the main text table merely represent different settings in the diffusion process of T-Diff, rather than completely omitting semantic maps and goals in navigation.
For clarity, we have modified this table, and the revised version is presented as RTab.3 in the supplementary PDF in the rebuttal. RTab.3 offers clearer explanations and includes an additional row (row 0), corresponding to navigation without any visual or goal information.
---
### (Q3) What happens if the agent explores wrong rooms?
During navigation, when T-Diff plans an incorrect direction, the agent may explore the wrong room, as illustrated in RFig.2 in the supplementary PDF in the rebuttal.
To mitigate the impact of such erroneous cases, we do not employ predefined, fixed strategies, such as backtracking the previous path upon entering an incorrect room. Fixed strategies could potentially trap the agent in local optima, limiting its ability to explore and adapt to newly acquired observations. Instead, we opt for a dynamic approach by controlling the planning length (i.e., trajectory length) to prevent the agent from being misguided for extended periods, and setting an appropriate update frequency for generating new trajectories based on continuously updated local maps to correct previous erroneous planning.
As shown in RFig.2, during the initial steps of navigation, T-Diff plans an incorrect direction due to limited environmental observations. However, as more environmental information is observed on the semantic map, new correct trajectories overwrite the previous erroneous ones with the update frequency. Consequently, the agent can quickly adjust its direction (see the orange circle). The visualization results demonstrate the robustness of our T-Diff to single-step prediction errors.
---
### (L1) Assumption of known pose.
The experimental setting with known pose follows previous modular methods, as pose information is necessary for constructing semantic maps.
While pose-known methods are less flexible than those requiring only RGB-D input, they exhibit greater stability in complex, large-scale environments and better generalization to real-world scenarios [2].
Moreover, in practical deployments, pose can be obtained through sensors such as odometers.
---
### Reference
[1] PIRLNav: Pretraining with Imitation and RL Finetuning for OBJECTNAV, CVPR 2023
[2] Object Goal Navigation using Goal-Oriented Semantic Exploration, NeurIPS 2020
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive rebuttal, additional experiments, and clarifications. I especially appreciate the real world experiments in RFigure 1, and the visualization in RFigure 2 showing that the agent can successfully re-plan its path when necessary.
I have read the other reviews and rebuttals and followed the discussion. Overall I retain my original high rating of the paper.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer sqUP,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024 | Summary: This paper argues that the previous object navigation algorithms generally only consider one-step decision-making, which can lead to temporal inconsistency and shortsightedness. Therefore, the authors propose using diffusion models to learn sequential decision-making. By collecting expert trajectories and then using the DiT model to learn trajectory diffusion, experimental results have demonstrated that the trajectory diffusion model significantly improves object navigation performance across multiple benchmarks.
Strengths: 1. The authors attempt to apply diffusion models to the object navigation task, and experimental results show that this approach improve navigation performance and generalization in unseen scenarios.
2. The figures in the paper are very refined, effectively explaining the differences between their sequential prediction and one-step prediction.
3. In Section 3, the authors provided a good summary of end-to-end and modular methods and their drawbacks, despite some redundancy.
Weaknesses: 1. I would cautiously suggest that authors might appropriately streamline some of the discussion in Section 3 or place too much of it in supplemental material. Sec 3 also has some overlap with what's in Sec 1, and I would cautiously suggest some adjustments.
2. I'm not very convinced about the motivation of the paper. In Line 35, The authors claim that the previous end-to-end approach suffered from temporal inconsistencies, and I suggest that the authors provide some examples to demonstrate this. And I think there seems to be no difference between the waypoint prediction among modular methods and the method in this paper: because after the waypoint prediction, a trajectory is also formed. And let's consider a sample case like this: when the policy predicts only the next step, the robot executing this step may receive new information that makes the further next step better; whereas predicting a sequence, if the robot finds a better direction before it has finished executing the sequence, then the robot will continue in the wrong direction. In such an example, it seems that only one step is predicted to perform better.
3. In Line 147, the authors claim that "end-to-end learning methods .... suffers from sample inefficiency and high training costs". Can the authors provide the number of samples and training consumption required for the trajectory diffusion model? How effective would these samples be if they were used to train methods of imitation learning?
4. In Line 148, "the generalization of their planner is constrained by location-related supervision". I think that the current modular algorithms based on LLM and VLM segmentation models in the field of zero-shot object navigation do not have the problems described by the authors and outperform many trained algorithms. Could the authors have some discussion or comparison of this like Table 2?
Technical Quality: 2
Clarity: 4
Questions for Authors: 1. In Line 6 and Line 35, what does the "temporal consistency" mean in the paper? Can the authors give an example?
2. Is the generated trajectory a coordinate or an action? In context, it appears to be an xy coordinate, so is this equivalent to some waypoints?
3. Is predicting a sequence really better than predicting a step? The information used to make decisions is all the same, and a well-trained policy can theoretically accomplish the task by simply predicting the optimal next step.
# After Rebuttal
The authors' paper and rebuttal demonstrated to me very well, the motivation and necessity of diffusion models applied in the field of navigation, and more importantly, that diffusion models significantly improve navigation performance. What the authors supplement is so convincing.
While the novelty level is still limited, the paper's analysis on diffusion models and navigation makes enough of a contribution to boost my score.
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 4
Limitations: the authors discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the insightful and valuable feedback. We address your concerns below.
### (W1) Streamlining Sec. 3 of the main text.
We appreciate the reviewer's suggestion and will adjust the content arrangement to ensure conciseness.
---
### (W2-1 & Q1) Temporal inconsistencies of end-to-end RL methods.
Temporal consistency in planning should ensure:
- Consistency in past trajectories: Planning should avoid revisiting past trajectories to prevent redundant exploration.
- Consistency in future planning: Planning for future exploration at short time intervals should maintain spatial consistency, avoiding frequent large-scale goal switching, which can hinder exploration efficiency.
However, end-to-end methods implicitly encode past information (e.g., past trajectories and plans) and predominantly rely on single-step egocentric observations for planning.
Therefore, their planning cannot avoid redundant exploration or prevent frequent goal switching, thus failing to ensure temporal consistency.
### (Q2) Is the generated trajectory a coordinate or an action?
The generated trajectory consists of a series of coordinates. The local policy selects a point on the trajectory as a waypoint.
### (W2-2 & Q2) Difference between waypoint prediction in modular methods and T-Diff.
Both methods provide a waypoint to the local policy, but they differ in several key aspects:
- **Direct goal vs. Gradual goal.** Modular methods use the absolute position of the target as supervision for training, meaning the predicted waypoint represents the target's final position, i.e., Direct goal. In contrast, T-Diff is trained with segments of the optimal trajectory to the goal, with each segment representing a progressive sub-goal towards the target position. Therefore, the waypoint predicted by T-Diff is a Gradual goal. Compared to Direct goals, Gradual goals are more reachable for the agent. Splitting one absolute position into several sub-goals enriches the state space and prevents sparse supervision, improving generalization, particularly when the target position changes significantly.
- **Temporal consistency.** The modular method predicts waypoints based on observed objects and obstacles, while T-Diff also considers historical trajectories and current agent pose. This ensures the predicted waypoints are consistent over short time intervals, preventing frequent waypoint switching.
- **Interpretability**: T-Diff outputs a series of coordinates with explicit trajectories, making the predictions interpretable.
Additionally, the similarity in usage (predicting a waypoint for the local policy) ensures T-Diff's compatibility with existing modular navigation frameworks, allowing it to benefit from modular methods like improved mapping and local policy.
### (W-3 & Q3) Is predicting a sequence really better than predicting a step?
The advantage of sequence planning lies in ensuring consistency, preventing redundant exploration, and avoiding frequent spatial jumps of predicted waypoints over short intervals.
However, as the reviewer concerns, its disadvantage is that if the sequence is too long and an error occurs, the agent cannot be corrected promptly.
Thus, balancing the pros and cons of sequence planning is crucial, which involves controlling the sequence length.
As shown in Fig. 3 (a) and (c) of the main text, the ablation study on sequence length and waypoint selection indicates that when the sequence is too long or waypoints are too far apart, the performance of sequence planning decreases due to the lack of timely correction. However, when an appropriate scale is set, performance gradually increases with the sequence length and surpasses that of single-step planning.
---
### (W3-1) Comparisons of samples and training consumption.
Please refer to the RFig.4 in the supplementary PDF in the rebuttal, where we compare end-to-end learning methods (DD-PPO, Habitat-Web), modular methods (PONI), and our T-Diff in terms of training samples and training consumption. The results show that T-Diff's training samples and consumption are similar to modular methods and significantly lower than end-to-end methods.
### (W3-2) Using collected trajectories for imitation learning.
We use collected trajectories to fine-tune SemExp[1] through imitation learning.
Results are shown below.
Both SemExp and T-Diff utilize the same inputs, i.e., semantic map and target object.
The results indicate that collected trajectories help improve performance further, but imitation learning is less effective than using DDPM for training.
We hypothesize that DDPM, through the noise-adding process, enriches the training state space, allowing the diffusion model to learn the target distribution better.
| Method | SR(%) ↑ | SPL(%) ↑ | DTS(m) ↓ |
|------------------|---------|----------|----------|
| SemExp | 71.1 | 39.6 | 1.39 |
| SemExp (finetune)| 73.3 | 41.2 | 1.18 |
| T-Diff | 79.6 | 44.9 | 1.00 |
---
### (W-4) Comparing T-Diff with zero-shot navigation methods.
We choose ESC [2] for comparison, which achieves zero-shot navigation using the VLM model (i.e., GLIP model) and LLMs.
As shown in RTab.4 in the supplementary PDF in the rebuttal, results indicate that zero-shot method performs better in cross-domain generalization than previous modular methods relying on location-related supervision, i.e., the performance difference is minimal when test data comes from different simulators.
However, due to the lack of optimal trajectory training, their success rate, especially in the SPL metric reflecting navigation efficiency, is lower than our method.
---
### Reference
[1] Object Goal Navigation using Goal-Oriented Semantic Exploration, NeurIPS 2020
[2] Esc: Exploration with soft commonsense constraints for zero-shot object navigation, ICML 2023
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their replies! I am very grateful to the authors for the additional experiments. The author's rebuttal addresses part of my questions, however there are a few questions that need to be discussed further.
**First Question:** For W2-1 & Q1, end-to-end methods encode past information, which **theoretically** avoid revisiting past trajectories to prevent redundant exploration. The authors claim that the end-to-end methods can not avoid frequent goal switching, which I think requires some evidence rather than **drawing conclusions from intuition**. So I would suggest that the authors provide some mathematical proof or statistical results to show that the end-to-end methods do indeed perform more frequent goal switching than T-Diff.
**Second Question:** As for "Direct goal vs. Gradual goal.", the author claim that "Gradual goals are more reachable for the agent." But generally among modular methods, the predicted waypoints will generally be on the established point clouds and some path planning algorithms (e.g. the Dijkstra algorithm, BFS) will be used to get the path. What is the difference between the predicted gradual goals/subgoals of T-Diff and the intermediate points obtained by the path planning algorithm? If the waypoints prediction is accurate in modular methods, I don't think the subgoal of T-Diff planning can outperform the Dijkstra algorithm. As for "Temporal consistency", I think modular method can easily mark which areas are explored and which are not (e.g. set visible areas less than three meters away as explored by depth cameras' projection and building a point cloud map). And recording past trajectories is something that can be done for the modular method. From these two points, it seems that modular method is also Temporal consistent. Moreover, the recently SOTA modular method in Zero-Shot Object Navigation exceeds the performance of T-Diff. Please see ICRA2024 "_VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation_", which was released at December 2023, several months before NeurIPS 2024 deadline. **On Gibson, they achieved _SR 84.0_ and _SPL 52.2_, while T-Diff was _SR 79.6_ and _SPL 44.9_. On MP3D, they achieved _SR 36.4_ and _SPL 17.5_, while T-Diff was _SR 39.6_ and _SPL 15.2_.** To summarize, I suggest that the author reconsider the modular method.
As reviewer DH2d said, "The main weakness of the paper is the lack of novelty in the methods/architecture; " Actually, that's also my biggest concern. It's okay to use methods from other fields in navigation, however if it's just used without explaining why it's being used or what difficulties need to be solved to use it, I think that hardly makes up a sufficiently contributing paper. The authors explain that by the fact that past methods have been unable to follow behavioral consistency, but these conclusions are drawn intuitively and thus unconvincing. And, T-Diff does not exceed the current SOTA object navigation method, which is not discussed by the authors. So I choose to keep my score at this moment. **I am very willing to continue the discussion with the authors on the two questions above**, and hope that the authors will forgive me for some of the inappropriate wording in this paragraph.
---
Reply to Comment 1.1.1:
Title: Reply to k5Ci Part-1/3
Comment: We appreciate the reviewer's timely and detailed feedback. We will address the raised concerns below.
### (Q1) More evidence for end-to-end methods performing more frequent goal switching than T-Diff
According to our statistical analysis, frequent goal switching will lead to frequent changes in agent movement trends.
We employ average trajectory curvature as a statistical metric to evaluate these changes.
Formally, the agent's trajectory in one episode $\tau=[p_0...p_t..p_T]$ during navigation, where $p_t=(l_t,\theta_t)$, $l_t$ is a 2D position coordinate, and $\theta_t$ is the agent orientation angle, the average trajectory curvature is defined as:
$\kappa=\mathbb{E}{\tau}[\sum_{t=1}^{T}\frac{|\theta-\theta_{t-1}|}{1+||l_{t}-l_{t-1}||_{2}}]$
where $\kappa$ is in units of rad/m.
We evaluate the following end-to-end learning methods and T-Diff on the Gibson test set with the same initial positions and goals, as shown in the table below:
|Methods (end-to-end)| **$\kappa$** (rad/m) | SR (%) | SPL (%) |
|-|-|-|-|
| DD-PPO[1] | 179.07 | 15.0 | 10.7 |
| EmbCLIP[2] | 121.78 | 68.1 | 39.5 |
| T-Diff | 63.96 | 79.6 | 44.9 |
The statistical results indicate that end-to-end methods exhibit greater changes in motion trends, supporting our claim that end-to-end methods perform more frequent goal switching than T-Diff.
---
### (Q2-1) Direct goal vs. Gradual goal
Following most works settings, the ObjectNav task is set in unseen scenes where the entire map is not available, and the location of the goal could not be observed at first, requiring the agent to infer potential target locations in unobserved areas.
Typically, for navigation in unseen scenes, it seems different from the reviewer said "the predicted waypoints will generally be on the established point clouds." Instead, the predicted waypoints are out of the observed area in most cases, in pursuit of achieving better efficiency in exploration. Waypoints are set to the target location when the target is observed; in this case, the waypoint is within the observed local map (i.e., established point clouds). However, in other cases, the predicted waypoint is in the unknown area, rather than "on the established point clouds." We will discuss these two cases.
(1) Target is observed
When the target is observed in the local semantic map, its location is set as the waypoint. Since the target's location is accurate and obstacles from the current position to the target are observed, point-to-point planning is done with computational methods (e.g., FMM, Dijkstra, BFS). In this case, T-Diff's local policy uses the FMM algorithm for path planning, which is equivalent to other computational methods mentioned by the reviewer.
(2) Target is not observed
This case accounts for the majority of navigation cases, approximately 88.46%. Our previous discussion on Direct goal vs. Gradual goal pertains to this case.
When the target is unobserved, the agent needs to predict a waypoint out of observed map to infer the target's possible location for further exploration. The current modular methods predict waypoints as follows:
- Corner-based method (Stubborn[3], 3D aware[4]). In these methods, the waypoint is not predicted but simply alternates between the four farthest corners of the map, leading to greedy exploration.
- Frontier-based method (PONI[5]). This method selects waypoints on the frontier (the boundary between the observed map and the unknown area). The selection is based on predicting the distance from each frontier point to the target and selecting the closest one.
- Position-based method (SemExp[6], Peanut[7]). These methods directly use the target's location as supervision to train the model to predict the coordinates points as the waypoints.
Existing waypoint prediction is supervised only by the target's absolute location (direct goal), which is sparse in unknown area. T-Diff, however, uses segments of the trajectory to the target as training supervision, effectively inserting several gradual goals in the state space of unknown regions, reducing supervision sparsity, and improving waypoint prediction. Additional evaluations for waypoint prediction accuracy, as shown in the table below, demonstrate that T-Diff predicts waypoints more accurately.
|Methods (modular)| Distance between predicted waypoint and its GT on MP3D (m) |
|-|-|
| SemExp[6] | 15.36 |
| PONI[5] | 9.84 |
| PEANUT[7] | 8.62 |
| T-Diff | 5.48 |
Furthermore, after the waypoint is predicted on unobserved area, current modular methods perform path planning using computational methods. Since obstacles in unobserved areas are unknown, path planning in unknown regions is unreliable. On the contrary, T-Diff not only predicts waypoints but also the trajectory path, which is trained with obstacles from training rooms, making it more reachable for the agent.
Therefore, in case (2), T-Diff surpasses current modular methods in waypoint accuracy and path planning reliability.
---
Reply to Comment 1.1.2:
Comment: Many thanks for the valuable feedback and detailed questions. The suggestions have been very helpful in improving our work. We hope our response has addressed the concerns and questions. If there are any further comments or questions, please let us know, and we will do our best to address them. | Summary: The authors propose a diffusion trajectory planner in the context of indoor object navigation that takes current semantic maps (could be partial) and the target object as input to produce a planned future sequential trajectory.
Evaluation is done in simulation using the habitat simulator on two datasets.
Strengths: - Comparison to an exhaustive list of other methods with improvement in performance.
Weaknesses: - The main weakness of the paper is the lack of novelty in the methods/architecture; using DDPM for trajectory generation as well as goal and observation conditioning have all been presented in the literature as cited by the authors in the related works.
- Given the previous point, although the authors propose a sequential implementation with the capability of predicting trajectories based on partial semantic maps, the work lacks demonstrations of a complete solution, including semantic map building and diffusion planning in parallel.
- An agent can either see the target or not, if it does not see the target and is in a new room it has to explore, if it does see it then the task comes down to reaching a goal. The authors claim based on results in table 1 that the sequential nature of the task is sufficient motivation for the approach chosen. However, in that same table the results seem to suggest that even without any conditioning, let alone image or goal conditioning the success rate is above 70%, while with the use of semantic maps the score does not exceed 80%. This is very odd and seems to go against the authors’ claim or suggests that the task is ill specified.
- The manuscript’s quality is below average, with multiple typos (the use of “senor” instead of “sensor” in multiple instances and grammatical errors (“They are is also single-step planners that predict waypoints”, “For the navigation planner, they formulate it as” amongst many other instances). It need a lot of polishing to make it enjoyable to read.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Why does it make sense to use optimal trajectories from an omniscient planner for training when at deployment partial semantic maps do not know where the goal is ? How does using the Fast Marching Method (FMM) to compute optimal paths to a specific targets with knowledge of precise collision maps translate to efficient exploration in unseen rooms ?
- How is this expected to generalise across objects and scenes? What modifications would it require to become generalisable?
- When does the size of the map become an obstacle in itself as rescaling to 224x224 reduces the granularity of trajectories.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: - No real world experiments to validate the approach’s practicality (goes hand in hand with the lack of semantic map+planning)
- Limited technical novelty that is not convincingly motivated.
- Poor manuscript quality
On the basis of these limitations, in my appreciation the paper does not meet the conference's quality level for acceptance.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the constructive and insightful feedback.
We address your concerns below.
### (W1 & L2) Lack of novelty.
We discuss the novelty of our work from two perspectives:
(1) Comparison with existing diffusion-based planning methods
Conditional diffusion models are widely adopted generative frameworks.
Several works leverage diffusion models for planning, as shown in table below.
|Method|Task|State|Model Structure|Target State|
|-|-|-|-|-|
|Diffuser[1]|Point-to-point path planning|Grid matrix|-|Specific state (goal location)|
|Diffusion Policy[2]|Robot motion planning|Fully observed RGB |CNN-based + Transformer-based|Specific state (image)|
|CrossWay[3]|Robot motion planning|Fully observed RGB|CNN-based|Specific state (image)|
|T-Diff|ObjectNav|Partially observed semantic map|DiT with cross-attention|Abstract state (User-specified object)|
These studies consistently utilize agent states as conditions for diffusion models to generate planning paths.
The trend is to extend diffusion models to more complex tasks (e.g., more complex condition and state). Our work follows this trend by applying conditional diffusion models to the ObjectNav task, where the current state is only partially observable, and the goal condition is more abstract. The improvements are as follows:
- Higher capacity model architecture: We adopt an improved DiT structure to build the model, enabling it to describe more complex distributions and accommodate more intricate conditions.
- Advanced condition representation: We replace previously used few single-step visual observations with a continuously updating semantic map as a condition. The semantic map integrates all historical observations into a unified geometric space, facilitating the accumulation of partial observations for planning.
(2) Comparison with existing ObjectNav works
This aspect has been thoroughly discussed in the main text.
In summary, our work is the first to utilize a diffusion model for sequence planning based on geometric memory in the context of ObjectNav task.
---
### (W2) Lack demonstrations of semantic map building and diffusion planning in parallel.
We draw attention to Sec. 4.2 (Page 5, lines 211-239). Specifically, lines 211-224: building the semantic map; lines 225-230: generating trajectory based on the semantic map and target; lines 231-239: waypoint selection and planning.
To further clarify, we re-summarize these two modules in parallel. As shown in Fig. 2 (c) of the main text, at each navigation timestamp $t$, the semantic map $m_t$ is continuously updated. Every $t_{T-diff}$ steps, our T-Diff is activated and iteratively generates trajectories over $\tau_{max}$ steps.
Then, the local policy selects waypoints at each timestamp $t$:
- When the goal is observed in $m_t$, the local policy simply selects the goal's position as the waypoint and drives the agent towards it.
- When the goal is not observed, the local policy adopts T-Diff guidance (i.e., selects points on the T-Diff generated trajectory as waypoints) for more efficient exploration. Note that the generated trajectory remains until a new trajectory is generated.
We will add these descriptions in our final version to enhance clarity.
---
### (W3-1) An agent can either see the target or not?
Except for simple cases (e.g., the agent starts close to the target), the target is invisible from the agent’s initial location.
Therefore, to effectively complete the ObjectNav task, agent needs to: 1) efficiently explore to quickly locate the target, and then 2) move to the target once it becomes visible.
Our T-diff is designed to enhance exploration efficiency by deducing paths to potential locations of the target.
### (W3-2) The result in Tab. 1 (without any conditioning) is odd.
We discuss this concern in detail in the Overall Response, please refer to Common Q5.
---
### (Q1) Why does it make sense to use optimal trajectories for training, and why can these trajectories from training rooms translate to unseen rooms to improve exploration efficiency?
The ObjectNav task typically focuses on indoor environments, where the key to efficiency is improving exploration stage.
Even though test environments are unseen during deployment, there are regularities in object layouts, e.g., sofas are often found in living rooms, surrounded by cushions and blankets.
Therefore, to improve exploration efficiency, recent works[4,6,7] have focused on learning such prior knowledge $P(p_o|m_t, o)$, i.e., inferring the target's position $p_o$ based on the map $m_t$ that records historical information and the target $o$.
Similarly, we aim to learn the prior knowledge about $P(\tau|m_t, o)$, i.e., deducing a path $\tau$ from the current position to the likely location of the goal. For example, if the agent observes a sofa, a coffee table, and a microwave, and the goal is set as a toaster, the agent should plan a path towards the microwave to locate the target faster.
Due to the regularity in the contextual layout of objects, trajectories from training rooms can be transferred to unseen rooms.
Additionally, recent studies [5] have shown that learning trajectories from human demonstrations can improve ObjectNav efficiency, further evidencing that navigation trajectories are transferable to unseen rooms.
---
Rebuttal 2:
Title: Response for DH2d - Part 2
Comment: ### (Q2-1) How is this expected to generalize across objects and scenes?
Our T-diff focuses on transferring navigation capabilities to unseen rooms, with target object categories encountered during training, which is consistent with current definition of the ObjectNav task[4, 6, 7].
T-diff achieves a 79.6% success rate in unseen rooms, and 78.2% even when unseen rooms are from different simulators, significantly outperforming single-step planners like PONI[4], which only achieves 43.9% under same conditions.
### (Q2-2) What modifications would it require to become generalizable?
T-diff's generalizability hinges on the representativeness of collected trajectory segments.
To enhance T-diff's generalizability, possible modifications include increasing the diversity of rooms and targets for trajectory collection and refining selection of trajectory segments to better represent the scene layout.
---
### (Q3) When does map size constrain performance?
We have addressed this concern in the Overall Response above. Please refer to Common Q3.
---
### (L1) Real world experiments.
Please refer to the Overall Response, Common Q4.
---
### (W4 & L3) Typos and grammatical errors.
We appreciate your detailed feedback.
We have corrected all the pointed-out grammatical errors and carefully revised manuscript to ensure its readability and quality.
### Reference
[1] Planning with Diffusion for Flexible Behavior Synthesis, ICML 2022
[2] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion, Robotics 2023
[3] Crossway Diffusion: Improving Diffusion-based Visuomotor Policy via Self-supervised Learning, ArXiv 2024
[4] PONI: potential functions for objectgoal navigation with
interaction-free learning, CVPR 2022
[5] Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale, CVPR 2022
[6] Peanut: Predicting and navigating to unseen targets, ICCV 2023
[7] Imagine Before Go: Self-Supervised Generative Map for Object Goal Navigation, CVPR 2024
---
Rebuttal 3:
Comment: Thanks to the reviewer for the concerns and suggestions for this work. We hope our response has resolved the confusion and questions in the review. If there are any questions or further comments, please let us know and we will try our best to answer them!
---
Rebuttal Comment 3.1:
Comment: I acknowledge the authors' comprehensive response, and appreciate the quality of the rebuttal and addition of valuable elements: - real world experiments
- clarification of strength of the method for exploration (which I believe warrants a clear discussion in the paper)
I have reviewed my score to represent the solidity of the work, although I maintain that novelty is borderline limited.
I have no further questions. | Summary: The paper "Trajectory Diffusion for ObjectGoal Navigation" introduces a novel method called "trajectory diffusion" for the task of ObjectGoal Navigation (ObjectNav), where an agent is required to navigate to a specified object in an unseen environment based on visual observations. The existing methods for ObjectNav often rely on single-step planning, leading to a lack of temporal consistency. The proposed approach leverages diffusion models to learn the distribution of trajectory sequences conditioned on the current observation and the goal. By training with Diffusion Denoising Probabilistic Models (DDPMs) and using optimal trajectory segments, the model can generate a coherent sequence of future trajectories for the agent. The paper demonstrates significant improvements in navigation accuracy and efficiency using the Gibson and MP3D datasets, showcasing the effectiveness of trajectory diffusion in guiding agents in real-world navigation tasks.
Strengths: 1. The paper is well-written and clearly positions itself in the literature, highlighting the use of diffusion models to generate a sequence of waypoints for ObjectGoal Navigation.
2. The method is innovative, leveraging diffusion models to enhance temporal consistency in navigation, which addresses a common issue in existing approaches.
3. The evaluations are thorough, with the use of datasets like Gibson and MP3D demonstrating the effectiveness of the approach. The visualizations provided are also helpful in understanding the results.
Weaknesses: 1. The paper does not compare its approach with other methods that might use a sequence of waypoints for navigation, leaving a gap in understanding the uniqueness or superiority of using diffusion models for this purpose.
2. The necessity of using diffusion models to predict waypoints is not well justified. There is no comparison with simpler models, such as a standard decoder that outputs a sequence of waypoints, to establish the added value of the diffusion approach.
3. The study might be better suited for a robotics-focused conference rather than NeurIPS, as it leans more towards robot learning than core machine learning innovations.
4. The differences in performance across various hyperparameter settings in the ablation study are relatively small, suggesting that the problem may not be as challenging as presented. For example, varying the length of generated trajectories from 8 to 32 only changes performance by about 3%.
5. There is no exploration of the performance with minimal trajectory lengths, such as 4 or even 1, which could provide insights into the importance of multi-step planning versus single-step planning.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there other papers in the literature that use a sequence of waypoints for ObjectGoal Navigation, and how does this method compare?
2. How necessary are diffusion models for predicting waypoints? Could simpler methods like a standard decoder suffice?
3. How would the performance change if the trajectory length were reduced further, possibly to the point of a single-step planner?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the detailed questions and address them in the following lines.
### Comparison with other methods of sequence planning for ObjectNav.
We choose Habitat-web [1] and ENTL [2] for comparison, as they also output sequence predictions for ObjectNav task. The comparison is based on the following aspects:
- **Training Data.** Both Habitat-web and ENTL rely on human demonstration trajectories for training. In contrast, T-Diff uses automatically collected trajectories, which incur lower collection costs.
- **Model Structure.** T-Diff's predictions are based on a semantic map. However, both Habitat-web and ENTL rely solely on the implicit encoding of the egocentric view from a few adjacent steps for sequence planning. The semantic map geometrically preserves all past observations, whereas the implicit encoding of a few egocentric views leads to a loss of geometric spatial information, thus limiting their performance in complex environments.
- **Navigation Performance.** The comparison of navigation performance on MP3D is shown in Table 3 of the main text (for better readability, we have copied the results below). The results indicate that T-Diff achieves higher performance compared to other sequence planning methods, especially with a more significant improvement over ENTL.
These comparisons demonstrate the superiority of using T-Diff for sequence planning.
| | SR(%) ↑ | SPL(%) ↑ |
|--------------|---------|----------|
| ENTL | 17.0 | 5.0 |
| Habitat-Web | 35.4 | 10.2 |
| T-diff (Ours)| 39.6 | 15.2 |
[1] Habitat-Web: Learning Embodied Object-Search Strategies from Human Demonstrations at Scale. CVPR 2022
[2] ENTL: Embodied Navigation Trajectory Learner. ICCV 2023
---
### Comparison with simpler model for trajectory generation.
Please refer to Common Q2 in the Overall Response, where we discuss this concern in detail.
---
### The study emphasizes robot learning over machine learning, possibly unsuitable for NeurIPS.
We focus on the ObjectNav task, which is related to robotics.
To efficiently complete this task, we learn a conditional distribution $p(\tau|m_t,o)$, i.e., inferring the trajectory $\tau$ from the current position to the likely location of the target $o$ based on partial observations $m_t$ of the current scene.
Thus, the foundation of our work lies in learning this conditional distribution, which is inherently a machine learning problem.
Moreover, NeurIPS has established a Robotics area in the main track, which is the track we submitted to.
Recently, several ObjectNav-related works have been accepted by NeurIPS.
A subset is listed below.
Therefore, we believe our work is suitable for NeurIPS.
[1] CaMP: Causal Multi-policy Planning for Interactive Navigation in Multi-room Scenes. NeurIPS2023
[2] ZSON: Zero-Shot Object-Goal Navigation using Multimodal Goal Embeddings. NeurIPS2022
[3] ProcTHOR: Large-Scale Embodied AI Using Procedural Generation. NeurlPS2022
[4] Object Goal Navigation using Goal-Oriented Semantic Exploration NeurIPS2020
---
### The difference in performance across trajectory length from 8 to 32 is relatively small (about 3%).
We analyze why the navigation performance in Fig. 3 is not significantly affected by hyperparameter variations within a certain range (e.g., trajectory length from 8 to 32). Two possible reasons are:
- **Simulator difficulty.** Experiments in Fig. 3 of the main text are conducted on Gibson, which has a smaller average area (221.33 $m^2$) compared to other datasets (e.g., MP3D, 682.68 $m^2$), making navigation easier and resulting in smaller performance fluctuations. After repeating the experiments on MP3D, we observe that different hyperparameters have a greater impact on performance, causing differences of 8.1% in SR and 3.9% in SPL.
- **Model robustness.** Another factor is that T-Diff essentially achieves its full potential when the trajectory length is set within a reasonable range (neither excessively large nor small). Consequently, the model's performance exhibits relatively low sensitivity to hyperparameter variations within a certain range.
---
### Performance with minimal trajectory lengths (such as 4 or even 1) of T-Diff.
We address this concern in the Overall Response. Please refer to Common Q1 for more details.
---
Rebuttal 2:
Comment: Thanks to the reviewer for the concerns and suggestions for this work. We hope our response has resolved the confusion and questions in the review. If there are any questions or further comments, please let us know and we will try our best to answer them!
---
Rebuttal 3:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer JCTG,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024
---
Rebuttal 4:
Title: Reply to author response
Comment: I appreciate the authors' response to my review, it is helpful for better understanding the paper. I'd like to keep my score as I remain my evaluation to the paper. | Rebuttal 1:
Rebuttal: # Overall Response
We thank all reviewers for their valuable and insightful feedback. We appreciate the supportive comments regarding our well-written presentation (ria9, JCTG, sqUP), novel approach (JCTG, sqUP), precise figures (ria9, k5Ci), sound motivation (ria9), robust performance (ria9, k5Ci, sqUP), and comprehensive evaluations (JCTG, DH2d).
Several common questions have been raised, which we address below.
### Common Q1 (@ria9 @JCTG) Using minimal trajectory lengths for T-Diff training.
As suggested by the reviewers, we add an ablation study using minimal trajectory lengths (e.g., 1 and 4) for training T-Diff, as shown in RTab.1 in the supplementary PDF in the rebuttal. Note that when the length is set to 1, the prediction of T-Diff is a single waypoint.
The results indicate that when the length is set to 1, T-Diff's performance is influenced by the choice of ground truth point (i.e., the **i-th** point from current position on the optimal trajectory). The performance with shorter lengths (1 or 4) is lower compared to longer lengths (32).
We hypothesize that predicting a sequence of trajectories, as opposed to a single point, allows each predicted point to receive contextual information from neighboring points. This helps correct and smooth out prediction errors of individual points, reducing the sensitivity of the results to single-point errors. Consequently, this ensures more stable predictions and enhances overall accuracy of trajectory prediction.
This finding further supports our motivation for using sequence planning.
---
### Common Q2 (@ria9 @JCTG) Comparison with simpler model for trajectory generation.
We consider the following simple decoder to learn the trajectory generation $P(\tau|m_t, o)$ as a comparison.
It adopts a similar Transformer-based architecture to T-Diff, with comparable parameters and the same conditional inputs.
However, unlike T-Diff, which is trained using DDPM, this competitor is trained with MSE loss. The results are shown in the table below.
The results indicate that directly learning $P(\tau|m_t, o)$ through supervised training yields poor performance. Our analysis suggests that since both $m_t$ and $\tau$ are high-dimensional, the target distribution $P(\tau|m_t, o)$ is also high-dimensional. Given the limited number of training rooms (less than 100), $P(\tau|m_t, o)$ is sparse and difficult to learn directly.
In contrast, the diffusion model (DDPM), through its diffusion and denoising process, gradually simplifies complex distribution into multiple simpler distributions.
This allows for better learning of $P(\tau|m_t, o)$ distribution.
Therefore, our experiments and analysis confirm the necessity of using diffusion models for learning trajectory generation.
||MSE ↓|SR(%) ↑|SPL(%) ↑|DTS(m) ↓|
|-|-|-|-|-|
|Simple decoder|0.6541|59.2|33.5|2.05|
|T-diff|0.0357|79.6|44.9|1.00|
> Note that MSE measures the quality of generated trajectories, while SR, SPL, and DTS indicate navigation performance.
---
##### Common Q3(@DH2d @sqUP) Concerns about map size.
We conduct experiments with T-Diff using various map sizes. Results are presented in RFig.3 of the supplementary PDF. Our findings demonstrate that T-Diff is compatible and performs well with different map sizes (except when excessively small). The results indicate that as map size increases, both information granularity and performance improve, albeit with increased computational complexity. Performance plateaus beyond a size of 300, while complexity continues to rise. To balance performance and computational cost, we opt for a 224×224 map size. However, when T-Diff requires adaptation to larger-scale scenarios, map size can be scaled up and readily integrated (addressing **sqUP**'s concerns about small map sizes limiting T-Diff's performance in large-scale environments). Additionally, we observe that performance degrades significantly when the map size falls below 150 (addressing **DH2d**'s inquiry about the point at which map size constrains performance).
---
### Common Q4 (@DH2d @sqUP) Real world experiments.
To validate T-diff in real-world environments, we provide additional evaluation results and navigation visualizations in real world, as shown in the supplementary PDF in the rebuttal.
We set up a 140 $m^2$ space, utilizing movable walls to create 3 scenes with different layouts. Each scene is divided into several rooms and furnished with common furniture and objects. The space contains a total of 35 object categories, from which we select 7 object types for our object navigation experiments. The experiments are deployed on a Locobot-wx250s. For each object type, we conduct trials from 10 different starting positions in each of the 3 scenes, and calculate the SR metrics.
As shown in RTab.2, our method achieves a higher success rate compared to the baseline (PONI[1]). Additionally, the visualizations in RFig.1 show that the generated trajectories align well with actual target positions, demonstrating T-diff's robustness in real-world scenarios.
---
### Common Q5 (@DH2d @sqUP) Misunderstanding regarding the result of row 1 in Tab. 1 in the main text.
In Tab. 1 of the main text, rows 2-4 represent different variants of T-Diff (i.e., sequence planner with geometric memory).
The comparison in row 1 uses an enhanced FBE method (proposed by PONI[1]) combined with a local policy (i.e., single-step planner with geometric memory).
The 'X' marks under 'visual' and 'goal' condition in row 1 indicate that T-Diff is not used, but this alternative still employs semantic map and goal for navigation.
To improve clarity, we revise this table, as shown in RTab.3 in the supplementary PDF in the rebuttal. RTab.3 provides clearer explanations and includes an additional row (row 0) to address the reviewer's concern about the case where navigation process does not use any image or goal.
---
### Reference
[1] PONI: potential functions for objectgoal navigation with interaction-free learning, CVPR 2022
Pdf: /pdf/21e20a27587ad7193a60f804d0d929b7994007bc.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper presents a novel, diffusion-based modular sequential planning algorithm for image-based goal-conditioned ObjectNav tasks. Concretely, the method performs supervised training to solve the following task: given a partial map constructed from past image observations and a user-specified object, predict a finite sequence of future 2D waypoints that would lead the object.
The method's main novelty is a diffusion-based conditional sequence generator. Specifically, the condition denoted $s_t$ is a learned embedding of the incomplete map and the target object. The whole training pipeline starts with data preparation, where ground truth paths are used to generate the incomplete map incrementally, serving as the diffusion model conditions.
The authors perform extensive empirical studies in the Gibson and MP3D simulated environment. The results show that the proposed method is more generalizable across domains than the existing approach. Moreover, it outperforms a collection of end-to-end and modular algorithms, some of which are trained on more data.
Strengths: - The overall presentation of the work is clean and easy to follow.
- The specific learning problem, source of data, and training method are covered in great detail.
- The framework figure is precise and helps the understanding of the approach.
- Using a conditional diffusion model to predict the waypoint sequences makes sense.
- I'm unfamiliar with relevant literature, but the presented empirical results are strong in terms of both in-domain validation results and cross-domain generalization.
Weaknesses: - I find myself scrolling back and forth to read the figures. Maybe their placements could be adjusted.
- The method uses a heuristic $k_g$ to select the one point within the sequence as guidance. While this design seems to work as is, it would be nice if some post-processing on the sequence could be used to pick the point more smartly.
- Additional studies could make the design choices more convincing:
* If only one point is eventually used as guidance, what would a model perform if trained to directly predict the $k_g$-th point?
* How would simpler conditional generation models work, for example, directly predicting the sequence or learning a latent variable model?
Technical Quality: 3
Clarity: 3
Questions for Authors: - For the hyper-parameter search experiment in Figure 3, how are the other variables chosen when experimenting with one specific parameter?
- The method seems able to plan correct sequences with a very limited initial map. Does this indicate the model learns some inductive bias on where objects are in the rooms? Are there examples where the agent makes a wrong prediction and has to back out?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors cover the limitations and broader impacts on pages 14 and 15.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks to the reviewer for the appreciation and suggestions for our work. We address the concerns in the following lines.
### (W1) Figure placement.
Thanks for the valuable suggestion. We will adjust the figure placement in the final version to enhance readability.
---
### (W2) Post-processing on the sequence for picking the waypoint more smartly.
Thanks for the valuable suggestion. The parameter $k_g$ is important for selecting the appropriate waypoint on the generated trajectory.
Previously, point selection was determined through parameter tuning experiments on the validation set.
As the reviewer suggests, we add post-processing on the generated trajectory to autonomously select the optimal waypoint.
We build an additional trajectory scoring model that takes the generated trajectory, the embedding of semantic map and target as inputs, and outputs a score vector of the same length as the input trajectory.
Based on collected trajectory segments from training rooms, we calculate the distance of each points on the trajectory from the target location, and construct a one-hot vector with the point closest to the target set to 1 as the training ground truth. The scoring model is trained with Cross-Entropy loss.
By incorporating the score model, the performance of the enhanced T-Diff is shown in the table below. Without hyper-parameter tuning, it achieves comparable or slightly better performance compared to the original version.
We will include this enhanced version in our final version to further improve the proposed T-Diff.
| | SR(%) ↑ | SPL(%) ↑ | DTS(m) ↓ |
|-------------------------|----------|----------|----------|
| T-diff | 79.6 | 44.9 | 1.00 |
| T-diff + s(τ,m_t,o) | 79.8 | 45.1 | 1.00 |
---
### (W3-1) What would a model perform if trained to directly predict the $k_{g}$-th waypoint?
We appreciate your concern. This topic is addressed in the Overall Response above, under Common Q1.
### (W3-2) How would simpler conditional generation models work?
We discuss this concern in the Overall Response, please refer to Common Q2.
---
### (Q1) How are the other variables chosen when experimenting with one specific parameter in Fig. 3?
When experimenting with one specific parameter, the values of other variables are held constant. For example, in Fig. 3(c) of the main text, which tests the impact of different $k_g$ values on navigation performance, the length of the trajectory is fixed at $k=32$ and the max de-noising step is set to $\tau=100$.
---
### (Q2-1) Does this indicate the model learns some inductive bias on where objects are in the rooms?
Yes, T-Diff learns some inductive bias regarding the object layout, which is an expected outcome.
In the ObjectNav task, agents need prior knowledge about object layout to infer the likely location of the target for more efficient navigation.
For example, if the target is a sofa, the agent should learn to first plan a trajectory to the living room to explore if the target is there.
This inductive bias provides a form of prior knowledge to improve navigation efficiency.
However, as the reviewer concerns, this inductive bias can hinder navigation efficiency if it differs from the current room layout.
Therefore, we control the length of sequence planning (i.e., trajectory length).
As shown in Fig. 3(a) of the main text, excessively long sequence planning harms navigation performance, while controlling it within a reasonable range ensures that the planning results can be updated in time based on new observations. Within this reasonable range, such inductive bias can improve navigation performance.
### (Q2-2) Are there examples where the agent makes a wrong prediction and has to back out?
We provide visualizations of instances where T-Diff makes incorrect predictions, leading to exploration in the wrong direction, as shown in RFig.2 in the supplementary PDF in the rebuttal.
The results show that in the initial steps of navigation, due to limited observations of the environment, T-Diff plans an incorrect direction.
However, since the generated trajectory is controlled to be of a reasonable length and the frequency of new trajectory generation is appropriately set, the agent can quickly change direction (see the orange circle), mitigating the impact of a single-step incorrect prediction by T-Diff.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for addressing my concerns in the original review!
The additional experiment results are convincing.
- The scoring model seems to help compared to a fixed heuristic step number.
- Predicting a trajectory is shown to outperform single-point predictions.
- The decoding process is non-trivial, and the diffusion process excels.
The additional visualizations in the rebuttal are nice to have and answer my questions.
Overall, this work is solid, and I will keep my good rating.
---
Rebuttal 2:
Title: Required Action: Please Respond to the Author Rebuttal
Comment: Dear Reviewer ria9,
As the Area Chair for NeurIPS 2024, I am writing to kindly request your attention to the authors' rebuttal for the paper you reviewed.
The authors have provided additional information and clarifications in response to the concerns raised in your initial review. Your insights and expertise are invaluable to our decision-making process, and we would greatly appreciate your assessment of whether the authors' rebuttal adequately addresses your questions or concerns.
Please review the rebuttal and provide feedback. Your continued engagement ensures a fair and thorough review process.
Thank you for your time and dedication to NeurIPS 2024.
Best regards,
Area Chair, NeurIPS 2024 | null | null | null | null | null | null |
A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding | Accept (poster) | Summary: This paper proposes a depth-range free MVS network that considers the information within the MVS framework. The initial depth is derived from the epipolar geometry of the reference image and the source images. In addition, the disparity features are enhanced by an uncertainty encoding module and a multi-view disparity attention module. The proposed method is assessed on several benchmarks compared to existing works. It shows promising results regarding the robustness to different depth ranges and the overall performance compared to RNN-based methods.
Strengths: 1. The evaluation of the proposed method shows better results than the existing RNN-based methods on several public benchmarks and it is more robust to different depth ranges.
Weaknesses: 1. The methodology lacks some clarity. For example, a) no explanation for $\vec{f}$ in equations 3 and 4, b) no explanation for $0$ and $1$ in $p^{0}$ and $p^{1}$ for the search range, c) the dot before $\varphi4$ in equation 8, d) how $H_{di}$ is initialized in equation 8, e) in equation 10, is $R^{0,i}$ the relative rotation between two images, or generated by the intersection angle $\theta^{0,1}$, f) no explanation for $t_{c}$ and $t_{f}$ in equation 11. Readers in the same research field might get the actual meaning without indication, but the broader audiences need to be considered.
2. There are problems with writing. For example, a) wrong reference for DispMVS, b) table 2 is indexed before tables 3 and 4 but it is mentioned after them in the main text, 3) missing reference of a section in the last paragraph in section 1.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. In Figure 1, when the depth range is clipped within the actual depth range, e.g., the first column, how can you still get the depth for the foreground (425mm -- 552.5mm) and background (807.5mm -- 935mm) points? It seems that the proposed method yields the same results for all configurations as the four figures look the same.
2. When sampling M points to construct the cost volume, what is the interval between the points and how it is decided? Is it adaptive during the iterations?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. It is suggested that the authors refine the writing of the methodology section and make clarifications regarding W1 and Q2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > no explanation for $\overrightarrow{f}$ in equations 3 and 4.
Similar to DispMVS, $\overrightarrow{f}$ is a 2D flow vector along the epipolar line that provides flow in the x dimension $\overrightarrow{f} _{xr\to xs }(p_r)$ and y dimension $\overrightarrow{f} _{yr\to ys }(p_r)$.
> no explanation for 0 and 1 in $p_0$ and $p_1$ for the search range.
In Sec 3.2 we define the search range as $[p_{d_0,d_i>0}^0, p_{d_0,d_i>0}^1 ]$, where 0 and 1 represent the left end and right end of the range.
> the dot before $\varphi_4$ in equation 8.
We will remove the redundant dot before $\varphi_4$ in Eq. 8.
> how $Hd_i$ is initialized in equation 8.
$Hd_i$ is randomly initialized and learned via training.
> in equation 10, is $R^{0,i}$ the relative rotation between two images, or generated by the intersection angle $\theta ^{0,i}$.
As mentioned in paper, the calculation of relative pose distance is exactly the same as in [12], where $R^{0,i}$ is the relative rotation between two images.
> no explanation for $t_c$ and $t_f$ in equation 11.
Similar to DispMVS, $t_c$ represents the iterations at the coarse stage, and $t_f$ represents the iterations at the fine stage.
> wrong reference for DispMVS
We will correct the citation accordingly.
> table 2 is indexed before tables 3 and 4 but it is mentioned after them in the main text.
This issue is due to the LaTeX formatting, and we will make the necessary modifications.
> missing reference of a section in the last paragraph in section 1.
Thank you for your suggestion. We will make the necessary modifications.
> In Figure 1, when the depth range is clipped within the actual depth range, how can you still get the depth for the foreground (425mm -- 552.5mm) and background (807.5mm -- 935mm) points?
Since we have completely removed the depth range prior, the output results are not affected by any errors in the depth range.
In contrast, IterMVS samples based on the depth range, which prevents it from obtaining the correct depth for foreground (425mm - 552.5mm) and background (807.5mm - 935mm) points.
DispMVS's initialization also rely on the depth range. During each iteration, it uses the depth range to apply a depth
normalization and filter out outliers for stability, so an underestimated depth range can significantly affect its performance.
> What is the interval between the sampling points and how it is decided? Is it adaptive during the iterations?
Similar to DispMVS, we perform 2D sampling, where we sample
$M$ points around current position $p_i$ along the epipolar line at each scale with a distance of one pixel. As noted on L.169, by constructing a 4-layer pyramid feature using average pooling, uniform pixel sampling at different levels allows for a larger receptive field. The sampling interval in 2D is fixed.
---
Rebuttal Comment 1.1:
Comment: Thanks for the responses and clarification. | Summary: This paper proposes a depth-range-free Multi-View Stereo (MVS) method, which iteratively updates the depth using a GRU-based approach. To eliminate the dependency on depth priors, the paper improves the depth initialization method of DispMVS. To fully utilize multi-frame information, the paper encodes the observation information between multiple source frames and the reference frame into features and proposes a Multi-view Disparity Attention module for multi-frame information fusion. To enhance the preservation and utilization of geometric information, the paper introduces 3D pose embedding, uncertainty estimation, and disparity hidden states. The paper has been tested on the DTU and T&T datasets, and the experimental results show that the method exhibits robust depth estimation results in the absence of depth range priors.
Strengths: -The motivation for the model design is clear, and the experimental results to some extent reflect the effectiveness of the model.
-The experimental results demonstrate that the method can achieve good predictive results without relying on depth priors.
Weaknesses: -Poor writing quality: The consistency of the symbols in the paper is poor, such as the left side of Eq. 5 should be V_i(p0); The Fd_i in Fig. 2, Eq. 8, and F^d_i in Sec 3.4, the H_i in Fig. 2 and Hd_i in Eq. 8; the use of symbols is not standardized, such as the representation of matrices and vectors, the authors are advised to carefully check the entire text; In Eq. 8, there is an extra dot multiplication symbol before phi_4.
-Missing method description: For the final GRU update part, the paper lacks a complete description. My current guess is that the authors use Fd_i and context features as input information to update the hidden state not mentioned in the paper, and decode the residual from this hidden state.
-Incorrect key citation: There is an error in the citation of one of the main comparative methods, DispMVS[5], and the authors are advised to carefully check the reference list. "Rethinking the multi-view stereo from the perspective of rendering-based augmentation." -> "Rethinking Disparity: A Depth Range Free Multi-View Stereo Based on Disparity"
-The application of cross-attention is strange: The cross-attention in this paper is only calculated at the same pixel positions in different frames, and the paper emphasizes that there are few continuous smooth trajectories between different frames in MVS, making this design very illogical. The authors are expected to provide a detailed explanation.
-Missing comparisons: CER-MVS [Ⅰ], as an early GRU-based MVS work, should be included in the comparison. Other works such as MVSFormer series[Ⅱ, Ⅲ], GeoMVSNet [Ⅳ] should not be ignored either.
-Incomplete ablation experiments: There is a lack of separate ablation experiments on uncertainty and disparity feature hidden states, and the existing ablation experiments couple the two together. There is a lack of ablation experiments on the MDA module.
[Ⅰ] Ma, Zeyu, Zachary Teed, and Jia Deng. "Multiview stereo with cascaded epipolar raft." ECCV 2022.
[Ⅱ] Cao, Chenjie, Xinlin Ren, and Yanwei Fu. "MVSFormer: Multi-view stereo by learning robust image features and temperature-based depth." arXiv preprint arXiv:2208.02541 (2022).
[Ⅲ] Cao, Chenjie, Xinlin Ren, and Yanwei Fu. "MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo." ICLR 2024.
[Ⅳ] Zhang, et al. "Geomvsnet: Learning multi-view stereo with geometry perception." CVPR 2023.
Technical Quality: 2
Clarity: 1
Questions for Authors: -The number of Fd_i encoded by the authors is related to the number of source frames. From Fig. 3, it seems that during the GRU update process, they are directly concatenated, which will cause the channel number to be related to the number of source frames. So, if the model uses 5 frames in training and then the number of frames cannot be changed during testing, is that correct?
-From the numerical results and method description, the paper has completely abandoned the use of depth priors now. But if the depth prior is known and the prior is more compact than the depth range calculated in Sec 3.2, can using the depth prior improve the performance of this method?
-As a GRU-type method, how does the accuracy of the estimated results change with the number of iterations?
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: -Limited performance: From the experimental results, the method in this paper is not outstanding on the benchmark, and the gap with the state-of-the-art methods is quite obvious, even only compared with GRU-based methods (such as CER-MVS).
-Limited novelty: The method in this paper is more like a combination of existing modules.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > the left side of Eq. 5 should be $V_i(p0)$; The $Fd_i$ in Fig. 2, Eq. 8, and $F^d_i$ in Sec 3.4, the $H_i$ in Fig. 2 and $Hd_i$ in Eq. 8.
Thanks for your reminder. I will standardize the notation for cost volume as $V_i(p0)$, epipolar disparity features as $Fd_i$, and disparity hidden State as $Hd_i$ to eliminate the gap between the formulas and the figures.
> the use of symbols is not standardized, such as the representation of matrices and vectors.
We will recheck the entire document for correct notation usage, including matrix and vector parts, and make the necessary corrections.
> In Eq. 8, there is an extra dot multiplication symbol before $\varphi_4$.
Additionally, we will remove the redundant dot multiplication symbol before $\varphi_4$ in Eq. 8.
> For the final GRU update part, the paper lacks a complete description.
We follow the structure of DispMVS, iteratively updating the epipolar flow for each source image. In each iteration, the input to the update operator includes the hidden state, the disparity feature $Fd_i$ output from MDA module, the current epipolar flow, and the context feature of the reference image. The output of the update operator includes a new hidden state, an increment to the disparity flow, and the weight.
We obtain the depth from the disparity flow and utilize a weighted sum for the depth in a multi-view situation. After fusion, the depth is converted back to disparity flow to perform the next iteration.
> There is an error in the citation of DispMVS[5]
Thank you very much for your reminder. We will correct the citation accordingly.
> Cross-attention is only calculated at the same pixel positions in different frames, making this design very illogical.
It was misunderstood that the attention is not on the sample pixels' image features in different frames but on the disparity features encoded from the cost volume. This cost volume is obtained by back-projecting the sampled depths into the source images. Therefore, the features among multiple frames are associated through the sampled points' depths. Additionally, we enhance the implicit disparity relationships among multi-view frames by encoding pose embedding, which introduces multi-view relative pose information and geometric information between specific sampled points.
> Missing comparisons: CER-MVS, MVSFormer series, GeoMVSNet. \& Limited performance
Thank you for your suggestion. We include CER-MVS, GeoMVSNet, and MVSFormer++ in our comparative experiments. However, these methods all require the depth prior, whereas our algorithm is designed to operate in a depth-range-free setting.
In paper, we mainly focus on experiments related to Depth Range in Sec. 4.5. And our main comparison is with depth-range-free methods like DispMVS, which reduce dependence on depth priors through network design.
For methods that rely on depth range priors, whether based on GRU or Transformer, they may exhibit better performance with accurate depth priors. However, their performance significantly degrades when there is an error in the depth prior.
Due to the length limitations, the detailed explanation and experiments for this part are placed in the Author Rebuttal.
> There is a lack of separate ablation experiments on uncertainty and disparity feature hidden states.
Due to the length limitations of the rebuttal, the detailed explanation and experiments for this part are placed in the Author Rebuttal.
> During the GRU update process, the channel number to be related to the number of source frames, and the number of frames cannot be changed during testing?
This is not the case. In Fig. 3, "concat" refers to concatenating the disparity features output from self-attention and cross-attention. After the MDA module, similar to DispMVS, the disparity feature corresponding to each source image is fed to GRU updating module individually. However, since we have already performed extensive global information interaction, the epipolar flows obtained by multi-view situation will be interconnected.
> If the depth prior is known and the prior is more compact than the depth range calculated in Sec 3.2, can improve the performance?
Due to the length limitations of the rebuttal, the detailed explanation and experiments for this part are placed in the Author Rebuttal.
> As a GRU-type method, how does the accuracy of the estimated results change with the number of iterations?
As shown in Fig. 3, the depth error decreases progressively with each iteration.
The vertical axis represents the depth error, and the horizontal axis represents the number of iterations. Iterations 0-7 correspond to the coarse stage, while iterations 8-9 correspond to the fine stage.
Fig. 3 shows the depth maps on DTU, in which we can see that the depth map recovers from coarse to fine.
> Limited novelty: The method in this paper is more like a combination of existing modules.
Driven by practical application needs, we creatively proposed applying transformer operations to the disparity features constructed after 2D sampling to remove the dependency on depth range. Additionally, we were the first to address the issue of depth mismatches among different source images during 2D sampling. By designing uncertainty and pose embedding, we endowed the features with geometric relationships, making multi-frame consistent estimation more efficient and accurate.
---
Rebuttal Comment 1.1:
Title: About the limited novelty
Comment: I appreciate for the authors' response. Now, I think I understand this work. It is heavily based on DispMVS, but there is a main improvement.
1. Replace the vanilla feature correlation (named SIMI) in DispMVS with Disparity Feature Encoding combining Multi-view Disparity Attention
Besides, there are several less important improvement
1. The variance-based uncertainty, which replaces the sum weight in original DispMVS and is also used in disparity feature encoding.
2. The initialization method for each flow map
It shows effectiveness of the proposed modules, although being with more compuration costs.
I still have concerns about the model performance (especially when being compared with CERMVS, which is also a GRU based method), and therefore I decide to keep my rating
---
Rebuttal 2:
Title: Separate ablation experiments
Comment: We sincerely apologize for not noticing the failure in transferring the ablation experiments due to a system refresh.
> There is a lack of separate ablation experiments on uncertainty and disparity feature hidden states.
We add the separate ablation experiments. Compared to the performance without the MDA module, this demonstrates the effectiveness of the uncertainty and disparity feature hidden state update modules.
| | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :--------------------------------- | :-------------: | :-------------: | :-------------: |
| No Disparity Feature Hidden States | 0.370 | 0.354 | 0.362 |
| No Uncertainty | **0.336** | 0.378 | 0.357 |
| Ours | 0.338 | **0.331** | **0.335** |
---
Rebuttal 3:
Title: About the model performance
Comment: Thanks for your feedback. Regarding the performance comparison test of CER-MVS with respect to depth gt priors, we have included this in the Author Rebuttal. We hope this response can address your concerns.
---
Rebuttal 4:
Title: Response to Reviewer
Comment: Dear Reviewer:
We are pleased that our response addressed your questions. We notice that your rating is still borderline reject, and we sincerely want to know if there are any remaining concerns about this work.
---
Rebuttal Comment 4.1:
Title: Response to Reviewer
Comment: Thanks very much for your recognition of our work. | Summary: - The author proposes a depth-range-free multi-view stereo framework that simultaneously takes into account all the source images.
- The author has specially designed a 3D pose embedding to better encode specific geometric information.
- The Multi-View Stereo method proposed in the paper achieves more robust depth estimation by cascading Disparity Feature Encoding, Multi-view Disparity Attention, and GRU Updating, and has demonstrated good performance across multiple datasets.
Strengths: - The depth-range has always been an issue affecting the robustness of Multi-View Stereo algorithms. Although previous methods have attempted to address it, they are still imperfect. The method proposed by the author exhibits technical novelty.
- The iterative updating approach proposed by the author can simultaneously take into account the information of all source images.
- The method presented in the article has shown good performance on the dataset's benchmark.
Weaknesses: - The visual results of the method proposed in the article exhibit many floater artifacts.
- The overall description of the network architecture needs further refinement. The writing need to be improved. The current introduction is not well-balanced in detail, and there are several typographical errors with symbols.
- The use of symbols like $F^d_i$ is inconsistent in terms of superscripts and subscripts, making it difficult to understand and appearing unprofessional in typesetting. This is not the only symbol with usage issues; multiple symbols have similar problems. It is recommended to provide a comprehensive explanation of the symbols in the supplementary material and to revise the symbols used in the figures and text.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Are the training results of the epipolar disparity flows truly meaningful? Are there any visual results to demonstrate this?
- How is $D_0$ constructed? During construction, were all source images initialized with the reference image? Was $D_0$ calculated as an average of initial values obtained from each source view? The explanation could be clearer.
- How is the depth updated through the GRU? The section 3.5 is not clearly explained.
- The ablation study is not sufficient:
- What are the different initializations of depth, such as random or all zeros?
- How is the convergence?
- In Figure 2, what is the relationship between the $e_i$ output from the GRU and the $e_i$ after fusion? How is $D_i$ obtained from $e_i$, and how is $U_i$ combined?
- As a method that also uses GRU for updates, there is a lack of reference to and discussion of "Multiview Stereo with Cascaded Epipolar RAFT."
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The author has already addressed the limitations of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The visual results of the method proposed in the article exhibit many floater artifacts.
Thanks for your reminder. Artifacts are generated during the depth fusion step. For point cloud fusion, we directly sampled the pcd method from DispMVS. This method selects multiple relevant depth views for each image to perform back-projection. After threshold filtering, the back-projected depth is weighted and combined with the current depth, which can lead to the generation of artifacts.
> The overall description of the network architecture needs further refinement. The writing need to be improved. The current introduction is not well-balanced in detail, and there are several typographical errors with symbols.
We will thoroughly review the entire manuscript to complete the details of the network design and correct any inconsistencies between the formulas and symbols in the figures.
> The use of symbols like $F_i^d$ is inconsistent in terms of superscripts and subscripts. It is recommended to provide a comprehensive explanation of the symbols in the supplementary material and to revise the symbols used in the figures and text.
Thanks for your reminder. I will standardize the notation for epipolar disparity features as $Fd_i$. I will carefully align the symbols in the formulas and images throughout the entire manuscript.
> Are the training results of the epipolar disparity flows truly meaningful? Are there any visual results to demonstrate this?
To consider the information interaction of multiple source images during 2D sampling, it is necessary to train the flow. As shown in Fig. 1, after the MDA and GRU modules, the flow obtained from different source images is consistent in terms of details. By incorporating geometric information, although the flow magnitudes on different source images vary, the representation of edges and other details is unified.
> How is $D_0$ constructed? During construction, were all source images initialized with the reference image?
On L.151, we describe that $D_0$ is obtained by selecting the midpoint of the sampling range. On L.153, we first initialize $D_0$ for all source images and then take the average as the initial depth obtained from each source view.
Thank you for your suggestion, we will make this description clearer.
> How is the depth updated through the GRU?
We follow the structure of DispMVS, iteratively updating the epipolar flow for each source image. We obtain the depth from the disparity flow and utilize a weighted sum for the depth in a multi-view situation. After fusion, the depth is converted back to disparity flow to perform the next iteration.
> The use of symbols like $F_i^d$ is inconsistent in terms of superscripts and subscripts, symbols' problems.
Thanks for your reminder. We will standardize the notation for cost volume as $V_i(p0)$, epipolar disparity features as $Fd_i$, and disparity hidden State as $Hd_i$ to eliminate the gap between the formulas and the figures.
We will recheck the entire document for correct notation usage, including matrix and vector parts, and align the symbols in the formulas and images.
> What are the different initializations of depth, such as random or all zeros? How is the convergence?
We add the comparison with different initializations of depth. However, due to the unknown depth range, it is not feasible to design a random sampling range for 3D sampling, which is impractical. Additionally, when the initial depth is set to 0, significant noise can occur during feature warping. Therefore, we designed three sets of ablation experiments: randomly initializing within the epipolar search range, and initializing fixed at the left endpoint of the epipolar line.
| | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :----------------- | :-------------: | :-------------: | :-------------: |
| The left endpoint | 0.389 | 4.373 | 2.381 |
| randomly 2D sample | 2.597 | 20211 | 2.404 |
| Ours | **0.338** | **0.331** | **0.335** |
For each case, as the GRU iteratively updates, the error gradually decreases, and the depth gradually converges. Fig. 4. shows the convergence behavior when the initial point is set to the left endpoint.
> In Figure 2, what is the relationship between the $e_i$ output from the GRU and the $e_i$ after fusion? How is $D_i$ obtained from $e_i$, and how is $U_i$ combined?
In each iteration, the GRU update operator output the current epipolar flow $e_i$ and weight $U_i$. We obtain the depth from the disparity flow by Eq.3 and utilize a weighted sum for the depth in a multi-view situation. After fusion, the depth is converted back to disparity flow $e_i$ to perform the next iteration.
> As a method that also uses GRU for updates, there is a lack of reference to and discussion of "Multiview Stereo with Cascaded Epipolar RAFT.
Please read the detailed explanation in the Author Rebuttal section for this part of the response.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' positive response and their clarification of my questions, as well as their commitment to revising symbols and expressions. However, I still have a few concerns.
Firstly, I am unable to understand why there are many floater artifacts in the DTU data. The authors' response was somewhat vague, although I believe these issues could be addressed by carefully tuning the depth fusion parameters.
Additionally, the use of symbols by the authors seems confusing. For example, using multiple letters to represent a single variable, such as $Fd_i$ instead of $F^d_i$, appears unconventional.
Moreover, I think the authors should include a more comprehensive discussion comparing their method with CER-MVS, especially regarding the use of depth range, as pointed out by Reviewer Q3xc, which currently seems inaccurate.
Overall, I am inclined to adjust the score to Borderline Accept.
---
Rebuttal 2:
Title: Response to concerns
Comment: Thanks for your reply and positive feedback. In response to your concerns, we provide the following answers:
> I am unable to understand why there are many floater artifacts in the DTU data.
Thanks for your reminder. As shown in Fig. 2 of PDF in Author Rebuttal, we compare our method with MVSFORMER++(SOTA) and find that point clouds inevitably exhibit artifacts in current MVS methods including the state-of-the-art methods. Artifacts are generated during the depth fusion step. For point cloud fusion, the pcd method selects multiple relevant depth views for each image to perform back-projection. After threshold filtering, the back-projected depth is weighted and combined with the current depth, which can lead to the generation of floater artifacts. This is due to 2D depth errors and inconsistencies across multi-view frames. Adjusting the threshold can mitigate this issue but may affect the overall quality of the 3D point cloud.
> The use of symbols by the authors seems confusing.
Thanks for your suggestion, we will make the adjustments accordingly.
> more comprehensive discussion comparing their method with CER-MVS.
CER-MVS uses depth gt priors to calculate the scale of scene and the way varies across different datasets. CER-MVS primarily uses three datasets: BlendedMVS dataset, Tanks-and-Temples dataset, and DTU dataset. BlendedMVS dataloader (https://github.com/princeton-vl/CER-MVS/blob/main/datasets/blended.py) provides two scaling methods: the default method (self.scaling == "median") uses depth gt to scale the scene to a median of 600 mm on Line 72, while the alternative method scales the scene using the depth range prior provided by the dataset (labeled as 'depth range gt') to achieve a minimum of 400 mm on Line 75. Tanks-and-Temples dataloader (https://github.com/princeton-vl/CER-MVS/blob/main/datasets/tnt.py) uses depth range prior to scale the scene to achieve a minimum of 400 mm. In the code, these depth range gt priors are loaded and represented by the 'scale_info' variable on Line 74 and 75. For DTU dataloader (https://github.com/princeton-vl/CER-MVS/blob/main/datasets/dtu.py), the depth range gt is not directly loaded in dataloader because DTU dataset already has a depth median of 600 mm and a minimum depth of 400 mm. The scene scale meets the network's requirements.
CER-MVS performs uniform sampling on inverse depth, fixing the depth sample range (https://github.com/princeton-vl/CER-MVS/blob/main/core/corr.py). In the code (https://github.com/princeton-vl/CER-MVS/blob/main/core/raft.py), the maximum disparity $d_{max}$ is set to 0.0025, and the disparity increments of stage1 and stage2 are set to $d_{max}/64$ and $d_{max}/320$ on Line 81. By scaling the dataset, CER-MVS can obtain more accurate depth increments and maintain updates within the inverse depth sampling interval.
To validate the robustness of CER-MVS to depth gt priors, we have designed experiments by altering the median or minimum depth values of the scene. Specifically, we have introduced noise perturbations to change the median of DTU dataset during rebuttal.
When there is no per-pixel depth gt, CER-MVS uses the depth range to scale the scene depth to a minimum value of 400 mm. It is worth noting that the depth range provided by the dataset is very accurate. For instance, the ground-truth data for Tanks-and-Temples dataset is captured using an industrial laser scanner. However, in practical applications, the depth range obtained through COLMAP are often inaccurate due to the sparsity of feature points and issues such as occlusion and suboptimal viewpoint selection. To verify the robustness of depth range priors, we use the depth range obtained from COLMAP to replace the depth range gt.
| Depth Range | Method | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :---------: | :-----: | :-------------: | :-------------: | :-------------: |
| GT | CER-MVS | 0.359 | **0.305** | **0.332** |
| GT | Ours | **0.338** | 0.331 | 0.335 |
| COLMAP | CER-MVS | 0.816 | **0.326** | 0.571 |
| COLMAP | Ours | **0.338** | 0.331 | **0.335** |
From the table, it can be seen that CER-MVS exhibites a certain degree of decline due to the noise in the depth range caused by COLMAP. In contrast, our method, which is independent of depth range, maintained consistent performance regardless of changes in depth range. This further demonstrates the necessity of eliminating the depth priors.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response and clarification. I believe the current further response has addressed some of my concerns. I will give Borderline Accept. I believe that if the revised version of the paper addresses these points and incorporates updates, its overall quality would range between Borderline Accept and Weak Accept.
---
Reply to Comment 2.1.1:
Title: Response to Reviewer
Comment: Thanks for your positive feedback on our work. We promise to address the corresponding points in the final version. | Summary: The paper proposes a depth-rage free method for MVS. It leverages the transformers for designing a global-aware model, with using pose positional embedding to guide the model and also predict the uncertainty at the same time. The methods demonstrates good performance on diverse datasets and is also robust to different depth range.
Strengths: - The idea of building global-aware models using transformers makes sense.
- I like the idea of injecting inductive bias using pose positional embedding and the uncertainty method.
- The proposed method demonstrates satisfying experimental results and robustness to different depth range.
Weaknesses: - Related work. Using transformers for building global-aware model is very common for 3D reconstruction now [1,2,3]. I would be good if the authors could discuss these related works.
[1] Wang, Peng et al. “PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction.” ICLR 2024. [2] Jiang, Hanwen et al. “LEAP: Liberate Sparse-view 3D Modeling from Camera Poses.” ICLR 2024. [3] Zhang, Kai et al. “GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting.” ArXiv 2024.
- Dense view inputs. The experiments are performed in the sparse-view setting where the computation of transformers is not a big problem. Is the method also efficient in the dense view setting? If not, I hope the authors could have more discussions in the paper.
- I would say that the gains in the ablation experiments are not that significant.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The scale of the experiment is not big enough. It would be great if the authors could perform larger-scale experiments, ideally Dust3r-level scales, to understand the scaling capability of the proposed method. I can understand it is not easy to acquire more resources for training, but it would be great if the authors could have more discussions on this.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Using transformers for building global-aware model is very common for 3D reconstruction now [1,2,3]. I would be good if the authors could discuss these related works.
Thank you for your suggestion. We will discuss the corresponding references in the related work section.
[1] Wang, Peng et al. “PF-LRM: Pose-Free Large Reconstruction Model for Joint Pose and Shape Prediction.” ICLR 2024.
[2] Jiang, Hanwen et al. “LEAP: Liberate Sparse-view 3D Modeling from Camera Poses.” ICLR 2024.
[3] Zhang, Kai et al. “GS-LRM: Large Reconstruction Model for 3D Gaussian Splatting.” ArXiv 2024.
> Is the method also efficient in the dense view setting?
For images with a resolution of 512x640, our method can handle up to 77 views in V100 during testing phase, and up to 5 views simultaneously during training phase. Using the model trained on the DTU dataset with 5 views, we verify the impact of the number of views on system accuracy. Due to DTU providing co-visible relationships for only 10 frames, we selected 10 frames as the upper limit. The results are shown in the following table, which uses 2D metric. It can be observed that the error decreases as the number of used views increases and gradually coverage. Increasing the view during training may further increase the performance after convergence.
| | 3 views | 4 views | 6 views | 8 views |
| :---------- | :-----: | :-----: | ------- | ------- |
| Depth Error | 5.375 | 5.108 | 4.964 | 4.967 |
> The gains in the ablation experiments are not that significant.
We respectfully argue that the gain of adding the proposed components is large enough. For example, the performance gain of adding uncertainty is 9.95\% and that of adding pose embedding is 9.46\%. Besides, we also test the gain in the 2D metric (absolute depth error), the gain of adding uncertainty and pose embeddings is 1.46\% and 35.62\%, respectively.
> Perform larger-scale experiments, ideally Dust3r-level scales, to understand the scaling capability of the proposed method.
We collected some real-world data to test our method in more large-scale environment and also test its generalization. The camera intrinsic and camera pose are both obtained by running COLMAP. As shown in Fig. 5, our model is capable of generating dense point cloud reconstructions for the collected data, which shows basic generalization ability. However, the accuracy of these reconstructions is somewhat lacking. We guess one of the primary factors is the inaccurate camera pose from the COLMAP. It is a promising direction to explore the joint optimization of the camera pose for the MVS in the future. Besides, the limited training data also hinders the performance of our method. Compared with Dust3r which is trained with a mixture of eight datasets, covering millions of image data, our method and other MVS methods are only trained on DTU or BlendedMVS. How to utilize these large datasets for training MVS is also one of our future work. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for valuable feedback and positive comments like "novel and nontrivial"(Reviewer RLMs), "satisfying experimental results"(Reviewer heJr), "exhibits technical novelty"(Reviewer fCJ7), "motivation for the model design is clear"(Reviewer Q3xc), "promising results ... to different depth ranges"(Reviewer s1Ui). We will correct all typos in the final version. We will release the code to facilitate more practical research on depth-range-free methods upon acceptance.
The following are some detailed answers.
>Add comparisons: CER-MVS, MVSFormer++, GeoMVSNet
We include CER-MVS, GeoMVSNet, and MVSFormer++ in our comparative experiments. However, these methods all require the depth range prior, whereas our algorithm is designed to operate in a depth-range-free setting.
In paper, we mainly focus on experiments related to Depth Range in Sec. 4.5. And our main comparison is with depth-range-free methods like DispMVS, which reduce dependence on depth priors through network design. For methods that rely on depth range priors, whether based on GRU or Transformer, they may exhibit better performance with accurate depth priors.
However, their performance significantly degrades when there is an error in the depth prior.
| Depth Range | Method | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :-------------: | :---------: | :-------------: | :-------------: | :-------------: |
| (425,935) | MVSFORMER++[Ⅱ] | **0.309** | **0.252** | **0.281** |
| (425,935) | GeoMVS[Ⅲ] | 0.331 | 0.259 | 0.295 |
| (425,935) | Ours | 0.338 | 0.331 | 0.335 |
| (212.5,1870) | MVSFORMER++[Ⅱ] | 0.361 | **0.319** | 0.340 |
| (212.5,1870) | GeoMVS[Ⅲ] | 0.374 | 0.415 | 0.394 |
| (212.5,1870) | Ours | **0.338** | 0.331 | **0.335** |
| (141.6,2805) | MVSFORMER++[Ⅱ] | 0.739 | 0.820 | 0.780 |
| (141.6,2805) | GeoMVS[Ⅲ] | 0.602 | 1.663 | 1.133 |
| (141.6,2805) | Ours | **0.338** | **0.331** | **0.335** |
CER-MVS does not directly sample within the depth range but requires the use of depth ground truth (depth gt) to scale the entire scene to a range with a mean of 600. When we introduce a certain amount of noise to the depth gt, thereby altering the scale, the performance of CER-MVS declines sharply.
| Depth GT Noise | Method | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :------------: | :-----: | :-------------: | :-------------: | :-------------: |
| 0 | CER-MVS[Ⅰ] | 0.359 | **0.305** | **0.332** |
| 20\% | CER-MVS[Ⅰ] | 9.230 | 10.236 | 9.858 |
| 30 \% | CER-MVS[Ⅰ] | 9.385 | 10.098 | 9.741 |
| | Ours | **0.338** | **0.331** | **0.335** |
We also add CER-MVS, GeoMVSNet, and MVSFormer++ to depth range experiment in Appendix B.4. Experiments demonstrate that for methods dependent on depth range prior, even if a rough depth range can be obtained from COLMAP, their performance still degrades.
| Depth Range | Method | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :---------: | :---------: | :--------: | :---------: | :-----------: |
| GT | MVSFORMER++[Ⅱ] | **0.309** | **0.252** | **0.281** |
| GT | GeoMVS[Ⅲ] | 0.331 | 0.259 | 0.295 |
| GT | Ours | 0.338 | 0.331 | 0.335 |
| COLMAP | MVSFORMER++[Ⅱ] | 0.361 | **0.319** | 0.340 |
| COLMAP | GeoMVS[Ⅲ] | 0.374 | 0.415 | 0.394 |
| COLMAP | Ours | **0.338** | 0.331 | **0.335** |
> If the depth prior is known and the prior is more compact than the depth range calculated in Sec 3.2, can improve the performance?
To validate the effectiveness of the depth prior, we designed the experiment. The initial depth is reverted to random initialization based on the depth prior, similar to DispMVS. The results are shown as follows. It can be observed that adding depth prior can improve performance to some extent, but the difference compared to our depth-range-free method is not significant. This indicates that the current initial point selection strategy and the design of the Transformer enable the network to regress to the correct range, resulting in accurate depth estimation.
| | Acc.(mm)↓ | Comp.(mm)↓ | Overall(mm)↓ |
| :-------------------------------------------- | :-------------: | :-------------: | :-------------: |
| Random Depth Initialization among Depth Range | **0.331** | 0.335 | **0.333** |
| Ours | 0.338 | **0.331** | 0.335 |
[Ⅰ] Ma, Zeyu, Zachary Teed, and Jia Deng. "Multiview stereo with cascaded epipolar raft." ECCV 2022.
[Ⅱ] Cao, Chenjie, Xinlin Ren, and Yanwei Fu. "MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo." ICLR 2024.
[Ⅲ] Zhang, et al. "Geomvsnet: Learning multi-view stereo with geometry perception." CVPR 2023.
Pdf: /pdf/0989fe8ca2e1d914624ff2560e3950d623a9af45.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper describes an MVS approach that does not depend on given depth ranges, which MVS algorithms typically require when building 3D cost volumes. The solution, previously proposed in DispMVS, is to perform searching and iterative updates in disparity space. Compared to DispMVS, the authors propose several novel designs to further eliminate the dependency on depth range during initialization, as well as to allow synchronous updates from all source views.
Strengths: **Significance**: The complete elimination of the dependency on depth ranges brings convenience and avoids performance degradation due to improper ranges. These benefits may enable wider adoption of MVS in real-world applications.
**Originality**: Although the paper largely follows the foundation built in DispMVS, the several improvements are novel and nontrivial. The combination of 3D pose embedding and cross-attention as a solution for synchronous updates among all source views is quite interesting and proven effective.
**Quality**: The results show consistent improvements over other depth-range-free options. The ablation studies and the analysis of impact from varying depth ranges help make the overall design more convincing.
Weaknesses: **Clarity**
The paper has some rather noteworthy issues regarding writing. The method section needs to have the right level of detail. The approach is built on top of DispMVS and shares many concepts and details, and the authors' choices to keep and omit details seem somewhat arbitrary, resulting in a non-self-contained method section. For example, there are also some concepts used without introduction, e.g. "sampling range" on L.168, "t_c, t_f" in Eq.11. The GRU updating process isn't defined. The relationship between epipolar disparity flow and depth is a critical concept that the paper does not explain.
The resulting poor readability is potentially a significant issue, but unfortunately, it is hard to address in a rebuttal. I'd like to see how other reviewers weigh its impact.
**Performance**
While beating other depth-range-free solutions, including DispMVS, on the two benchmarks, results still need to catch up on cost-volume approaches like GBi-Net, often by a large margin. It'll be quite valuable to understand what causes such discrepancy. Is it due to the limited receptive field, search gratuity, or else? Can improving on any of these help close the gap?
**Generalization**
There is no discussion of generalization capability in the paper. Conceptually, it seems a depth-range-free approach would be more generalizable due to being invariant to object/scene scales. The ability to train a single model that operates in various use cases (indoor, outdoor, etc.) is arguably the most desirable feature of such approaches.
Technical Quality: 3
Clarity: 1
Questions for Authors: I look forward to answers to the questions above regarding clarity, generalization and performance.
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The authors recognize speed as a limitation. The performance gap behind cost-volume counterparts is also worth mentioning.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > "sampling range" on L.168.
Thanks for your reminder. Following DispMVS, the "sampling range" on L.168 refers to the set of sampling points uniformly sampled along the epipolar line.
> "$t_c$, $t_f$" in Eq.11.
on L.163, "$t_c$, $t_f$" are iterations at the coarse and fine stage.
> The GRU updating process isn't defined.
For GRU updating process, the epipolar flow is iteratively updated for each source image. In each iteration, the input to the update operator includes the hidden state, the disparity feature output from MDA module, the current epipolar flow, and the context feature of the reference image. The output of the update operator includes a new hidden state, an increment to the disparity flow, and the weight. We get the depth from disparity flow and utilize a weighted sum to the depth from multi-view situation. After fusion, the depth is converted back to disparity flow to perform the next iteration.
> The relationship between epipolar disparity flow and depth is a critical concept that the paper does not explain.
For the relationship between epipolar disparity flow $e_i$ and depth $d_0$, we can obtain the position $p_i$ in the source image by adding epipolar disparity flow $e_i$ to initial position $p_i^0$, then we can get $d_0$ by triangulation as Eq.3. Similarly, after obtaining $d_0$, we get the position $p_i$ by Eq.2, and subtract the initial position $p_i^0$ to obtain $e_i$. We will clarify this relationship more clearly in paper.
> Results still need to catch up on cost-volume approaches like GBi-Net, often by a large margin. Is it due to the limited receptive field, search gratuity, or else? Can improving on any of these help close the gap?
Compared to other methods that uniformly sample depth based on a depth range, our network requires a more powerful retrieval capability to regress the correct depth due to the lack of depth range priors and outperform known depth-free methods. There are two main reasons for the discrepancy between our method and cost-volume approaches.
One reason is the search gratuity. Although our method addresses the receptive field problem to a large extent by sampling points along the epipolar line at features with different scales, iterating over the depth range of $(0,\infty)$ significantly increases the search gratuity compared to uniformly sampling within a predefined depth range.
The second reason is the development potential of datasets for deep learning network. Existing MVS datasets, such as DTU, have a relatively uniform depth distribution, mostly around a mean depth of 600. This allows methods directly based on a narrow predefined depth range to achieve precise convergence, limiting the advantage of our method. However, in real-world scenarios, there are many scenes with a wide depth distribution, such as near and far objects, where the background cannot be crudely masked out like in the DTU dataset. In such cases, depth cannot be recovered with a narrow depth prior, necessitating an expanded depth search range, which increases the difficulty of convergence inevitably. I think enhancing accuracy over a large depth range is a crucial problem that MVS must address in the future. Currently, our ongoing work focuses on optimizing the acquisition of initial values and constructing more diverse datasets to endow the network with stronger learning capabilities.
> There is no discussion of generalization capability in the paper.
To evaluate the generalization of our method, we used an iPad to capture data and add inference experiments in multiple scenes. We collected images from various real-world environments, with intrinsic and extrinsic parameters obtained by running COLMAP. As shown in Fig. 5, our model is capable of generating dense point cloud reconstructions for both indoor and outdoor environments. However, the accuracy of these reconstructions is somewhat lacking, which is affected by the following three factors:
Inaccurate Camera Pose: The pose estimation from COLMAP is far from satisfying. In contrast, the evaluation benchmark provides more accurate camera pose. For example, DTU uses a structured light scanner and MATLAB calibration toolbox for camera pose estimation. The inaccurate camera pose can lead to large error during the MVS process.
Image Quality Issues: As illustrated in Fig. 6, issues such as overexposed, inadequate lighting and blurriness affect the reconstruction quality. These deficiencies in image quality contribute to the observed inaccuracies in the point cloud.
Training Data Limitations: Our model was trained on the DTU dataset, which is relatively small and features a narrow range of scenes. While our model can effectively mitigate depth range effects in various environments, it still struggles with fine detail accuracy. The limited diversity of the DTU dataset constrains the model's ability to capture detailed features accurately. Constructing an MVS dataset with diverse scenes is a promising approach to enhance the robustness and accuracy of point cloud reconstructions.
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed responses. The clarifications are useful, though it remains to be seen how well they will be integrated into a final draft. The comments regarding generalization and gaps in performance are reasonable; however, they did not directly address the concerns. Given the significant practical advantages afforded by a range-free model, I'm still keeping a borderline recommendation. | null | null | null | null | null | null |
Truth is Universal: Robust Detection of Lies in LLMs | Accept (poster) | Summary: This paper provides the analysis of linear subspaces of activations in LLMs in order to detect the truthfulness of the answer. Authors show that: 1) there are at least two-dimensional subspace for two types of false statements (affirmative and negated statements), which is the reason why previous approaches generalized poorly; 2) these two dimensions appears in various LLMs (Gemma-7B, LLaMA2-13B and LLaMA3-8B); 3) the detector built upon general truth direction is robust and it outperforms previous method (CSS)
Strengths: 1. Authors provide novel insight – identification of a two-dimensional subspace comprising a general truth direction and a polarity-sensitive truth direction is a significant contribution
2. Comprehensive evaluation including many existing datasets and LLMs.
3. The method achieves high accuracy in detecting both simple and complex lies, outperforming previous approach (CSS) by a notable margin.
4. Useful analysis of these directions using evaluation and principal components
Weaknesses: 1. Analysis lacks theoretical background of the reason for these directions to emerge in LLMs
2. Authors didn't provide any comparison with similar classifier-based like ITI [1] in order to mitigate hallucinations by intervention. In other words, it is not clear whether this general truth direction could be used to make model more honest.
3. The examples of failures of classifier are not provided, but could be important
[1] Li, Kenneth, et al. "Inference-time intervention: Eliciting truthful answers from a language model." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 2
Clarity: 2
Questions for Authors: My main questions are as follows:
1. Could you provide the analysis and examples of facts when the developed method fails? It might contain interesting insights on the possible improvements
2. How can it be that adding cities_conj spoils facts_disj in the Fig. 5? If it is possible, could you provide the accuracy matrix for cross-domain generalization to understand the dependence on domain, i.e what accuracy will be if we train on cities + neg_cities and test on facts + neg_facts.
3. Could you please explain the experiment with scenarios? You have the following scheme: Context + Buyer: 'text' + Agent 'text'. Do you pass this text with or without Context in order to classify a lie?
Here are also some small comments and typos:
Line 141: Please, specify directly what j index means
Line 157 skipped linebreak
Eq (7) Shouldn't the factor before sum be 2/n_i ?
Line 282 please refer to the figure instead the «colored boxes below»
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. The main paper limitation, in my opinion, is that authors provide evidence that this subspace is **at least** two-dimensional, limiting the results to affirmative and negated statements and its logical combinations. However, authors understand and mention it in Discussion section. Thus I find the claim that the truth is universal in the beginning (and in paper name) is a bit misleading.
2. Authors provide the investigation of only one type of embeddings – layer-wise embedding of the last token.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review. We are glad that you found our identification of the 2D truth subspace to be a significant contribution and appreciate your constructive feedback.
Regarding theoretical background: We agree that a theoretical explanation for why these truth directions emerge inside LLMs would be highly valuable. However, this is significantly beyond the scope of our empirical work. Such an explanation might require a detailed analysis of the training dynamics of the LLM and/or significant theoretical breakthroughs.
“Comparison with similar classifier-based like ITI [1] in order to mitigate hallucinations by intervention.” We agree that making models more honest through causal interventions based on the 2D truth subspace is an interesting question. However, we believe that it is beyond the scope of this paper. We will mention in the revision that this is an exciting direction for future research.
Regarding the examples of failures of the classifier: This is a good point. In reply, we analysed the LR classifier failures for several datasets. We observed two main failure modes for misclassified statements: In the first failure mode, almost all misclassified statements in a specific dataset had the same truth label. While the learned truth direction is still able to separate true vs. false statements, the reason for these errors is that the bias (learned from other datasets) did not generalize sufficiently well. In the second failure mode, the learned truth direction was not able to accurately separate true vs. false statements. We conjecture that a possible reason is that for these statements the LLM may not have been certain if the statement is true or false. This failure mode occurred in the inventors and in the scientific facts datasets. We will add a discussion of this in the revision, along with a few example sentences which were misclassified.
“How can it be that adding cities_conj spoils facts_disj in Fig. 5?” This is a good question. We currently do not have a satisfactory explanation for this behaviour. In the revision, we will add the cross-domain generalization accuracy matrix, and make a more detailed study of this issue.
“Could you please explain the experiment with scenarios?” Thanks for pointing out that this part was not clear enough. In the revision we will provide a more detailed and clear explanation. After generating the LLM response, we pass the text with the context to the LLM (Context + Buyer: 'text' + Agent 'text') and record the activations over the final token. The truth or falsity of the LLM’s response is then predicted by the classifier probes based on these activations.
Regarding the small comments and typos: Thanks for catching these! We will correct/update the paper.
“I find the claim that the truth is universal in the beginning (and in paper name) is a bit misleading.” Thank you for raising this issue. The claim of universality made in our paper is that all considered LLMs (including also the added Gemma-2-27B-Instruct), represent truth in a similar manner, for affirmative sentences, negated sentences and conjunctions thereof. We agree that the truth representations might differ in other yet unknown truth dimensions and will make this clear in the revision.
“Authors provide the investigation of only one type of embeddings” Thanks for raising this important point which was not sufficiently clearly explained in the original submission. We choose the last token and a specific layer for a good reason: Marks and Tegmark [2023] showed empirically that truth information is encoded in the intermediate layers of the LLM after processing this last token, which encodes the end of the input statement. We will clarify this in the revision. The choice of layer is justified by the results shown in Figure 2.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed answer, and for the new experiments regarding Mass Mean (MM). However, I still have unanswered questions about the study:
1) Could you provide now some experiments on cross-domain generalization accuracy, at least some part of domains? I believe it should be feasible for such timing.
2) I have some concerns regarding the experiment with scenarios. If you record embeddings (with context passed to LLM as well question and answer) and train LR classifier on them, I don't see any significance in results, the classifier just detects if the context asked to lie or not. Could you clarify the reason why you included this experiment and what should reader learn from it?
I am willing to increase my score, if you address these points and there will not be any flaws.
---
Reply to Comment 1.1.1:
Comment: Thanks for getting back to us!
Regarding experiments on cross-domain generalization: Thank you for raising this issue again. Here is the cross-domain generalization matrix:
| | cities | neg\_cities | cities+neg\_cities | cities\_conj | cities\_disj |
|----------------|--------|-------------|--------------------|--------------|--------------|
| cities | 1.00 | 0.64 | 1.00 | 0.90 | 0.82 |
| neg\_cities | 0.50 | 1.00 | 1.00 | 0.61 | 0.81 |
| facts | 0.73 | 0.78 | 0.90 | 0.86 | 0.73 |
| neg\_facts | 0.51 | 0.86 | 0.78 | 0.60 | 0.64 |
| facts\_conj | 0.54 | 0.53 | 0.71 | 0.71 | 0.58 |
| facts\_disj | 0.54 | 0.57 | 0.64 | 0.54 | 0.56 |
The columns of this matrix correspond to different training datasets and the rows to different test sets. For example, the first column shows the accuracies of a LR probe trained on the cities dataset and tested on the six test sets. We train all LR probes on 80% of the data, evaluating on the held-out 20% if the test and train sets are the same, or on the full test set otherwise. While we provide this matrix in Markdown format due to the limitations of rebuttal comments, we will include a figure similar to Figure 5 in the revision. If you have any other specific cross-domain generalization accuracies you would like to see, please let us know!
Regarding the experiment with the scenarios: Thank you for raising this important point which was not clear in the original submission. First, note that we are not training the classifier probes on the real-world scenarios but only use them as a test set. We will clarify in the revision that the probes are trained only on the activations of the simple affirmative statements in Table 1 and their negations.
Second, we agree that there is a risk that the probes detect the incentive to lie rather than the lie itself. In reply to this concern, we have now recorded scenarios where LLaMA3-8B-Instruct provides honest answers even when there is an incentive to lie and applied our lie detector probes to these scenarios. If the probes detected only the incentive to lie and not the lie itself, we would expect lie detection accuracies below 50% on these scenarios. However, the detection accuracies were 90 $\pm$ 11% (new method), 77 $\pm$ 22% (CCS) and 62 $\pm$ 17% (LR), indicating that the probes indeed detect the lie itself.
In summary, this is a proof of concept that classifier probes trained on simple statements can generalise to lie detection in more complex scenarios. We will clarify this and include these results in the revision. | Summary: The paper presents a discovery of truth vectors, specifically a general one and a polarized one, present in Large Language Models (LLMs) when “lying”. The paper builds upon previous work by using vectors from intermediate-layer vector presentations to find these two vectors. This paper finds that one needs two truth vectors to account for negations and that doing so can also account for conjunctions between statements for determining lying by the LLM. These vector results generalize across a number of topics and models. These vectors can further be used in simple linear models to predict, with reasonable accuracy, whether an LLM is lying.
Strengths: The paper is strong in its validation, novelty, and significance. For its significance, understanding LLM behavior, particularly undesirable behaviors, is very significant for the positive use of these models. This paper is directly addressing one of those behaviors. The paper also does a good job of building a robust dataset and running tests to show that the two truth vectors really do exist and are useful in lie detection. Finally, the discovery of not only the second truth vector, but its application for more universal truth detection is novel and will likely be of interest to the community (i.e., how universal is it? Are there other types of universal vectors? Etc.).
Weaknesses: The paper has a few weaknesses in clarity. I do wish the code was released, as some of the descriptions of the tests are hard to follow as are how the linear models were constructed. For example, I am not quite sure what this means when getting the $a_{ij}$ vectors means, “we feed the LLM one statement at a time and extract the residual stream activations in a fixed layer over the final token of the input statement. The final token is always the period token (".").” Does this mean I am supposed to take the vector corresponding to the ”.” token? Also, could you give the actual names of the layers you extract this from in addition to a number, so that which layers these vectors are being extracted from is less ambiguous?
Technical Quality: 3
Clarity: 3
Questions for Authors: - How similar were $t_G$ and $t_P$ between the different LLMs? It's not clear to me from the main sections or Appendix B how similar, or universal $t_G$ was between the models. I wonder if because many open-source LLMs are trained on the same datasets if this is a result of the data used to train LLMs (especially the pre-training), and that that might be where the universality comes from.
- What would happen if you asked LLM to evaluate whether it had produced a lie? Would that give a more accurate prediction of whether it lied in its previous output? Also, could something like this prompting scheme also be used with the intermediate layers method proposed in this work for possible performance improvements?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have successfully addressed the limitations of the work. I especially appreciate that they were careful with not over-generalizing their results and indicating where this work stops and where future work could continue.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review! We appreciate the criticism regarding clarity. In the revision, we will improve the writing throughout the manuscript.
Regarding code release: We fully agree. In the spirit of reproducible research, with the revision, we will make our code and scripts public, so other researchers can reproduce the results of our manuscript.
Regarding the extraction of the activation vectors: Thanks for raising this issue which was not sufficiently clear in our original submission. You are correct that we extracted the activation vector that corresponds to the “.” token at the end of the input statement. This final token contains the most enriched representation of the information present in this sentence. This scheme is identical to that of Marks and Tegmark. We will explain this in the revision.
Regarding the names of the layers from which we extract the activations: Thank you for raising this issue. In the revision we will explain more clearly from which layer we extracted the activations and what the residual stream is. The residual stream layers have no specific names beyond their layer number.
“How similar were tG and tP between the different LLMs?” Note that $t_G$ and $t_P$ for different LLMs come from layers that may have different dimensions and as such may not be directly comparable. The universality phenomena we uncover is that the various LLM models (including from Llama and Gemma families) all have a similar 2D truth representation. We will clarify this in the revision.
Note that Google DeepMind and Meta AI do not publish their training datasets. Hence, it may be plausible that the training datasets are very similar, but we do not know this for sure.
Regarding asking the LLM whether it just lied: This is an excellent question. However, it has already been explored in previous work by Pacchiardi et al. [2023] , so we did not include it again in our paper. We will clarify this important point in the revision. Pacchiardi et al. [2023] asked GPT-3.5 "Are you lying?" after it generated a lie, and found that GPT-3.5 lied again more than 80% of the time (see Appendix B.2.2 of their paper). In general, our approach is designed for scenarios where the LLM knows it is lying, but is doing so in pursuit of some goal or reward, see for example Scheurer et al. [2023]. If it is a sufficiently capable LLM, it will not reveal that it has just lied, but will hide it. But from the internal model activations we might be able to detect the lie! We will add a discussion of this in the revision.
“Also, could something like this prompting scheme also be used with the intermediate layers method proposed in this work for possible performance improvements?” This is an excellent suggestion for future research. We will mention this in the revision.
---
Rebuttal Comment 1.1:
Title: Reply to Rebuttal
Comment: Thank you for addressing the clarity issues with the paper and for committing to releasing your code. I stand by my rating as an accept for the paper. | Summary: The authors study LLM lie detection using probes trained on model internals. They show that LLM representations contain a two-dimensional subspace that corresponds to a general truth direction as well as a polarity-dependent truth direction which is sensitive to whether the statement is affirmative or negated. This clarifies the observation of prior works which observed the lack of generalizability of lie detection probes, and also enables a classifier that outperforms Consistent Contrastive Search, an existing lie detection method.
Strengths: - **Clarity and presentation**: the paper is easy to follow, well-structured and clearly presents the relevant context, experimental details and findings.
- **Explains prior observations regarding generalizability**: I found the 2D subspace explanation convincing, and the fact that the first and second principal components, corresponding to tg and tp, explain a large proportion of the variance to be a neat result
- **Universality**: The authors show that this subspace is present in multiple models, including Llama3-8B, Llama2-13B and Gemma-7B.
- The authors also look for the presence of additional linear structure/dimensionality by studying conjunctions, coming to the conclusion that it does not have a significant impact.
Weaknesses: - **Novelty**: although, the paper's insight about a 2D subspace is interesting, much of this result relies on findings of prior work, such as Marks and Tegmark who show that training on statements and their opposites improves generalization, and Levinstein & Herrmann who observe the failure to generalize on negated statements. Additionally, this insight does not lead to any novel method of training a more generalizable truth probe.
- **Generalizability**: Truth Directions learnt by probes are still entangled with topic-specific features, as suggested by the fact that projecting out tg and tp from activations still leads to good performance on subsets of the city training data (Figure 5). Although training on a range of topics reduces this issue, it still seems unclear to me that a robust generalizing set of truth directions can be found, especially when tested on more challenging and out of distribution statements. A more comprehensive investigation of generalizability would strengthen the paper.
- **Universality:** claims of universality would be strengthened by extending analysis to larger models and other model families.
- **Room for causal experiments**: Prior work (Marks and Tegmark) also investigates the casual effect of their discovered truth directions.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Did you investigate the Mass Mean Probing technique proposed by Marks and Tegmark? If so, how does it compare?
- Having presented a method to obtain a disentangled truth direction tg in Figure 3, why do you choose to train a normal LR probe on balanced statements in Section 5 when testing Generalization? Do they find similar directions (I would assume so, since as you mention, the balance of statements discourages the learning of tp)? Is there a reason to expect one method to be superior to the other? Does the first and second principal component of this probe also correspond significantly with tg and tp?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes, the authors identify that their evaluation of generalizability remains limited and that their investigation could be extended to larger and multimodal models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your review! We are glad you found the 2D subspace explanation convincing and liked the presentation of the results.
Regarding Novelty: We agree that our work was directly motivated by the empirical findings of Marks and Tegmark and Levinstein & Herrmann. The novelty and importance of our work is two-fold. (i) We explain their empirical findings via our 2D truth representation subspace analysis; and (ii) our analysis clarifies why linear classifiers may still be highly accurate for the task of truth/falsity classification. We will clarify this in the revision.
“this insight does not lead to any novel method of training a more generalizable truth probe”
As mentioned above, the insight that there is a 2D subspace explains why linear classifiers may still be highly accurate in classifying truth/false statements. In direct reply to the above concern, in recent follow up work (after our original submission), based on this insight, we constructed a new classifier, which is still linear, but achieves an even higher accuracy than LR and CCS. We will gladly discuss this in the revision.
Regarding generalizability and testing on more challenging and out of distribution statements: Please note that Section 5 presents such settings. Specifically, to the best of our knowledge, we are the first to quantify the generalisation accuracy of lie detectors based on internal probe activation, when tested on a variety of challenging real-life scenarios. We would like to emphasise that Pacchiardi et al., the creators of the real-life scenarios, used the output of the LLM to follow up questions (after lying or telling the truth) and not the internal activations of the LLM to classify the LLM responses as truthful reply or lie. We will clarify this important point in the revision.
“A more comprehensive investigation of generalizability would strengthen the paper”: In general, we agree that further investigations are of interest and we explicitly mentioned this in the original submission. That said, please note that the evaluation of generalizability in our submission is more comprehensive than most prior works, both in terms of type of statements, number of diverse datasets considered, etc. We will clarify this in the revision.
Regarding Universality: We agree that this claim can be strengthened by considering larger models and other model families. In reply to your concern, we extended our analysis to Gemma-2-27B-Instruct, a model twice the size of the largest model considered in our original submission, and to Mistral-7B-Instruct-v0.3, a LLM from a different model family. The results are qualitatively the same as for the other LLMs with a 2D truth subspace in the activation space of the models. We will include these results in the appendix of the revision.
Regarding causal experiments: We agree that causal intervention based on the 2D truth representation is an exciting research direction. However, it is beyond the scope of this paper, as our focus lies on providing rigorous evidence for the existence of the 2D truth subspace and using this knowledge for robust lie detection. We will mention in the revised conclusion section that causal intervention based on our insights is an exciting direction for future research.
Regarding the Mass Mean Probing technique of Marks and Tegmark: We thank the referee for this suggestion. In response, we did a comparison with MM probing on the Llama3-8B-Instruct model. For a fair comparison we in fact extended the MM probe to include a learned bias term. The MM probe generalised a bit better than our original LR based lie detector classifier. However, a new classifier we constructed after our original submission (mentioned above) achieved even higher generalisation accuracies. We included a table comparing the four methods in the global response. We will include all of these results in the revision.
Regarding the normal LR probe in Section 5: This is an excellent question and we agree that the motivation for this was not sufficiently well explained in the original submission. Our analysis of the 2D truth subspace showed that there is a general truth direction $t_G$ which we can disentangle from $t_P$ by training a linear classifier on a balanced number of affirmative and negated statements. Logistic Regression was simply our choice for the linear classifier and the direction it learned is indeed similar to $t_G$. We will clarify this in the revision.
Is there a reason to expect one method to be superior to the other?
Good question! Empirically, we have found (after our original submission) that truth directions which are learned separately from the bias (as in Section 3) generalise better to the unseen statement types and real-world scenarios than truth directions which are learned together with the bias (as in LR). However, we are not aware of any fundamental reason why one method would be superior to the other. We will discuss this in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response: I appreciate the additional experiments you conducted for MM probes as well as universality across models. I will keep my current score. | Summary: The paper considers the problem of detecting whether a statement is true or false from the intermediate activations of an LLM. It starts by introducing a linear functional form with two linear components t_G and t_P which is able to discriminate between true and false statements for both affirmative and negated statements. They then show that most of the variance in the activations is captured by its first two principal components, which closely match t_G and t_P. They then show that removing the linear subspace spanned by t_G and t_P hurts the generalization performance of a linear probe (i.e., logistic regression over the activations) in discriminating between true and false statements. They then show that logistic regression with balanced affirmative and negated statements outperforms a previous approach called Contrast Consistent Search, and include results for 26 real-life role-playing scenarios.
Strengths: The paper is clear. It directly addresses a problematic of prior work, namely constructing classifiers that are more robust in detecting truthful statements using LLMs’ intermediate activations. The findings of the work seem correct: training the classifiers on more diverse (and balanced) data improves generalization.
Weaknesses: The analysis of Section 3 is qualitative and would benefit from including the classifiers’ accuracy (i.e., accuracy gain when including t_G). Looking at Figure 1 top right, it seems like a single hyperplane would be able to separate True/False with reasonably high accuracy. Therefore, I don’t see the value of the “truth direction” t_G and “polarity-sensitive truth direction” t_P decomposition proposed by the authors (other than for interpretability purposes), the important point seems to be to train on the negated statements. That is, to reduce the distribution shift from train to test.
For the analysis of Section 4, since the first two principal components account for most of the variance of the intermediate activations, and t_G and t_P are reasonably aligned with these two PCs, it seems unsurprising that performance would degrade after projecting out t_G and t_P — since one is removing most of the information contained by the activation functions. That is, I would expect generalization to degrade not only for the true/false discrimination tasks considered, but for other tasks unrelated to true/false discrimination. If removing the first two PCs yields qualitatively similar results, it seems that there is nothing too specific about “truth” being projected out, simply most of the information contained in the activations.
In Section 5, the authors compare against a prior method, CCS. I take the main results to be that a simple classifier trained on diverse and balanced data outperforms a more complex and specialized learning algorithm. More broadly, I believe that the main contribution of the paper is to show that, for the task of true/false discrimination from internal activation functions, training on more diverse and balanced data improves generalization. I don’t think that this contribution is of sufficient significance to warrant a NeurIPs publication.
Technical Quality: 2
Clarity: 2
Questions for Authors: I suggest the following:
* In Section 3, to include the performance of the logistic classifier trained on both affirmative and negated statements, and compare it to that of the proposed t_G and t_P decomposition.
* In Section 4, to compare to projecting out the first two principal components.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The authors adequately discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your review! We appreciate your comments and hope that our response below can convince you that the contribution of our work goes beyond just showing that training on more diverse and balanced data improves generalization.
“The analysis of section 3 is qualitative”. Please note that the accuracies of the classifiers based on the directions $t_G$ and $t_P$ appear in Figure 3. This figure shows the clear advantage of classification based on $t_G$. The accuracy of a classifier trained only on affirmative statements ($t_A$), is 81%, whereas using $t_G$ it is 95%. We will add these numbers to figure 1 and to the main text. In addition, following the suggestion of the referee, we will add to Figure 3 another column with the accuracy of a logistic regression classifier trained on affirmative and negated statements.
Regarding the importance of the two directions, the decomposition, etc: We agree with the referee that in general training on a larger dataset, more representative of future test instances, is often beneficial. However, it is a-priori unclear that a linear classifier would be suitable.
In the following, we explain why in our opinion, our discovery of the 2D truth subspace spanned by $t_G$ and $t_P$ is a significant contribution. Prior to our work it was unclear how LLMs internally represent the truth or falsity of factual statements. For example, it was unclear whether there is a single "general truth direction" or multiple "narrow truth directions”, each for a different type of statement. However, this knowledge is essential in order to construct a robust LLM Lie Detector based on its internal model activations.
In particular, insights from our 2D truth subspace representation can be directly used to construct robust truth classifiers, which are still linear. Furthermore, these insights might allow other researchers to construct even more accurate non-linear classifiers which leverage both dimensions of the truth subspace.
Regarding the analysis of Section 4, and the first 2 PCs accounting for most of the variance: We thank the referee for raising this issue, which was not sufficiently clearly explained in the original submission. A key point is that we did not perform PCA on the raw activations. Instead, we first preprocess the data to isolate truth-related variance. As described in Section 4, we have a two-step process: (1) we center the activations in each dataset D_i, see L200-202. (2) We average the resulting activations for true and false statements in each dataset, see Eq. 7. The first two PCs after this preprocessing capture 60% of the variance in the centered and averaged activations, but not in the raw activations. On the raw activations, these two vectors capture only ~10% of the variance. Therefore, we are not removing most of the information contained in the activation functions. We will clarify this important point in the revision.
Regarding comparison to CCS, and the statement “I take the main results to be that a simple classifier trained on diverse and balanced data outperforms a more complex and specialized learning algorithm”:
Here we respectfully disagree with the referee. First, let us point out that we trained both our method, as well as CCS on the same diverse and balanced training set. Second, the high accuracy of our method is supported by our 2D analysis of the truth representation in the activation space, hence showcasing the tight connections between the various sections of our manuscript.
To conclude, given the clarifications above, as well as our replies to the concerns raised by the other reviewers, in our opinion, the contributions of our manuscript are of interest to the community and of sufficient significance to warrant publication at NeurIPS.
---
Rebuttal 2:
Comment: Thank you for your response, and apologies for the delay in responding.
> However, it is a-priori unclear that a linear classifier would be suitable.
What is the significance of the classifier being linear? Do you assume that linear implies more robust, at little to no cost in accuracy? This might not hold beyond the toy tasks considered in this work (e.g., the method already fails to generalize to logical conjunctions well, let alone actually detecting “lies” in deployed systems).
> Insights from our 2D truth subspace representation can be directly used to construct robust truth classifiers, which are still linear. Furthermore, these insights might allow other researchers to construct even more accurate non-linear classifiers which leverage both dimensions of the truth subspace.
I fail to see what insights you used to construct the final linear classifier. Is it not just standard linear regression over the activations? What insights could be used to construct more accurate non-linear classifiers?
> On the raw activations, these two vectors capture only ~10% of the variance.
This addresses my concern regarding most of the variance being projected out.
> First, let us point out that we trained both our method, as well as CCS on the same diverse and balanced training set. Second, the high accuracy of our method is supported by our 2D analysis of the truth representation in the activation space, hence showcasing the tight connections between the various sections of our manuscript.
Yes, I was commenting under the assumption that both CCS and your method are trained on the same data. The fact that linear regression outperforms CCS is to me more indicative of CCS being a poor method for this particular task rather than of the merits of linear regression. Linear regression (linear probing) is typically the baseline for classification tasks. For the more challenging tasks (Figure 6b, real-world scenarios), the error bars are so large that it is not even clear that the performance of LR and CCS are significantly different.
My general assessment remains “I believe that the main contribution of the paper is to show that, for the task of true/false discrimination from internal activation functions, training on more diverse and balanced data improves generalization” I’ll add the following “The work shows that a linear classifier is sufficient to obtain high accuracy in simple true/false discrimination tasks”. I still do not think that these two contributions are of sufficient significance to warrant publication.
---
Rebuttal Comment 2.1:
Comment: Thanks for getting back to us.
“What is the significance of the classifier being linear?”
As mentioned in the introduction (line 46-50), growing evidence supports the hypothesis that LLMs might encode human-interpretable concepts as linear combinations of neurons, i.e. as directions in activation space. Our manuscript provides strong evidence that one of these concepts might be the truth, represented by the truth direction $t_G$. Given that accurately assigning truth labels to statements is highly labor-intensive and data is correspondingly scarce, we aimed for a classifier with a strong inductive bias towards a solution that we have reason to believe generalizes well. Hence, our choice of a linear classifier.
Regarding insights used for the construction of the linear classifier: Our insight was that training a robust linear classifier which generalizes to both affirmative and negated statements requires disentangling $t_G$ from $t_P$. We achieved that by training on activations from an equal number of affirmative and negated statements. Note that successful disentanglement of $t_G$ and $t_P$ is far more crucial than disentangling $t_G$ from some spuriously correlated feature that correlates with truth in the narrow training distribution but is mostly uncorrelated with truth beyond it. In contrast to such spuriously correlated features, $t_P$ is consistently anti-correlated with truth on negated statements (as opposed to merely being uncorrelated), which is even worse for generalization. We will clarify this point in our revision.
“What insights could be used to construct more accurate non-linear classifiers?”
We showed that the truth direction $t_P$ points from false to true for affirmative statements and from true to false for negated statements. By estimating the statement polarity from the activations, one could flip the sign of $t_P$ for negated statements such that it points from false to true for both affirmative and negated statements. Now one could use both dimensions of the truth subspace, $t_G$ and $t_P$, for classification, not losing valuable information compared to using just one dimension. In the top right panel of Figure 1 you can see that such a non-linear approach would probably improve the accuracy of the classifier compared to a linear approach. Of course, this is just an example and people might come up with other approaches based on our analysis of how truth is represented in LLMs. We will mention this in the revision. | Rebuttal 1:
Rebuttal: In response to reviewer YJ6m's suggestion, we expanded our comparison of classifiers in Section 5. In addition to Logistic Regression (LR) and Contrast Consistent Search (CCS), we now include Mass Mean (MM) probing by Marks and Tegmark [2023] and a new classifier we developed after the original submission. Importantly, the design of our new classifier is motivated by the structure of the 2D truth subspace identified in our work. Our results, summarised in Table 1 of the attached PDF, demonstrate that our new classifier generalises even better than previous methods. We will include these results in the revision, along with a detailed explanation.
Pdf: /pdf/49ab4c81c5a6847ad75133c4b48eb8e34a791c8a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Schrodinger Bridge Flow for Unpaired Data Translation | Accept (spotlight) | Summary: This paper proposes $\alpha$-IMF, which is an incremental version of IMF method in DSBM paper. Specifically, this work demonstrate the convergence properties of $\alpha$-IMF and implement it through an online learning method. Here, $\alpha$ is implicitly reflected. Moreover, the functional flow interpretation provides a basis for online learning, allowing for online learning as opposed to the iterative approach of DSBM. From an engineering perspective, the authors suggest parameterizing forward and backward control simultaneously by adding a forward/backward indicator (0 or 1) as an input variable to the neural network. The method shows feasible results on toy (Gaussian) data and image-to-image translation tasks.
Strengths: - This work is theoretically fruitful. The introduction of $\alpha$-IMF leaded to a more expansive theoretical framework. This flow-based perspective enables online learning.
- Reduced the sensitivity to hyperparameters such as the iteration number, the number of phase in DSBM.
- Expands the interpretation of the Schrödinger bridge by demonstrating connections with reinforcement learning (RL) and Sinkhorn flow.
Weaknesses: - The experiment on practical data is insufficient and the performance improvement is incremental. The paper only presents real-world I2I on Cat <-> Wild. Moreover, the performance of bidirectional online learning improves only incrementally. Totally, it needs more evaluation on other kinds of dataset, e.g. Male <-> Female / Handbags <-> Shoes.
- It would be beneficial to quantify the advantages of bidirectional learning or online learning over DSBM. For instance, comparing the actual wall-clock time or GPU memory usage would be useful.
- The current algorithm reflects $\alpha$ implicitly, and so there is no analysis provided on $\alpha$. If there were any intuition or insights into the role of $\alpha$ (step size) or if an algorithm could be developed to explicitly control $\alpha$, it would offer more valuable insights.
Technical Quality: 3
Clarity: 2
Questions for Authors: - If we perform this algorithm with full gradient descent algorithm, is it possible to explicitly control $\alpha$?
- Can we develop exact algorithm that reflect $\alpha$ explicitly? If so, can we obtain some experimental results that compares $\alpha$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, we appreciate your acknowledgements of the paper’s merits.
> The experiment on practical data is insufficient and the performance improvement is incremental. The paper only presents real-world I2I on Cat <-> Wild. Moreover, the performance of bidirectional online learning improves only incrementally. [...] For instance, comparing the actual wall-clock time or GPU memory usage would be useful.
Bidirectional online learning approximately halves the number of network parameters, thereby reducing GPU memory usage and wall-clock training time. In the case of AFHQ-64, pretraining bidirectional and 2-network models takes 4 and 9 hours, while finetuning takes 14 and 19 hours, respectively. We use the same number of gradient updates for both models. The computational cost difference comes from the use of a bidirectional versus a 2-network model. However, note that bidirectional models are incompatible with the original iterative DSBM and are suitable only for online finetuning.
A fair comparison of DSBM and α-DSBM in terms of the convergence speed, is then possible with a 2-networks model. It is best illustrated by the Gaussian example where there is a single evaluation metric.
> The current algorithm reflects 𝛼 implicitly, and so there is no analysis provided on 𝛼. If there were any intuition or insights into the role of 𝛼(step size) or if an algorithm could be developed to explicitly control 𝛼, it would offer more valuable insights.
If we perform this algorithm with full gradient descent algorithm, is it possible to explicitly control 𝛼?
Can we develop exact algorithm that reflect 𝛼 explicitly? If so, can we obtain some experimental results that compares 𝛼?
We agree with the reviewer that parameter $\alpha$ merits further discussion. In our current manuscript, we state in (l.218) “In Algorithm 1, we specify $\alpha \in (0,1]$ as a stepsize parameter. In practice, we use Adam (Kingma and Ba, 2015) for optimization, thus the choice of $\alpha$ is implicit and adaptive throughout the training.” So, $\alpha$ is linked to the learning rate: in a Stochastic Gradient Update (SGD), $\alpha$ would correspond to the learning rate. To address this point comprehensively (and also answers reviewer Ti9G), we train a model with SGD and a sweep on the values of $\alpha$; see additional one-page rebuttal. We found out that setting explicitly the value of $\alpha$ in SGD can yield similar results as letting $\alpha$ adaptive using Adam. However, for too large values of $\alpha$ the algorithm diverges.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I would like to keep my score to 6. | Summary: The authors consider developing a new algorithm for a Schrödinger Bridge problem of translating between two probability distributions. Motivated by the fact that current Schrödinger Bridge (SB) approaches either use mini-batch optimal transport techniques or require training diffusion at every iteration, the authors propose discretizing the Schrödinger Bridge Flow (SB Flow) of path measures that converge to SB. They call this discretization $\alpha$-Iterative Markovian Fitting ($\alpha$-IMF) procedure. Using the fact that Markovian path measures characterizing the considered flow can be parametrized by diffusion vector field $v$, authors propose a non-parametric scheme for updating $v_n$ corresponding to $n$ discretization step of SB Flow and prove its correctness. To implement the procedure in practice, authors prove a parametric counterpart to the non-parametric update scheme that allows for optimization over a vector field $v_{\theta}$ parametrized as a neural network called $\alpha$ Diffusion Schrödinger Bridge Matching ($\alpha$-DSBM). Furthermore, the authors computationally validate the proposed algorithm on both vector and image state spaces. The authors compare their $\alpha$-DSBM algorithm to other DSBM versions [3] in the Gaussian setting. Image state space experiments were held on image-to-image translation tasks, i.e., MNIST $\leftrightarrow$ EMNIST, Wild $\leftrightarrow$ Cat on AFHQ 64/256, with quantitative evaluation and comparison with the original DSBM algorithm [3].
Strengths: - The overall theory is novel and interesting. The authors prove that the proposed $\alpha$-IMF procedure with non-parametric updates converges to the solution of SB.
- The parametric counterpart to $\alpha$-IMF ($\alpha$-DSBM), which can be used with neural networks, is proposed and empirically verified.
- The possibility of online $\alpha$-DSBM finetuning seems beneficial in terms of computational resources compared to regular DSBM iterative training.
- A sufficient comparison of the proposed $\alpha$-DSBM with the previously known DSBM algorithm [3] is presented for a wide set of hyperparameter $\epsilon$.
- The Appendix presents an extensive additional theoretical review of the proposed concept and its connections to existing research.
- The study on the parameterization of diffusion in both directions by one neural network with varied conditional vectors is proposed and tested.
Weaknesses: - The proposed $\alpha$-DSBM algorithm [3] is, in some sense, an advancement of the previously known DSBM algorithm [3]. It is expected to be more efficient because it removes the necessity to learn a distinct model at each iteration. However, the paper lacks a study of image quality boost given the same computational budget. Thus, it is not clear whether the authors achieved the goal of developing a more efficient algorithm.
- The convergence of parametric $\alpha$-DSBM is not proved. While it may be a hard theoretical problem to prove, at least a more extensive empirical evaluation of the setup with the known ground truth solution would be beneficial. The presented setup with a scalar covariance matrix is too simple. It would be a more solid argument to evaluate the Gaussian setup with a full covariance matrix [6] or use a mixture benchmark [7] and compare it with other neural network methods.
- The FID metric is similar for DSBM and $\alpha$-DSBM for the AFHQ experiment in Table 1 considering measured standard deviation. Thus again highlighting the lack of study with the same computational budget to justify that $\alpha$-DSBM is more efficient.
The chosen AFHQ dataset may not be the best choice considering the small test set, which may introduce bias in measuring the FID metric. In turn, the visual comparison is hard since most images produced by DSBM and $\alpha$-DSBM and given in Figure 19 are similar, and it is hard to deduce which algorithm is better. A bigger dataset such as Celeba [5] may help to solve this issue. At least a bigger dataset would give a good estimate of FID metrics.
- It seems that some hyperparameters for DSBM AFHQ experiments are missing. The number of pre-training iterations, grad updates per IMF iteration, and IMF iterations are not specified. What is the training time and computational budget?
- There is no study on *cornerstone* hyperparameter $\alpha$. What is the value of $\alpha$ used in the experiments? Is it equal to the learning rate of the diffusion model parameters?
- The absence of code may cause trouble with reproducing the presented results.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Since the AFHQ dataset is quite small, experiments on unpaired male-to-female translation on the Celeba [5] dataset could provide a more meaningful comparison (as it was in DSBM[3]).
- How do DSBM and $\alpha$-DSBM compare in terms of computational budget? Could authors provide a quantitative comparison?
- How does variation of $\alpha$ affect speed of convergence and deviation from the SB solution?
- Can authors provide experimental results of $\alpha$-DSBM on more complex Gaussian distributions or multimodal low dimensional data, i.e., gaussian mixtures?
- Is it possible to extend the proposed approach to GSBM [4], where, as far as my understanding goes, authors change the reciprocal projection step compared to DSBM [3]?
- Can this method with small $\epsilon$ can be applied to the generation problem? This could drastically reduce the computational budget of such and be applied in modern text-to-image generative models [2] similar to the work [1]. It would be great to see such experimental results.
- Is it possible to perform *parametric* updates for Sinkhorn flow in a similar to $\alpha$-DSBM way?
Overall, the paper looks very promising. I will consider increasing my score if the authors address some of the listed weaknesses and questions, especially related to the same computational budget comparison, and evaluate their method on one of the listed more complex setups with the known ground truth distributions.
[1] Liu, Xingchao, et al. "Instaflow: One step is enough for high-quality diffusion-based text-to-image generation." The Twelfth International Conference on Learning Representations. 2023.
[2] Esser, Patrick, et al. "Scaling rectified flow transformers for high-resolution image synthesis." Forty-first International Conference on Machine Learning. 2024.
[3] Shi, Yuyang, et al. "Diffusion Schrödinger bridge matching." Advances in Neural Information Processing Systems 36 (2024).
[4] Liu, Guan-Horng, et al. "Generalized Schrödinger Bridge Matching." The Twelfth International Conference on Learning Representations.
[5] Liu Z. et al. Deep learning face attributes in the wild //Proceedings of the IEEE international conference on computer vision. – 2015. – С. 3730-3738.
[6] Hicham Janati, Boris Muzellec, Gabriel Peyré, and Marco Cuturi. Entropic optimal transport between unbalanced gaussian measures has a closed form. Advances in neural information processing systems, 33:10468–10479, 2020.
[7] Gushchin N. et al. Building the bridge of schrödinger: A continuous entropic optimal transport benchmark //Advances in Neural Information Processing Systems. – 2023. – Т. 36. – С. 18932-18963.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments and thoughtful questions, we are glad you enjoyed the paper.
> The paper lacks a study of image quality boost given the same computational budget.
> It would be a more solid argument to evaluate the Gaussian setup with a full covariance matrix [6].
> A bigger dataset such as Celeba may help to solve this issue.
We appreciate the reviewer's suggestions regarding our experimental setup. Concerning the image quality improvement given the same computational budget, we'd like to emphasize the inherent challenge in designing appropriate metrics for evaluating OT-based algorithms, as we are considering two distinct metrics: 1) Image quality (FID score): This roughly quantifies the alignment between the marginals $\pi_0$ and $\pi_1$ and the output distributions of the sampling process, 2) Alignment between samples from $\pi_0$ and $\pi_1$: This quantifies the minimization of the OT cost. Our objective is to minimize both the FID score and the alignment cost, necessitating the tracking of both these metrics. Nevertheless, we have conducted experiments comparing DSBM and $\alpha$-DSBM with equivalent computational costs, see the one-page supplementary.
Additionally, we have now evaluated our methodology in a complex Gaussian setting using full covariance matrices, similar to the approach in [1]; see one-page additional rebuttal. The findings from this full covariance matrix setting agree with our earlier conclusions from the scalar setting.
Finally, we have also evaluated our method on CelebA and refer to the one-page supplementary.
[1] Janati et al., “Entropic optimal transport between unbalanced gaussian measures has a closed form”, 2020.
[2] Chen et al., “Gradient flow in parameter space is equivalent to linear interpolation in output space”, 2024
> The number of pre-training iterations, grad updates per IMF iteration, and IMF iterations are not specified. What is the training time and computational budget?
We have added the missing hyperparameters for the training of DSBM in the case of the AFHQ experiment to the revised manuscript. The number of pretraining iterations is fixed to 100 000. The number of gradient updates per IMF iteration is 500 and the number of IMF iterations is 40.
> There is no study on cornerstone hyperparameter 𝛼. What is the value of 𝛼 used in the experiments? Is it equal to the learning rate of the diffusion model parameters?
We agree that we should have provided more discussion about the choice of parameter $\alpha$. Currently, we say (l.218) “In Algorithm 1, we specify $\alpha \in (0,1]$ as a stepsize parameter. In practice, we use Adam for optimization, thus the choice of $\alpha$ is implicit and adaptive throughout the training.” So indeed, $\alpha$ is linked to the learning rate and if one were to use SGD and not Adam, $\alpha$ would be exactly the learning rate. To complement this answer and also fully answer reviewer KqVg, we train a model with SGD and a sweep on the values of $\alpha$. The results are reported in the one-page rebuttal. We found that setting explicitly the value of $\alpha$ in SGD can yield similar results as letting $\alpha$ adaptive using Adam. However, for too large values of $\alpha$ the algorithm diverges.
> The absence of code may cause trouble with reproducing the presented results.
We are working on releasing the notebooks to reproduce our experiments on Gaussian data. Unfortunately, due to confidentiality agreements and intellectual property concerns, we are unable to open source the full source code. We have included a code snippet of our main training loop implementing our online methodology ($\alpha$-DSBM) in the appendix of the revised paper.
> Is it possible to extend the proposed approach to GSBM
We thank the reviewer for this remark. Indeed our online approach can be used to improve the computational efficiency of every method derived from DSBM. For example Generalized SBM [1] would also benefit from the techniques used in $\alpha$-DSBM. We have added a comment on this in the revised manuscript.
> Can this method with small epsilon can be applied to the generation problem? [...] [2] similar to the work [1].
You are correct: $\alpha$-DSBM can be used with $\varepsilon$ small (or even $\varepsilon = 0$, even though in that case, our theoretical framework does not guarantee the convergence to the solution of the OT problem). Hence, our technique can potentially enhance the performance of the 2-Reflow of [1] in a generative model context. However, we emphasize that in [2], there is no finetuning stage and only a flow-matching pretraining (i.e. this text2img model only corresponds to the pretraining of our model). As the pretraining and finetuning of such models is long and expensive, we could not conduct such experiments during the rebuttal period, however we plan to include such text2img experiments in the final version of the paper.
[1] Liu et al., InstaFlow: One Step is Enough for High-Quality Diffusion-Based Text-to-Image Generation, 2024.
[2] Esser et al., Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, 2024.
> Is it possible to perform parametric updates for Sinkhorn flow in a similar to $\alpha$-DSBM way?
The applicability of a methodology similar to $\alpha$-DSBM to Sinkhorn-flow [1] is not straightforward. First, $\gamma$-Sinkhorn is an inherently static algorithm. In order to apply a methodology closer to $\alpha$-DSBM, we would need to develop a dynamic version of $\gamma$-Sinkhorn. This would correspond to extending DSB [2] to an online setting. It is unclear whether the objective of $\gamma$-Sinkhorn, see Lemma 1 in [1], can be modified to yield a tractable loss for an hypothetical $\gamma$-DSB version.
[1] Karimi et al., Sinkhorn Flow: A Continuous-Time Framework for Understanding and Generalizing the Sinkhorn Algorithm, 2023.
[2] De Bortoli et al., Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling, 2021.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their reply, which resolves most of my concerns and questions. To finalize, could you please provide the metrics for the quality (e.g., FID) and similarity (e.g., L2 cost or LPIPS) for the Celeba dataset experiment? Also, could you please provide estimates on the GPU time you used to train DSBM and alpha-DSBM?
Please incorporate Celaba, Guassian, text2img experiments, and other changes into the final revisions.
---
Rebuttal 2:
Comment: Thanks a lot for your comments which have improved the overall quality of the paper.
Here are the additional metrics requested below (the hardware setup is identical to AFHQ):
* Base model training time: 16 hours
* Finetuning alpha-DSBM/Finetuning DSBM: 7 hours
The finetuning of DBSM corresponds to two IMF iterations (one forward and one backward).
Regarding the L2, LPIPS evals (lower is better):
* DSBM: 0.159 (L2) / 0.451 (LPIPS)
* alpha-DSBM: 0.05 (L2) / 0.376 (LPIPS)
We are also reporting the Inception Score (higher is better) evals for the base models, DSBM and alpha-DSBM:
* Base training: 2.29
* DSBM: 2.88
* alpha-DSBM: 3.13
Hence, alpha-DSBM improves over DSBM for the same amount of compute. We see that the main reason for the underperformance of DSBM is that for that amount of compute there is little deviation of the alignement score, i.e. the model is still close to the base model. We will incorporate these numbers in the final version of the paper. We hope that these numbers resolve the concerns of the reviewer.
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply and additional results. Since my concerns and questions were addressed, I updated my score. | Summary: This work introduces $\alpha$-DSBM, a new way of training DSBM-like models, which does not require a Markovian projection at each step and eliminates the need to train multiple models. The main advantage over previous DSBM-based approaches is that $\alpha$-DSBM only needs to train a single model with a single model, thus exhibiting a more stable training procedure.
The authors thoroughly contextualize their work within related work and provide detailed theoretical derivations to motivate their approach. Empirically $\alpha$-DSBM is validated by comparing to existing DSBM-based approaches across different unpaired image translation tasks as well as some toy data examples.
Strengths: - The new $\alpha$-DSBM is introduced based on thorough theoretical motivation and derivations.
- The authors manage to present and contextualize their method very clearly within the scope of related work across the different bridge matching approaches (Appendix E) and highlight connections accordingly (e.g. connection to the reflow procedure).
- In general, the paper is well-written and, given the complexity of the topic, quite clear to follow.
- Empirical validation includes important ablations giving further insights into the hyperparameter choice of e.g. $\varepsilon$.
Weaknesses: - While the experimental section of the paper thoroughly analyzes and compares DSBM and its different flavors, including the proposed method, a comparison with other competing methods could further support $\alpha$-DSBM through empirical evidence. Specifically, a comparison to bridge/flow matching with mini-batch OT similar to [1, 2], a highly optimized CycleGAN as in [3], and adversarial-based OT methods like [4] could be good candidates for further comparisons. The authors also mention these as competing methods.
[1] Alexander Tong and Kilian Fatras and Nikolay Malkin and Guillaume Huguet and Yanlei Zhang and Jarrid Rector-Brooks and Guy Wolf and Yoshua Bengio. "# Improving and generalizing flow-based generative models with minibatch optimal transport". IN TMLR 2024.
[2] Luca Eyring and Dominik Klein and Théo Uscidda and Giovanni Palla and Niki Kilbertus and Zeynep Akata and Fabian Theis. "Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation". In ICLR 2024.
[3] Dmitrii Torbunov and Yi Huang and Huan-Hsin Tseng and Haiwang Yu and Jin Huang and Shinjae Yoo and Meifeng Lin and Brett Viren and Yihui Ren. "UVCGAN v2: An Improved Cycle-Consistent GAN for Unpaired Image-to-Image Translation". In Arxiv 2023.
[4] Beomsu Kim and Gihyun Kwon and Kwanyoung Kim and Jong Chul Ye. "Unpaired Image-to-Image Translation via Neural Schrödinger Bridge". In ICLR 2024.
Technical Quality: 3
Clarity: 4
Questions for Authors: - How does $\alpha$-DSBM perform compared to bridge matching with mini-batch OT sampling empirically? While it is clear that these methods introduce significant errors because of the mini-batch approximation, it is unclear how this affects results empirically compared to $\alpha$-DSBM. I think this is the most important additional competing work (apart from DSBM), as these methods share their overall goal.
- Does it make sense to combine $\alpha$-DSBM with mini-batch OT sampling for the initial pretraining? How would this impact $\alpha$-DSBM?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - The main limitation of $\alpha$-DSBM is that new data needs to be generated with the current model during its fine-tuning, making the procedure not simulation-free and, thus, more expensive. This is sufficiently addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive comments and feedback.
> While the experimental section of the paper thoroughly analyzes and compares DSBM and its different flavors, including the proposed method, a comparison with other competing methods could further support-DSBM through empirical evidence.
> How does 𝛼-DSBM perform compared to bridge matching with mini-batch OT sampling empirically?
We thank the reviewer for their suggestion to evaluate our approach against competing methods. While we agree that comparisons with [2,3,4] would be valuable, we have chosen, in this rebuttal, to focus on [1] which was also mentioned by reviewer ovZP and is most similar to our approach. However, we will mention [2,3,4] as possible alternatives with similar goals in our extended related work section. Our comparison with OT-bridge matching in the Gaussian setting is reported in the attached one-page rebuttal. We will include a comparison to [2,4] in the final version.
[1] Tong et al., Improving and generalizing flow-based generative models with minibatch optimal transport, TMLR, 2024.
[2] Eyring et al., Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation, ICLR, 2024.
[3] Torbunov et al., UVCGAN v2: An Improved Cycle-Consistent GAN for Unpaired Image-to-Image Translation, arXiv:2303.16280, 2023
[4] Kim et al., Unpaired Image-to-Image Translation via Neural Schrödinger Bridge, ICLR, 2024.
> Does it make sense to combine 𝛼-DSBM with mini-batch OT sampling for the initial pretraining? How would this impact 𝛼-DSBM?
This is indeed doable and is a great suggestion. This would require changing our pretraining strategy to include the mini-batch OT sampling during the training. Our guess is that the convergence of $\alpha$-DSBM (and even DSBM) would be faster. To support this result, we refer to the results of [1,2] which show that the convergence of the Sinkhorn algorithm depends on the Kullback-Leibler divergence between the original guess (here the output of the (E)OT-FM methodology) and the target Schr\”odinger Bridge.
[1] Bernton, Schrödinger Bridge Samplers, arXiv:1912.13170, 2019.
[2] L\’eger, A gradient descent perspective on Sinkhorn, Applied Mathematics & Optimization, 2021.
---
Rebuttal 2:
Comment: Thanks for the answers and added experiments. It would be very interesting to see comparisons to [1,2,4] also in the image translation settings. I think this setting would provide a more meaningful empirical comparison. Also thanks for pointing to [1,2] for the convergence of DSBM with mini-batch OT. | Summary: The paper proposes a novel algorithm for mass transport problems, aiming to compute maps that transport one distribution to another. This paper introduces the Schrödinger Bridge Flow algorithm, a dynamic entropy-regularized version of OT, eliminating the need to train multiple DDM-type models. The algorithm discretizes a flow of path measures, ensuring the Schrödinger Bridge as the only stationary point. The paper demonstrates the algorithm's effectiveness on various unpaired data translation tasks, showcasing its potential to solve Entropic Optimal Transport problems with improved computational efficiency and practical implementation compared to existing methodologies.
Strengths: 1.This paper is eloquently written, and its ideas are easy to follow.
2.While I did not strictly follow all the proofs provided by the authors, I found them to be generally detailed, clear, and easy to understand.
3.The authors provide both quantitative and qualitative results on several datasets, along with a relatively detailed discussion and visualization. However, it is worth noting that the figures in the appendix appear to have low resolution, and the positioning of the legends is somewhat odd.
4.I understand that the convergence of the parametric methodology may require several strong assumptions. Therefore, it is reasonable that the authors have not provided further proof of convergence.
Weaknesses: 1.Assumptions in Theorem 3.1: Theorem 3.1 only mentions "under mild assumptions" without specifying them in the main text. It is necessary to state these assumptions clearly in the main body of the paper and provide references to justify their rationality. While a more detailed explanation can be provided in the appendix, the main text should at least include a summary of the assumptions to ensure the reader understands the conditions under which the theorem holds.
2.Experimental Results and Comparisons: The experimental results on some datasets are not particularly impressive, even though the authors claim that their method approximates OT maps more accurately than existing methods. The results in Table 1 do not show significant progress or state-of-the-art (SOTA) performance on certain datasets. Additionally, it is confusing that the authors explicitly mentioned Rectified Flow, Flow Matching, and OT-Flow Matching in the literature review, but did not include comparisons with these methods in the experiments. For instance, Rectified Flow has conducted experiments on mass transport, and other related methods, such as Denoising Diffusion Bridge Models from ICLR 2024, have tackled similar tasks. A direct comparison of the proposed method's performance with these existing methods would provide a clearer picture of its advantages and limitations.
3.Discussion of Related Work: The paper could benefit from a more comprehensive discussion of related work, including diffusion models, flow matching, and the application of Wasserstein gradient flow-based models in mass transport tasks. This would help in better contextualizing the authors' innovations and contributions. By situating their work more clearly within the existing literature, the authors could highlight the unique aspects and potential advantages of their proposed method, enhancing the readers' understanding of its significance.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.See Weaknesses
2.I am curious about the complexity and cost of the training method proposed by the authors. Could the authors provide more detailed quantitative data, such as training time comparisons with related methods?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Although the authors proposed a novel model to address mass transport problems and provided some theoretical proofs, the method is simulation-based. I believe that the training cost is higher compared to simulation-free methods, and the model performance does not achieve state-of-the-art (SOTA) results. Additionally, there is a lack of direct comparison with existing SOTA methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments, we appreciate your acknowledgements of the paper’s merits.
> It is necessary to state these assumptions in the main body of the paper and provide references to justify their rationality.
Due to space limitations, we have detailed most of the technical assumptions for our theorems in the supplementary material rather than in the main paper. However, if our submission is accepted, we will be granted an additional page, which will allow us to include the assumptions as detailed below.
**Theorem 3.1**
The assumptions for the result to hold are stated in Lemma D.2. We recall them here. Let $\pi_0$ and $\pi_1$ be the distributions at time 0 and 1. We require that $\pi_0$, $\pi_1$ have distributions w.r.t. Lebesgue measure, with finite entropy and finite 2nd moments. This finite 2nd moment assumption is quite weak. While the Lebesgue density hypothesis is commonly considered in the diffusion model literature, see e.g. [1,2].
**Proposition 3.2**
This proposition is detailed in Proposition D.1.The sole assumption in this case is that $\mathbb{P}_{v^n}$ is defined for all $n$. This means that the SDE with drift $v^n$ admits a weak solution. The existence of weak solutions of SDE is a well-known topic in the literature of SDE; see e.g. [3]. In practice this condition is satisfied since $v^n$ is replaced by a NN approximation, which satisfies the conditions for the SDE to admit a solution.
**Proposition 4.1**
The proof is similar to the one of Proposition 3.2 and therefore holds under the assumption that $v^{n, \rightarrow}$ and $v^{n, \leftarrow}$ define solutions of SDEs.
[1] Lee et al., Convergence of score-based generative modeling for general data distributions, 2023
[2] Conforti et al., Score diffusion models without early stopping: finite Fisher information is all you need, 2023
[3] Stroock et al., Multidimensional Diffusion Processes, 2006.
> The experimental results on some datasets are not particularly impressive, even though the authors claim that their method approximates OT maps more accurately than existing methods
We want to emphasize that we do not claim our approach yields better results than DSBM (or any other method that recovers the EOT). There is no expectation of better results from our method compared to DSBM, as both converge to the same solution. Currently, we state “$\alpha$-DSBM is easier to implement than existing SB methodologies while exhibiting similar performance” (l.313).
The main advantages of our methods are computational: 1) significantly simpler implementation, requiring only one fine-tuning stage and eliminating the need to alternate between two optimisation problems 2) utilization of only one NN (with the bidirectional implementation) 3) faster convergence, as shown in Fig. 2 (and in the Gaussian study in the one-page rebuttal). We hope this explanation clarifies the key computational contributions of our paper. We will revise our manuscript to further underline this point if it remains unclear.
> It is confusing that the authors explicitly mentioned Rectified Flow (RF), Flow Matching (FM), and OT-FM in the literature review, but did not include comparisons with these methods.
RF corresponds to DSBM for $\varepsilon = 0$. Our $\alpha$-DSBM algorithm thus provides an alternative online implementation of RF for $\varepsilon = 0$ and results for this online implementation are provided in Fig 3 (first column of right figure). For unpaired data translation, RF is not competitive and adding noise improves performance. However, for generative modeling, RF outperforms DSBM and $\alpha$-DSBM using $\epsilon>0$. We conjecture that when one marginal distribution is Gaussian, adding further noise to the interpolant hurts the generative capabilities. In addition, the fixed points of RF exhibit straight paths but are not necessarily solutions to the OT problem, see [2] for a counterexample.
FM [3] can be seen as the first iteration of DSBM with $\varepsilon = 0$. It consistently produces inferior results compared to RF, this is why we did not include it.
OT-Flow Matching [4] is closer to our setting and, if the batch size has infinite size, then we recover the OT solution. In the one-page rebuttal we show that $\alpha$-DSBM outperforms this method in the Gaussian case.
Finally, we also highlight that DDBM [5] do not solve the (E)OT problem but requires paired data to train the model and therefore is not comparable to our approach.
[1] Liu et al., Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow, 2023.
[2] Liu, Rectified Flow: A Marginal Preserving Approach to Optimal Transport, 2022.
[3] Lipman et al., Flow Matching for Generative Modeling, 2023.
[4] Tong et al., Improving and generalizing flow-based generative models with minibatch optimal transport, 2024.
[5] Zhou et al., Denoising Diffusion Bridge Models, 2024
> The paper could benefit from a more comprehensive discussion of related work
In the revised manuscript, we have expanded the related work section in the appendix to offer a more comprehensive overview. Notably, we discuss a taxonomy graph of diffusion methods and their connection to OT, see the one-page rebuttal.
> Could the authors provide more detailed quantitative data, such as training time comparisons with related methods?
Bidirectional online learning approximately halves the number of network parameters, thereby reducing GPU memory usage and wall-clock training time. For AFHQ-64, pretraining bidirectional and 2-network models takes 4 and 9 hours, while fine-tuning takes 14 and 19 hours, respectively. We use the same number of gradient updates for both models. The computational cost difference comes from the use of a bidirectional versus a 2-network model. Note that bidirectional models are incompatible with the original iterative DSBM and are suitable only for online finetuning. We have included these cost and complexity considerations in the appendix.
---
Rebuttal Comment 1.1:
Comment: We thank you for your review and appreciate your time reviewing our paper.
The end of the discussion period is close. We would be grateful if we could hear your feedback regarding our answers to the reviews. We are happy to address any remaining points during the remaining discussion period.
Thanks a lot in advance!
---
Rebuttal 2:
Comment: Thank you for addressing many of the initial concerns in your rebuttal. However, I believe that if the main contribution is claimed to be a simpler implementation with only one neural network and faster convergence, it is crucial to provide more quantitative data to support these claims.
I recommend:
Including additional experiments that compare your model with other single neural network setups, highlighting computational efficiency and performance (In the rebuttal, the author provides some additional comparative experiments).
Providing details on the reduction in neural network parameters to quantitatively demonstrate the simplicity of your approach.
Clarifying the convergence rates in comparison with baseline methods, particularly considering the fine-tuning duration mentioned in your rebuttal experiments (I noticed that in the rebuttal experiments, the authors' fine-tuning time was much longer than the training time. Does this contradict the authors' statement?).
If these points are addressed in the revised version and the theoretical assumptions are further clarified, I would be inclined to increase my score to a 5-6. Thank you for your efforts to improve the manuscript.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for their additional comment to which we answer below.
> Thank you for addressing many of the initial concerns in your rebuttal. However, I believe that if the main contribution is claimed to be a simpler implementation with only one neural network and faster convergence, it is crucial to provide more quantitative data to support these claims.
We will clarify this point in our introduction but we believe that our contribution is two-fold.
* First, we introduce a new theoretical framework to analyze online schemes aimed at solving the Schrodinger Bridge (SB). To the best of our knowledge, this is the first time an online training algorithm is proposed to solve SB along with a theoretical framework. This is what motivates the $\alpha$-IMF scheme.
* Second, we introduce our practical implementation of $\alpha$-IMF, $\alpha$-DSBM. With the simplification given by $\alpha$-IMF this can be seen as a fine-tuning approach of any bridge/flow matching procedure. To further improve the scalability of our approach we only use one bidirectional network compared to existing implementations which use two networks.
In the rebuttal we conducted additional experiments on CelebA to validate our findings. Because of the lack of space we could not include evaluation metrics in our initial rebuttal but report them below. We compare DSBM and $\alpha$-DSBM in the setting where both are evaluated with a single network (in order to answer the reviewer comment about “compare your model with other single neural network setups”). Both DSBM and $\alpha$-DSBM are run with the same number of gradient evaluations. The hardware set up and hyperparameters are similar to our AFHQ experiment. We report both alignment metrics and quality metrics.
Regarding the L2, LPIPS evals (lower is better):
* DSBM: 0.159 (L2) / 0.451 (LPIPS)
* alpha-DSBM: 0.05 (L2) / 0.376 (LPIPS)
We also report the Inception Score (higher is better) evaluations for the base models, DSBM and alpha-DSBM:
* Base training: 2.29
* DSBM: 2.88
* alpha-DSBM: 3.13
We hope that this additional experiment resolves the reviewer’s concerns.
> Providing details on the reduction in neural network parameters to quantitatively demonstrate the simplicity of your approach.
We highlighted in the rebuttal that in our implementation we halve the number of parameters compared to DSBM. The results regarding the use of a bidirectional or a unidirectional network are reported in Table 1 in the original manuscript.
> Clarifying the convergence rates in comparison with baseline methods, particularly considering the fine-tuning duration mentioned in your rebuttal experiments (I noticed that in the rebuttal experiments, the authors' fine-tuning time was much longer than the training time. Does this contradict the authors' statement?).
We want to highlight that the finetuning is done for significantly fewer training steps than the initial base training (more precisely in the case of CelebA for instance: 100k base training and 10k finetuning). The reason the finetuning takes longer than the base training is because our method (as well as DSB(M) and related methods) is not _sampling-free_. This is a well-known limitation of this line of work and we do not claim to solve this problem in our paper. We acknowledge this limitation in the original discussion of our paper. We agree that deriving “sampling-free” or almost sampling-free improvements of our methodology is an interesting future work.
> the theoretical assumptions are further clarified,
We have detailed the theoretical assumptions in our original answer to the reviewer. If any of the discussed assumptions remain unclear, we are willing to clarify them. We highlight that the detailed discussion of these theoretical assumptions will be included in the revised version of the paper as well as the discussion regarding the parameters reduction of the method and the additional comparison with DSBM and related methods. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable time and insightful feedback. We appreciate the thoughtful questions and the overall positive response.
We have provided detailed responses to each reviewer's comments. In summary, the key feedback points we have received are as follows:
* **Additional experiments**: as requested by all reviewers, additional experiments have been conducted to better illustrate the efficiency of the proposed method. In the one-page rebuttal, we have conducted additional experiments regarding the choice of $\alpha$, comparisons with OT bridge matching and experiments on CelebA.
* **Better support for the parametric case**: as suggested by reviewer TiG9 we have conducted an improved analysis of the parametric case. Following the recommendation of reviewer ovZP we have also included the assumptions our main theorems rely on in the main body of the paper.
* **Effect of the parameter alpha**: Reviewer TiG9 and reviewer KqVg have asked for more details regarding the choice of the parameter alpha. We have conducted additional experimental analysis of the effect of this parameter and report them in the one-page rebuttal.
Attached to this rebuttal is the additional one-page PDF with additional results and tables.
Pdf: /pdf/7a1bb14d810a9ded55d7af8ff71488f37f0a091c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Multiview Scene Graph | Accept (poster) | Summary: The paper introduces a new scene graph format, which regards objects and places as nodes. The edges are intuitive, basically saying which objects are in which places and which places are close to each other. The paper also provides two metrics on which the graph generation method can perform two tasks better than the chosen baselines. The authors also provide an application using the proposed graph to accelerate DUST3R's process.
Strengths: The paper introduces a new scene graph format, which regards objects and places as nodes. The paper also provides two metrics on which the graph generation method can perform two tasks better than the chosen baselines.
Weaknesses: ### Motivation
Even though I understand what the authors are trying to express in the paper, the necessity of the proposed format of scene graph still raises my concern.
1. Firstly, the current scene graph extracted from multiple views is a bit simple; there is no spatial or other high-level semantic relationship between objects. From my point of view, the graph only associates the adjacent views for scene nodes and the same objects in the scene as object nodes. Do authors think one can leverage a VLM to do a similar job? Just query the VLM to associate the objects that appear in the multiple frames and query to associate the same scene if adjacent frames are close.
2. Secondly, do the authors think the proposed graph can somehow have a better ability to evaluate these tasks (VPR and object association)? If so, could you explain motivation more? What are the usages of edges between objects and places? Can existing scene graph formats, e.g., [1,2], do similar tasks in this paper?
### Experimental Issue
The experiments were not sufficient enough. The baselines in the paper were from early ages, and I suggest more recent ones should be compared. For example, two papers are first online in Nov 2023 [3] and in Feb 2024 [4]. Authors are encouraged to choose other methods as long as they are sensible to be recent baselines.
### Misc
1. Do the authors want to modify the title a bit? the current one seems to be a bit broad.
2. I may miss the related part, but what object detector did the authors use in Figure 2?
3. In the Supplementary, it is good to see the authors provide the source code, but the README is a void file, which makes it hard to understand the details.
4. The paper has several typos, e.g., "objects nodes" in Line 32 and "Simoutenous" in Line 72. Please do a careful review.
[1] Armeni I, He Z Y, Gwak J Y, et al. 3d scene graph: A structure for unified semantics, 3d space, and camera. CVPR 2019.
[2] Wu S C, Tateno K, Navab N, et al. Incremental 3d semantic scene graph prediction from rgb sequences. CVPR 2023.
[3] Izquierdo S, Civera J. Optimal transport aggregation for visual place recognition. CVPR 2024.
[4] Lu F, Lan X, Zhang L, et al. CricaVPR: Cross-image Correlation-aware Representation Learning for Visual Place Recognition. CVPR 2024.
Technical Quality: 2
Clarity: 2
Questions for Authors: The same as above.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations have been thoroughly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: the current scene graph extracted from multiple views is a bit simple
__A1__: We acknowledge the fact that the proposed MSG has a rather simple format as it is free of spatial or high-level semantic relationships between objects.
However, this does not mean this task is simple, insignificant, or can be readily solved. As acknowledged by reviewer 2 (AiA2) and demonstrated by the baseline performances in the paper, building topological connectivity between the unposed images and associating objects across frames and long stretches of time without observation is a challenging and important task. Constructing such multiview scene graphs from unposed images is fundamental to many tasks, such as 3D reconstruction[1], visual navigation[2], visual slam[3], etc. Having such a graph can be beneficial to all these applications.
Furthermore, the proposed MSG is complementary to the existing scene graphs. Edges for object-object relationships can be a seamless add-on to extend the MSG with more semantic information. Therefore, we believe our work adds a meaningful contribution to the scene graph community.
We also demonstrate in the next response that current Vision-Language Models (VLMs) are not yet capable of solving the MSG task.
[1]Xiao, Jianxiong, Andrew Owens, and Antonio Torralba. "Sun3d: A database of big spaces reconstructed using sfm and object labels." Proceedings of the IEEE international conference on computer vision. 2013.
[2] Chaplot, Devendra Singh, et al. "Neural topological slam for visual navigation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[3] Salas-Moreno, Renato F., et al. "Slam++: Simultaneous localisation and mapping at the level of objects." Proceedings of the IEEE conference on computer vision and pattern recognition. 2013.
__Q2__: Do authors think one can leverage a VLM to do a similar job?
__A2__: We appreciate the suggestion. VLMs exhibit strong emergent abilities for many tasks, and we agree it would be interesting to try them out for MSG. However, querying VLMs with every image pair is a huge amount of work and cost, and sending all images together poses a challenge to the context length limit while also hurting performance. Therefore, we conducted a case study with one scene as a pilot study.
Specifically, we sampled a scene with a relatively small number of images and further subsampled all the images containing annotated objects, resulting in 22 images in total. We then queried the GPT-4o[1] 231 times with each image pair annotated with object bounding boxes as visual prompts, the corresponding box coordinates, and the task prompt. By parsing the GPT-4o outputs, we obtained the following results:
| metric | model total | model adjusted | VLM |
|---------|-------------|----------------|-------|
| PP IoU | 59.3 | 63.0 | 30.3 |
| PO IoU | 85.0 | 85.0 | 62.5 |
The *model total* represents the performance of our model on the entire scene, and *model adjusted* represents the performance evaluated only on those 22 subsampled images for a fair comparison. Besides the issues with computation cost and context limits, we note that a common failure pattern of VLM is the failure to maintain consistent object associations. The limitations of VLM in VPR are also discussed in the literature [2].
Nevertheless, we acknowledge that this is only a small-scale pilot study. It is well possible to have better VLMs in the future and come up with better prompts, and we are excited about the future possibilities of VLM + MSG.
[1] Achiam, Josh, et al. "Gpt-4 technical report." arXiv preprint arXiv:2303.08774 (2023).
[2] Lyu, Zonglin, et al. "Tell Me Where You Are: Multimodal LLMs Meet Place Recognition." arXiv preprint arXiv:2406.17520 (2024).
__Q3__: do the authors think the proposed graph can somehow have a better ability to evaluate these tasks (VPR and object association)? … Motivation and usage. Can existing scene graph formats do similar tasks in this paper?
__A3__: Thank you for the question. The proposed MSG encompasses both visual place recognition (VPR) and object association. Our proposed evaluation metrics, PP IoU and PO IoU, reflect the quality of place recognition and object association. Our proposed baseline combines the two tasks in a joint architecture. Therefore, we believe training and evaluating on the task of MSG will help both VPR and object association.
We are motivated by the fact that building spatial correspondence by associating images and objects is fundamental for scene understanding. The MSG takes unposed RGB image sets as input, allowing it to be applied to any arbitrary image sets and video streams. As discussed in Q1, obtaining MSG from the unposed RGB images will provide a foundation for many downstream tasks in computer vision and robotics, such as loop closure, SfM, visual SLAM, navigation, and mobile manipulation.
Existing scene graphs focus on describing the spatial and semantic relationships between different objects, which is different from the MSG task. Meanwhile, existing 3D scene graphs require having 3D scene representations before obtaining the scene graphs. This, where object association has been given, is already a good starting point for many vision tasks. However, such 3D scenes are not always available or can be reliably obtained from unposed RGB observations. Therefore, we believe our MSG is a complementary contribution to the existing scene graph literature.
**Continue in the official comment**
---
Rebuttal 2:
Title: Continue to answer reviewer's questions
Comment: __Q4__: Experiment. The baselines in the paper were from early ages
__A4__: We thank the reviewer for this question. All the baselines are from recent years. The proposed MSG task involves two parts of baselines: visual place recognition (VPR) and object association. For the VPR part, our NetVlad baseline is adapted from the widely used baseline deep VG benchmark [1] (reference [8] in the manuscript), which was published in 2022. Anyloc was published in 2023. For the object association part, UniTrack is a widely used multi-object tracking method released in 2021. DEVA is from ICCV 2023.
We appreciate the reviewer’s suggestion and have also evaluated optimal transport aggregation (Salad) [2] as an additional and more recent baseline for the VPR part. The results are reported below. We find its performance comparable to Anyloc on the recall while slightly better on PP IoU. We note that both of the baselines are evaluated off-the-shelf.
| model | Recall@1 | PP IoU |
|---------------------------|-------|-------|
| Anyloc | 97.1 | 34.2 |
| Salad[2] | 97.1 | 35.6 |
| AoMSG-4 | 98.3 | 42.2 |
We would like to emphasize that the baseline methods we proposed (AoMSGs) are designed to be straightforward and easily extensible. Future work can incorporate more advanced techniques and insights from the VPR and object association fields, including but not limited to the Sinkhorn algorithm[3] used in the Salad paper. It would be exciting to explore more ways of combining the wisdom in both place and object association fields for MSG in future work.
[1] Berton, Gabriele, et al. "Deep visual geo-localization benchmark." CVPR. 2022.
[2] Izquierdo S, Civera J. Optimal transport aggregation for visual place recognition. CVPR 2024.
[3] Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26, 2013.
__Q5__: Do the authors want to modify the title a bit?
__A5__: Thank you for the question. We chose “Multiview Scene Graph” as the title because we believe it best describes the contribution of this work while maintaining conciseness. A multiview scene graph refers to our proposed task, where a place-object graph is inferred by associating unposed images across views, predicting their topological connectivity, and associating objects across multiple views simultaneously. So multiview association is the key characteristic and essence of the proposed scene graph and the task and we believe that naming the work “Multiview Scene Graph” highlights its core focus and makes it both memorable and easy to understand.
However, we are more than happy to consider any advice on better titles from the reviewer.
__Q6__: What Object Detector is used in figure 2?
__A6__: Any detector can be used as long as it proposes object bounding boxes. The detector is frozen and bounding boxes are the only thing our method needs from the detector. For training, we use groundtruth detection for efficient supervision. We also used Grounding DINO as our off-the-shelf object detector for open-set detection and it shows consistent performance order for all the methods.
__Q7__: The README is a void file in the supplementary materials.
__A7__: Thanks for pointing it out. We deleted README while trying to keep anonymity. We will upload the complete code base and model checkpoint when our paper is camera-ready.
__Q8__: typos.
__A8__: We thank the reviewer for pointing this out. We will fix all typos for the camera-ready version.
---
Rebuttal 3:
Title: Inquiry About Any Additional Concerns
Comment: Dear reviewer o1XP
Thanks for your reviews, which are valuable for improving the overall quality of this manuscript.
To address your concerns, we have added discussions and explanations of the definition and format of our proposed Multivew Scene Graph (MSG) and how it is different from other existing formats of scene graphs. We have also discussed the potential applications and benefits of the MSG.
For the experiments, we appreciate your constructive suggestions and have conducted an additional baseline based on the recent VPR literature you recommended. We will also add these references to the revised version. We would also like to thank you for suggesting the idea of using VLM. We like this idea and conducted a pilot study. We have presented the results and discussion in the response above as well as in the attached PDF file.
We have also provided answers and clarifications to the other questions of your concerns. We thank you for pointing out the typos and we promise to fix them in the revised version. We have also explained our motivation for choosing the title and we would love to hear from you for any advice on this.
Could we kindly ask if our responses have addressed your concerns and if you have any new questions? Thanks for your time and effort in reviewing our submission.
Authors of Submission ID 18923
---
Rebuttal Comment 3.1:
Title: Thank for the rebuttal
Comment: Hi, I am happy to see your rebuttal. I hope you can include adjustment in your final paper. I would like to change the rate to BA.
---
Reply to Comment 3.1.1:
Title: Thank you
Comment: Thank you for your valuable reviews and we are very happy to see your positive decision. Yes, we appreciate your constructive advice and we will surely include these adjustments in the revised final paper. | Summary: The paper proposes a novel task: generating scene graphs from unposed RGB images as well as a new benchmark based on ARKitscenes. To achieve this, the paper proposes a novel approach that use off-shelf image encoder and detector that are frozen and only train a decoder that takes in the features learned from frozen image encoder and detector to generate scene graphs. The author ablates the pre-trained weights for backbones and show a promising results compared to baselines.
Strengths: 1. The author proposes a new benchmark based on ARKitscenes, and a novel method trained on those, showing promising results.
2. the authors also ablate the influence of different pre-trained backbones.
Weaknesses: 1. The task is not that novel, see EgoSG: Learning 3D Scene Graphs from Egocentric RGB-D Sequences, where a very similar task is propose, as in, generating scene graphs from unposed RGB-D images. And from RGB to RGB-D, a off-shelf depth detector can be leveraged.
2. it is unclear, how was ViT pre-trained, it is using MAE?
3. the authors use a self-made baseline, i am wondering if other baselines could be compared to, such as RGB + depth predictor to simulate a RGB-D methods, or using the visual clues for pose estimations and using existing methods as baselines.
4. Lack of explanation of design of choices: why explicit leveraging detection predictions by cropping features? What if you just feed in the image features? How much does "explicit leveraging detection results" help?
5. Seeing the gap between w/ GT detection w/ GDino, it seems better detector is more important than the proposed detector itself, have you tried more powerful detectors?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. do you need the unposed images must have overlap with each other?
2. Have you tried SAM? and do you think DinoV2 performs best is because of the detector's feature is closed to image encoder, when using DINOv2 as image encoder?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: it is not that hard to get poses from either sensors or visual cues nowadays.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__:The task is not novel, see EgoSG.
__A1__: Thank you for the question. We will cite EgoSG as a related work. However, while both carry “scene graph” in the names, the MSG task is completely different from the EgoSG and other existing 3D scene graphs.
Firstly, the definition of the graph is different. The existing scene graph connects objects and describes their relationships with the graph edges. We propose MSG as place-object graphs, where edges between images reflect the topological connectivity, and edges between objects and images reflect the object locations after resolving the object association. As reviewer 2 (AiA2)
acknowledges, associating objects across frames and long periods without observation is a challenging and important task that does not truly require 3D information. We believe introducing and studying the multiview unposed scene graph will provide a meaningful topological scene representation beneficial to other downstream tasks in computer vision and robotics, such as loop closure, SfM[1], visual SLAM[2], visual navigation[3], and mobile manipulation.
Secondly, the input data format is different. EgoSG and other 3D scene graphs require having 3D scene representations before obtaining the scene graphs, which is already a very good starting point for many vision tasks and is not always available. Our proposed MSG focuses on a different challenge. The input is purely unposed RGB images without any additional spatial knowledge. MSG exactly aims to extract topological spatial knowledge from these inputs. This allows it to work on arbitrary videos when depth information is not given or needs to be estimated and lays a good foundation for reconstructing 3D information from 2D inputs. While estimating depth and poses to convert the problem into RGB-D could improve performance, it does not solely solve our task as the estimation itself can introduce errors. As suggested by the reviewer(discussed in the following response Q3A3), we have tried a recent strong pose estimation method [4] and found the performance less satisfactory due to the accumulated drifting pose errors. The loop closure provided by the MSG will in fact help better estimate poses.
Additionally, our MSG will be complementary to the existing scene graphs. Edges for object-object relationships can be a seamless add-on to extend the MSG with more semantic information. Therefore, we believe our work adds a meaningful contribution to the scene graph community.
In conclusion, we believe this is a novel task different from the previous scene graphs and also important for many downstream applications in computer vision and robotics. This paper contributes by introducing the MSG task, the set of new baselines, and benchmarks to stimulate future research.
[1]Xiao, Jianxiong, Andrew Owens, and Antonio Torralba. "Sun3d: A database of big spaces reconstructed using sfm and object labels." ICCV. 2013.
[2] Salas-Moreno, Renato F., et al. "Slam++: Simultaneous localisation and mapping at the level of objects." CVPR. 2013.
[3] Chaplot, Devendra Singh, et al. "Neural topological slam for visual navigation." CVPR. 2020.
[4] Barroso-Laguna, Axel, et al. "Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences." CVPR. 2024.
__Q2__: how was ViT pre-trained:
__A2__: Thank you for the question. We will update the manuscript with this information. For the ViT model, we are using the base model with the default weights pretrained on ImageNet-21K. It is not a masked auto-encoder (MAE). MAE is also an important image pretraining model. We would like to note that any visual encoders that can produce feature maps or tokens are adaptable to our method. However, we did not sweep through more available choices as we found that DINOv2 gives significantly better performance.
__Q3__: other baselines with existing methods.
__A3__: Thank you for the suggestion. We agree adding more baselines linked to existing tasks and methods will benefit our work. Therefore, We have added a new baseline based on pose estimation. We are happy to explore other possible directions in the future. Specifically, We use a pretrained Mickey model [4] and provide the image data sequentially in temporal order and also the corresponding intrinsics ( not provided to MSG). We compute relative poses and convert them to absolute poses w.r.t the first frame. Then use the same threshold as the data set hyperparameter to obtain the p-p adjacency matrix and P-P IoU. The results are listed in the table below. We see that the performance is close to that of Anyloc. Recall@1 is easily 1.0 since the data is given in temporal order, and consecutive frames are trivially recalled. But its P-P IoU is inferior to Anyloc, suggesting the drifting issue of estimated poses and the need for loop closure, which is precisely MSG’s strength.
| model | Recall@1 | PP IoU |
|---------------------------|-------|-------|
| Anyloc | 97.1 | 34.2 |
| Mickey[1] | 100 | 33.1 |
[4] Barroso-Laguna, Axel, et al. "Matching 2D Images in 3D: Metric Relative Pose from Metric Correspondences." CVPR. 2024.
__Q4__: why explicit leveraging detection predictions by cropping features? What if you just feed in the image features? How much does "explicit leveraging detection results" help?
__A4__: The task of MSG involves associating objects from images across views. Therefore, we leverage detection bounding boxes to crop features to obtain features of object appearances. If directly feeding the image features as a whole, it would be difficult and unnatural for object association since the model would have no prior knowledge of what are the interested objects. So explicitly leveraging detection results will help the model locate the object appearances in the input images and associate them to build the multiview scene graph.
**Continue in the official comment.**
---
Rebuttal 2:
Title: Continue to answer reviewer's question
Comment: __Q5__: Seeing the gap between w/ GT detection w/ GDino, it seems a better detector is more important than the proposed detector itself, have you tried more powerful detectors?
__A5__: Our MSG task and model tackle object association rather than object detection. Association happens after object detection and refers to identifying whether the object detected in one image is the same object detected in another. While it is true that a better detector will enhance the overall performance of MSG, it is inaccurate to say that the detector is more important than MSG and object association itself, as they focus on different tasks.
The gap between ground truth detection and Grounding Dino (GDino) is expected since GDino is used off-the-shelf and in a zero-shot manner. Our aim is to demonstrate that although MSG uses ground truth detections during training, any available detector can be incorporated into the framework, and the performance order remains consistent. Thus, we anticipate that a more powerful detector than GDino or one fine-tuned on the same data should yield better results.
The task focuses on associating places and objects, and we believe having ground truth detections and consistent results with the commonly used GDino is sufficient to convey our contributions. We appreciate the reviewer’s suggestion and will consider training or benchmarking detectors in future explorations.
__Q6__: if overlapped unposed images are needed:
__A6__: No, we do not make such overlapping assumptions for the unposed images. The input of the MSG task is just a set of unposed RGB images. The model learns to figure out whether the images are taken from the closeby positions and associate the object appearances, i.e. whether they are the same object. The input of the entire process is simply unposed RGB images, with no overlapping required.
__Q7__: Have you tried SAM? do you think DinoV2 performs best because the detector's feature is close to the image encoder when using DINOv2 as the image encoder?
__A7__: Yes, our baseline DEVA uses SAM to obtain object segmentations and perform video segmentation by tracking these object segments across the frames. Detector’s features are not used in our model. The detector only provides bounding boxes. Grounding Dino is different from DINOv2 although bearing a similar name. The former is a Transformer-based detection model and the latter is a Transformer-based self-supervised vision pretraining model. DINOv2’s features have shown great performance in downstream tasks, such as in Anyloc. In this work, we have a similar observation that DINOv2 as the encoder backbone produces the best performance. We think the reason for its strong performance is possibly due to its vast and diverse pretraining data.
---
Rebuttal 3:
Title: Inquiry About Any Additional Concerns
Comment: Dear reviewer 9wC7
Thanks for your comments, which are valuable for improving the overall quality of this manuscript.
To address your concerns, we have provided a thorough discussion and explanation of similarities and differences between our proposed Multivew Scene Graph (MSG) and other existing formats of scene graphs such as EgoSG. We really appreciate your thoughts and will include this in our references as well as the discussion of it in the related works section in the revised version. We have also conducted an additional baseline based on pose estimation for the place recognition part of MSG, thanks to your suggestion.
We have also provided a thorough explanation to the other questions of your concerns, such as the use of the detectors in our proposed method, the design choices, encoder choices, and image choices.
Could we kindly ask if our responses have addressed your concerns and if you have any new questions? Thanks for your time and effort.
Authors of Submission ID 18923 | Summary: The manuscript proposes the problem of inferring a scene graph from unposed images of a space. The key distinguishing factor to previous work is using multiple frames (as opposed to a single frame) and not requiring poses and depth (like for typical metric 3d scene graphs). The scenegraph is defined as a the set of images connected by edges implying topological connectedness and the set of objects. Objects are connected to place nodes based on where they are observed from. The manuscript also proposes a sensible baseline that outperforms more basic baselines for MVSG generation as well as methods that just work on place recognition or just on object tracking.
Strengths: Associating objects across frames and long stretches of time without observation is a challenging and important task that does not truly require 3D information (but is greatly helped by it). Introducing and studying the relaxed problem of multi-view unposed scenegraph generation has the potential to yield novel solutions that can be deployed on any video stream. And of course methods that utilize 3d information (say from running some deep slam on the video first) might yield the best performance. It puts the different approaches onto the same benchmark which is useful for the community.
Given the 3D training data (to supervise associations correctly) the proposed model is conceptually and implementation wise simple and leverages state of the art vision backbones. Based on the evaluation this simple model is also effective in generating scene graphs from unposed images. The SepMSG baselines are useful in convincing about the usefulness of the AoMSG model. It is interesting to see that this baseline model outperforms existing tracking methods by a large margin.
Overall the paper is well written and the illustrations are high quality to support the understanding of the method and ideas.
Weaknesses: The constraint of not requiring poses and theoretically enables the method to be run on any video sequence. However there are no examples of even qualitative experiments on some arbitrary video sequences. I think this should be easy to add and would help support the idea that unposed-image posed scene graphs are useful. The other aspect that is not addressed in the manuscript is how to train the proposed system without access to poses and 3d surfaces or 3d bounding boxes of objects in order to supervise the model. It is unclear if the place localizer just learns to replicate the pose thresholds used during training (see question).
Technical Quality: 3
Clarity: 3
Questions for Authors: - l 204 whats the "benchmark VG pipeline" ?
- SepMSG: I assume you are also using the same 2d encoder as for AoMSG? Could you clarify.
- The place localization is trained using relative pose thresholding to obtain positive samples. I am curious if that means the features just learn those spatial thresholds? During inference what is the relative pose distribution of the connected places? And the disconnected ones?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are addressed adequately in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: Example of qualitative experiments on some arbitrary video sequences.
__A1__: Thank you for the great suggestion! We have self-recorded an unposed video with an iPhone in a household scenario and run our trained AoMSG model with a pretrained Grounding DINO detector on it. In the attached PDF file we show some resulting images with object instance id labeled. Results suggest that our model is able to obtain sensible outputs on arbitrary videos outside of the dataset. We have also made a simple frontend tool for interactive visualization with MSG, which we show a screenshot in the PDF. We will release the tool along with the full source code when camera-ready.
__Q2__: How to train the model without poses or other 3D annotations.
__A2__: Thank you for the question. Like many other new tasks, certain requirements of annotation exist in the current MSG task. We rely on explicit camera poses and 3D object annotations in the ARKitScene to generate our dataset since marking the same places and objects requires knowing camera poses and object instances. We note that any 3D dataset or environment providing such annotations can be leveraged for the MSG task and there are many large-scale 3D datasets available for use such as ScanNet [1], ScanNet++[2], and HM3D[3]. Nevertheless, we acknowledge that this is a limitation and in the future, we will try to explore training on a combined recipe of different datasets for strongly generalizable performance or explore training with less annotation.
[1] Dai, Angela, et al. "Scannet: Richly-annotated 3d reconstructions of indoor scenes." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
[2] Yeshwanth, Chandan, et al. "Scannet++: A high-fidelity dataset of 3d indoor scenes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Ramakrishnan, Santhosh K., et al. "Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai." arXiv preprint arXiv:2109.08238 (2021).
__Q3__: What is the benchmark VG Pipeline
__A3__: We thank the reviewer and apologize for the confusion. VG stands for Visual Geo-localization[1] which refers to the same task of Visual Place Recognition (VPR). This pipeline is referenced as [8] in the manuscript. We will revise this line in the manuscript for better clarity.
[1] Berton, Gabriele, et al. "Deep visual geo-localization benchmark." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
__Q4__: Encoder of SepMSG
__A4__: Thank you for the question. To clarify, SepMSG baselines use the same image encoder as the AoMSG methods. We use pretrained encoders and keep them frozen while training all the methods. Specifically, SepMSG-direct refers to directly evaluating the output feature vector of the encoders, SepMSG-linear/MLP trains linear probing or MLP on top of the encoders and AoMSG trains the Transformer-based decoder. We empirically find that the pretrained DINO-base model produces the best performance in direct evaluation, so all the experiments in Table 1 use it as the baseline. Below is a table for the detailed performance of different backbone choices when used in SepMSG-direct. We will revise the manuscript with better clarification and the results.
| Encoder for SepMSG-direct | PP IoU | PO IoU |
|---------------------------|-------|-------|
| ResNet 50 | 26.30 | 45.23 |
| ConvNext base | 27.62 | 46.29 |
| ViT base | 28.86 | 46.04 |
| DINO small | 29.54 | 50.37 |
| DINO base | 30.94 | 54.78 |
| DINO large | 31.02 | 50.02 |
__Q5__: if features just learn those spatial thresholds? Calculate the relative pose distribution
__A5__: Thank you for the great suggestion. Annotating based on spatial thresholds is a conventional setup in visual place recognition (VPR) tasks and we chose to follow this convention. While training the features, we adopt a metric learning approach instead of simple binary prediction. During training, embeddings positive pairs are drawn closer in terms of cosine similarity, while negative pairs are pushed further. We also monitor the training dynamic using the Total Coding Rate as described in [1] to alert possible feature collapse. Thus, the learned features are not confined to just those spatial thresholds.
We have also added 4 plots for relative pose distribution (both orientation and translation) in the attached PDF file. They are obtained by taking the model’s predictions on the test set and calculating the histograms of the relative orientation and translation differences of the predicted connected and not-connected places. From the plots, we can see that while the model makes some mistakes (currently the PP IoU is around 0.40), the distribution difference between the connected and the not-connected places is clear. There is a clear yet smooth separation at the spatial thresholds on all the plotted distributions.
[1] Yu, Yaodong, et al. "Learning diverse and discriminative representations via the principle of maximal coding rate reduction." Advances in neural information processing systems 33 (2020): 9422-9434.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my questions and comments. I appreciate the histograms showing the learned relative pose separation based on semantic features.
The qualitative experiments on iPhone video is good to see although the limited set of frames makes it hard to judge how well the model was able to re-associate and track objects. It seems there are 12 objects - it would be good to see which ones were identified?
I also had a look at the other reviews; I do agree that EgoGS using RGBD is a related work worth citing. I do think that the work of the authors has a potential for more impact since it can be run on any video sequence without additional processing (as demonstrated by the iPhone video experiment). While we could run a monodepth network, it is unclear how the typical problem of inferring consistent scale will impact the performance of EgoGS with monodepth.
I was also excited to see the VLM comparison. It supports the need for dedicated approaches (at least thus far) that can run on long video sequences.
All in all I am happy to stick with my rating.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer AiA2,
Thank you for your response. We are glad that you appreciate our work and the additional experiments. For the qualitative experiments, we have generated a whole video with object bounding box visualizations that can show the tracking and re-association effects. But we could only show a few frames in the attached PDF due to space limit. We definitely agree that more frames will be a better visualization and We are releasing more visualization videos on a website along with the interactive visualization tool after the anonymous phase.
As for showing which objects are identified, that is an essential feature and we currently support it in the front-end visualization tool in the following ways:
- a) the orange nodes are object nodes, and hovering the mouse over it will show the object category and an object ID corresponding to the per-frame visualization, which has an object category and an object id labeled to the bounding box.
- b) when clicking, some edges will be highlighted in blue (as you can see in the attached screenshot in the PDF). Those between the orange nodes and the blue nodes indicate which and where an object is identified. We are excited to release it for more interactive visualization and we are super happy if you could provide more suggestions so that we can make it better.
Please do not hesitate if you have more questions or suggestions. Thank you again for your time and efforts. | Summary: The paper introduces the novel task of Multiview Scene Graph generation from unposed RGB images, whereas this type of scene graph encodes the notion of 'places', i.e., images from spatially close locations, and detected objects as graph nodes. The motivation is to combine spatial reasoning of object association and place recognition into a joint representation.
An evaluation metric based on the edge adjacency is introduced along an initial baseline method (AoMSG, based on a Transformer architecture and off-the-shelf image encoder and object detector) for this new task, which is compared against sub-task specific methods on the ARKit scenes dataset.
Strengths: The paper presents and motivates a new task of building a topological scene graph combing place recognition and object association. An example application task (3D Reconstruction using Dust3r on sub-sets of input images retrieved from the MSG) which would benefit from the proposed MSG is described in the discussion section.
The proposed adjacency-IoU metric is simple, however the object-object "alignment" can play a dominant role.
Weaknesses: - The current task and proposed baseline model does not seem to allow for an adjustable "place" definition beyond the train set constraints (place = 1m and 1 rad).
- To give further insights about the implication of the proposed metric, it would be useful to provide failure case which results in low IoU scores, especially for PP IoU which is significantly lower than PO IoU.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Missing experimental details of Figure 3: How do the numbers provided in the graph relate to the numbers in Table 1, i.e., which model specifically was trained with the specific backbone?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The checklist is complete, including justifications for each item. Limitations of the work are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: __Q1__: The current task and model do not seem to allow for an adjustable "place" definition beyond the train set constraints
__A1__: Thank you for the question. The thresholds are dataset hyperparameters which is a conventional setup in visual place recognition (VPR) tasks and datasets [1, 2]. Since our MSG task involves place recognition, we choose to adopt this convention. VPR tasks require a model to classify whether two images are taken from the same place or not. The concept of “place” is a discretization of the space which is continuous by nature, which necessitates the use of thresholds in the VPR setup.
The proposed baseline model conducts supervised training, so it is possible to set different hyperparameters for different datasets and train the model on multiple datasets. To give a closer look at the effect the threshold has on the model, in the attached PDF file, we also included figures of relative pose distributions (orientation and translation) for the connected and non-connect nodes based on our model’s prediction. The figures show that instead of collapsing to only represent the fixed thresholds, the pose distributions have a clear yet smooth separation across the spatial thresholds.
[1] Lowry, Stephanie, et al. "Visual place recognition: A survey." ieee transactions on robotics 32.1 (2015): 1-19.
[2] Zaffar, Mubariz, et al. "Vpr-bench: An open-source visual place recognition evaluation framework with quantifiable viewpoint and appearance change." International Journal of Computer Vision 129.7 (2021): 2136-2174.
__Q2__: Give further insights about the implication of the proposed metric … provide failure cases that result in low IoU scores.
__A2__: Thank you for the suggestion. We have added visualizations of some failure cases in the attached PDF file. We observe that most failure cases can be attributed to having very similar visual features with relatively large pose differences (false positives), such as observing a room from two opposite sides, or having fewer similar visual features with relatively smaller pose differences (false negatives).
We note that the recall metric, conventional in VPR, is straightforward and effective for image retrieval against a database. However, it falls short in reflecting challenging false positives and negatives, especially when constructing a topological graph like MSG where the number of positives varies. This highlights the usefulness of our proposed IoU metric, which consistently evaluates the quality of the graph.
__Q3__: Missing details in Figure 3.
__A3__: Thank you for pointing it out. We made a mistake with the numbers and y-axis. We have uploaded a new figure with more information in the attached PDF file to replace the original one. The numbers marked as *direct* are obtained by directly evaluating the pretrained encoders corresponding to the SepMSG-direct baseline. Those marked as *AoMSG* are obtained by training the *AoMSG-4* model on top of the frozen encoders, following the same setup as the main experiments in Table 1 in the manuscript. We chose the DINOv2 base as the default encoder for our main experiments as it gives the best performance when evaluated directly. We will revise the manuscript with the updated Figure 3.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
thank you for providing the additional results regarding failure cases and their interpretation - this helps in understanding the metric and potential cause of errors in applying the method.
Following the initial question of fellow Reviewer o1XP and the comment AiA2, I was also excited about the provided VLM comparison and the Author's effort in running such an experiment given the limitation (e.g., context length) of current public VLM services.
I'm happy to stick to my initial rating.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Dear reviewer qE49,
Thank you for your valuable reviews and we are pleased to see your positive decision. We really appreciate your constructive suggestions and we will include them in the revised version. | Rebuttal 1:
Rebuttal: We thank our reviewers for their encouraging comments, helpful suggestions, and insightful questions.
**Acknowledgements**.
- All reviewers acknowledge our novelty.
- R1 qE49: “The paper introduces the novel task…”.
- R3 9wC7: “The paper proposes a novel task…” “proposes a new benchmark … a novel method … showing promising result”.
- R4 o1XP: “The paper introduces a new scene graph format…”.
- R2 AiA2 further acknowledges that our posed MSG “is a challenging and important task” and “is useful for the community”.
- Several reviewers also think the paper is well-written with good presentation quality.
- R2 AiA2: “Overall the paper is well written and the illustrations are high quality to support the understanding of the method and ideas.”
- R3 9wC7 rates “Presentation: 4: excellent.”
We thank all the reviewers for their appreciation of this work.
**Questions and suggestions**.
Reviewers’ questions and suggestions are very constructive. We sincerely thank them for their advice in making the work of better quality.
- R1 qE49 and R2 AiA2 suggest adding additional analysis of the relative pose distributions and failure cases. We added these results in the attached PDF file and provided our thoughts and analysis in the responses.
- R3 9wC7 and R4 o1XP suggest additional baselines. We implement them and report the results and analysis in the PDF file as well as in the responses.
- R3 9wC7 and R4 o1XP also raise questions regarding the similarities and differences between our proposed Multivew Scene Graph (MSG) and other existing formats of scene graphs, to which we explain the difference in the definition and format of the graph, the input data, and the downstream applications. We also note that the MSG is complementary to the existing scene graphs. We answer the question in more detail in the corresponding responses.
**Attached PDF file**.
In total, we collected reviewers' constructive feedback and added the following 6 qualitative and quantitative experiments:
1. An analysis of relative pose distribution from the model’s prediction.
2. Provides failure cases which results in low IoU scores.
3. A qualitative real-world experiment on self-recorded videos using an iPhone.
4. Additional baseline using the most recent pose estimation method.
5. A pilot study of using VLM to try to solve the proposed MSG task.
6. Additional baseline using the most recent VPR method.
Please find the details in the PDF file and the responses. We will also revise the manuscript to reflect these experiments when the paper is camera-ready.
**Typos and revisions**.
We thank all the reviewers for pointing out the typos and confusion. Per the rules of the rebuttal session, we have marked all the typos and revisions suggested by the reviewers and will revise the manuscript when camera-ready.
Pdf: /pdf/8df48ebd7512c48fafe1d066f2fb11e8ec547326.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Shape In-distribution Feature Space for Out-of-distribution Detection | Accept (poster) | Summary: This paper looks to perform OOD detection via distributional representation learning rather than assume some pre-specified distributional form for the ID data. Their motivations are that there can exist inconsistencies between the assumptions on the ID distribution from prior work and the actual unknown ground truth distribution.
They consider learning a feature space to approximate a mixture distribution over the ID data. The authors propose an online approach to approximate the normalization constant over the mixture distribution.
The authors also formulate this approach via a provably convergent EM algorithm to improve training stability. Specifically, they use latent variables as in-distribution classes, using Bayesian inference to derive the posterior distribution over latents given observed data (E-step). Then, the authors maximize the ELBO (which they also prove is bounded, as during each iteration of the algorithm, this value increases).
The authors empirically demonstrate the improvement of this approach over alternatives on average across multiple datasets when training on CIFAR10 and CIFAR100. They further show strong performance in hard OOD detection settings, where the goal is to detect data from CIFAR10 (and other datasets) when training on CIFAR100.
Strengths: The authors show strong empirical results in average, compared to the considered baselines.
Their approach also circumvents the need for pre-specified distributional forms for the ID data, which (as the authors note) can be an issue in practice.
Weaknesses: No apparent weaknesses.
Technical Quality: 4
Clarity: 3
Questions for Authors: How quickly does this method converge? Does the EM algorithm require a comparable number of steps or computational complexity to the other considered approaches?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer's dedicated time and effort in evaluating our work. In response to the insightful comments provided, we have provided detailed responses to each point raised, hoping that our responses can adequately resolve your concerns. Please find our responses below.
**Q.1.** How quickly does this method converge?
**A.1.** Thanks for your insightful idea. As can be seen from the loss trajectory shown in the uploaded PDF file, training with the same epochs as prior works [a,b] is sufficient for our method to converge. We will highlight the nature of our method following your valuable comments.
[a] Learning with Mixture of Prototypes for Out-of-Distribution Detection, ICLR 2024
[b] How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection, ICLR 2023
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thank you for the clarifications in the rebuttal! I'll maintain my score as an accept.
---
Reply to Comment 1.1.1:
Title: Thanks for your feedback
Comment: Dear Reviewer #WmsK,
We're grateful for your quick feedback and deeply appreciate your consideration in keeping the score at 7 Accept.
We remain open and ready to discuss any more questions or suggestions you might have. Your constructive comments have significantly improved the quality of our work.
Best regards and thanks,
Authors of Paper #8706 | Summary: The paper provides a novel approach called DRL to help optimize post-hoc OOD detection methods by explicitly shaping the ID space during pre-training. In particular, DRL is defined through an Expectation-Maximization algorithm with alongside a structured mini-batch setting. The resulting DRL is shown to have strong empirical performance across multiple OOD detection methods and benchmarks. Additionally, the authors also provide a rigorous theoretical setup for justifying the DRL optimization schema.
Strengths: Strengths:
- The paper provides a novel method for dealing with the problem of ID space shaping for OOD detection which is often overlooked in OOD detection.
- DRL shows strong empirical performance across multiple high-resolution benchmarks and traditional CIFAR benchmarks.
- The paper also provides a rigorous set of theoretical analyses to help better understand the underlying DRL method.
Weaknesses: Weakness:
- A small concern of the reviewer is the formatting and structure of the paper. In particular, given the dense nature of the work, the reviewer would highly encourage some additional context to help introduce the reviewer to the DRL method.
Technical Quality: 4
Clarity: 3
Questions for Authors: The reviewer has some confusion regarding the necessity of the sequential sampling alterations that the authors noted. In particular, the reviewer would like to get some more clarification regarding the inconsistency issues noted in Section 3.4.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed limitations to the methodology as well as any negative societal impact. Additionally, the reviewer does not foresee any potential negative impact resulting from this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your dedicated time and effort in reviewing our work. In response to the valuable comments provided, we have provided detailed responses to each point raised, hoping that our responses can adequately address your concerns. Please find our responses below.
**Q.1.** More clarification regarding the inconsistency issues
**A.1.** We kindly note that the inconsistency stems from that, under the batch-based training scheme, the model used in the E-step of the $m$-th iteration is trained without the data used in the M-step of the same iteration. In particular, the prediction $p_{\boldsymbol{\theta}}(k|\mathbf{x})$ is only updated once per training epoch while the discriminative model $f_{\boldsymbol{\theta}}$ keeps updated throughout mini-batches (iterations) in the training. | Summary: The paper proposes an in-distribution (ID) modeling approach, termed distributional representation learning (DRL), which enhances the convergence of ID latent feature learning. The authors include theoretical proof to corroborate the proposed approach. They have conducted a few experiments on standard OOD benchmarks and explored different OOD cases.
Strengths: 1. The theoretical contribution enhances the idea of representation learning in the latent space. This theoretical proof helps explain related previous works that have proposed several ID modeling techniques.
2. The proposed method includes practical implementation techniques in the methods corresponding to the proposed theory.
3. The presentation is generally clear and easy-to-understand to readers.
Weaknesses: 1. Although the proposed data distribution modeling method is claimed to mitigate the distribution assumptions in the previous works, the model convergence still relies on prior works' assumptions such as vMF. Even though the techniques with underlying assumptions do not directly influence the geometry of the latent space, the proposed method is still not viable without these techniques. Other optimization strategies without the assumptions might be worth considering.
2. The improvements in the standard benchmarks are not significant and convincing. A few previous works' results are not consistent with the number in the papers. For example, using CIDER on the LSUN dataset obtains 30.24 FPR while 16.16 FPR was reported in the original paper for the CIFAR-100 ID data experiment. The average FPR for Imagenet-100 is reported to be 25.9 in the original paper and GitHub repository which is much lower than the proposed method with 30.31 FPR in Table 3.
3. The ablation study and analysis of the proposed method can be improved. Figure 1(a) compares the effects of l2-normalization which has been widely known in most previous feature-based OOD detection methods. For Figure 1(b) observing the difference of sequence sampling, additional convergence analysis should be provided, such as loss trajectory or convergence time.
4. As described in line 189, the ELBO convergence issue might be a concern. Without sufficient evidence, it would be hard to justify the proposed method can completely avoid this kind of issue. As the proposed method is still a sampling strategy, the uncertainty of occasional unconverge situation might still occur.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. To perform online approximation, I would like to know if the batch size is an important parameter.
2. The description of $g_\theta$ in equation 7 (line 138) seems missing.
3. In Table 4, the proposed methods can hardly be comparable to the PALM in the ImageNet-Resize OOD detection results. Are there any reasons?
4. In Table 5, unsupervised settings seem to yield more failure cases, such as SVHN and LSUN datasets for the proposed method compared to PALM. Can the authors explain a bit about the results?
5. Why the authors did not include CIDER in Table 5?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The scope of OOD detection is widespread but the authors only consider simple and standard datasets. The experiments might introduce limitations to the study.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing our work. Below are detailed responses to your valuable comments.
**Q.1.** Although the proposed method is claimed to mitigate the assumptions in previous works, the model convergence still relies on prior works' assumptions (vMF). Even though the assumptions do not influence the geometry of the latent space, the proposed method is still not viable without these techniques.
**A.1.** We apologize for the misunderstanding. We will add the following explanation to highlight our motivation.
- Previous work assumes $p_{\theta}(x)$ (denoted as p_x in the left) to be the vMF distribution without theoretical guarantee. The mismatch between the assumed and learned distributions will degrade detection performance.
- Our method, with the derived ELBO (Eq. 10), enjoys the theoretical guarantee that the learned p_x conforms to vMF distribution. Here, introducing vMF is to compute the normalization constant in a closed form.
- Our method can shape the learned p_x to fit various distributions, i.e., it is viable without the help of vMF. For example, if we, following [a], define $h(z,k)$ to be a Bregman divergence [b], the resulting p_x turns to be an exponential family distribution, which is left as our future work.
[a] ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection
[b] Clustering with Bregman Divergences
**Q.2.** The improvements in the standard benchmarks are not significant and convincing.
**A.2.** We kindly note that the reported results of most compared baselines in all tables of this paper originate from PALM [c]. We are sorry for not explicitly illustrating this to make the reviewer confused.
- While the reimplemented CIDER in PALM performs worse than the original CIDER on LSUN with CIFAR-100 as ID data, it is advantageous for the former to have better-averaged results across 5 OOD datasets than the latter. This is because OOD data tends to be from various domains rather than a specific one.
Following PLAM, the experiments on ImageNet-100 are conducted on **ResNet-50**. However, the CIDER results reported on the GitHub repository are based on **ResNet-34** and thus can not be used for comparison. Thus, we **never** undergrad existing works.
[c] Learning with Mixture of Prototypes for Out-of-Distribution Detection
**Q.3.** The ablation study and analysis can be improved. Additional convergence analysis should be provided.
**A.3.** Thanks for your constructive advice. We have added the loss trajectory to the revision, as shown in the PDF file.
**Q.4.** It would be hard to justify the proposed DRL can completely avoid this issue. As it is still a sampling strategy, the uncertainty of occasional unconverge situations might still occur
**A.4.** We will add the following explanations.
+ The proposed DRL is formulated into a provably convergent EM framework. The inconsistency issue that occurs in the batch-based training scheme originates from the nature of the EM framework rather than our methodological design: during batch-based training scheme, $p_{\theta}(k|x)$ is only updated once per epoch while $f_{\theta}$ keeps updated throughout batches (iterations). The EM framework has been widely used in deep learning literature [a,b,c,d,e,f,g,h], where inconsistency is an open problem.
+ Our key idea is to update the model in the current iteration and take data from the upcoming iterations into consideration. This motivates sequential sampling to have $B\_m=(B\_m^{pre},B\_m^{next})$, where $B^{next}\_{m-1} = B^{pre}\_{m}$. In this way, by optimizing classification objective over $B\_m^{next}$ encourage $p\_{\theta_{m}}(k|x)$ to be close to the ground truth, the estimated $q\_{m+1}(k|x) = p\_{\theta_{m}}(k|x)$ can be reliable for $\mathcal{B}\_m^{next}$.
+ Our key contributions show that one can deterministically shape p_x to conform to the known distribution defined via Eq. (7) to avoid the unalignment between the learned and assumed distributions. We also admit that the inconsistency issue is a fundamental topic, but addressing it with a theoretical guarantee is still an open problem in the literature and out of the scope of our work.
[a] Knowledge Condensation Distillation
[b] MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
[c] Prototypical Contrastive Learning of Unsupervised Representations
[d] Joint Unsupervised Learning of Deep Representations and Image Clusters
[e] Deep Clustering for Unsupervised Learning of Visual Features
[f] Learning representation for clustering via prototype scattering and positive sampling
[g] Stable Cluster Discrimination for Deep Clustering
[h] Unsupervised Visual Representation Learning by Online Constrained K-Means
**Q.5.** To perform online approximation, I would like to know if the batch size is an important parameter
**A.5.** According to the definition in Line 159, the online approximation in Eq. (9) does not leverage data statistics within batches, i.e., it is independent of the batch size.
**Q.6.** The description in eq 7 (line 138) is missing
**A.6.** Thanks for pointing out the problem caused by typos. We have fixed this mistake:
$p_{\theta}(x|k)=\frac{h(z,k)}{\Phi(k)},\quad \Phi(k) = {\int_{z \in \mathcal{Z}} h(z, k) \, d z}$, where $\mathcal{Z}$ is the latent feature space.
**Q.7.** In Tables 4 and 5, the proposed methods can hardly be comparable to PALM
**A.7.** We suspect this is because PLAM introduces more than one prototype for each ID class to learn better features regarding discriminating the aforementioned OOD dataset from the ID dataset. We kindly note that our method achieves a better average performance than PALM.
**Q.8.** Why is CIDER not in Table 5?
**A.8.** Unlike PALM proposed for unsupervised and supervised settings, CIDER is proposed for supervised settings. Thus, it is unclear how to extend CIDER to the setting. Thus, we exclude CIDER from Table 5.
---
Rebuttal 2:
Title: The window for discussion is closing.
Comment: Dear Reviewer #7foW,
### **The window for discussion is closing.**
Thanks very much for your great efforts in reviewing and valuable comments. The discussion will end soon. At this final moment, we would sincerely appreciate it if you could check our responses and new results regarding your concerns.
**1. _The proposed method is still not viable without existing techniques._**
We apologize for the misunderstanding. We have provided explicit explanations highlighting that the assumption used by previous methods is the motivation for our method instead of the basis for our work.
**2. _The improvements are not significant and convincing_**
We apologize for the misunderstanding. We have provided detailed explanations for the experimental results, i.e., we used results reported in the literature and the different results from the difference in model architectures.
**3. _Explanations for sampling strategy_**
We have provided detailed explanations highlighting the motivation of the proposed sampling strategy.
If you have any further concerns, we will respond instantly at this final moment. We would sincerely appreciate it if you could confirm whether there are unresolved concerns.
Best regards and thanks,
Authors of #8706
---
Rebuttal Comment 2.1:
Comment: Hello Reviewer 7foW,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, AC | Summary: The paper introduces an innovative learning framework, Distributional Representation Learning (DRL), designed to bridge the gap between network pre-training and density-based scoring strategies. DRL is formulated as a provably convergent Expectation-Maximization algorithm. Key contributions include the introduction of unnormalized class prototypes, which enhance the flexibility of the mixture model, and an online approximation of normalization constants, enabling end-to-end training. This framework represents a significant advancement in the field, offering both theoretical and practical improvements over existing methods.
Strengths: The theoretical framework presented formulates DRL as an Expectation-Maximization (EM) algorithm, offering a robust foundation for the learning process. Recognizing the difficulty in directly optimizing the log-likelihood function, the authors adeptly shift focus to the optimization of $\text{ELBO}(q, x; \theta)$. To address potential inconsistency issues in this optimization, they introduce a sequential sampling rearrangement technique, significantly enhancing OOD detection performance. Additionally, One of the key strengths of the paper lies in its innovative approach to handling normalization constants. Rather than imposing impractical constraints to make these constants
input-independent or known, the authors propose an online approximation method, enabling seamless end-to-end training. The approach substantially enhances out-of-distribution (OOD) detection performance on CIFAR-10 and CIFAR-100 datasets.
Weaknesses: - **Contradiction in Assumptions:** While the paper notes that existing methods often impose strict distributional assumptions due to the lack of prior knowledge about the learned feature space, it still enforces the underlying feature space to conform to a pre-defined mixture distribution. This approach appears to contradict its initial motivation.
- **Influence of Total Number of Classes (k):** The paper does not address the impact of the total number of classes (k) on computation and memory requirements. Including a detailed analysis of this aspect would provide a clearer understanding of the model's scalability and resource demands.
- **Performance Comparison with Other Density-Based Methods:** The paper lacks a clear comparison of DRL's performance with other density-based OOD detection methods.
- **Theoretical Proof for Sequential Sampling:** The paper lacks sufficient theoretical proof for the rearrangement of sequential sampling to ensure consistent optimization. Providing a rigorous theoretical foundation would strengthen the claims made in this regard.
- **Discussion on Hyperparameter $\beta$:** The paper does not include a discussion on the hyperparameter $\beta$. Including a comprehensive analysis of this hyperparameter, its impact on the model, and guidelines for its selection would be beneficial.
- **Equation Reference Correction:** In line 120, the reference to equation (7) should be corrected to equation (5).
- **Equation Mistakes:** In lines 176 and 183, there are errors in the equations that need to be addressed and corrected for accuracy.
- **Lack of Description for Table 5:** In line 288, under the section "Unsupervised OoD Detection," there is an insufficient description of Table 5. A more detailed explanation is required to clarify the contents and significance of the table.
- **Symbol Description:** In line 138, there is no clear description of the symbols in the formula. Providing detailed definitions and explanations of all parameters involved is essential for comprehensibility and reproducibility.
Technical Quality: 3
Clarity: 2
Questions for Authors: - The paper initially criticizes existing methods for imposing strict distributional assumptions due to limited prior knowledge of the learned feature space. However, it then introduces a pre-defined mixture distribution for the underlying feature space. Can you clarify how this approach aligns with the initial criticism and motivation of your method?
- How does DRL's performance compare with other density-based OOD detection methods? Can you provide competitive performance metrics and analysis to evaluate DRL's relative effectiveness?
- What is the impact of the total number of classification classes (k) on computation and memory requirements? A detailed analysis would help in understanding the scalability and resource demands of your model.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - The paper primarily explores a single realization of the non-negative density function and imposes a pre-defined mixture distribution on the underlying feature space, which seems to contradict its initial motivation.Further exploration of different realizations could provide a more robust evaluation of the proposed approach.
- The paper includes experiments for large-scale OOD detection primarily on the ImageNet-100 dataset. If extended to the larger ImageNet-1k dataset, there are concerns that inference could become significantly slower and memory usage substantially higher. Addressing these potential scalability issues would be important for practical applications on large-scale datasets.
- There are various writing and expression errors throughout the paper. Improving the clarity and precision of the writing would enhance the overall readability and comprehension of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your time and effort in reviewing. We have provided detailed responses below, hoping your concerns can be adequately addressed.
**Q.1.** Contradiction in Assumptions
**A.1.** We apologize for the misunderstanding and will provide explicit explanations highlighting that our method aligns with the motivation.
- As shown in section 2, existing feature-based and logit-based methods explicitly or implicitly assume $p_{\theta}(x)$ as the Gibbs-Boltzmann distribution and Gaussian mixture distribution, respectively. However, there is no theoretical guarantee to ensure these distributional assumptions hold under any discriminative models $f_{\theta}$ that are trained by minimizing the classification objective. This inconsistency would degrade OOD detection performance.
- Given that $p_{\theta}(x)$ is typically unknown after a discriminative model $f_{\theta}$ is trained by minimizing the cross entropy, our method proposes to deterministically shape $p_{\theta}(x)$ to conform to the known distribution defined via Eq. (7). This avoids the above unalignment issue. Namely, this leads to our motivation: is it possible to deterministically shape the ID feature distribution while pre-training a discriminative model? In this way, the defined distribution can be safely used for OOD detection since the involved distributional assumption can hold after training where Eq. (10) is minimized.
**Q.2.** Influence of class number (K)
**A.2.** Thanks for your kind suggestions. Our method's memory and time complexity linearly scales with K so that our method has the same scalability nature as the CE-based training.
- The parameters involved in our method are located in 1) backbone and 2) prototypes $\mu\_1,...,\mu\_K$. Since the backbone is orthogonal to our technical design, we omit the memory complexity of the backbone. Therefore, the memory complexity of our method is $O(K)$.
- Computing Eq. (14) for each x requires to compute $p\_{\theta}(x|k)$ for each $k$. By omitting the time complexity of the backbone and dot product, the time complexity of our method is $O(K)$.
- We conduct ImageNet-100 for large-scale OOD, aiming to keep consistent with prior works [a,b].
[a] Learning with Mixture of Prototypes for Out-of-Distribution Detection
[b] How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection
**Q.3.** Comparison with Density-Based Methods
**A.3.** While this paper has considered popular post-hoc density-based methods, including Energy, Maha, and CONJ, for comparison, as per your valuable advice, we compare to more baselines (Maxlogit [c] and GEM [d]) and report the averaged results across 5 OOD datasets used in this paper as follows:
|| FPR95 | AUROC|
| :-| :-|:-|
| [d]|47.38|91.03|
| [c] |33.55|90.97|
|Ours|11.58|97.83|
We compare it to the training-based density-based method [e], where an extra flow model is introduced on top of the pre-trained model. Here, following [e], we evaluate our method on CIFAR-10 and report the averaged results across 6 OOD datasets (5 used in this paper+LSUN-Resize)
| | FPR95 | AUROC|
| :-| :- | :-|
| [e]|16.26|97.19|
| Ours|13.19| 97.46|
[c] Scaling Out-of-Distribution Detection for Real-World Settings
[d] Provable Guarantees for Understanding Out-of-distribution Detection
[e] FlowCon: Out-of-Distribution Detection using Flow-Based Contrastive Learning
**Q.4.** Justification of Sequential Sampling
**A.4.** We will add the following explanations.
+ The proposed DRL is formulated into a provably convergent EM framework. The inconsistency issue that occurs in the batch-based training scheme originates from the nature of the EM framework rather than our technical design: during batch-based training scheme, $p_{\theta}(k|x)$ is only updated once per epoch while $f_{\theta}$ keeps updated throughout batches (iterations). The EM framework is widely used in deep learning literature [f,g,h,i,j,k,l,m], where inconsistency is an open problem.
+ Our key idea is to update the model in the current iteration and take data from the upcoming iterations into consideration. This motivates sequential sampling to have $B\_m=(B\_m^{pre},B\_m^{next})$, where $B^{next}\_{m-1} = B^{pre}\_{m}$. In this way, by optimizing classification objective over $B\_m^{next}$ encourage $p\_{\theta_{m}}(k|x)$ to be close to the ground truth, the estimated $q\_{m+1}(k|x) = p\_{\theta_{m}}(k|x)$ can be reliable for $\mathcal{B}\_m^{next}$.
[f] Knowledge Condensation Distillation
[g] MiCE: Mixture of Contrastive Experts for Unsupervised Image Clustering
[h] Prototypical Contrastive Learning of Unsupervised Representations
[i] Joint Unsupervised Learning of Deep Representations and Image Clusters
[j] Deep Clustering for Unsupervised Learning of Visual Features
[k] Learning representation for clustering via prototype scattering and positive sampling
[l] Stable Cluster Discrimination for Deep Clustering
[m] Unsupervised Visual Representation Learning by Online Constrained K-Means
**Q.4.** Ablation on $\beta$
**A.4.** As per your advice, we conduct the ablation study on $\beta$ (with CIAR-10 as ID dataset) and show in the following:
|$\beta$ |0.08|0.1|0.2|0.4|
| :-| :-| :-| :-| :-|
|Averaged FPR95|12.19|11.85|11.58|12.64|
**Q.5** Description for Table 5
**A.5.** In Table 5, we extend our method to the unsupervised setting. Following DINO [n], we keep a momentum teacher to produce soft labels as the target. As with DINO, we use the centering operation to avoid collapse.
[n] Emerging properties in self-supervised vision transformers
**Q.6** Equation Mistakes
**A.6.** Thanks for your comments. The revised paper has fixed the typos and mistakes you mentioned.
**Q.7** Symbol Description in line 138
**A.7.** Thanks for pointing out the typo. We have fixed this in our revised paper and show the correct definition of $\Phi(k)$ in Eq. (7)
$\Phi(k) = \int_{\mathbf{z} \in \mathcal{Z}} h(\mathbf{z}, k) d\mathbf{z}$
---
Rebuttal 2:
Title: The window for discussion is closing.
Comment: Dear Reviewer #s5Ce,
### **The window for discussion is closing.**
Thanks very much for your great efforts in reviewing and valuable comments. The discussion will end soon. At this final moment, we would sincerely appreciate it if you could check our responses and new results regarding your concerns.
**1. _Contradiction in assumptions_**
We apologize for the misunderstanding. We have provided detailed explanations highlighting that the proposed method aligns with our motivation.
**2. _Influence of class number_**
We apologize for the misunderstanding. Our method's memory and time complexity linearly scales with K so that our method has the same scalability nature as the CE-based training.
**3. _Justification of sequential sampling_**
Following your valuable suggestion, we have provided explicit explanations highlighting the motivation for the widely adopted approach.
**4. _Comparison with density-based methods_**
Following your valuable suggestion, we have added experiments, results, and discussions, demonstrating the effectiveness of our method.
**5. _Ablation_**
Following your valuable suggestion, we conducted the ablation study on the hyper-parameter $\beta$ mentioned.
If you have any further concerns, we will respond instantly at this final moment. We would sincerely appreciate it if you could confirm whether there are unresolved concerns.
Best regards and thanks,
Authors of #8706
---
Rebuttal Comment 2.1:
Comment: Hello Reviewer s5Ce,
Please take a moment to read and acknowledge the author's response to your review.
Thanks, AC | Rebuttal 1:
Rebuttal: We visualize our loss curve in the uploaded PDF file.
Pdf: /pdf/9c43e64fe9dfe1eb4a68956e208785c20e2b34f4.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diff-PCC: Diffusion-based Neural Compression for 3D Point Clouds | Reject | Summary: The paper proposes the first diffusion-based point cloud compression method called Diff-PCC.
A dual-space latent representation is devised in this paper, where a compressor composed of two independent encoding backbones is used to extract expressive shape latents from different latent spaces.
At the decoding side, a diffusion-based generator is devised to produce high-quality reconstructions by considering the shape latents as guidance to stochastically denoise the noisy point clouds.
Experiments demonstrate that the proposed Diff-PCC achieves state-of-the-art compression performance (e.g., 7.711 14 dB BD-PSNR gains against the latest G-PCC standard at ultra-low bitrate) while attaining superior subjective quality.
Strengths: Novelty is a strength. To my knowledge, diffusion model is used in point cloud compression for the first time. And the dual-latent design is also novel for learned point cloud compression.
The manuscript is well written and easy to follow. Especially, the author did a good job in introducing related works on image compression, point cloud compression, point cloud analysis and diffusion model.
Weaknesses: More work on diffusion model for data compression could be discussed, like ‘Idempotence and Perceptual Image Compression, ICLR 2024’. In addition, although this paper focuses on point cloud compression, the way of applying diffusion model should be compared with those learned image compression works in the related work part. From my impression, the method in this paper is still novel compared with those learned image compression paper using diffusion model.
More recent learned point cloud compression method [30][14] should be compared in Table 1, Figure 3 and Figure 4, regarding rate distortion and encoding/decoding speed. Besides, only object point cloud is considered currently, large scale point cloud like SemanticKITTI could be compared [30][14].
It is not clear how the speed is measured in Table 1. The hardware and commend line shoud be provided in the supplementary material.
Minor:
L86, Point·E[] is a typo.
[30] and [31] are the same.
L202, the reference should be fixed.
What is the FPS in eq 14? farthest point sampling?
Technical Quality: 3
Clarity: 3
Questions for Authors: see the weakness part
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation is addressed in the manuscript.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer HJUZ,
Thank you for your detailed review and the valuable feedback. We will address your concerns below.
Q1: More work on diffusion model for data compression could be discussed;The way of applying diffusion model should be compared with those learned image compression works in the related work part.
Thank you for this valuable suggestion. In forthcoming revisions of this article, an introduction to the related research on diffusion models will be further introduced and reviewed
Q2: More recent learned point cloud compression method (e.g., EHEM and ECM-OPCC) should be compared. Besides, only object point cloud is considered currently, large scale point cloud like SemanticKITTI could be compared.
• As the mentioned methods, EHEM and ECM-OPCC are not open-sourced yet, it is difficult to directly compare with them. Further, these two existing works require the octree structured to build the entropy model, which is different from ours that can input unstructured and non-voxelized point clouds. Our model Diff-PCC is more generic. We will replicate these two works for comparion in the future.
• The proposed Diff-PCC, which serves as the first exploration of the cutting-edge 3D diffusion-based compression models, aimning at compressing unstructured relatively small point clouds. Small-scale point clouds are also widely used in real-world applications, such as quick-browsing thumbnails and key points of captured scenes in robotics.
• To extend its application to large-scale point cloud datasets such as MVUB and MPEG 8i, we can adopt a patch-based approach in this rebuttal. This involves dividing the large point cloud into non-overlapping smaller patches, then sequentially compressing them with Diff-PCC, and finally assembling the reconstructed patches back into a large point cloud.
• We have displayed the comparison between it and G-PCC in a PDF document. The results indicate that the performance of Diff-PCC is inferior to that of G-PCC. On one hand, the patch-based method may result in the loss of the original semantic information of the whole subjects. On the other hand, it neglects the connections between the patches.
Considering the superior performance demonstrated in the small-scale point cloud samples, future research may include extending this work to compress larget-scale human bodies \& scenes.
```
Q3: It is not clear how the speed is measured in Table 1. The hardwar and commend line shoud be provided in the supplementary material.
```
Thank you for your suggestion. We will update the detailed information in the supplemtray material.
• The hardware and software information is listed below. All our experiments were conducted under the same machine to maintain consistency and reproducibility of the results:
```
– CPU: Intel(R) Xeon(R) Platinum 8474C @ 2.05GHz
– RAM: 128GB DDR4
– GPU: NVIDIA GeForce RTX 4090 D
– GPU Memory: 24GB
– Operating System: Ubuntu 20.04 LTS
– CUDA Version: 11.7
– cuDNN Version: 8.0.5
– Python Version: 3.10.14
– PyTorch Version: 2.0.1
```
• The command line and test scripts will be provided once the paper is published. Our speed test process follows the conventional user time measurement, which can be briefly described using the pseudo code:
```
pcl = read_point_cloud(file_path)
start_time = time.time() ## mark start time
pcl = torch.tensor(pcl) # cpu data to gpu data
bytestream = encode(pcl) # encode
torch.cuda.synchronize() # wait for CUDA device complete
end_time = time.time() ## mark end time
print(’Encode Time:’, end_time - start_time)
bytestream_to_file(bytestream) # save byte stream
```
Q4: Some minor problems.
Thanks for spotting these issues. We will fix them accordingly in forthcoming revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for the reply | Summary: In this work, the authors propose a diffusion-based point cloud compression framework. Low frequency and high frequency features are extracted via PointNet and PointPN from input point clouds, which are quantized and encoded for compression. During decompression, the quantized features would be decoded to condition a diffusion model to construct the decompressed results. The experiments on 15K points data from ShapeNet, ModelNet10, and ModelNet40 show superiority over compared methods.
Strengths: 1. The idea to introduce diffusion models for point cloud compression is different with former works;
2. The paper is easy to follow, while the disgrams are also good;
3. The performances show improvements on sparse point clouds.
Weaknesses: 1. The comparison is not convincing enough. Some commonly used compression methods are not compared, while the evaluation is limited to sparse point clouds with relatively simple structures from ShapeNet, ModelNet;
2. The motivation of using diffusion model for compression is questionable. As a sampling-based framework, diffusion models may construct different results during decompression from variant sampled noises each time. I am not so sure if the diffusion model is more appropriate than existing AE or VAE-based frameworks for the compression task, which may need decompression as accurate as possible;
Please check the questions for more details, thanks.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Some popular methods are not compared, including PCGC[1], PCGCv2[2], and 3QNet[3];
[1] Lossy point cloud geometry compression via end-to-end learning
[2] Multiscale point cloud geometry compression
[3] 3qnet: 3d point cloud geometry quantization compression network
2. Only sparse objects on ShapeNet and ModelNet are used for comparison. How is the compression performances on more complex and dense shapes, including 8iVFB[4] or RWTT[5]? Besides, point clouds with 15K points are too sparse for evaluation as dense points are main targets for compression. Methods mentioned in Question 1 can deal with dense points.
[4] 8i voxelized full bodies-a voxelized point cloud dataset
[5] Real-world textured things: A repository of textured models generated with modern photo-reconstruction tools
3. How do you deal with the uncertainty of diffusion models? The sampling-based generation process may produce different decompressed results between multiple inferences;
4. Could you compare the computational cost, e.g., compression efficiency between different methods?
Some minor problems:
5. In Eq.20, how do you calculate $\bar{x}_0$? As the whole denoising process may be not affordable during training.
6. In Eq.14, is the $F_{in}$ actually $F'_{xt}$ in Eq.13?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer U3Bi,
Thank you for your detailed review and the valuable feedback. We will
address your concerns below.
Q1: Some popular methods are not compared, including PCGC, PCGCv2,and 3QNet.
The proposed Diff-PCC mainly focuses on small-scale point clouds. The mentioned PCGC, PCGCv2 that target at voxlized dense point cloud compression, cannot be directly applied to sparse samples (usually degrates to extremely poor performance). For example, paper [1] has demonstrated the collapse of PCGCv2 on sparse point clouds, which proves that these methods are not on the same track as ours. In constrast, our model Diff-PCC does not require the input point cloud to be voxelized and structured, streamlining the processing pipeline. 3QNet requires external codec Draco to compress the skeleton points of the point cloud, which is not genenrally an end-to-end coding framework as ours.
We will dissucss the difference in the final version.
[1] IPDAE: Improved Patch-Based Deep Autoencoder for Lossy Point Cloud Geometry Compression.
Q2: How is the compression performances on more complex and dense shapes, including 8iVFB or RWTT?
• Diff-PCC is the first exploration of cutting-edge diffusion-based neural models for point cloud compression, currently only targeting at small-scale point cloud compression.To extend its application to large-scale point cloud datasets such as MVUB and MPEG 8i, we can adopt a patch-based approach in this rebuttal. This involves dividing the large point cloud into non-overlapping smaller patches, then sequentially compressing them with Diff-PCC, and finally assembling the reconstructed patches back into a large point cloud.We have displayed the comparison between it and ruled-based G-PCC in a PDF document.The results indicate that the performance of Diff-PCC is inferior to that of G-PCC. On one hand, the patch-based method may result in the loss of the original semantic information of the whole subjects. On the other hand, it neglects the connections between the patches.
In summary, by using the patch-based approach, we can apply Diff-PCC to large-scale point cloud compression, and it is worth further exploration.
• Considering the superior performance demonstrated in the small-scale point cloud samples, future research may include extending this work to compress larget-scale human bodies \& scenes.
Q3: How do you deal with the uncertainty of diffusion models?
Thank you for your detailed review and the valuable feedback.
Despite the issues with the randomness of sampling, we believe that DIFF-PCC, as the first work to introduce DDPM into the field of point cloud compression, holds significant importance and potential for further exploration.
Thank you for the issue, which is a very valuable research point in our future work.
Regarding this issue, we have the following two ideas for the future work:
1. We recognize that DDPM is a branch of the Stochastic Differential Equations(SDE) diffusion model call VP-SDE. SDE can be transformed into Ordinary Differential Equations(ODE) if we eliminate the random terms.Perhaps we could reformulate the problem as a ODE to use the sampling methods in a deterministic way.
2. DDPM diffuses in the pixel space. For Diff-PCC, the random noise during sampling may directly affect the position of the points in 3D space. Perhaps we could consider the Latent Diffusion Model (LDM) to map the point cloud into latent space.
```
Q4: Could you compare the computational cost, e.g., compression efficiency between different methods?
```
We have compared the respective running times of each method in Table 1, with separated encoding and decoding times for better comparison.
```
Q5: Problem in Eq.20.
```
When training, we do not obtain $\bar{x}_0$ through T iterations of sampling. Instead, we derive it by reversing the noise addition formula (Eq.21). Although the obtained $x_0$ is relatively coarse, we can still consider it as the final point cloud during training and use it to supervise the training process.
```
Q6: Problem in Eq.14.
```
Thank you for pointing out the mistake. In fact, there is no fundamental difference between $F_{in}$ and $ F^{'}_{xt} $ here.We will revise this in the paper accordingly.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks for the rebuttal of the authors. It has addressed some of my concerns. However, I still have some problems for now:
(1) I agree that PCGC and PCGCv2 may be unsuitable for sparse point cloud compression. But I hold the opinion that the importance of conducting sparse point clouds compression is not so strong as dense point cloud compression.
(2) For your comparisons on dense point clouds, do you normalize the points in different patches? If you do that, then how do you compress the centers and scales of different patches?
(3) For the claim about the uncertainty, I think the authors should provide some experiments about the variance of the decompressed results between multiple inferences. Otherwise, it may be difficult to judge if the diffusion framework is appropriate for the task of point cloud compression, due to its instability;
---
Rebuttal 2:
Comment: -Q1: The importance of conducting sparse point clouds compression is not so strong as dense point cloud compression.
-A1: Thank you very much for your reply. We agree with your viewpoint.
Undoubtedly, dense point clouds are more common in practical applications.
The dense point cloud compression offers a wider range of potential applications.
However, the original purpose of Diff-PCC was to combine Diffusion with Point Cloud Compression(PCC) to explore the feasibility of this novel technological approach.
Although it seems more suitable for sparse point clouds at present, it does not imply that there is no room for improvement in future work.
To extend its application to dense point clouds, we are experimenting with the following two methods:
(1) DDPM-based Upsampling: By utilizing the skeleton points and corresponding features of dense point clouds as conditions, we employ diffusion model to upsample large-scale of point, thereby reconstructing dense point clouds.
(2) LDM-based PCC: We map dense point clouds into a latent space with lower dimensions and then diffuse them to reduce computational demands.
Finally, we would like to respectfully point out:
LiDAR point clouds, as a type of large-scale sparse point cloud, are widely used in the field of autonomous driving and are becoming increasingly important. Voxel models are not adept at handling this kind of data, while point-based models like ours naturally have great potential for processing these point clouds.
-Q2: how do you compress the centers and scales of different patches?
-A2: Yes. We calculate the mean and variance of the point cloud patches and normalize the patches, resulting in arrays with shapes (1, 3) and (1, 1), respectively. The data type is float32, which is expected to occupy 24 bytes for storage.
In fact, we do not compress the center and scale, but transmit them directly to the decoding end for inverse normalization during this rebuttal period. We will consider to use octree coding to compress the centers.
Thank you very much for raising the question!
-Q3: About the uncertainty.
-A3:
Thank you for your reply.
To address your concerns about the uncertainty of the sampling results,
In our experiments, we have chosen to fix the random seed, and the code is as follows:
torch.manual_seed(2024)
np.random.seed(2024)
random.seed(2024)
By using this setting, we can ensure that the same random noise is taken during multiple sampling processes, thereby producing stable and consistent decompression results.
Despite this, randomness can still lead to some additional issues, such as outliers and rough edges, which are problems we aim to address in the future.
The above are our views on your question. Thank you very much for your reply. We welcome any insightful suggestions to improve our work. | Summary: In this paper, they introduce the diffusion-based point cloud compression method, dubbed Diff-PCC, to leverage the expressive power of the diffusion model for generative and aesthetically superior decoding. They get better performance than G-PCC and two deep learning methods.
Strengths: Encoding point clouds using diffusion models is a good idea. The article is easy to understand.
Weaknesses: Firstly, how do we obtain a point cloud with added noise in the decoder? We have no knowledge of any other information about the original point cloud, except for the information in the bitstream. This will result in the inability to decode.
This manuscript claims to achieve state-of-the-art compression performance, but it only compares with two deep learning methods from the past two years. It does not compare with the most advanced methods such as CNet, SparsePCGC, and so on.
Technical Quality: 1
Clarity: 2
Questions for Authors: How do we obtain a point cloud with added noise in the decoder? We have no knowledge of any other information about the original point cloud, except for the information in the bitstream.
How does your method's performance compare to CNet and SparsePCGC?
How does your method perform on datasets such as MPEG 8i and MVUB?
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 1
Limitations: The decorder will not work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer QHjS,
Thank you for your detailed review and the valuable feedback. In the following, we address all comments in the review.
Q1: How to obtain a point cloud with added noise in the decoder?
In a word, the decoder do not need any prior knowledge of the original point cloud. During decoding, a completely random Gaus-
sian distribution is initialized and then gradually denoised through the information in the bitstream.
Specifically, the decoding process can be described as:
• First, Diff-PCC samples randomly from a Gaussian distribution to obtain a pure noise point cloud $X_T$ with the shape $ (B,N,3) $.
• Then, the generator gradually removes the noise from $X_T$ , generating a series of denoised point clouds {${X_T, X_{T-1}, ...,X_{1}, X_{0}}$}.
• In this way, the Diff-PCC reconstructs the original point cloud by simulating the reverse process of DDPM in the decoder, starting
from Gaussian noise $X_T$ , and gradually remove noise.
Q2: Performance compare with CNet and SparsePCGC on datasets such as MPEG 8i and MVUB.
• Unfortunately, since both CNet (geometry part) and SparsePCGC are not open-sourced yet, it is technically difficult to evaluate these two models on small-scale samples.
• Diff-PCC is the first exploration of cutting-edge diffusion-based neural models for point cloud compression, currently only targeting small-scale point cloud compression (which is also widely used in real-world applications such as thumbnails for quick browsing and key points of captured scenes in robotics).
However, to extend its application to large-scale point cloud datasets such as MVUB and MPEG 8i, we can adopt a patch-based approach in this rebuttal. This involves dividing the large point cloud into non-overlapping smaller patches, then sequentially compressing them with Diff-PCC, and finally assembling the reconstructed patches back into a large point cloud.
We have displayed the comparison between it and G-PCC in attached PDF document.
The results indicate that the performance of Diff-PCC is inferior to that of G-PCC. On one hand, the patch-based method may result in the loss of the original semantic information of the whole subjects. On the other hand, it neglects the connections between the patches.
In summary, by using the patch-based approach, we can apply Diff-PCC to large-scale point cloud compression, but it is worth further exploration.
• Our work Diff-PCC distinguish itslef from previous works in the following aspects:
(1) We validate the possiblity of applying diffusion probabilistic model into point cloud compression for the first time. (2) Our model is more generic, which support the compression of point clouds of any types, spare or dense, voxelized or non-structured. In contrast, CNet and SparPCGC require the input point cloud to be voxelized.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: 1. As writing in the manuscript 'The generator takes noisy point cloud x_t at time t and necessary conditional information C as input.' If x_t is from x_T (a completely random Gaussian distribution), the reverse process has no information about the original point cloud x_0. In other words, for any original point cloud x_0, the difference in their x_t is only due to randomness, rather than their own characteristics. So is this reverse process still effective?
2. G-PCC has no relevant configuration files, which will lead to a decrease in performance. Therefore, I think the experiment on ShapeNet and ModelNet is unfair. You said your model is more generic, which support the compression of point clouds of any types, spare or dense, voxelized or non-structured. So please compare with state-of-the-art methods on more common datasets such as MPEG 8i, Ford, KITTI, etc.
Compressing point clouds using diffusion models is a good idea, but this work still needs further improvement. Therefore, we maintain our previous conclusion
---
Rebuttal 2:
Comment: Thank you for addressing the concerns raised in our initial feedback. We acknowledge the effort put into revising the manuscript. However, upon further review, we find that some of the issues highlighted in the weaknesses section are still not adequately resolved.
1. As writing in the manuscript. The generator takes noisy point cloud x_t at time t and necessary conditional infommation C as input. lf x_t is from x_T (a completely random Gaussian distribution), the reverse process has no information about the original point cloud x_0. In other words, for any original point cloud x_0, the diference in their x_t is only due to randomness, rather than their own characteristics. So is this reverse process still effective?
2. G-PCC has no relevant configuration fles, which will lead to a decrease in performance. Therefore, l think the experiment on ShapeNet and ModelNet is unfair. You said your model is more generic, which support the compression of point clouds of any types, spare or dense, voxelized or non-structured. So you should compare with state-of-the-art methods on more common dafasets such as MPEG 8i, Ford, KlTTl. etc.
Compressing point clouds using diffusion models is a good idea, but this work still needs further improvement. After thorough consideration, we have decided to maintain our original evaluation and rating of the manuscript.
---
Rebuttal 3:
Comment: We apologize for not thoroughly cleaning up your confusion.Now we will try our best to explain it below:
-Q1:is this reverse process still effective.
-A1:In fact, the reverse process contains the information of X_0 because C contains the features extracted from X_0.
We know that the reverse process starts from X_T (a completely random Gaussian distribution), continuously predicting and removing noise.The generator combines the conditional information C, the time step t, and the noise point cloud X_t to predict the noise.If we do not use C to guide the generator to predict specific noise, the reconstructed point cloud is likely to be very different from the original point cloud. For example, if we input an armchair, without C guidance, it is likely to reconstruct a chair without armrests.However, by following the guidance of C, we can limit the denoising direction and ultimately reconstruct an armchair corresponding to the original point cloud.This process demonstrates the strong generative capabitliy of diffusion model in point cloud generation.We will release our codes and pre-trained model in the near future to demonstrate the reverse process is effective and our decoder can work effectively as illustrated in the paper.
-Q2:Compare with state-of-the-art methods on more common dafasets such as MPEG 8i, Ford, KlTTl. etc.
-A2:
Thank you for agreeing that “Compressing point clouds using diffusion models is a good idea”,
and we agree that “this work still needs further improvement”.
We will try our best to solve your doubts and the following is our response to this question:
By dividing the point cloud into patches, we compared four human point clouds (longdress, loot, redblack, soldier) from MPEG-8i with G-PCC, as shown in the attached PDF document.
The configuration file of G-PCC refers to PCGV2.
We set some parameters as follows:
--positionQuantizationScale=1
--trisoupNodeSizeLog2=0
--neighbourAvailBoundaryLog2=8
--intra_pred_max_node_size_log2=6
--inferredDirectCodingMode=0
--maxNumQtBtBeforeOt=4
For different point clouds, we select different positionQuantizationScale to control BPP to meet our needs.
In addition, since SparsePCGC, CNet, and other methods are not fully open-sourced, it is difficult for us to reproduce and we are unable to compare with them currently.
Finally, we welcome any insightful suggestions to improve our work. Thank you very much. | null | null | Rebuttal 1:
Rebuttal: Dear all reviewers,
We thank each of you for generously dedicating your valuable time and
expertise to reviewing our work. We sincerely appreciate your constructive
feedback and are delighted to see the positive comments:
1.Novelty
• Reviewer HJUZ: ”diffusion model is used in point cloud compression for
the first time”;
• Reviewer HJUZ: ”the dual-latent design is also novel for learned point
cloud compression”;
• Reviewer U3Bi: ”introduce diffusion models for point cloud compression is
different with former works”;
• Reviewer QHjS: ”Encoding point clouds using diffusion models is a good
idea”;
2.Writing
• Reviewer HJUZ: ”The manuscript is well written and easy to follow”;
• Reviewer U3Bi: ”The paper is easy to follow, while the disgrams are
also good”;
• Reviewer QHjS: ”The article is easy to understand”;
• Reviewer HJUZ: ”did a good job in introducing related works on
image compression, point cloud compression, point cloud analysis and
diffusion model”;
Detailed responses to each reviewer’s comments are provided in the rebuttal
sections.
Pdf: /pdf/c19c2c11ebd20238e413036403068bf1f3d802b7.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Risk-Averse Fine-tuning of Large Language Models | Accept (poster) | Summary: This paper addresses the challenge of mitigating the generation of negative or toxic content by Large Language Models (LLMs) in response to certain prompts. It proposes an innovative approach that integrates risk-averse principles into LLM fine-tuning, aiming to reduce harmful outputs, particularly rare but significant events. The method fine-tunes LLMs by integrating risk-averse principles, employing the CVaR risk measure within an RLHF framework. This method improves the ability of LLMs to avoid generating toxic content, particularly in high-risk scenarios, while maintaining overall performance in generative tasks. Empirical evaluations demonstrate the effectiveness of this approach in promoting safer online interactions and enhancing the applicability of LLMs in various domains.
Strengths: It introduces an innovative approach to mitigating toxic content generation by integrating risk-averse principles, validated through comprehensive empirical evaluations. The clarity of presentation and detailed methodology enhance the paper’s accessibility, while its significance lies in addressing a critical issue with broad applicability and promoting responsible AI. These strengths make the paper a valuable contribution to the fields of natural language processing and machine learning.
Weaknesses: While the paper presents several innovative contributions, it also has potential weaknesses in terms of complexity, computational resource requirements, data dependency, interpretability, practical implementation challenges, adaptability, and the scope of validation. Addressing these weaknesses in future research would help to further validate and enhance the practical applicability and robustness of the proposed approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The long-term efficacy of the risk-averse mechanisms in continuously evolving real-world settings remains uncertain. The model’s performance might degrade over time as new types of toxic or harmful content emerge, requiring ongoing updates and retraining.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you appreciating and recognizing the strengths of our work. Please find additional details of interest below.
**Computational resource requirement**. The computational resource requirements are mentioned in Appendix E.1.
**Computational complexity**. Our algorithm has the same computational complexity as that of RLHF during the first n iterations. Once the soft risk scheduling kicks in, our algorithm introduces an additional best case computational complexity of $O(B + \alpha'\log(B))$, where $B$ is the batch size and $\alpha'$ is the risk-level that decreases across iterations. The space complexity remains the same as that of RLHF.
**Contribution**. Our work is the first to introduce a nuanced understanding of `risk' in the context of LLM content generation, going beyond Greenberg et al. [2022]'s work. Greenberg et al. [2022] proposed soft risk scheduling to make policies learned using policy gradients risk-averse. Presented below are our contributions:
1. We implement CVaR in conjunction with a regularized reinforcement learning objective (reward + KL term). Greenberg et al. [2022] work only with the plain reward. We choose to work with regularized reward for two reasons: I. We want to measure risk in generations accounting for both the performance on actual environment reward and the quality of language generation measured by the KL-Divergence with respect to the reference policy. II. Our said choice makes our proposed algorithm downward compatible to the existing RLHF implementations.
2. We implement CVaR in the Actor-Critic setting, as opposed to policy gradients, with an aim to learn a complex parameterized policy (LLM) with an extremely large action space.
3. Beyond the immediate application of creating safer LLMs, this work contributes to the broader field of machine learning by demonstrating how risk measures like CVaR can be integrated into the training process of complex models like LLMs. Our work additionally establishes a groundwork for exploring additional risk measures and criteria, such as the Entropic Value at Risk (EVaR), in the context of LLM safety and uncertainty quantification.
**Adaptability**. In the scenario that the input prompt distribution shifts from the distribution our RA-RLHF models are finetuned on, our models can be further trained on the new prompt data using our RA-RLHF algorithm. As noted in our submission (Sec. 5.1 and 5.2), RA-RLHF ensures safe generations without hampering the quality of language generation. We believe this behavior would hold true under retraining for input data distribution shifts.
**Validation**. In our work, we studied inclusion of safety in LLMs using three datasets - IMDB, Jigsaw and RealToxicityPrompts - over GPT-2 (117M parameters) and GPT-J (6B parameters). Most of the related works in the LLM safety space work with IMDB and RealToxicityPrompts datasets with models ranging upto GPT2-L (762M parameters) as used in DExperts and Quark. RECT, another related work, studies only the RealToxicityPrompts dataset with GPT-2, GPT-2 XL (1.5B) and GPT-3 (175 B). In the light of this information, we believe our experiments are comprehensive in terms of the number of models and datasets studied.
---
Rebuttal Comment 1.1:
Comment: I have read the rebuttal and thank the authors for the responses. | Summary: To mitigate the generation of negative or toxic content by LLMs, this paper proposes a new method "risk-averse" RLHF (RA-RLHF) to finetune LLMs. Their RA-RLHF optimizes the CVaR risk measure with RL to decrease the order of negativity or toxicity. They experiment with two datasets, IMDB-Gen and Jigsaw. Their experiment shows the effectiveness of their proposed method and compares it against conventional RLHF and supervised fine-tuning.
Strengths: 1. The paper investigates an important problem of how to mitigate the generation of negative or toxic content by LLMs.
2. The paper provided an extensive literature review, background illustration and introduction of the proposed method.
Weaknesses: 1. Lack of clarity. I wonder how their method involves human feedback. Reading through Algorithm 1, it is unclear how human feedback is used and how they obtain human feedback. I also wonder how they get their reward function. Do they use any annotated data to train the reward function? Are the reward functions used in training and testing the same?
2. Lack of solid evaluation. The proposed method is evaluated on reward and perplexity. However, it is not clear how the reward function and perplexity are calculated and which models are used. In previous work [1], the creativity of LLM generations is also evaluated and human evaluation is an important method to verify the generations' quality. However, this paper has provided neither evaluation.
[1] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please see the questions in weakness.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The paper provided limitations and ethical discussion in Sections A and B in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions and comments. Before we begin answering the questions, we wanted to highlight that in our work, we studied inclusion of safety in LLMs using three datasets - IMDB, Jigsaw and RealToxicityPrompts - over GPT-2 (117M parameters) and GPT-J (6B parameters).
**Q1. Lack of clarity. I wonder how their method involves human feedback. Reading through Algorithm 1, it is unclear how human feedback is used and how they obtain human feedback. I also wonder how they get their reward function. Do they use any annotated data to train the reward function? Are the reward functions used in training and testing the same?**
A1. Human feedback is used to learn the score/reward models used for finetuning in our work. For the experiments with toxicity datasets, we use the toxicity score returned by `unitary/toxic-bert LLM` as our reward. For the positive sentiment generation task using IMDB dataset, we use the sentiment score returned by `lvwerra/distilbert-imdb` LLM as the reward. These rewards are used in line number 7 of our algorithm. We mention the use of existing reward models, as is the case in most related works, in line numbers 155-156 in Sec. 3. We have mentioned the reward function details in Appendix E.1 where we provide model and compute details. We can, however, make this more clear by adding this information in Sec. 4.1.
**Q2. Lack of solid evaluation. The proposed method is evaluated on reward and perplexity. However, it is not clear how the reward function and perplexity are calculated and which models are used. In previous work [1], the creativity of LLM generations is also evaluated and human evaluation is an important method to verify the generations' quality. However, this paper has provided neither evaluation.**
A2. **Score/Reward Metric**. For the experiments with toxicity datasets, we use the toxicity score returned by `unitary/toxic-bert` LLM as our metric. The LLM `unitary/toxic-bert` is trained to classify toxic comments on 3 Jigsaw challenges: Toxic comment classification, Unintended Bias in Toxic comments, and Multilingual toxic comment classification. Thus, the results included in the paper are indeed toxicity scores. For the positive sentiment generation task using IMDB dataset, we use the sentiment score returned by `lvwerra/distilbert-imdb` LLM. The LLM `lvwerra/distilbert-imdb` is trained to score positive sentiment present in a given piece of text, thus, making the returned score an appropriate metric for the task. These are also the models used to obtain reward function during training. We have mentioned these details in Appendix E.1 where we provide model and compute details.
**Perplexity Metric**. Perplexity is calculated to assess "how likely is a piece of text to be generated by a model", mathematically evaluated as $\text{PP}(W) = 2^{-\frac{1}{N} \sum_{i=1}^N \log_2 P(w_i|w_1, \ldots, w_{i-1})}$. Here, PP(W) is the perplexity of the model on the given text W. We keep W fixed across models. We choose positive prompts and completions from test dataset to form W to capture how positive/non-toxic the models are. N is the total number of words in the text. $P(w_i|w_1, ..., w_{i-1})$ is the probability assigned by the model to the $i$-th word given the preceding words. Perplexity calculation code is included in Appendix G. We can expand the discussion on perplexity in Appendix G as above. We can also briefly mention it in Sec. 5.2.
**Creativity of LLM generation**. We have now included results on text diversity measuring metric Distinct-n (Dist-n) introduced in DExperts [1]. The results are included over the existing algorithms along with the three new baselines (DExperts, Quark [2] and Prompted GPT-2) as described in the attached author rebuttal pdf. We observe that across datasets, models returned by our proposed algorithm RA-RLHF enjoy the best performance in terms of safety scores while maintaining text coherence and diversity.
*References*
[1] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
[2] Lu, Ximing, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. "Quark: Controllable text generation with reinforced unlearning." Advances in neural information processing systems 35 (2022): 27591-27609.
---
Rebuttal Comment 1.1:
Title: Requesting feedback on rebuttal
Comment: Dear Reviewer,
Please let us know if our rebuttal answered your questions. Please let us know if you have any further concerns.
Thank you.
---
Rebuttal Comment 1.2:
Comment: Sorry for late response. Thanks for your response and new results. I found that most of my concerns have been addressed. I updated my rating accordingly. | Summary: The paper presents a method for fine-tuning large language models (LLMs) using risk-averse reinforcement learning from human feedback (RA-RLHF). The core contribution is the integration of risk aversion into the RL fine-tuning process to minimize the generation of toxic content, particularly in responses to harmful or challenging prompts. This is achieved by optimizing the Conditional Value at Risk (CVaR), which focuses on minimizing the expected harm in the most severe scenarios, rather than average outcomes. The authors claim this approach allows LLMs to perform effectively under challenging input conditions, maintaining good generative performance while reducing toxicity.
Strengths: 1. The experimental results, as demonstrated in the plots and tables, show that RA-RLHF significantly reduces toxic outputs.
2. The paper addresses a critical need for safer LLM applications, proposing a solution that could help the development of ethical AI systems.
3. The methodological explanation is thorough, detailing how the RA-RLHF integrates with existing LLM training frameworks and how it adjusts for risk during training iterations, which provides a clear roadmap for replication and future research.
Weaknesses: 1. The approach primarily adapts the CeSoR algorithm by Greenberg et al. [2022] to the text generation context. This application, while effective, does not significantly extend the original algorithm's conceptual framework, leading to questions about the novelty of the contribution. Essentially, the method repurposes an existing risk-averse reinforcement learning strategy for LLMs, which may limit its perceived innovation in the field.
2. Insufficient Comparison with Other Baselines: The evaluation of RA-RLHF mainly involves comparisons with SFT and standard RLHF. However, there are numerous other techniques for reducing toxicity in text generation that were not considered, such as rejection sampling, DExpert by Liu et al. [2021], QUARK by Lu et al. [2022], and RECT by Cao et al. [2023] and prompt-based method as such Self-Debias. The exclusion of these methods from the comparative analysis might give an incomplete picture of the proposed method's effectiveness relative to the current state-of-the-art.
3. The paper does not compare its approach with simpler prompt-based methods, which direct LLMs to generate non-toxic content through instructions. Given the advanced instruction-following capabilities of current LLMs, this omission could be a significant oversight, as such methods may offer simpler and potentially equally effective alternatives for reducing toxicity.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Consider comparing the proposed method with simpler prompt-based approach, which direct LLMs to generate non-toxic content through specific instructions.
2. Missing references:
- Lu, Ximing, et al. "Quark: Controllable Text Generation with Reinforced Unlearning." arXiv preprint arXiv:2205.13636 (2022).
- Meng, Cao, et al. "Systematic Rectification of Language Models via Dead-end Analysis." https://arxiv.org/abs/2302.14003 (2023).
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 1
Limitations: Consider evaluating the proposed method on more complex tasks spanning various domains.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful questions and comments.
**C1. The approach primarily adapts the CeSoR algorithm by Greenberg et al. [2022].\..**
A1. Our work is the first to introduce a nuanced understanding of `risk' in the context of LLM content generation, going beyond Greenberg et al. [2022]'s work. Greenberg et al. [2022] proposed soft risk scheduling to make policies learned using policy gradients risk-averse. Presented below are our contributions:
1. We implement CVaR in conjunction with a regularized reinforcement learning objective (reward + KL term). Greenberg et al. [2022] work only with the plain reward. We choose to work with regularized reward for two reasons: I. We want to measure risk in generations accounting for both the performance on actual environment reward and the quality of language generation measured by the KL-Divergence with respect to the reference policy. II. Our said choice makes our proposed algorithm downward compatible to the existing RLHF implementations.
2. We implement CVaR in the Actor-Critic setting, as opposed to policy gradients, with an aim to learn a complex parameterized policy (LLM) with an extremely large action space.
3. Beyond the immediate application of creating safer LLMs, this work contributes to the broader field of machine learning by demonstrating how risk measures like CVaR can be integrated into the training process of complex models like LLMs. Our work additionally establishes a groundwork for exploring additional risk measures and criteria, such as the Entropic Value at Risk (EVaR), in the context of LLM safety and uncertainty quantification.
**C2. Insufficient Comparison with Other Baselines.\..**
A2. Thank you pointing these out. We have now included results over these in the attached author rebuttal pdf. We could not add comparisons with RECT as the model checkpoints are not publicly available for inference. However, we will be sure to mention the work in our related work section (Sec. 2). We observe that across datasets, models returned by our proposed algorithm RA-RLHF enjoy the best performance in terms of safety scores while maintaining text coherence and diversity.
**C3. The paper does not compare its approach with simpler prompt-based methods.\..**
A3. Thank you pointing this out. We have now added results over a Prompt + GPT-2 baseline in the attached author rebuttal pdf. We in-context prompt GPT-2 to generate safe text. At inference time, we add the prompt "Generate positive sentiment" to the IMDB prompts. For the toxicity datasets (Jigsaw and RealToxicityPrompts), we add the prompt "Generate non-toxic text". Generations from the prompted model enjoy only a marginal improvement in terms of safety scores over vanilla GPT-2.
*Thank you for pointing out the missing references. We will be sure to mention these in our related work section (Sec. 2).*
---
Rebuttal Comment 1.1:
Comment: Thanks to the author for the response. The additional experiments addressed some of the concerns. However, I don't think the prompting baseline result in the attached pdf makes sense since GPT2 is relatively weak and lacks the ability to follow instructions. Having said that, I will raise the score by 1. | Summary: The authors propose a fine-tuning method to mitigate text degeneration, such as toxic outputs, in large language models. The proposed method optimizes a Conditional Value at Risk-inspired criterion. Experimental results show that the proposed method outperforms various baselines on two datasets.
Strengths: - The paper is well-written, and the proposed method is relatively easy to understand.
- The task addressed is important and relevant.
Weaknesses: - The experimental results are limited. I realize that training LLMs requires substantial computational resources, but given that this method is designed for LLMs, the authors should have conducted more detailed experiments on a larger number of datasets, different model architectures, and sizes.
- The improvements over the baselines, especially RLHF, seem somewhat small, particularly for GPT-J 6B. It is unclear what actual benefits the proposed method offers, as the metrics used do not clearly demonstrate improved text generation capabilities (toxicity scores? human evaluation?). Furthermore, the experiments with GPT-J 6B should have been placed in the main paper instead of the appendix.
- Overall, contribution seems limited, since this method appears to be an extension of Greenberg et al. [2022] for LLM fine-tuning. Authors should clearly describe differences between their proposed method and that of Greenberg et al. [2022].
Technical Quality: 3
Clarity: 3
Questions for Authors: - What is the computational efficiency of the proposed method compared to the baselines?
- How does proposed method performs against methods such as DExperts (Liu et al., 2021) and GeDi (Krause et al., 2020)?
- Have you considered evaluating output diversity, e.g., the mean number of distinct n-grams, normalized by the length of text (Li et al., 2016)?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions.
**C1. The experimental results are limited.\...**
A2C1. In our work, we studied inclusion of safety in LLMs using three datasets - IMDB, Jigsaw and RealToxicityPrompts - over GPT-2 (117M parameters) and GPT-J (6B parameters). Most of the related works in the LLM safety space work with IMDB and RealToxicityPrompts datasets with models ranging upto GPT2-L (762M parameters) as used in DExperts and Quark. RECT, another related work, studies only the RealToxicityPrompts dataset with GPT-2, GPT-2 XL (1.5B) and GPT-3 (175 B). In the light of this information, we believe our experiments are comprehensive in terms of the number of models and datasets studied.
**C2. The improvements over the baselines, especially RLHF, seem somewhat small...**
A2C2. **Metric**. For the experiments with toxicity datasets, we use the toxicity score returned by `unitary/toxic-bert` LLM as our metric. The LLM `unitary/toxic-bert` is trained to classify toxic comments on 3 Jigsaw challenges: toxic comment classification, unintended bias in toxic comments, and multilingual toxic comment classification. Thus, the results included in the paper are indeed toxicity scores. For the positive sentiment generation task using IMDB dataset, we use the sentiment score returned by `lvwerra/distilbert-imdb` LLM. The LLM `lvwerra/distilbert-imdb` is trained to score positive sentiment present in a given piece of text, thus, making the returned score an appropriate metric for the task.
**GPT-J**. We could not add the results for GPT-J in the main paper because of the space constraints. We can add these in the main body in the final version of the paper. The results reported in our work, especially on the tail prompts, across datasets and LLM types are amongst the largest margin improvements reported in the related work (DExperts [1], Quark [2]).
**C3. Overall, contribution seems limited.\..**
A2C3. Our work is the first to introduce a nuanced understanding of *risk* in the context of LLM content generation, going beyond Greenberg et al. [2022]'s work. Greenberg et al. [2022] proposed soft risk scheduling to make policies learned using policy gradients risk-averse. Presented below are our contributions:
1. We implement CVaR in conjunction with a regularized reinforcement learning objective (reward + KL term). Greenberg et al. [2022] work only with the plain reward. We choose to work with regularized reward for two reasons: I. We want to measure risk in generations accounting for both the performance on the actual environment reward and the quality of language generation measured by KL-Divergence with respect to the reference policy. II. Our said choice makes our proposed algorithm downward compatible to the existing RLHF implementations.
2. We implement CVaR in the Actor-Critic setting, as opposed to policy gradients, with an aim to learn a complex parameterized policy (LLM) with an extremely large action space.
3. Beyond the immediate application of creating safer LLMs, this work contributes to the broader field of machine learning by demonstrating how risk measures like CVaR can be integrated into the training process of complex models like LLMs. Our work additionally establishes a groundwork for exploring additional risk measures and criteria, such as the Entropic Value at Risk (EVaR), in the context of LLM safety and uncertainty quantification.
**Q1. What is the computational efficiency of the proposed method compared to the baselines?**
A1. Our algorithm has the same computational complexity as that of RLHF during the first n iterations. Once the soft risk scheduling kicks in, our algorithm introduces an additional best case computational complexity of $O(B + \alpha'\log(B))$, where $B$ is the batch size and $\alpha'$ is the risk-level that decreases across iterations. The space complexity remains the same as that of RLHF.
**Q2. How does proposed method performs against methods such as DExperts (Liu et al., 2021).\..**
A2. We have now included a comparison with three new baselines (DExperts, Quark, and Prompted Base Model) in the attached author rebuttal pdf. We did not include results on GeDi (Krause et al., 2020) as both DExperts and GeDi are decoding based methods, and DExperts reported better results over GeDi. However, we will be sure to include GeDi in the related work section (Sec. 2).
**Q3. Have you considered evaluating output diversity..?**
A3. Thank you for the suggestion. We have now included results over output diversity in the attached author rebuttal pdf.
*References*
[1] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
[2] Lu, Ximing, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. "Quark: Controllable text generation with reinforced unlearning." Advances in neural information processing systems 35 (2022): 27591-27609.
---
Rebuttal Comment 1.1:
Title: Requesting comment on the rebuttal
Comment: Dear Reviewer,
Please let us know if our rebuttal answered your questions. Please let us know if you have any further concerns.
Thank you. | Rebuttal 1:
Rebuttal: With this work, our goal was to introduce a nuanced understanding of "risk" in the context of LLM content generation to induce safety/non-toxicity in LLM generations. We achieved so by introducing a risk-averse strategy to LLM finetuning, focusing on optimizing Conditional Value at Risk (CVaR), representing a significant contribution to enhancing the safety and ethical considerations of LLM deployment.
We are pleased to know that reviewers appreciated our efforts, and we extend our gratitude to the reviewers for their insightful comments, questions and suggestions. It is encouraging to note that the reviewers found
1. the paper addressing a critical need for safer LLMs (4ixc, CMzE, xjv9),
2. the proposed solution of applying risk-averse principles to fine-tune LLMs new (R4NK),
3. our paper well written (R4NK, CMzE),
4. the methodology clearly explained (R4NK, CMzE, 4ixc, pvQg), and
5. that the results demonstrate the effectiveness of our proposed approach (R4NK, CMzE, 4ixc, xjv9, pvQg).
Upon reviewers' suggestions, we have included additional evaluation results, demonstrating superior performance of our proposed RA-RLHF algorithm over three new baselines and a diversity evaluation metric in the attached pdf.
*References for the attached pdf:*
[1] Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics.
[2] Lu, Ximing, Sean Welleck, Jack Hessel, Liwei Jiang, Lianhui Qin, Peter West, Prithviraj Ammanabrolu, and Yejin Choi. "Quark: Controllable text generation with reinforced unlearning." Advances in neural information processing systems 35 (2022): 27591-27609.
[3] Cao, Meng, Mehdi Fatemi, Jackie Chi Kit Cheung, and Samira Shabanian. "Systematic rectification of language models via dead-end analysis." arXiv preprint arXiv:2302.14003 (2023).
Pdf: /pdf/fb05fe6c53bc40865793095d7c7f551fc4e777b5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a new way to reduce the generation of toxic content by large language models. The authors introduce a method that integrates risk-averse principles into the fine-tuning process, focusing on minimizing harmful outputs using Conditional Value at Risk (CVaR) as the risk measure.
The goal of this approach is to address uncommon but significant toxic events. By using risk-averse reinforcement learning with human feedback (RA-RLHF), the authors train LLMs to avoid generating negative content while still being effective in other language generation tasks.
The results from sentiment modification and toxicity mitigation tasks show that the approach reduces toxic outputs and seems to promote a safer generation.
Strengths: - The idea of applying risk-averse principles to fine-tune LLMs is rather new and the results seem to show that RA-RLHF is effective at reducing toxic outputs.
- The paper is well written and the methodology properly explained (even though the limitations and discussion sections ended up being in the appendix)
- I appreciated the use of the Proximal Policy Optimization (PPO) within the RA-RLHF framework, I think it could be an approach that can be integrated into existing workflows.
Weaknesses: - The paper mentions a slight increase in model perplexity with the RA-RLHF method. This doesn’t really hurt overall performance, but it does suggest the model might be making some more drastic changes, which could affect the smoothness and clarity of the text it generates in certain situations.
- The paper does not discuss how the the model interacts with users in dynamic environments.
- As the authors point out in the limitations section, the evaluations are carried out only on specific tasks (IMDB-Gen and Jigsaw-Gen), and so it's unclear how well the approach works in other tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What steps can be taken to ensure the risk-averse fine-tuning process doesn't inadvertently reinforce existing biases in the training data? How can the model be evaluated for potential biases?
- What is the long-term impact of risk-averse fine-tuning on model behavior and performance? How stable are the improvements over time?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - The long-term impact of the risk-averse fine-tuning on model behavior and performance over time is not addressed, leaving questions about the stability of the improvements.
- The paper doesn't deeply address the potential for introducing or reinforcing biases during the fine-tuning process.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and questions.
**Q1. What steps can be taken to ensure the risk-averse fine-tuning process doesn't inadvertently reinforce existing biases in the training data? How can the model be evaluated for potential biases?**
A1. Since we work in the regime of RLHF based LLM finetuning, as long as the reward model used in RA-RLHF penalizes bias, our learned models will be unbiased. This is indeed the case for our models trained to generate non-toxic text as we use `unitary/toxic-bert` as the reward model. `unitary/toxic-bert` is trained to classify toxic comments on 3 Jigsaw challenges: toxic comment classification, *unintended bias in toxic comments*, and multilingual toxic comment classification.
**Q2. What is the long-term impact of risk-averse fine-tuning on model behavior and performance? How stable are the improvements over time?**
A2. In the scenario that the input prompt distribution shifts from the distribution our RA-RLHF models are finetuned on, our models can be further trained on the new prompt data using our RA-RLHF algorithm. As noted in our submission (Sec. 5.1 and 5.2), RA-RLHF ensures safe generations without hampering the quality of language generation. We believe this behavior would hold true under retraining for input data distribution shifts.
*Please let us know if we understood your questions correctly. We are happy to provide further clarification if needed.*
---
Rebuttal Comment 1.1:
Title: Thank you for your reply.
Comment: I appreciate the way you addressed my concerns. | null | null | null | null | null | null |
Learning-Augmented Priority Queues | Accept (poster) | Summary: The study investigates the integration of learning-augmented frameworks into the design of priority queues, focusing on enhancing worst-case performance using potentially inaccurate predictions. It examines three specific prediction models—dirty comparisons, pointer prediction, and rank prediction—applied within priority queues based on skip lists for sorting and Dijkstra algorithms. The research demonstrates how these predictive models can effectively optimize priority queue operations. Furthermore, it establishes the optimality of
the proposed solution and explores potential real-world applications of the findings.
Strengths: S1: The mathematical proof process of the paper is highly rigorous, demonstrating strong
credibility.
S2: The methods proposed in the paper significantly reduce the number of data item
comparisons during the internal execution of basic operations in priority queues, thus
possessing the potential to pique the interest of researchers in related fields.
Weaknesses: O1: Writing Quality and Readability Issues
O1.1 Poor logical flow and transitions: Before discussing the learning-enhanced framework, the limitations and necessity of traditional priority queue methods are not sufficiently explained, leading to unnatural logical transitions. For instance, after stating in line 21, "However, it is established that a priority queue with n elements cannot guarantee o(log n) time for all the required operations," there should be a more detailed explanation of why this is a significant limitation. Similarly, after line 158's statement, "To improve the
insertion complexity with dirty comparisons, we first tackle the search problem in this setting," the paper jumps directly into technical implementation without a clear bridging sentence explaining why this approach can improve complexity.
O1.2 Lengthy sentences unsuitable for academic writing: Some sentences are overly lengthy, which does not meet the standards of academic writing. For example, in line 163, "Indeed,...."
O1.3 Dense concepts with insufficient explanation: The paper introduces many concepts but fails to adequately explain some important ones. Although many specific examples of priority queue applications are provided, these details could be more credibly summarized and
simplified to make the paper more concise. Detailed explanations of application cases might belong in the "Background" section rather than the "Introduction."
O1.4 Figures not referenced or adequately explained in the text: The paper contains only three figures, but they are not referenced in the text. For instance, Figure 1 on skip lists is neither clearly explained in the text nor supplemented with detailed captions. Additionally,
the figure does not illustrate the dynamic process of finding and deleting the minimum value in a priority queue using a skip list.
O1.5 Lack of summaries and overviews: There is no summary of the innovations, contributions, or experimental results in the Introduction section.
O2: Lack of detailed motivation and innovation overview. The paper does not provide a detailed introduction to the motivation behind the work or summarize its innovative aspects. Although the authors point out the limitation of priority queues in guaranteeing O(logn)
operations in the worst case on line 21, they do not explain in which application scenarios this limitation occurs. Appendix A also fails to discuss whether related works have addressed this issue and what shortcomings remain.
O3: Dependence on previous work with insufficient innovation. This paper builds on a previous work "Xingjian Bai and Christian Coester. Sorting with predictions. Advances in Neural Information Processing Systems, 36, 2023.", requiring readers to have prior knowledge of the previous paper to understand much of its content. However, this paper lacks innovative aspects compared to the previous one, merely combining several prediction algorithms (Dirty Comparison, rank prediction, pointer prediction) with priority queues.
O4: Insufficient comparative analysis of algorithm bounds. While the authors provide numerous bounds for their algorithms in the main text and appendices, these proofs are convincing but lack comparison with related works and original priority queues regarding
these bounds. The intrinsic relationship between the designed algorithms and these bounds is not clearly explained. I suggest that the authors provide a comparative table to illustrate the advantages of various bounds more clearly and detail how their algorithms ensure these bounds during the algorithm introduction.
O5: Issues in the experimental section. The experimental section introduces important settings, such as class/decay setting, without prior explanation. These settings are suddenly presented in the experimental part with vague explanations, requiring readers to refer to
cited articles mentioned in O3 for understanding. As key experimental variables, these settings should be formally defined and described. The authors also fail to explain the specific application scenarios corresponding to these settings and their differences.
O6: Experimental setup issues. The experimental setup has significant issues. Merely using the number of comparisons as a metric is inadequate. The authors should also record the error metrics for the three algorithms mentioned in Section 1.1 in their experiments,
including the number of dirty comparisons, pointer prediction errors, and rank prediction errors. Furthermore, the experiments should compare various methods regarding actual execution time and memory usage in sorting and Dijkstra algorithms. I recommend that the
authors refer to relevant experimental metrics used in research papers from the learned index field.
O7: Unsubstantiated claims about adversarial insertion order. In line 110, the authors mention that their proposed method has advantages under adversarial insertion orders, but they neither explain which application scenarios would encounter this situation nor provide
experimental results to validate this claim.
Technical Quality: 2
Clarity: 2
Questions for Authors: Q1: Could you provide more specific algorithms that utilize priority queues and indicate under what conditions the original design might experience degradation during basic operation execution? This question pertains to the motivation behind this study.
Q2: Could you provide a more detailed explanation of the class / decay setting (O5)? Particularly, regarding the class setting, as (#classes)/n increases, the number of partitions increases, leading to fewer comparisons. What is the basis for such experimental settings?
Q3: Where does the trade-off lie in your algorithm? Is there a need to allocate additional space for the prediction models? It seems you have omitted necessary discussions on space complexity and have not observed memory usage in your experiments.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: none
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and suggestions. We address below their concerns.
### Weaknesses
* **O1.1. (Limitations of standard priority queues, line 21)** The lack of $o(\log n)$ time priority queues is an impossibility result, hence a limitation. We are not quite sure what type of detailed explanation the reviewer is requesting.
* **O1.1. (Sentence in line 158)** Immediately before that sentence, in the same paragraph, we explain that a heap can use a binary search algorithm for insertion. Having a better search algorithm, using predictions, would therefore immediately improve the insertion complexity.
* **O1.2.** We kindly disagree regarding line 163. That sentence is not overly complex, and we do not think this constitutes a weakness in the paper.
* **O1.3.** Could the reviewer specify which concepts they would like us to explain more? The list of applications in the first paragraph illustrates their ubiquity, but the details are not important.
* **O1.4.** We will add explicit references to Figures 1 and 2 in the text. Figure 1 is merely an illustration of a skip list and corresponds to the description above it. Figure 2 includes a clear caption and corresponds to the discussion to its left. As noted in line 357, a detailed discussion of Figures 3 and 4 is included in Appendix G, which we could not include in the main body of the paper due to space limitations.
* **O1.5.** The section "our results" summarizes our main contributions and experimental results.
* **O2.** We address all the points raised here in O1.5, O1.1 and Q1.
* **O3.** The paper does not require any prior knowledge of the previous work by Bai and Coester for comprehension. We study, among others, the dirty comparison model they have introduced. However, we formally define the model and explain its relevance for priority queues. In Section 4, we demonstrate how their results can be derived as corollaries of ours. For sorting, we also employ their experimental setup to compare with their sorting algorithms, which only proves the efficiency of our priority queue. We use their results only to prove the lower bound in the pointer prediction model.
* **O4.** The complexities of priority queues are detailed in the related work, and are recalled multiple times throughout the paper (lines 85, 149, 156, 198, ...). However, as suggested by the reviewer, we will add a table in the section "our results" to summarize the different bounds.
* **O5.** The motivation behind these settings is not discussed in sufficient detail due to space limitations. Their relevance to Dijkstra's algorithm is addressed in Appendix G. If the paper is accepted, we will use the additional page to provide more detailed information on the motivation (see also our reponse to Q2 below).
* **O6.** The prior work of Bai and Coester, which we can compare to, used number of comparisons as performance metric. This has the advantage that results are replicable independent of hardware and implementation details. Therefore, we opted to use the number of comparisons as performance metric as well. In particular, this allowed us to use a simple Python implementation for our algorithms, whereas for the algorithms by Bai and Coester we used their existing C++ implementation. The prediction error in the three models is correlated to the number of classes in the class setting, and to the number of timesteps in the decay setting. Thus, the x-axis in the figures represents the amount of perturbation in the predictions. The works in the learned index field that we are aware of have a much stronger experimental focus, whereas our paper's primary focus is theoretical. Many other experiments are possible, which could be part of a separate study.
* **O7.** The bounds we establish for sorting hold for any insertion order, even if chosen by an adversary. This contrasts with the algorithm of Bai and Coester, which requires a random insertion order. So our algorithms are defined in a strictly broader setting. Application scenarios of the broader setting we capture are any situations where a sorted order must be maintained while items are added (and possibly deleted) over time, rather than being known upfront. Validating this claim experimentally would not make sense, as the algorithms of Bai and Coester cannot process such inputs.
### Questions
* **Q1.** Regardless of the use case, the priority queue operations with a binary heap always require $\Theta(\log n)$ comparisons. In a skip-list, insertion always requires expected $\Theta(\log n)$ comparisons. The motivation of the paper is to use predictions to reduce these complexities.
* **Q2.** The formal description of both settings is given in the experiments section (line 337). In the context of sorting, items often have grades that provide a partial ordering. For instance, students might have GPA grades A,B,C,..., and our goal is to determine their precise ranking. The decay setting models situations where an initial ordering of items may have evolved over time. The perturbation of the ordering depends on the time elapsed between the initial ranking and the current time step. The relevance of both settings for Dijkstra's algorithm is given in Appendix G (line 986). If the paper is accepted, we will use the extra page to expand this discussion in the experiments section.
* **Q3.** As is common in the literature on algorithms with predictions, we treat the ML models delivering the predictions as a black box. Their space consumption depends on the implementation of the model. The expected space occupied by a skip list is $O(n)$, and in the case of rank predictions, an additional $O(N)$ space is needed to store the vEB tree.
---
Rebuttal Comment 1.1:
Title: Thank you for your clarification
Comment: Thank you for your detailed response. However, I feel my concerns regarding the experiments remain inadequately addressed, which is a significant issue for me in this paper. I will maintain my recommendation.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. From the reply, we gather that it seems to be primarily O6 and possibly O5 that the reviewer remains concerned about.
We appreciate that the reviewer’s preference may be for papers with detailed experimental analyses, as is common in many fields of research. For the field of algorithms-with-predictions (and theoretical works more generally), the type of experiments that the reviewer is suggesting — reporting on memory usage etc — would be extremely unusual though (unless that is the quantity being optimized). In fact, many similarly flavored algorithms-with-predictions papers don’t include any experiments at all, with some examples from last year’s NeurIPS and ICML being:
[1] NeurIPS’23: Li et al. “Beyond Black-Box Advice: Learning-Augmented Algorithms for MDPs with Q-Value Predictions”
[2] NeurIPS’23: Balcan et al. “Bicriteria Multidimensional Mechanism Design with Side Information”
[3] ICML’23: Antoniadis et al. “Mixing Predictions for Online Metric Algorithms”
[4] ICML’23: Lassota et al. “Minimalistic Predictions to Schedule Jobs with Online Precedence Constraints”
[5] ICML’23: Antoniadis et al. “Paging with Succinct Predictions”
Those that do include experiments typically restrict them to a brief section at the end, with similar setup choices to us, that merely serve as additional support to the prior main results, rather than being main results themselves.
But there’s another, more important reason why using running times as a performance measure would actually be misleading in our case: The motivation for the dirty comparison setting is to have two types of comparisons: slow exact comparisons and fast inexact comparisons. An example could be comparing molecule structures for a potential vaccine, where slow comparisons would involve lengthy clinical trials, while fast comparisons return imprecise results very cheaply. In our experiments, however, the simulation of either type of comparison is equally very fast, so running time wouldn’t capture the true performance at all. One could address this by having the simulation artificially wait for a long time whenever it performs an exact comparison. But this is essentially exactly what we achieve by using the number of clean comparisons as a performance measure. Similarly for the other prediction types, in reality there’s a flexible overhead for producing predictions that depends on the prediction model and which the simulations wouldn’t capture. The point of algorithms-with-predictions is to study the effective usage of predictions separately from their generation. For these reasons, and the ones mentioned earlier, comparison complexity is the better measure of performance for the problems we consider, while the running time of simulations would be rather meaningless. | Summary: The paper studies various beyond worst-case models for priority queues, a fundamental data structure. A learning-augmented viewpoint is taken and the authors comprehensively study three different natural prediction models: dirty/clean comparisons where some comparisons between items maybe in correct, pointer predictions that allow us to index into the predecessor of an element in a data structure, and rank predictions where the rank of an element among a set of elements is noisily given.
For each prediction model, the authors improve upon classical algorithms which implement priority queues. For example with pointer predictions, one can insert elements in time proportional to the log of the error, compared to the classical $O(\log n)$ time (using skip lists). A similar result is shown for rank predictions, where a clever idea of reducing rank predictions to pointer predictions using an auxiliary vEB tree. Results for other settings such as sorting using predictions is also given as corollaries of their priority queue data structures, which matches and extends prior work.
The authors complement their upper bound with lower bounds, showing that for instance that in their dirty/clean comparison model, they obtain the optimal number of clean comparisons used for the extract min operation.
Strengths: - The problem is quite motivated: it is known that in the worst case we cannot have all desired operations of a priority queue fast. So beyond worst case models are certainly relevant.
- The authors study three very natural models of predictions and give sounds justifications of their predictions.
- The randomized pivot idea in Theorem 2.1 is nice
- Section 3.3 makes a nice connection about using rank predictions to reduce the problem to integer keys (or bounded universe size), for which there often are better algorithms for. This idea could potentially have other application. It also gives some indication that rank queries maybe more powerful than pointer queries, since the authors use ranks to simulate pointers.
- The authors improve upon or match results for sorting with predictions studied in previous works.
- The experiments demonstrate that the, with appropriate predictions, the algorithms outperform worst case behavior
Weaknesses: - The 'our results' section is a bit disorganized. It would be nice to have a clear table for the different prediction models, showing the prior worst-case bounds and then the bounds for the learning-augmented algorithms, for each of the operations. This would make it easier to quickly understand the improvements in each case, especially since different underlying data structures are used for different prediction models, and the number of results is large.
- There is a bit of context switching in section 3 since every subsection deals with a new prediction model. Maybe it would be better to not focus on binary heaps in section 2 as much, since the skip lists seem to be the main focus anyways.
Technical Quality: 4
Clarity: 3
Questions for Authors: What does 'randomly filled positions in the leaf level' mean in algorithm 1? Does it mean the leaf elements are randomly permuted?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: No societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and insightful suggestions. Below, we address the concerns and questions raised in the review.
### Weaknesses
* **Table of results.** We will include a table, as suggested by the reviewer, summarizing the complexities of the operations using standard and learning-augmented priority queues. This would indeed make it easier to understand the improvements.
* **Transitions in Section 3.** If the paper is accepted, an additional page will be allowed in the main body. This will provide us with enough space to create smoother transitions between the subsections that address the different models.
### Question
* In a classical heap, the leaf level is filled from left to right. In our algorithm, whenever a new depth level is reached, denoting by $k$ the number of positions it contains ($k=n+1$), a uniformly random permutation $\sigma$ of $[k]$ is chosen. The leaf positions are then filled according to the order specified by $\sigma$. This is equivalent to choosing uniformly at random an empty position for each insertion operation.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the response. I would recommend to add the provided details about filling the heap in the final version. I maintain my score. | Summary: The authors in this paper propose a learning-augmented priority queue data structure which takes advantage of the inaccurate predictions to facilitate the operations. Three different prediction models including dirty comparisons, pointer predictions, and rank predictions have been explored and discussed. The authors provide theoretical guarantees as well as empirical analysis in the paper, showing the superiority of their method.
Strengths: 1) The authors propose an interesting learning-augmented data structure by taking advantage of the predictions even when the predictions are not accurate. For the implementation wise, skip listed is used to overcome the inefficiency of the sorted linked list.
2) In addition, the authors have explored three different predictions and provide theoretical analysis to each one.
3) The authors have also discussed the potential applications of the proposed learning-augmented data structure and compared the performance with the SOTA method.
4) The authors have conducted numerical experiments and compared the performance with the SOTA method to show the superiority of the proposed learning-augmented priority queue.
Weaknesses: 1) For the comparison, the authors have compared their proposed priority queue with two traditional heaps. I found there is an advanced learning-augmented data structure in ref1, I am wondering how the proposed method would compare with that learning-based data structure.
2) Is there a threshold of the "inaccurate" prediction? For example, if after some threshold, will this proposed data structure be crashed?
3) It would be great if the authors could give more details about the experiments such as what is the scale of the experiments and how the model is set up.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have listed my concerns and questions in the [Strengths And Weaknesses] section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I have listed my concerns and questions in the [Strengths And Weaknesses] section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express our gratitude to the reviewer for their feedback and insightful comments. We address below the weaknesses they have raised.
### Weaknesses
* **Comparison with prior learning-augmented algorithms.** We are uncertain about the specific data structure the reviewer is referring to and would appreciate a more precise reference. In any case, our work is the first to implement learning-augmented priority queues, making it difficult to compare directly with other data structures unless they support the same operations. However, we have compared the performance of our data structure for sorting, which is one possible application among many others, with previous learning-augmented sorting algorithms (Double-Hoover and Displacement Sort) as detailed in Sections 4 and 5.
* **"Threshold of inaccurate predictions".** Could the reviewer give more clarifications on this question, in particular what is meant by "crashed"? Note that $\log \eta$ is at most $\log n$, hence the complexity of our learning-augmented priority queue is always at most that of a standard priority queue not using predictions, up to a constant factor.
* **Details about the experiments.** For sorting, Figure 2 is obtained with $n=10^5$ as mentioned in the corresponding caption. For Dijkstra's algorithm, the number of nodes $n$ and the number of edges $m$ are indicated on each figure. More details about the setup, as well as additional experiments with different scales for both sorting and Dijkstra's algorithm can be found in Appendix G. If the paper is accepted, we will use the additional page to expand the experiments section with supplementary material from the appendix. | Summary: The paper considers designing data structures for priority queues that accept predictions / advice to improve the time complexity of common queue operations. The paper considers three different models of predictions - (i) dirty comparisons : cheap but possibly inaccurate comparisons are available. Goal is to utilize these cheap dirty predictions to reduce the reliance on expensive true comparisons; (ii) pointer predictions : predicted position of the predecessor of a key in the queue. (iii) rank predictions : predicted rank of the key in the universe of all keys.
The paper considers two priority queue implementations. (i) A binary heap is a simple structure that supports all priority queue operations in time O(log n). In particular, since inserting a new element involves inserting it at the correct position in a root-to-leaf path of length log n, it can be done via O(log log n) comparisons (but O(log n) time). The authors show that by using dirty comparisons instead insertion can be performed in O(log n) time using O(log log n) dirty comparisons and O(log log \eta) clean comparisons (in expectation) where \eta denotes the prediction error. The algorithmic ideas here are rather standard and unsurprising.
(ii) The second implementation considers skip lists - a probabilistic data structure that supports insertion in expected O(log n) time (and ExtractMin in O(1) time). In presence of predictions, the authors provide a new insertion algorithm that reduces the expected insertion time ( or #clean comparisons) to O(log \eta) where \eta is the corresponding error measure. Similar results are obtained for rank predictions.
Finally, the paper shows almost tight lower bounds in all models.
Strengths: - The algorithms are very clean and easy to follow. The paper is well-written and the main ideas are readily accessible.
- Priority queues are a fundamental data structure and improving their performance via learning augmentation is a good contribution - e.g. prior results on learning augmented sorting are now simple applications.
Weaknesses: Algorithmic techniques introduced are rather unsurprising and are mostly a collection of well known ideas
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and for the time and effort spent on our submission.
**Weakness.**
The intuitions behind some algorithmic ideas are indeed inspired by previous work, and we highlighted these connections as much as possible to make the paper and algorithms easier to understand. Given the efficiency of our algorithms, we believe that their simplicity can also be viewed as a strength rather than a weakness, although we understand the reviewer's perspective as well. However, the paper introduces several novel ideas. For instance, we demonstrate how to effectively use a van Emde Boas (vEB) tree to reduce the problem with rank predictions to the design of a priority queue within a finite universe, and how to handle the prediction errors in the subsequent analysis adequately.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! I maintain my positive score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FINALLY: fast and universal speech enhancement with studio-like quality | Accept (poster) | Summary: The authors propose FINALLY, a speech enhancement algorithm based on GANs and the Wav LM encoder. They evaluate different feature extractors and then show qualitative and quantitative results on speech enhancement.
Strengths: A major strength is the fidelity of the generated outputs. The results are impressive and are demonstrated across a variety of SNR noise levels, noise types, and accents. I was very impressed by the samples, and the fact that it can be generated in a single pass. The generative nature of the model allows for a greater degree of restoration from low quality signals, compared to discriminative/masking based approaches (e.g. Demucs, Conv-Tasnet, TF-GridNet) that can't restore the speech to this studio like quality.
I also liked the rigorous comparison to choose WavLM as the feature extractor in the network. There was proper consideration given to why you chose that encoder. Although a better choice would have been to compare WavLM to using other methods in the ablation study, e.g. WavLM does better on the clustering and SNR rule, but does that actually mean it does better as the network component?
Weaknesses: The main weakness is that the approach is very close to HiFi with only slight modifications. The other contributions stated in the paper are not as major as the authors claim. The first section about sampling from the conditional distribution vs the max seems like it misses the point a bit. The reason people use diffusion models, GANs etc for for conditional sampling is because those models produce high quality unconditional outputs and can often be used without modification Not necessary because there is a desire to sample from the posterior distribution to allow multiple generations for example. Therefore I don't think the analysis in section 2 is such a big contribution.
A second major weakness is the experiment section. For speech enhancement, there are some standard datasets and metrics people use to compare results. These include metrics like PSNR, PESQ, STOI, and datasets like VCTK, WHAM, and Librispeech. This would allow comparison against a greater number of recent methods (e.g. TF GridNet, Demucs) which are not included
Technical Quality: 3
Clarity: 3
Questions for Authors: Can you provide qualitative comparisons against the baselines like Hifi? The reviewers would appreciate hearing those as well as your generated outputs that you have already provided.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Firstly, we would like to thank you for your invaluable work. Below, we address your concerns about the paper.
**W1. Diffusion Models and GANs for Conditional Sampling**
We generally agree that generative models, such as GANs and diffusion models, are not necessarily used to allow multiple generations from posterior distributions but rather due to their high-quality results. However, we think that this consideration does not downplay our contribution. We provide a theoretical interpretation for the case when samples are generated using an LS-GAN generator. Theoretical interpretations are important for in-depth analysis and might have a significant impact on how the field evolves in the future. From the practical side, we argue that learning the whole posterior distribution might not be necessary for the speech enhancement problem and therefore diffusion models might be solving an unnecessarily complex task. Our analysis reveals that GANs provide a natural remedy to regress for the most probable speech reconstruction directly, thus speech enhancement GANs solve a simpler task with fewer potential resources. Therefore, we believe that Section 2 holds a significant contribution to the field.
**W2. Standard Datasets and Metrics for Speech Enhancement**
We would like to clarify the reasons behind our choice of datasets and metrics.
We chose the VoxCeleb and UNIVERSE validation set because these data include several degradations at the same time, and the strongest baselines for universal speech enhancement, which are HiFi-GAN-2 and UNIVERSE, release results on this data. Other methods from the literature usually consider only one degradation (e.g., only noise or reverberation) or are significantly inferior to HiFi-GAN-2 and UNIVERSE. For instance, the popular speech enhancement VCTK-DEMAND dataset considers only additive noise as the distortion, and methods achieving good results on this dataset tend to poorly generalize to real data. Thus, comparison on this data is not particularly important from a practical point of view.
Our paper does not include similarity-based metrics such as PESQ and STOI for two reasons. First, since PESQ and STOI require ground truth reference audio, we are unable to compute them for real data such as VoxCeleb and LibriTTS, as there is no ground truth reference for such data. Second, there have been a number of works consistently reporting low correlation of reference-based metrics with human perceptual judgment [1, 2, 3]. In particular, the [1] study reports that no-reference metrics (including DNSMOS, which we reported in our work) correlate significantly better with human perception and therefore have higher relevance for objective comparison between methods. Furthermore, in our study, we report the MOS score, which directly reflects human judgments of restoration quality.
However, we agree that outlining conventional metrics such as PESQ and STOI on a popular VCTK-DEMAND benchmark could facilitate comparison with prior work. One can find this comparison in the table below. Note that we will add these results in the appendix of the camera-ready paper.
Table 1. Comparison with baselines on VCTK-DEMAND dataset.
| Model | MOS | UTMOS | WV-MOS | DNSMOS | PESQ | STOI | SI-SDR | WER |
|----------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| input | 3.18 ± 0.07 | 2.62 ± 0.16 | 2.99 ± 0.24 | 2.53 ± 0.10 | 1.98 ± 0.17 | 0.92 ± 0.01 | 8.4 ± 1.2 | 0.09 ± 0.03 |
| DEMUCS | 3.95 ± 0.06 | 3.95 ± 0.05 | 4.37 ± 0.06 | 3.14 ± 0.04 | 3.04 ± 0.12 | 0.95 ± 0.01 | 18.5 ± 0.6 | **0.07 ± 0.03** |
| HiFi++ | 4.08 ± 0.05 | 3.89 ± 0.06 | 4.36 ± 0.06 | 3.10 ± 0.04 | 2.90 ± 0.12 | 0.95 ± 0.01 | 17.9 ± 0.6 | 0.08 ± 0.03 |
| HiFi-GAN-2 | 4.13 ± 0.05 | 3.99 ± 0.05 | 4.26 ± 0.05 | 3.12 ± 0.05 | 3.14 ± 0.12 | 0.95 ± 0.01 | 18.6 ± 0.6 | **0.07 ± 0.03** |
| DB-AIAT | 4.22 ± 0.05 | 4.02 ± 0.05 | 4.38 ± 0.06 | 3.18 ± 0.04 | **3.26 ± 0.12** | **0.96 ± 0.01** | **19.3 ± 0.8** | **0.07 ± 0.03** |
| FINALLY (16 kHz) | 4.41 ± 0.04 | **4.32 ± 0.02** | **4.87 ± 0.05** | **3.22 ± 0.04** | 2.94 ± 0.10 | 0.92 ± 0.01 | 4.6 ± 0.3 | **0.07 ± 0.03** |
| FINALLY (48 kHz) | **4.66 ± 0.04** | **4.32 ± 0.02** | **4.87 ± 0.05** | **3.22 ± 0.04** | 2.94 ± 0.10 | 0.92 ± 0.01 | 4.6 ± 0.3 | **0.07 ± 0.03** |
| Ground Truth (16 kHz)| 4.26 ± 0.05 | 4.07 ± 0.04 | 4.52 ± 0.04 | 3.16 ± 0.04 | - | - | - | - |
| Ground Truth (48 kHz) | 4.56 ± 0.03 | 4.07 ± 0.04 | 4.52 ± 0.04 | 3.16 ± 0.04 | - | - | - | - |
As one can see, our model clearly outperforms all baselines in terms of the MOS metric and reference-free objective metrics, while reference-based metrics correlate poorly with human judgments. Note that our model has a slightly higher MOS than the ground truth likely due to the fact that our training data is of higher quality than the VCTK ground truth samples. Additionally, we would like to point out that one can find a comparison on real data with the mentioned DEMUCS model in Table 2 of the paper.
**Q1. Qualitative Comparisons Against the Baselines**
We agree that the work could benefit from more qualitative results. We will provide more qualitative comparison results on the paper's web page upon acceptance, as we cannot modify supplementary materials during the rebuttal period according to NeurIPS rules.
[1] Manocha, Pranay et al. "Audio Similarity is Unreliable as a Proxy for Audio Quality."
[2] Manjunath, T. "Limitations of perceptual evaluation of speech quality on VoIP systems."
[3] Andreev, Pavel et al.. (2022). "HiFi++: a Unified Framework for Bandwidth Extension and Speech Enhancement." | Summary: The paper proposes a GAN based method for universal speech enhancement (SE) demonstrating competitive experimental performance.
To justify the use of adversarial learning for SE, the authors provide a theoretical insight regarding the effectiveness of the mode-covering property of LSGAN for the SE task under ideal conditions. However, in practice several additional losses needs to be employed for stable training and better performance.
The authors also propose a perceptual loss, LMOS, which combines L1 loss between STFT magnitudes and L2 loss between WavLM convolution encoder features. The choice of using WavLM-conv features for the perceptual loss is driven by two heuristic rules regarding the features, 1) identical (content) speech sounds should be clustered together and 2) adding noise should increase distance from clean cluster centers proportional to SNR. Based on these rules, several models were evaluated and finally the WavLM-conv was chosen.
Moreover a Human Feedback (HF) loss is also utilized, which is provided by means of differentiable PESQ and UTMOS predictions.
For the model architecture, the authors extend the HiFi++ model architecture, incorporating WavLM features along with SpectralUNet features as input for the Upsampler. Additionally, an upsampling WaveUNet is added which increases the output sampling rate upto 48khz.
Strengths: - Overall the paper is well written.
- Strong speech enhancement performance.
- Interesting theoretical insight showing the relevance of mode-covering property of LSGAN for speech enhancement task.
- The proposed criteria for choosing the SSL feature for perceptual loss is very interesting and the results of the comparison with several SSL models is a very good contribution.
- Architectural changes to the HiFi++ model are significant as it enables the model to output 48khz speech and incorporate SSL features.
Weaknesses: **Main weakness is in the experimental evaluations**
- Table 2 (main experimental result):
- Since the model architecture is based on HiFi++, a direct comparison is probably necessary, though the ablation study covers it somewhat, but comparison with a full scale baseline HiFi++ in the main experiment would be more convincing.
- There is no description of the model size/training data scale for the baseline methods, which makes it difficult to contextualize the results.
- A self contained description of the evaluation dataset is missing.
- Table 3 (ablation study):
- The ablation study is missing the results for the 1st stage (pretraining setup, without adversarial loss), while it is mentioned in Appendix E2, but the results are not in the table.
Technical Quality: 3
Clarity: 3
Questions for Authors: - From the ablation study, the LMOS loss, model scaling, Upsampler (to 48khz) and HF losses show clear performance benefits, however the advantage of incorporating the WavLM-enc is not clear, as it does not lead to significant performance gain. Is the WavLM-enc necessary?
- Since the goal is universal speech enhancement, it would be interesting to see the performance in more focussed evaluation such as bandwidth extension or reverberation individually.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you very much for your high assessment of our work. We truly appreciate your valuable comments. Below, we address your concerns.
**W1. Comparison with HiFi++ & Q1. Advantage of incorporating the WavLM Encoder**
The original HiFi++ model was proposed for speech denoising and bandwidth extension applications separately. Therefore, a significant practical difference between our work and HiFi++ is that our model was trained to support a wide range of degradations emerging in practice, and thus our model is able to generalize to real-world data. Additionally, our model is trained with novel LMOS and HF losses, the effectiveness of which is validated by ablation studies. Therefore, our training framework is substantially different from that of HiFi++, and the importance of this difference is validated by practical observations.
However, you are absolutely correct that the generator architecture of our model is mostly based on HiFi++. The main differences in this regard are the introduction of Upsample WaveUNet and the WavLM encoder. While the importance of Upsample WaveUNet is clearly validated by ablation studies, the effect of the WavLM encoder appears to be somewhat marginal in terms of MOS score (although we must pinpoint that objective metrics are considerably higher for the case with WavLM). To validate the importance of the WavLM encoder, we have conducted an additional ablation test on the UNIVERSE validation set, which contains more challenging cases than the Voxceleb data. The results are provided in the table below.
Table 1. Ablation of WavLM encoder on UNIVERSE validation.
| | MOS | UTMOS | WV-MOS | DNSMOS | PhER |
|-------------------|------|-------|--------|--------|------|
| w/o WavLM | 3.49 ± 0.08 | 3.33 ± 0.18 | 3.80 ± 0.15 | 3.15 ± 0.09 | 0.27 ± 0.04 |
| w/ WavLM | 3.75 ± 0.07 | 3.56 ± 0.20 | 3.99 ± 0.16 | 3.07 ± 0.08 | 0.21 ± 0.04 |
Thus, we conclude that the introduction of the WavLM encoder is very important to achieve high quality on more challenging data. We will provide these additional results in the camera-ready paper.
**W2. Description of the Model Size/Training Data Scale for the Baseline Methods**
We agree that for better contextualization of our result, detailed information on the model size and training data of the baseline models is needed. Below, we provide a table, which will be included in the camera-ready version:
Table 2. Comparison of resources with baselines.
| Model | Training Data Scale (clean data) | Model Size (parameters) | RTF on V100 GPU |
|----------------|--------------------------------------------------------------------------|-----------------------------|-----------------|
| VoiceFixer | 44 hours (VCTK) | 112 M | 0.02 |
| DEMUCS | 500 hours (DNS) | 61 M | 0.08 |
| STORM | 200 hours (WSJ0 and VCTK) | 28 M | 1.05 |
| BBED | 140 hours (WSJ0) | 65 M | 0.43 |
| HIFI-GAN-2 | 5 hours (DAPS) | 34 M | 0.50 |
| Universe | 1500 hours (private data) | 189 M | 0.5 |
| FINALLY (ours) | 200 hours (LibriTTS-R and DAPS) | 454 M (including 358 M of WavLM) | 0.03 |
We note that, while our model has a larger number of parameters than the baselines, most of these parameters are used to process low-resolution features (e.g., the Transformer of WavLM operates on 320-times downsampled representations of the waveform, i.e., at 50 Hz). In contrast, models like HiFi-GAN-2 mostly operate at full waveform resolution (due to WaveNet). This allows our model to be more compute-efficient and thus have a much lower Real-Time Factor (RTF).
**W3. Self-contained Description of Evaluation Dataset**
Since we took the evaluation data from prior work, we did not describe it in much detail. However, we agree that our paper would benefit from such a description. We will include it in the camera-ready paper. Please find the evaluation dataset details below.
**VoxCeleb Data:** 50 audio clips selected from VoxCeleb1 [1] to cover the Speech Transmission Index (STI) range of 0.75-0.99 uniformly and balanced across male and female speakers.
**UNIVERSE Data:** 100 audio clips randomly generated by the UNIVERSE [2] authors from clean utterances sampled from VCTK and Harvard sentences, together with noises/backgrounds from DEMAND and FSDnoisy18k. The data contains various artificially simulated distortions including band limiting, reverberation, codec, and transmission artifacts. Please refer to [2] for further details.
**W4. The Results for the 1st Stage (Pretraining Setup)**
Thank you very much for noticing this issue. Indeed, the table with these results is missing in the Appendix. We provide it below and will include it in the Appendix of the camera-ready paper.
Table 3. Results on Voxceleb data after pretraining with different regression losses.
| Loss | UTMOS | WV-MOS | DNSMOS |
|------------|-------------|-------------|-------------|
| MS-STFT | 2.54 ± 0.10 | 2.77 ± 0.10 | 3.04 ± 0.05 |
| RecLoss | 2.53 ± 0.10 | 2.77 ± 0.10 | 3.04 ± 0.05 |
| LMOS | 3.44 ± 0.08 | 3.25 ± 0.03 | 3.57 ± 0.06 |
| L1_Spec | failed to converge | - | - |
**Q2. More Focused Evaluation**
We agree that additional results concerning individual degradations would be helpful. We will include qualitative examples on the project web page upon paper publication.
[1] Nagrani, Arsha et al.. "Voxceleb: a large-scale speaker identification dataset."
[2] Serrà, Joan, et al. "Universal speech enhancement with score-based diffusion."
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed rebuttal and additional ablation experiments.
Most of my concerns have been adequately addressed, however I would still like to see the HiFi++ in the main tables (Tab. 2 and 3) since the proposed work builds on top of it.
In the evaluation on the VCTK-DEMAND (in response to reviewer gE54), HiFi++ is included, will that table be included in the main paper? It will add value I think and also adding it to Tab 2 and 3 in the will also make the improvements stand out.
---
Reply to Comment 1.1.1:
Title: Comparsion with HiFi++
Comment: Dear reviewer,
Below we provide a table comparing our method with HiFi++.
Table 1. Comparison with HiFi++ on VoxCeleb data.
| Model | UTMOS | WV-MOS | DNSMOS |
|---------------|--------------|--------------|--------------|
| Input | 2.72 ± 0.11 | 2.90 ± 0.16 | 2.72 ± 0.11 |
| HiFi++ | 2.76 ± 0.13 | 2.68 ± 0.14 | 2.98 ± 0.07 |
| FINALLY (ours)| 4.05 ± 0.07 | 3.98 ± 0.06 | 3.31 ± 0.04 |
As one can see, the performance of HiFi++ on real data is quite poor due to the reasons mentioned above. We agree that this comparison will make the proposed improvements stand out more clearly. Therefore, we will include the comparison with HiFi++ in the main paper, as well as results on the VCTK-DEMAND benchmark. | Summary: This paper proposes a universal speech enhancement model for real-world recording environments utilizing GAN, referred to as FINALLY. The authors theoretically analyze that using LS-GAN loss leads to finding the point of maximum density within the conditional clean speech distribution. To stabilize the adversarial training process, WavLM-based perceptual loss is integrated into the MS-STFT pipeline.
Strengths: - The paper provides a meaningful analysis of LS-GAN loss in the context of speech enhancement.
- The adoption of the WavLM neural network shows performance improvements.
Weaknesses: - Although the paper focuses on real-world scenarios, it still needs results with objective metrics such as PESQ and STOI for a solid evaluation.
- The three-stage training process of FINALLY is complex, and the performance gains over HiFi-GAN2 do not justify the increased cost.
- Reducing the model size by 90% for ablation studies is not convincing, as it can significantly affect the results.
Technical Quality: 2
Clarity: 2
Questions for Authors: - I'm curious about why DNSMOS and other metrics trend differently in Table 3.
- Could you clarify why the order of the metrics in Tables 2 and 3 is presented differently?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: - Artifacts mentioned in the inference results can degrade perceptual quality.
- The paper has a clear contribution, but the presentation and organization are lacking.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your time and consideration. We would like to address your concerns about the paper.
**W1. It still needs objective metrics such as PESQ and STOI for a solid evaluation**
Our paper does not include similarity-based metrics such as PESQ and STOI for two reasons. First, since PESQ and STOI require ground truth reference audio, we are unable to compute these metrics for real data such as VoxCeleb and LibriTTS, as there is no ground truth reference for such data. Second, there have been numerous works consistently reporting low correlation of reference-based metrics with human perceptual judgment [1, 2, 3]. In particular, study [1] reports that no-reference metrics (including DNSMOS, reported in our work) correlate significantly better with human perception and therefore have higher relevance for objective comparison between methods. Furthermore, in our study, we report the MOS score, which directly reflects human judgments of restoration quality. Therefore, we believe that our evaluation could be considered solid.
However, we agree that outlining conventional metrics could facilitate comparison with prior work. Therefore, we have decided to measure these metrics on the popular denoising benchmark VCTK-DEMAND
Table 1. Comparison with baselines on VCTK-DEMAND dataset with conventional metrics.
| Model | MOS | UTMOS | WV-MOS | DNSMOS | PESQ | STOI | SI-SDR | WER |
|----------------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|--------------|
| input | 3.18 ± 0.07 | 2.62 ± 0.16 | 2.99 ± 0.24 | 2.53 ± 0.10 | 1.98 ± 0.17 | 0.92 ± 0.01 | 8.4 ± 1.2 | 0.09 ± 0.03 |
| MetricGAN+ | 3.75 ± 0.06 | 3.62 ± 0.09 | 3.89 ± 0.10 | 2.95 ± 0.05 | 3.14 ± 0.10 | 0.93 ± 0.01 | 8.6 ± 0.7 | 0.10 ± 0.04 |
| DEMUCS | 3.95 ± 0.06 | 3.95 ± 0.05 | 4.37 ± 0.06 | 3.14 ± 0.04 | 3.04 ± 0.12 | 0.95 ± 0.01 | 18.5 ± 0.6 | **0.07 ± 0.03** |
| HiFi++ | 4.08 ± 0.05 | 3.89 ± 0.06 | 4.36 ± 0.06 | 3.10 ± 0.04 | 2.90 ± 0.12 | 0.95 ± 0.01 | 17.9 ± 0.6 | 0.08 ± 0.03 |
| HiFi-GAN-2 | 4.13 ± 0.05 | 3.99 ± 0.05 | 4.26 ± 0.05 | 3.12 ± 0.05 | 3.14 ± 0.12 | 0.95 ± 0.01 | 18.6 ± 0.6 | **0.07 ± 0.03** |
| DB-AIAT | 4.22 ± 0.05 | 4.02 ± 0.05 | 4.38 ± 0.06 | 3.18 ± 0.04 | **3.26 ± 0.12** | **0.96 ± 0.01** | **19.3 ± 0.8** | **0.07 ± 0.03** |
| FINALLY (16 kHz) | 4.41 ± 0.04 | **4.32 ± 0.02** | **4.87 ± 0.05** | **3.22 ± 0.04** | 2.94 ± 0.10 | 0.92 ± 0.01 | 4.6 ± 0.3 | **0.07 ± 0.03** |
| FINALLY (48 kHz) | **4.66 ± 0.04** | **4.32 ± 0.02** | **4.87 ± 0.05** | **3.22 ± 0.04** | 2.94 ± 0.10 | 0.92 ± 0.01 | 4.6 ± 0.3 | **0.07 ± 0.03** |
| Ground Truth (16 kHz)| 4.26 ± 0.05 | 4.07 ± 0.04 | 4.52 ± 0.04 | 3.16 ± 0.04 | - | - | - | - |
| Ground Truth (48 kHz) | 4.56 ± 0.03 | 4.07 ± 0.04 | 4.52 ± 0.04 | 3.16 ± 0.04 | - | - | - | - |
As one can see, our model clearly outperforms all baselines in terms of the MOS metric and reference-free objective metrics, while reference-based metrics correlate poorly with human judgments. Note that our model has slightly higher MOS than the ground truth due to the fact that our training data is of higher quality than VCTK ground truth samples.
**W2. The three-stage training process of FINALLY is complex, and the performance gains over HiFi-GAN-2 do not justify the increased cost**
We agree that our training process is slightly more complex than that of HiFi-GAN-2. However, we would like to point out that our training pipeline, although intricate, is not considerably more complex compared to HiFi-GAN-2. HiFi-GAN-2 (similarly to our model) has three stages: 1) acoustic feature prediction network, 2) WaveNet training, and 3) adversarial training. Furthermore, our final model is more than 10 times faster (0.03 RTF compared to 0.5 RTF for HiFi-GAN-2). Therefore, we believe that the complexity of the training process is well justified by the dramatic increase in the efficiency of the resulting model.
**W3. Reducing the model size by 90% for ablation studies is not convincing, as it can significantly affect the results.**
We follow a well-established practice in deep learning literature of conducting ablation studies on a smaller scale in order to reduce the costs of training, as many leading papers in the field have done (e.g., [4]). While this procedure may influence the results, practical considerations in a resource-intensive field such as ours remain significant. Therefore, we use a smaller model for some parts of the ablation study.
**Q1. Why DNSMOS and other metrics trend differently in Table 3.**
Due to the imperfections of different objective metrics, they can trend differently as some of them pay more attention to certain artifacts than others. Please consider taking into account relevant papers [1, 2, 3] in order to understand the issues with objective metrics for speech quality in more detail.
**Q2. Order of metrics in Tables 2 and 3.**
Thank you very much for noticing this. We will rearrange the columns to have a consistent order in these tables to improve the clarity of presentation in the camera-ready version.
[1] Manocha, Pranay, et al.. "Audio Similarity is Unreliable as a Proxy for Audio Quality."
[2] Manjunath, T. "Limitations of perceptual evaluation of speech quality on VoIP systems."
[3] Andreev, Pavel, et al. "HiFi++: a Unified Framework for Bandwidth Extension and Speech Enhancement."
[4] Karras, Tero, et al. "Analyzing and improving the training dynamics of diffusion models."
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the detailed rebuttal, and particularly for providing the additional results using objective metrics. Rebuttal addressed the most of my concerns.
Based on the description in the rebuttal, it seems that no additional training was performed with the VCTK-DEMAND dataset. While the performance on objective metrics falls short of that achieved by HiFi-GAN2, the improvement in MOS scores is indeed Impressive. However, after reviewing the demo samples provided, I was unable to clearly perceive an improvement over HiFi-GAN2. Still, the significant increase in speed of the proposed algorithm compared to HiFi-GAN2 is noteworthy.
Moreover, considering that few models have successfully addressed multiple distortions with one pass, I believe this work holds substantial value. Therefore, I will raise my score by one level. | Summary: This paper describes a new formulation of GAN-based speech enhancement. It includes an analysis of the convexity in different feature spaces of the distribution of TTS utterances generated from the same inputs, concluding that WavLM's convolutional encoders provide the most convex such space. This representation is then incorporated into the input of HiFi++ and also a separate, unrelated upsampling stage is added at the end.
On the VoxCeleb real data validation set and the validation data of UNIVERSE, the proposed approach outperforms other strong baselines in both quality (MOS from a subjective listening test and other objective metrics) as well as in realtime factor. For example, on VoxCeleb, the proposed system is rated at 4.63 MOS compared to the second best system HiFi-Gan-2 at 4.47 while the proposed system is ~15x faster. For systems with comparable RTFs, the difference in MOS is 4.63 vs 3.79 for DEMUCS.
Strengths: Significance:
* The argument for the relevance of GANs to speech enhancement is thoughtful, interesting, and convincing. It brings clarity on a point that I did not previously appreciate.
* Analysis of the different feature spaces in terms of their convexity is another valuable and interesting contribution. It provides clearly actionable findings that are shown in the experiments to make a meaningful difference to system performance. This analysis can be applied more broadly to compare different SSL representations for various tasks.
System performance:
* Improving both the performance and the efficiency of a speech enhancement system is quite valuable and these differences appear to be large compared with strong baselines.
* Listening to the provided audio examples in the supplementary material shows impressive performance, especially in comparison to the halucinations of UNIVERSE in low-SNR instances, although it is not clear how these particular examples were selected.
Clarity:
* The paper is well written and easy to follow. The figures are helpful in understanding it. The description of the loss and different training stages is particularly clear and helpful.
* The ablations are thorough and informative and show clear benefits to each of the stages of the model/training.
Weaknesses: One point that could be added to the discussion of previous work is that of Maiti and Mandel (2019) and related work, which introduced the idea of speech enhancement by synthesis of a clean utterance that contains the same content as the original utterance.
The evaluation was conducted on a crowdsourcing platform without IRB review. This should be reviewed by ethics reviewers.
S Maiti and M Mandel (2019). Parametric resynthesis with neural vocoders. Proc. IEEE WASPAA.
Minor comments:
* Line 187 states, "we report MOS score" but it does not state where this MOS score comes from. Please describe the experiment/measurement that generated the MOS scores. Presumably it was the human listening test described in the appendix, but this is not clear at this point in the manuscript.
* Equation (2): please define phi
* Line 299 calls losses based on UTMOS and PESQ "human feedback" losses, but since these are algorithms predicting human feedback, I don't think their outputs can be called human feedback itself.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations of the approach are discussed in the appendix, section D.4. The limitation of this approach being non-streaming is mentioned there and is important to highlight.
Flag For Ethics Review: ['Ethics review needed: Research involving human subjects']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
We are very grateful for the high assessment of our work and your valuable suggestions. Below, we address your concerns.
**W1. Work by Maiti and Mandel (2019)**
Thank you very much for pointing out this work. We will add a discussion of it to the related work section in the camera-ready version.
**W2. Minor comments.**
- “Presumably it was the human listening test described in the appendix, but this is not clear at this point in the manuscript.” – You are correct. We will add a reference to the relevant appendix section.
- “Equation (2): please define phi.” – Phi denotes the WavLM-Conv feature mapping. We will add this clarification in the camera-ready version.
- “Since these are algorithms predicting human feedback, I don't think their outputs can be called human feedback itself.” – We can call these losses “predicted human feedback losses” instead of “human feedback losses.” We believe this clarification will indeed improve clarity, and therefore we will use this naming in the camera-ready version.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for their rebuttal. I have read all of the reviews and the rebuttals and would like to keep my rating as-is. I do think that section 2 is a strong contribution to the literature and our understanding of the problem of speech enhancement and the utility of generative models for solving it. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our sincere gratitude for your thoughtful comments and suggestions. Your appreciation of our insights into GAN training and the analysis of SSL models’ feature spaces is truly encouraging. We have worked hard to address the questions and concerns you raised during the review process. Below, we have summarized our responses to the key concerns:
**Use of Conventional Metrics and Datasets**
In response to the reviewers' concerns about not using PESQ, STOI, and other classic metrics, we point out their relatively weak correlation with perceptual quality. We cite several papers where this issue has been discussed in detail. Moreover, we elaborate on the difficulty of applying such metrics to real-world data, which frequently lacks ground truth samples. Therefore, we reason that subjective metrics, like MOS, provide a more accurate and fair assessment of the method’s performance. Nevertheless, we concur that the paper could benefit from benchmarking our method against other baselines using classic metrics and well-established datasets. Therefore, we have included a table comparing our method with other baselines on the VCTK-DEMAND dataset.
**Complexity of the Presented Method**
We agree that the level of complexity of our method in comparison to other methods is a significant concern. We address this in two ways. First, we point out that comparable SOTA algorithms (e.g., HIFI-GAN-2) also involve complex training procedures, consisting of numerous stages that must be trained consecutively. Therefore, we believe that while our method is complex, it is nevertheless on par with other existing algorithms. Second, we provide clear evidence that our algorithm is considerably faster (~x15 times, as noted by reviewers as well), while delivering comparable or better perceptual quality. We think that the benefits of increased inference speed and sound quality outweigh the possible training complexity issues.
**Comparison with HiFi++ and Importance of WavLM Encoder**
The generator architecture of our model is largely based on HiFi++, with the main differences being the introduction of Upsample WaveUNet and the WavLM encoder. While the importance of Upsample WaveUNet is clearly validated by our ablation study, the effect of the WavLM encoder appears to be somewhat marginal in terms of MOS score (although we must pinpoint that objective metrics are considerably higher for w/ WavLM case). To validate the importance of the WavLM encoder, we conducted additional ablation tests on the UNIVERSE validation set, which contains more challenging cases than the VoxCeleb data. The results clearly indicate the importance of the WavLM encoder in achieving high quality on the challenging UNIVERSE validation data.
**Theoretical Analysis of Conditional Generation**
We agree that the key advantage of modern generative models is their ability to generate high-fidelity objects. However, it is important to note that the choice of a generative model necessarily involves trade-offs. Diffusion models can generate realistic and diverse objects but do so slowly, whereas GANs are capable of generating realistic objects quickly in one forward pass, though they may lack diversity. In our paper, we argue that the lack of diversity is not an issue for speech enhancement and provide an analysis of why GANs are likely to sample the desirable mode. Our main argument is that sacrificing diversity is not problematic for speech enhancement, but sacrificing inference speed is. Thus, GANs might be better suited than diffusion models for the speech enhancement problem.
Once again, we would like to thank all the reviewers for their efforts and time.
Sincerely,
Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Invariant Tokenization of Crystalline Materials for Language Model Enabled Generation | Accept (poster) | Summary: This paper presents a novel approach for generating crystal materials using language models. The key innovation lies in the Mat2Seq method, which converts 3D crystal structures into 1D sequences while ensuring SE(3) and periodic invariance. This approach addresses the challenge of representing crystal structures in a way that is unique and invariant under different mathematical descriptions.
Strengths: 1. The paper is well-written, providing clear descriptions of the problem background and proposed methods.
2. It tackles an interesting problem by using language models to generate crystal structures and introduces the consideration of SE(3) and periodic invariance for the first time.
3. The experiments are comprehensive, covering standard benchmarks and attempting to discover crystals with specific properties, demonstrating the robustness and versatility of the method.
4. Although the paper ensures the invariance of the generated structures, enforcing this invariance at the tokenizer level might limit the diversity of the generated structures to a small subspace. To obtain structures with different rotations, additional post-processing may be required, unlike models based directly in the spatial domain which can generate various rotational variants naturally.
Weaknesses: 1. The paper should discuss the computational complexity differences between language models and specialized crystal models, as language models generally have higher parameter counts and computational demands.
2. The comparison of language models is limited. The experiments only compare with CrystalLLM, despite mentioning other methods.
3. The rationale for comparing only with CrystalLLM in Section 4.2 and not with other methods from Section 4.1 should be clarified.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. For crystalline materials, isn't the original data already the most primitive unit? If we first apply Niggli cell reduction before using CIF files to describe unit cells, can we ensure a consistent representation?
2. While generating continuous crystal structures in 3D space seems natural, I am curious whether this token generation method can ensure that the reconstructed 3D crystal structure is continuous. Specifically, can the generated sequences always be converted back into a meaningful crystal representation?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: 1. When performing conditional generation, it appears that generating structures for each specific property requires fine-tuning under specific conditions. This could result in high complexity in practical applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer vCYe,
Thank you for your recognition that our approach addresses the challenge of representing crystal structures in a unique and invariant way under different mathematical descriptions. For your concerns and questions, we provide point-to-point responses below.
1. The paper should discuss the computational complexity differences between language models and specialized crystal models, as language models generally have higher parameter counts and computational demands.
Thank you very much for your suggestion, and we do think this is important to point out. Thus, we include the computational complexity and efficiency in generation comparisons with specialized crystal models like CDVAE and DiffCSP in Table 11 attached in the above general responses. As we can see in the table, LLMs based Mat2Seq indeed has more parameters, but with similar amount of GPU resources following DiffCSP running efficiency comparisons, Mat2Seq is significantly faster in generation.
We additionally show that decreasing the model size by 8 times will not influence the RMSE a lot and can further increase the running efficiency. For the smaller model, it achieved RMSE of 0.039 which is significantly better than DiffCSP with RMSE of 0.049, and more than 3 times faster in generation process.
2. The comparison of language models is limited. The experiments only compare with CrystalLLM, despite mentioning other methods.
Thank you for your question. We'd like to mention that we use Table 1 to directly show the failures of other two LLMs for crystal generation, and crystalLLM from Meta team that are finetuned instead of training from scratch is not directly comparable due to unfair training logic (they are using pre-trained large language models). We tend to believe the crystal structure prediction task established by DiffCSP and CrystaLLM is a reasonable and powerful benchmark to compare with, but other LLMs based crystal generation methods are either not directly comparable, or provide no evaluations for this task.
3. The rationale for comparing only with CrystalLLM in Section 4.2 and not with other methods from Section 4.1 should be clarified.
Thank you very much for this question. The reason for not including CDVAE and DiffCSP in Section 4.2 is also simple: we just want to compare with baselines in a fair way. For the tasks defined in Section 4.2, it requires the model to be trained on CrystaLLM dataset with 2.3 million crystal structures, while CDVAE and DiffCSP have only been trained on much smaller dataset like MP20 and MPTS52 with less than 50k training samples. It is unfair to compare with them, so we do not include them in the comparison in Section 4.2.
Additionally, the major contribution of this work, just as you mentioned before, is addressing the challenge of representing crystal structures in a unique and invariant way under different mathematical descriptions. To show this, we comprehensively compare our Mat2Seq with CrystaLLM that does not address this.
Furthermore, we tend to believe the performance gains beyond CDVAE and DiffCSP are already clearly shown by comparisons in Section 4.1.
4. For crystalline materials, isn't the original data already the most primitive unit? If we first apply Niggli cell reduction before using CIF files to describe unit cells, can we ensure a consistent representation?
Unfortunately no. Although usually the original data is the most primitive unit cells, there are a lot of different most primitive unit cells for the same crystal structure, as shown in Figure 1, like shifting periodic boundaries.
For your second question, actually the baseline method CrystaLLM uses Niggli cell reduction before using CIF files, but as we can see in Table 1, it fails to maintain a consistent representation for the same crystal structure before and after periodic transformations that will not change the crystal structure at all.
5. While generating continuous crystal structures in 3D space seems natural, I am curious whether this token generation method can ensure that the reconstructed 3D crystal structure is continuous. Specifically, can the generated sequences always be converted back into a meaningful crystal representation?
To show this, it is better to demonstrate the validity and stability of generated crystals from Mat2Seq. We have provided additional experiments for validity evaluations and stability evaluations in Table 8 attached above in the general responses. It can be seen that Mat2Seq achieves competitive validity ratio and remarkable stability ratios (around 50\% of the generated structures can fall into the energy hull region of E_{hull} < 0.1 eV following FlowMM pipeline).
For your second question, as we can see from Table 8, CDVAE, DiffCSP, and the most recent FlowMM published after the NeurIPS deadline, none of them can guarantee a 100\% validity rate, which means the generated crystals by these SOTA methods are not always meaningful.
6. When performing conditional generation, it appears that generating structures for each specific property requires fine-tuning under specific conditions. This could result in high complexity in practical applications.
Yes, currently we tend to utilize the pretrained model on that 2.3 M dataset because this pretrained model is available and potentially powerful beyond training from scratch. This kind of finetuning will potentially introduce less complexity compared with training from scratch. We will release the pretrained model for others to use.
Additionally, a foundation model covering various properties require a huge amount of crystal structures with corresponding properties. This kind of dataset is very expensive to establish and currently out of our scope. And this is a promising direction to move forward.
Thank you again for your questions. If you have any other questions, we are more than willing to answer.
Yours sincerely, Authors.
---
Rebuttal Comment 1.1:
Comment: The authors' rebuttal has addressed my concerns, and I have no further questions.
---
Reply to Comment 1.1.1:
Title: Thank you for your responses
Comment: Dear Reviewer vCYe,
Thank you for your recognition and responses!
Your suggestions undoubtedly help us enhance the clarity of this work, and we are glad that your concerns have been addressed.
Yours sincerely, Authors | Summary: In this paper, the authors focus on the application of language models in the field of material generation. Starting from CIF files that represent crystal unit cell structures, they primarily utilize the Niggli reduction method to organize structures under different translations, rotations, and unit cell expansions into a unique representation. This representation is then used for crystal generation and conditional generation tasks. Compared to CrystaLLM, the proposed method demonstrates better generalization capabilities across multiple tasks.
Strengths: 1. The authors introduce a new process, Mat2Seq, which uniquely represents crystal structures using Niggli reduction. This approach ensures SE(3) invariance, periodic invariance, and completeness, avoiding data augmentation problem and shortening token length in language model.
2. The paper showcases Mat2Seq’s ability to generalize to novel crystal structures that were not seen during training. This is a crucial aspect for practical applications and demonstrates the robustness of the method.
Weaknesses: 1. The authors utilized a unique representation method for crystal structures, which theoretically promises to become an essential step in data preprocessing. This representation method primarily aims to improve the model's ability to recognize equivariant transformations in materials. However, the evaluation does not demonstrate this method's generalization capability to equivariant structures. It remains unclear from the authors' validation whether this method help language model learns from SE(3) equivariance or if the improved performance is due to the new token representation.
2. In the experimental validation in Section 4.2 and the conditional generation in Section 4.3, the evaluation of capabilities is very limited. The authors only provided the proportion of generated structures without detailing whether new structures can be generated, how many of the generated structures are valid, and accuracy of these generated structures. The generated structures need detailed explanation and evaluation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Section 3.1, the authors list two conditions for uniqueness and three types of transformations: translation, rotation, and unit cell expansion. It is suggested that Figure 1 should clearly illustrate all these transformations. Additionally, the "Change Lattice" operation in Figure 1 is confusing. Which type of transformation does this correspond to? Simply changing the length in the a-direction should not alter the atomic positions. Clarification is needed on whether the authors intended to illustrate unit cell expansion or some other operation.
2. In Table 1, the authors demonstrate Mat2Seq's ability to recognize crystal uniqueness, but the specific evaluation details are not provided. How were the values obtained, especially the 30% proportion? The methodology and criteria for this evaluation should be clearly explained.
3. In Section 4.3, while band gap is indeed an important property for semiconductors, it should be noted that whether the band gap is zero is a classification task distinguishing metals from non-metals. The authors' use of <0.5 as a threshold is inappropriate, especially when they mention that "values from 0 to 0.5 are marked as 0," which exacerbates the issue. This oversimplification could lead to misleading results.
4. In Section 4.2, in evaluating the ability to generate structures of new materials, the authors only indicate whether the generated structures match the given chemical formulas. It is recommended to include an evaluation of the atomic distance differences between the real and generated structures using RMSE to better assess the generation accuracy.
5. In Section 4.3, compared to commonly used machine learning models for material screening, generative models have the potential to produce out-of-distribution (OOD) structures, aiding researchers in discovering new structures with desired properties. However, the authors do not inform readers about the chemical formula repetitions, structure repetitions, or generation errors in the generated structures. A standard should be defined to evaluate the generated structures based on validity, uniqueness, and novelty.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors utilize an existing unique unit cell representation method in the data preprocessing stage for language models. This method holds promise as a necessary step for future work. However, the evaluation does not provide sufficient evidence to demonstrate its application value. Additionally, the conditional generation applications mentioned by the authors are also quite limited, based on the evaluations provided in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Sr8M,
Thank you very much for your time invested in reviewing our work. For your concerns and questions, we provide point-to-point responses below.
1. About authors' validation whether Mat2Seq helps LM learns from SE(3) equivariance or if the improved performance is due to the new token representation.
Thank you for this question. Actually, we'd like to point out that the only major difference between Mat2Seq and CrystaLLM is the SE(3) invariance 1D crystal sequence used, while the tokenization process is similar. To be specific, we used the exactly same model settings and very similar tokenization process following CrystaLLM, with the aim of showing the importance of SE(3) invariance and periodic invariance sequence representations. With CrystaLLM naturally served as the ablation of the inclusion of SE(3) invariance and periodic invariance sequence representations implemented by Mat2Seq, we show by comprehensive experimental results in the original manuscript (Tables 2 and 3) and in the attachment of the global rebuttal (Table 12) that Mat2Seq sequences will yield better performances.
Thus, by using CrystaLLM as a solid baseline and ablation, we believe the importance of SE(3) invariance and periodic invariance sequence representations for materials is clearly demonstrated.
2. We slightly organized your questions and concerns for the evaluation of Section 4.2 (Weakness 2, Question 4) and 4.3 (Weakness 2, Question 5), and provided responses below.
First of all, we agree with your point that for section 4.2 it is better to further include evaluation of the atomic distance differences between the real and generated structures using RMSE to better assess the generation accuracy. We provide detailed hit rate and RMSE in the new Table 12 attached above in general responses. It can be seen in the table that Mat2Seq significantly performs better than CrystaLLM in terms of Hit Rate (whether the generated structure matches with the literature structure), and RMSE.
Additionally, your suggestion for section 4.3 to include metrics like uniqueness, validity, and novelty is indeed very important for aiding researchers in discovering new structures with desired properties. Thus, we follow your suggestion and conduct evaluations for the generated crystals when conditioned towards lower or higher band gap values, with results shown in Table 9 attached above in general responses. It can be seen in Table 9 that Mat2Seq achieves remarkable validity with 88\% for lower band gap condition and 90\% for higher band gap condition, with good uniqueness of 98\% and 92\%, and good novelty of 86\% and 99\%.
3. About figure 1 and "Change Lattice" operation.
Thank you for your suggestion. Currently we show two types of periodic transformations that will alter unit cell structures significantly to demonstrate the failures of previous methods. For sure, we can update Figure 1 to include more demonstrative examples of all transformations that can result in different unit cell structures once we can update the paper.
For your second question, the "Change Lattice" operation does not correspond to cell expansion. Let's use a simple example to better demonstrate this. For example, you have a 2D material with atom at the origin of a cell, with cell lattices (0, 1) and (1, 0). A change lattice operation means you can actually change the lattice vectors to (1, 0) and (1, 1) without changing the area (or volume for 3D structures) of the cell at all. It is still a minimum unit cell, but with different lattice structures. We will also add these texts in the appendix once we have the chance, to enhance the clarity.
4. How were the values obtained, especially the 30% proportion?
Thank you for your question. The evaluation is conducted as follows: (1) for all crystal structure in MP 20 dataset, we transform it to a different unit cell representation without changing the crystal structure at all, e.g., by shifting the periodic boundaries as shown in Figure 1, to obtain a second representation of the same crystal. (2) We feed the original and the second crystal structure into 1D sequence representations of different methods, and then compare the differences between the output of these two different inputs for the same crystal structure. If the sequence mismatches, e.g., the resultant coordinates or type for the first atom is different, then it is a failure. (3) We go through the whole dataset and calculate the success rate. 30% means only 30% of the structures before and after the transformation will have the same sequence representation when using CrystaLLM.
5. The band gap zero is a classification task distinguishing metals from non-metals. This oversimplification could lead to misleading results.
Thank you for your question. However, we kindly disagree with you for this point. We want to mention that the band gap of a material is the energy difference between the top of the valence band and the bottom of the conduction band. It is calculated by subtracting the energy of the valence band maximum (E_v) from the energy of the conduction band minimum (E_c). We treat materials with Eg = E_c - E_v < 0.5 eV as a separate group, which will indeed include both true metals/semimetals with (Eg = 0) and small gap materials 0< Eg <0.5 eV), but we do not regard this group as pure "metal" group. Similarly, we separate materials with 0.5 <= Eg < 1.0 eV, 1.0 <= Eg < 1.5 eV, ... etc. as individual groups. This is just our current grouping protocol. For your interested applications, you can easily divide materials into different groups, e.g. separating Eg=0 as an individual group for metal electrode applications, and 0<Eg<0.5 as another group for mid-infared and thermoelectric applications.
With extensive additional experimental results and further clarifications, we hope we addressed your concerns and questions. If you have any other questions, we are more than willing to answer.
Yours sincerely, Authors.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed response and the additional experimental evaluations. These supplements have enriched the paper's experimental assessment, particularly the added evaluations on the validity, uniqueness, and novelty of the generated structures, which have improved the overall quality of the paper. As a result, I have decided to adjust my score to 6.
However, there is still a point worth discussing. Regarding the authors' evaluation of conditional generation in the last reply, although the authors have explained their approach to merge materials with zero and near-zero band gaps, I still have some concerns. In the MP database, the ratio of materials with a zero band gap to those with a non-zero band gap is approximately 1:1, and in the MP20 dataset, the proportion of materials with a zero band gap is even higher. When evaluating the conditional generation capability, the authors chose to merge materials with Eg = 0 and near-zero band gaps into the range 0 ≤ Eg < 0.5 eV. This approach, which is based on a dataset where the proportion of Eg = 0 is very high and has very low proportion of 0 < Eg < 0.5 eV, seems confusing to me and does not adequately validate the conditional generation capability. I recommend a separate discussion on this aspect to provide a clearer validation of the conditional generation capability.
Once again, thank you to the authors for their hard work and for the thorough response to our feedback.
---
Reply to Comment 1.1.1:
Title: Further responses from authors
Comment: Dear Reviewer Sr8M,
Thank you for your responses and recognition of our work. We appreciate your insightful questions regarding the evaluation of the conditional generation ability of the proposed method. Below, we provide further clarifications and discussions which we will also include in the manuscript.
From our understanding, your question mainly concerns the high proportion of materials with zero band gap and the low proportion of materials with a band gap value between 0 and 0.5 eV. You suggest that this may not adequately validate the model's conditional generative ability, as the proportion of materials with 0 ≤ Eg < 0.5 eV is substantial. In other words, if the ratio of a group of materials in a dataset is large enough, an unconditional model might also generate a significant proportion of materials that satisfy the given group condition.
To clarify this, let us begin with the dataset distribution. The training set we used for generating materials with high or low band gap values contains 61,541 crystals in total, with 46,933 crystals (77.9%) having 0 ≤ Eg < 0.5 eV, 9,814 crystals (15.9%) with 0.5 ≤ Eg ≤ 3 eV, and 4,794 crystals (7.8%) with Eg > 3 eV. As you mentioned in your comments, the ratio of crystals with 0 ≤ Eg < 0.5 eV is indeed large in the training set. Therefore, to better demonstrate the model's conditional generation capacity, we not only show the success rate of generating crystals with low band gap values (<0.5 eV) but also the success rate of generating crystals with high band gap values (> 3 eV). This approach highlights the ability of the proposed method to significantly alter the distribution from the training set. As shown in Table 4 of the paper and Table 9 in the general response, Mat2Seq can significantly change the generated crystal distribution from 7.8% with Eg > 3 eV in the training data to more than 90% in the conditional generation distribution, with a remarkable 92.2% uniqueness ratio and 89.8% validity ratio. Therefore, we believe that the conditional generation capacity of Mat2Seq is well demonstrated.
Furthermore, another reason we do not group crystals with a specific band gap value (e.g., 0 eV) into a single group but rather use a range of band gap values for this task is that current state-of-the-art machine learning-based band gap predictors (such as ComFormer, ALIGNN, and others) treat the prediction of band gap values as a regression task, rather than first classifying materials as metal or non-metal and then performing regression for non-metal materials. As a result, it is challenging to determine whether a generated material satisfies a 0 eV band gap because the mean absolute error (MAE) can be as large as 0.122 eV.
Thank you again for your recognition and insightful suggestions and comments. If you have any additional questions, we would be more than happy to answer them.
Yours sincerely,
The Authors | Summary: There are several challenges when developing LMs for materials: 1) each crystal structure consists of an infinite number of atoms and a unique and invariant unit cell must therefore be selected for each crystal 2) the unit cell can be represented in a one-dimensional (1D) sequence that maintains invariance under arbitrary rotations and ensures completeness.
To tackle these challenges, the paper proposed a method called Mat2Seq that systematically transforms 3D crystal structures into 1D sequences. This is achieved by first identifying SO(3) equivariant unit cells and subsequently converting these into SE(3) invariant sequences. The experimental results in crystal structure prediction and crystal discovery with desired properties validate the efficacy of Mat2Seq.
Strengths: - I understand why it is important for the inherent symmetry of materials to be reflected in the text when designing a language model for material generation.
- The importance of unique representation and completeness from the structure of the material to the sequence is emphasized.
Weaknesses: - Overall, the novelty and contribution of the paper are low. Instead of proposing new algorithms, they simply apply technologies that are already widely used in materials science to the process of representing materials as 1d sequence text. More specifically, the method for determining the equivariant unit cell is derived from Niggli cell reduction rather than being an original contribution of this paper. This concept is already widely used in the field of materials science, and the authors’ application of it to the text representation of materials does not seem like a significant contribution.
- It is unclear what Section 3.3 is trying to convey. The authors claim to achieve text representation reflecting SE(3) symmetry following SO(3) symmetry, but it is doubtful whether this is well conveyed in this section.
- The evaluation metrics used are limited. To demonstrate efficiency more effectively, the paper should present values for various metrics such as Validity, Stability, and S.U.N., which are commonly addressed in other studies.
Technical Quality: 2
Clarity: 2
Questions for Authors: - Could you provide a more detailed explanation with examples regarding the input for the language model (LM)?
- Why is the target property value inserted at the beginning of the text input when generating materials for a specific target property?
- Does “irreducible atom sets” refer to the atom sets contained within the minimum unit cell?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please refer to the Weakness section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 5Kst,
Thank you for your time invested in reviewing this work. We provide point-to-point responses to your questions and concerns.
1. Weakness 1
Thank you very much for raising this question. However, we kindly disagree with this and we feel there might be a potential misunderstanding of our proposed method.
Let's begin from a question. Could Niggli cell reduction algorithms or other widely used 1D crystal representations including CIF files achieve uniqueness or SE(3) invariance? The answer to this question is **No**. More specifically, **Niggli cell reduction can only give you a set of lattice vectors and cannot determine a unique crystal unit cell**. Additionally, there is a previous work **CrystaLLM that has been using Niggli cell reduction** when converting the crystal structures into 1D sequences. However, as we show in Table 1, **CrystaLLM cannot achieve unique** crystal 1D sequence representations.
Furthermore, let's look into our proposed method Mat2Seq and demonstrate how it achieves the desired properties including uniqueness, completeness, and SE(3) invariance. **The niggli cell reduction is only used in the first step when we want to uniquely determine a set of lattice vectors**. After that, we need to uniquely determine a corresponding unit cell. And then, we need to determine a unique ordering of atoms in the cell, and features used to completely represent the crystal 3D unit cell structures. The Niggli cell reduction is only an initial step of our proposed method, and to the best of our knowledge, Mat2Seq is the first work that can achieve uniqueness, completeness, and SE(3) invariance when converting 3D crystal structures into 1D sequences in the field of materials science. Thus, we kindly disagree that this problem is solved by previous Niggli cell reduction algorithms or any other widely used methods in materials science.
Last but not least, we want to point out that all previous LLM based crystal generation methods fail to achieve uniqueness and SE(3) invariance. It is valuable to propose a method to address this limitation of the current process in this direction.
2. Weakness 2
We appreciate your question. To clarify, as we mentioned in the beginning of section 3, we begin with Requirements for ideal crystal sequence representations in Section 3.1, and then move forward to how to determine a SO(3) equivariant unit cell in Section 3.2, then, Section 3.3 is used to demonstrate how to convert a SO(3) equivariant unit cell into SE(3) invariant 1D sequence.
Specifically, in Section 3.3, we show that for given determined SO(3) equivariant and periodic invariant unit cells $\mathbf{M}=(\mathbf{A}_u, \mathbf{P}_u, \mathbf{L}_u)$, we represent them by SE(3) and periodic invariant sequences that are complete to guarantee the full reconstruction of crystal structures.
More specifically, the SO(3) equivariant unit cell obtained in Section 3.2 cannot be directly used as input for LLMs, because a given crystal structure can have infinite number of SO(3) equivariant unit cells that differ by a rotation transformation.
Additionally, to show that the converted 1D sequence satisfies all the requirements, we provide detailed proofs in Section 3.4.
With these being said, we will appreciate any additional specific suggestions you have to improve this section, and we will revise the main paper accordingly once we have the chance.
3. Weakness 3
Thank you very much for your valuable question.
Following your suggestion, we further followed a very recent work FlowMM in ICML 24 published after the NeurIPS submission deadline to establish a fair comparison in terms of Validity, Stability, and S.U.N. It is worth noting that the calculation of stability and S.U.N. is very expensive (requires extensive DFT calculations) and only has been used by very recent works even published after NeurIPS deadline. We show the results in Table 8 attached above in general responses. It can be seen in the table that Mat2Seq achieves competitive results in terms of validity, stability, and S.U.N., even 28\% better beyond FlowMM in terms of S.U.N. (stable, unique, and novel) that published in ICML 24 after NeurIPS deadline.
4. Question 1
Sure, we'd like to mention that a concrete example of the Mat2Seq input for a crystal structure is shown in Figure 2 in the main paper. The whole Mat2Seq sequence for that crystal structure is "formula Ag 4 Hg 2 I 8 \n space_group_symbol I-4 \n lattice_parameters \n a 6.5361 b 6.5361 c 13.1629 \n alpha 90.0000 beta 90.0000 gamma 90.0000 \n Ag 2 0.0000 0.0000 0.0000 \n Ag 2 0.0000 0.5000 0.7500 \n Hg 2 0.0000 0.5000 0.2500 \n I 8 0.2400 0.7596 0.6200 \n\n", where all these symbols and numbers are mapped to int values by a mapping dictionary.
5. Question 2
Thank you for your valuable question. The nature of Autoregressive language models is the conditional probabilities of the next token given its predecessors, $p(C_i|\theta)=\prod_{j=1}^{n_i}p(c_j | c_1:c_{j-1}; \theta)$. Thus, if you want to establish a conditional distribution $p_\theta (\cdot|s)$ to generate 3D crystal structures possessing the property $s$, it is natural to place the target property values at the beginning of the text other than anywhere else.
6. Question 3
No, the irreducible atom sets refer to the subset of the atoms in the minimum unit cell. For example, as you can see in the structure of Ag4Hg2I8 in Figure 2 in the main paper, there are 14 atoms in the minimum unit cell, however, the irreducible atom set only contains 2 Ag, 1 Hg, and 1 I. This is because the positions of other seven I atoms can be fully recovered by using I 0.2400 0.7596 0.6200 and the space group transformations for I-4 group, and that's exactly why there is a number 8 in "I 8 0.2400 0.7596 0.6200" after symbol I.
With these clarifications provided, we hope we addressed your questions and concerns. And if you have any other questions, we are more than willing to answer.
Yours sincerely, Authors.
---
Rebuttal Comment 1.1:
Title: We appreciate the chance to answer any additional questions
Comment: Dear Reviewer 5Kst,
Thank you again for the valuable time you invested in reviewing this work. For your previous questions and concerns, we have provided detailed clarifications along with extensive additional experiments. We tend to believe these clarifications and additional experiments could thoroughly address your concerns and questions.
As the author-reviewer discussion period is closing soon, we would greatly appreciate the chance to answer any additional questions you may have.
Yours sincerely,
The Authors | Summary: This article propose a novel method, known as Mat2Seq, to tackle this challenge. Mat2Seq to converts 3D crystal structures into 1D sequences and ensures that different mathematical descriptions of the same crystal are represented in a single unique sequence, thereby probably achieving SE(3) and periodic invariance. Experimental results show that, with language models, Mat2Seq achieves promising performance in crystal structure generation as compared with prior methods. Overall this work gives new insights and should be accepted for the conference after minor revisions.
Strengths: 1. Mat2Seq to converts 3D crystal structures into 1D sequences
2. it reports the framework for creating unique and complete crystal sequence representations, followed by the construction of a material LLM capable of generating novel crystal structure with desired properties of interest.
Weaknesses: 1. Lack of experimental data validation for the generated data.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Author needs to verify the generated data with more experimental results as currently it have multiple limitations. 2. Author needs to compare the accuracy with already existing CIF data of crystals.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations of the current Mat2Seq include: (1) it cannot be directly used for other atomic systems, like molecules and proteins; (2) the extension to model disordered materials remains a challenging frontier; and (3) large-scale training with more stable crystal structures can potentially enhance the robustness and performance when more computational resources are available.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JJwt,
Thank you very much for your recognition of our work in terms of insights and contributions. For your questions and concerns, we provide point-to-point responses as follows.
1. Lack of experimental data validation for the generated data. Need to verify the generated data with more experimental results as currently it have multiple limitations. Need to compare the accuracy with already existing CIF data of crystals.
Thank you for your insightful comments. We totally agree that the comparison with experimental data (or in other words, crystal structures that have been experimentally observed) is important to demonstrate the ability of the proposed method for real world applications, beyond synthetic crystal structures obtained from pure DFT calculations.
Actually, to demonstrate this, we conduct experiments by using Mat2Seq to discover recently experimentally discovered crystal structures from literature in Section 4.2. The visual comparison between the Mat2Seq generated structure with the experimentally observed crystal structure is shown in Figure 4, which shows our proposed method's ability for experimentally observed crystal structures. We also add one experiments to show the match rate and RMSE in Table 12 attached above in the general response.
Additionally, the datasets used in Section 4.1 also contain a large number of crystal structures that are experimentally observed. For example, in MP20 dataset test set, there are 3819 crystal structures that are experimentally observed. To show the accuracy of Mat2Seq for experimental data, we additionally conduct evaluation experiments on these 3819 crystal structures with results shown in Table 10 attached above in general response. It is worth noting that Mat2Seq achieves 65.2% match rate with 0.042 RMSE on these 3819 crystal structures.
2. For current limitations listed in the paper.
Thank you for your valuable question.
(1) Designing a LLM-based generative method for all atomic systems while satisfying uniqueness and completeness is highly nontrivial and a open question. We tend to address this problem in future works.
(2) Different from other generative methods like diffusion based or flow based methods, LLM based methods generate crystal structures in an autoregressive manner. This difference makes a lot of techniques used by previous diffusion or flow based methods invalid for LLM-based methods. And this is an open frontier to be explored as also discussed by previous LLM-based methods. In this paper, we proposed a potential solution for this problem, and we tend to believe there are rooms for future works to improve.
(3) As discussed by the ML community, usually more data will result in more powerful ML models. However, generating large scale crystal structure dataset is very expensive and currently out of our scope, so we list this as a potential direction to further improve the power of ML methods for crystal structure generation.
We hope we addressed your concerns. If you have additional questions or concerns, we are more than willing to answer.
Yours sincerely, Authors. | Rebuttal 1:
Rebuttal: Dear Reviewers, ACs, and PCs,
We thank all reviewers for your time invested in reviewing our work, and appreciate your valuable suggestions.
For all your (reviewers') questions and concerns, we provide detailed clarifications with additional experimental results, and some of these experiments are quite expensive to run like S.U.N. and stability (which require extensive DFT calculations). We attached detailed experimental results in the pdf file here for you to view.
To be specific, in Table 8, we add the comparison with a very recent SOTA method FlowMM that is published in ICML 24 after the NeurIPS submission deadline, to establish a fair comparison in terms of **Validity, Stability, and S.U.N.** (stable, unique, and novel). We follow the FlowMM pipeline and generate 1k materials to obtain these results.
In Table 9, we add the **unique ratio, validity ratio, and novelty ratio of generated materials conditioned towards low and high band gap values** detailed in Section 4.3.
In Table 10, we add the match rate (%) and RMSE for **experimentally observed** crystal structures in MP-20 test set.
In Table 11, we add the **efficiency in generation speed, and model complexity** comparisons with CDVAE and DiffCSP.
In Table 12, we add the **Hit rate (whether the generated structure matches with recently experimentally observed crystals from literature) and RMSE** for the 10 challenge crystal systems detailed in Section 4.2, compared with previous SOTA method CrystaLLM.
With these extensive experiments and detailed further clarifications, we hope we addressed all your concerns and questions thoroughly. If you have any other questions or concerns, we are more than willing to answer.
Yours sincerely, Authors
Pdf: /pdf/27c273397433339357ceae33a193559f7f606f54.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Unified Debiasing Approach for Vision-Language Models across Modalities and Tasks | Accept (spotlight) | Summary: The authors propose a method for debiasing the representations of Vision-Language Models (VLMs) that can be applied at various layers of the image and text encoders/decoders, and can be used for a variety of downstream tasks such as image generation, 0-shot classification, text to image retrieval, and image captioning.
Strengths: - This work's goal of debiasing VLMs is an important and timely problem that should be of interest to many in the NeurIPS community
- The proposed approach is seemingly effective, lightweight and seems easy to apply
- The method is finetuning-free, which mitigates issues such as catastrophic forgetting
Weaknesses: **Minor issues:**
- The writing could be improved in parts. There are minor grammatical errors in parts, such as "For instance, Hirota et al. [18], Zhao et al. [42] investigated bias in image captioning, where specific genders or races are disproportionately represented **leading to generate** biased caption." I suggest another editing pass or two.
- I find the structure of section 3 "Bias Analysis in VLMs" is a little odd. It mostly reads like background/preliminary material, but it also contains some experimental setup details (such as in line 165). This seems off, as this is before the proposed method section. I suggest moving these experimental details to section 5, with the rest of the experimental setup.
- Figure 4 shows that DeAR fails to produce bias-free representations. Including a figure likes this naturally makes the reader want to see a figure showing that the proposed approach **does** succeed here. It feels incomplete as-is.
**More significant issues:**
- The experimental results don't include confidence intervals. The authors justify this by pointing out that the proposed approach is deterministic. However, CIs still can and should be calculated by an approach such as bootstrapping the test set.
- Relatedly, the text to image generation experiment is **not** deterministic if the text2img generator's random seed is changed, so CIs should definitely be reported here.
- The compared methods are limited to only 2 other debiasing approaches and the baseline model.
- The proposed approach of 1) finding features most associated with the biased attributes and 2) replacing these features with an imputed value from low-confidence samples is, I think, very similar in practice to Chuang et al.'s approach [1] in "Debiasing Vision-Language Models via Biased Prompts". Chuang modifies embeddings by projecting them onto the subspace defined by the linear direction most associated with the biased attribute(s). Intuitively, the direction most associated with the biased attribute should be similar to the direction defined by the features identified in step 1) of the proposed approach. Not that I am **not** saying that this similarity alone is a limitation; I think the proposed approach is different enough to still have sufficient novelty. However, I think that Chuang's approach really needs to be compared against (for all experiments except maybe image captioning), in order to verify that these features are either finding a different direction in the embedding space or that the low confidence imputation performs better than just projecting onto the bias subspace.
[1] Chuang, Ching-Yao, et al. "Debiasing vision-language models via biased prompts." arXiv preprint arXiv:2302.00070 (2023).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is the improvement of the proposed approach over the compared methods significant when CI are considered, where these CI's are obtained from bootstrapping or some other way?
- How does the proposed approach perform when compared against the method proposed in "Debiasing vision-language models via biased prompts."?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I think the discussion on limitations could be improved. The authors point to Section ***5.2 Result Analysis** as their discussion of limitations. 5.2 covers the results of their experiments, and does include some discussion on how the experimental results on image generation point to room for improvement. However, I feel like a discussion of the limitations would be greatly improved by dedicating a separate section/subsection to it. I would expect the authors to discuss more fundamental limitations, rather than just experimental performance. For instance, I would include details on how the method assumes access to a validation set with labels for bias/protected attributes (e.g. a dataset with race/gender/etc labels).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Confidence interval in Text-to-Image Generation
Thank you for pointing out the lack of confidence intervals in our experimental results.
We acknowledge that text-to-image generation is not deterministic. This is why we conduct text-to-image generation 10 times with different seeds and use a unified evaluation metric to measure the skewed distribution across 10 different generations for each neutral prompt:
\begin{align*}
Skew = \frac{1}{|\mathcal{P}|} \sum_{p \in \mathcal{P}} \frac{\max(N_{p,m}, N_{p,f})}{10},
\end{align*}
where $N_{p,m}$ and $N_{p,f}$ are the numbers of detected genders for each profession, $\mathcal{P}$ is a profession set. For example, if a model generates the same gender for a class 9 times out of 10, the Skew value for this profession becomes 90%. Although this metric does not include a confidence interval, it **accounts for the randomness in text-to-image generation**.
On the other hand, a metric used in [1] measures gender discrepancy as follows:
\begin{align*}
Discrepancy=\sqrt{\Bigl(\frac{N_{p,m}}{\vert \mathcal{P}\vert}-0.5\Bigr)^2 + \Bigl(\frac{N_{p,f}}{\vert \mathcal{P}\vert}-0.5\Bigr)^2},
\end{align*}
This metric can be used in a single generation, so it may produce a confidence interval when conducted on multiple seeds. However, this metric does not effectively reflect bias in text-to-image generation. For example, assume we have a set of four gender-dominated professions: {nurse, dancer, doctor, engineer}. If a biased text-to-image model always produces female images for nurse and dancer, and male images for doctor and engineer, the evaluation metric becomes 0, even though the bias is prevalent. This metric fails to demonstrate bias, even with a confidence interval over 10 runs. Therefore, our evaluation metric, which computes Skew over 10 runs, more effectively reflects the bias in text-to-image generation, accounting for randomness.
On the other hand, demonstrating a confidence interval for the mismatch rate for gender-prompt is possible as it can be measured by a single experiment. As we have already executed the experiment with 10 different seeds, we report the confidence interval for mismatch rates in **Table 1 of the rebuttal PDF file**.
[1] Chuang, Ching-Yao, et al. "Debiasing vision-language models via biased prompts." arXiv preprint arXiv:2302.00070 (2023).
### Confidence interval for all downstream tasks
We agree with the importance of including confidence intervals in experimental results, even when the model is deterministic, as is the case with pre-trained models used in zero-shot classification, text-to-image retrieval, and image captioning.
For zero-shot classification and image captioning, we conducted experiments on the full test dataset. We utilized the bootstrapping technique by generating 1000 datasets with replacement and reported the confidence intervals in **Table 1 of the rebuttal PDF file** for zero-shot classification and image captioning.
In the case of text-to-image retrieval, the test set is a subset of the entire Flickr30K dataset, created by randomly sampling 1000 samples. We extended the experiments by using 10 different test sets to obtain the confidence intervals. These results are also included in **Table 1 of the rebuttal PDF file**.
#### Robust superiority of SFID
Overall, when we include confidence intervals for all downstream tasks via bootstrapping or additional experiments, SFID consistently outperforms the comparison methods in all cases. **This robust performance across various scenarios highlights the effectiveness and reliability of SFID in mitigating bias.**
### Comparison with Prompt-debias (Chuang et. al.)
Thank you for recognizing a similar approach and the novelty of our work. We agree that while their goal in debiasing VLM embeddings is analogous to ours, our methodologies differ significantly.
We include the experimental results of Prompt-debias in Table 1 of the rebuttal PDF file. It turns out that while Prompt-debias can mitigate bias in various downstream tasks, our method, **SFID, performs significantly better**. Moreover, in text-to-image generation, Prompt-debias works only with neutral prompts but deteriorates the bias in the case of mismatch rates for gender-prompts.
The reason our method works better than Prompt-debias is due to the bias amplification in the decoder or image encoder. Prompt-debias's application is limited to the text embedding, which is the output of the text encoder. In contrast, SFID mitigates bias in either the encoder or decoder output, or both, providing a more comprehensive approach to debiasing.
### Visualization of embedding translation by SFID
We include a visual aid for the embedding to indicate the bias-free representation achieved by SFID in **Figure 2 of our rebuttal PDF**. As SFID aims to obscure all the features in the important index by imputing ambiguous values, all the data points are gathered at a single point to mute the information related to the sensitive attributes, while maintaining the features in other indices as they are. It is important to note that although this translation is enforced to a single point, it still remains within the distribution of the original samples, as shown in Figure 3 of our main paper.
### Grammar and Structure
Thank you for pointing out the organization of our paper. We will address the minor grammatical errors and ensure clarity throughout. In Section 3, it is true that it serves both as preliminaries and motivation section. Experimental setups are included in this section to provide readers with context regarding each downstream task and experimental setting, ensuring consistency. We will revise and reorganize our content to enhance readability.
### Limitation and Future work of SFID
Thank you for pointing out the limitations section. We have identified potential limitations and areas for future exploration in **our global rebuttal.**
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive rebuttal. I have raised my score accordingly. | Summary: this paper introduces Selective Feature Imputation for Debiasing (SFID), which integrates feature pruning and low confidence imputation (LCI) to effectively reduce biases in VLMs.
Strengths: 1.The proposed method utilize feature selection techniques such as RandomForest to identify gender-specific (or race) biases within the frozen representation, and subsequently replace bias-causing features with bias-free representation.
2. SFID eliminates the need for costly retraining of pre-trained VLMs and it simply utilizes datasets with sensitive attributes in individual images or texts for debiasing.
3.The experimental results demonstrate the efficacy of the proposed method in mitigating bias across 4 downstream tasks
Weaknesses: 1.the method is simple, utilizing Random Forest, which is pretty well-known in the machine learning community. However, the novelty is a little limited.
2. how to define the \Delta DP for attributes with more than two levels? For instance, race has white, asian, black, etc.
3. what is the principle for choosing the number of important features? will it affect the performance?
Technical Quality: 2
Clarity: 3
Questions for Authors: About the working mechanism of the proposed method, the authors may dig deep how the identified features is correlated to the social biases, maybe with visualization tools.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: see weaknesses and questions
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Novelty of the proposed method
We appreciate the reviewer's opinion regarding the simplicity and novelty of utilizing RandomForest in our framework.
While the RandomForest algorithm itself is well-known and simple, the novelty of our work lies not in the use of RandomForest alone, but in the innovative way we integrate it within our Selective Feature Imputation for Debiasing (SFID) framework. The SFID methodology introduces several novelties:
- **Selective Feature Imputation:** Our approach combines feature pruning with low-confidence imputation (LCI) to replace bias-causing features with bias-free representations. This novel technique maintains the semantic integrity of the embeddings while effectively reducing bias.
- **Efficiency and Cost-Effectiveness**: SFID eliminates the need for costly retraining of pre-trained VLMs and does not require paired text-image datasets, making it a cost-effective and efficient solution for debiasing.
- **Versatility Across Modalities and Tasks**: Unlike existing methods that focus on specific modalities or tasks, SFID is designed to be seamlessly integrated into various components of VLMs, including both encoders and decoders, across a wide range of tasks such as zero-shot classification, text-to-image retrieval, image captioning, and text-to-image generation. **We emphasize that none of the existing work can debias such a seamless number of downstream tasks without further training or fine-tuning.**
- **Empirical Validation and Effectiveness**: Our experimental results demonstrate the practical effectiveness of SFID in mitigating biases across various tasks without compromising performance. This is evidenced by significant improvements in fairness metrics across multiple benchmarks. The comparative analysis with other debiasing methods, such as DeAR, CLIP-clip, and Prompt-debias (newly added), highlights t**he superior performance of SFID**, further validating the novelty and utility of our approach.
While RandomForest is indeed a well-known algorithm, the novelty of our work lies in the innovative application and integration of this algorithm within the SFID framework. This novel approach offers a significant advancement in the field of debiasing VLMs, as demonstrated by our empirical results and comparative analysis. Furthermore, we highlight the necessity for a unified debiasing framework, such as SFID, to address the challenges posed by the emergence of diverse VLMs applications.
### Multiple sensitive attributes in racial bias
Thanks for pointing out the details how to define $\Delta DP$ in multiple attributes and how SFID mitigates such bias.
SFID is free from the type of bias when the attribute labels are given such as gender and race in FairFace dataset, even in scenarios involving multiple attributes.
We define $\Delta DP$ for multiple attributes by adopting the evaluation metric from [1] and [2], which measures the maximum discrepancy across the sensitive attributes for each class. This metric is defined as follows:
\begin{align*}
DP_c = \max_{i \in S} \max_{j \in S \setminus \{i\}} \Bigg( \bigl| P(y=c \mid a=i) - P(y=c \mid a=j) \bigr| \Bigg)
\end{align*}
where $i, j \in S$, $c$ is a class, and $S$ denotes the set of multiple sensitive attributes. We consider the mean and maximum values of $DP_c$ to evaluate the bias across the classes. By applying this metric, we can effectively measure and demonstrate the reduction in bias achieved by SFID, despite the differences in sensitive attributes between the training and test sets.
To support the adaptability of SFID in addressing different types of bias with multiple attributes, we report experimental results for racial bias in our **global rebuttal**. The table in the global rebuttal demonstrates that SFID can effectively mitigate racial bias even when dealing with multiple sensitive attributes. This showcases SFID's versatility and robustness in reducing bias across various scenarios.
[1] Foulds, James R., et al. "An intersectional definition of fairness." 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020.
[2] Denis, Christophe, et al. "Fairness guarantee in multi-class classification." arXiv preprint arXiv:2109.13642 (2021).
### Impact of the number of important features
Thank you for pointing out how to choose the number of important features. Indeed, the number of important features significantly affects performance, but we have an easy way to determine the optimal number, $k$, as shown in Appendix A.1 of the main paper.
In SFID, selecting $k$ is crucial for performance. As shown in Figure 5 of Appendix A.1, we determine an appropriate value for $k$ by identifying the elbow point in the feature importance curve, obtained by sorting the indices by their importance. According to Figure 5, $k=50$ is a reasonable elbow point.
We also analyze the impact of $k$ on performance. As expected, we achieve the best trade-off between utility and fairness when $k=50$, as shown in Figure 6 and Figure 7 in Appendix C of the main paper.
Overall, we have a solid guideline for choosing the number of important features, $k$, and acknowledge its impact on performance. The observed trend aligns with our intuition.
### Visualization: Correlation of identified features to social biases
Thank you for suggesting this valuable enhancement to our paper.
We visualize how the important features are correlated to social biases by showing GradCAM visualizations, as presented in **Figure 1 of our rebuttal PDF file**. For example, the more important features highlight human faces, while the least important features are correlated with the image background.
SFID imputes the representations at important indices with the values from low-confidence samples, making the face-related features ambiguous. Despite this imputation, the replaced values remain within the distribution of the original samples, as shown in Figure 3 of our main paper.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comprehensive rebuttal, which has addressed my concerns. So I have raised my score accordingly. Please include the important analysis and definition in the main content. | Summary: The paper introduces a new method to reduce biases in VLMs, which works by using a random forest to identify the bias-related features in model representations and then imputes the values for those features with values from the low-confidence samples. The authors test it on tasks like zero-shot classification, image captioning, and text-to-image generation.
Strengths: - The method works across different tasks (classification, captioning, generation) and model components (encoders, decoders).
- SFID doesn't require costly model retraining or extensive hyperparameter tuning.
- The authors test SFID on multiple state-of-the-art VLMs (CLIP, XVLM) and compare it against existing debiasing methods.
Weaknesses: - The paper focuses primarily on gender bias without exploring how the method performs on more complex biases, or even biases on other attributes -- e.g. race or age attributes are provided in the FairFace dataset, it would be good to show how SFID can mitigate biases w.r.t these attributes.
- The paper doesn't address potential limitations of using Random Forest for feature importance. There could be cases where complex, non-linear relationships between features and bias are missed by this approach.
- Insufficient comparison with a wider range of existing debiasing methods. The paper only compares SFID with DeAR and CLIP-clip, and is potentially missing out on how it compares to other methods e.g. https://arxiv.org/pdf/2302.00070.
- Notation in Section 3 could be simplified (especially 3.3)
Technical Quality: 3
Clarity: 3
Questions for Authors: - In line 188 the authors mention that “For a frozen component in VLMs g, whether it is an encoder or decoder, or processes image or text, we obtain the frozen representations”. Can the authors provide a breakdown of the computation cost when the encoder or the decoder is a language model that is large?
- Would it be possible to generate embedding translation (figure 4) for SFID as well?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have provided some limitations in section 5.2. The paper could benefit from an extended limitations section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### How SFID Mitigates Bias in Various Attributes, Including Racial Bias
Thank you for your detailed observations on how SFID mitigates different types of bias, which is even more complex.
SFID effectively addresses biases when attribute labels are provided such as gender and race in the FairFace dataset. The RandomForest attribute classifier is trained to identify important features within the frozen image embedding. For instance, as illustrated in **Figure 1 of our rebuttal PDF**, GradCAM highlights that the 'important features' pertain to human faces, allowing for the recognition of identity in both gender and race attribute cases. SFID aims to impute the values in these important features with those of low-confidence samples, rendering the final embedding ambiguous in terms of the targeted attribute.
To demonstrate the adaptability of SFID to different types of attributes, we have included experimental results addressing racial bias in our **global rebuttal**. Both **Figure 1 in the PDF** and the **Table in the global rebuttal** illustrate SFID's capability to mitigate more complex biases, showcasing its overall effectiveness.
### Effectiveness in using RandomForeset
We appreciate the reviewer's insightful feedback regarding the potential limitations of using RandomForest for feature importance. However, we argue that using RandomForest for feature importance is both theoretically and empirically sufficient and effective.
- Contrary to some concerns, RandomForest is capable of capturing complex, non-linear relationships due to its ensemble nature. By combining multiple decision trees, each considering different feature subsets and data splits, RandomForest can effectively model complex interactions and dependencies among features.
- Specifically, given the ample representation of images or text extracted by pre-trained neural networks, RandomForest's role is to identify important features while considering the dependencies between them.
- In short, RandomForest is an interpretable method for determining feature importance, robust against overfitting, and efficient in handling large datasets.
- Moreover, the empirical results in the paper demonstrate the effectiveness and versatility of using RandomForest within our framework.
While we have demonstrated the effectiveness of RandomForest in our current work, we acknowledge the importance of exploring additional techniques that might further enhance the modeling of complex relationships. Future research could incorporate advanced methods for identifying important features while improving the quality of representation.
### Comparison beyond DeAR and CLIP-clip
Thank you for your feedback. We acknowledge that many debiasing methods have been proposed recently, but most focus on specific downstream tasks, with only a few demonstrating versatility across various tasks, such as DeAR and CLIP-clip. The paper suggested by the reviewer, Prompt-debias [1], is limited to scenarios where the VLM input is text, making it inapplicable to tasks such as image captioning.
For the other three tasks, we have reported the results in **our rebuttal PDF file**. The results in Table 1 show that while Prompt-debias mitigates bias in zero-shot classification and text-to-image retrieval, ***our method, SFID, is superior in mitigating bias in these tasks compared to Prompt-debias***. In the case of text-to-image generation, Prompt-debias shows slightly better performance with neutral prompts with CoDi, but significantly deteriorates the bias with gender prompts.
In conclusion, few methods are versatile across various downstream tasks. SFID demonstrates superior performance in mitigating various types of bias across all the downstream tasks we evaluated.
[1] Chuang, Ching-Yao, et al. "Debiasing vision-language models via biased prompts." arXiv preprint arXiv:2302.00070 (2023).
### Notation in Section 3
Thank you for pointing out the readability issues. We will revise the notations to be simpler and will add an appendix to provide more detailed explanations and visualizations for each evaluation metric to increase readability.
### Computational Cost
We are happy to address the computational cost for each component and data type. **The overall computational cost is significantly low** since SFID only utilizes frozen embeddings from a pre-trained network, without involving any training or fine-tuning of the neural network. Regardless of the network size, we break down the computational cost into two parts: a) identifying feature importance and b) inference. Both tables demonstrate that applying SFID does not increase the computational cost in practice.
#### Identifying Feature Importance (One-time for each component)
|Component|Details|
|-|-|
|CLIP Encoders|54.60s to 60.75s|
|CoDi Text Encoder|65.90s|
|CoDi Image Decoder|104.14s|
#### Inference
|Component|Details|
|-|-|
|Zero-shot classification|0.196ms / sample|
|Zero-shot classification + SFID| 0.204ms / sample|
|T2I retrieval | 0.146s / sample|
|T2I retrieval + SFID| 0.152s / sample|
|T2I generation (CoDi)| 11.80s / 25 prompts|
|T2I generation (CoDi) + SFID | 12.05s / 25 prompts|
### Visualization of embedding translation by SFID
We include a visual aid for the embedding to indicate the bias-free representation achieved by SFID in **Figure 2 of our rebuttal PDF**. As SFID aims to obscure all the features in the important index by imputing ambiguous values, all the data points are gathered at a single point to mute the information related to the sensitive attributes, while maintaining the features in other indices as they are. It is important to note that although this translation is enforced to a single point, it still remains within the distribution of the original samples, as shown in Figure 3 of our main paper.
### Limitation and Future work of SFID
Thank you for pointing out the limitations section. We have identified potential limitations and areas for future exploration in **our global rebuttal.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the comprehensive response. I have raised my score accordingly. | null | null | Rebuttal 1:
Rebuttal: Thank you to the reviewers for the valuable feedback! Here is our global rebuttal, with a PDF file attached for figures and tables. Please refer to the individual rebuttals for specific details and additional information.
### SFID mitigates various types of bias, even in multi-attribute case
To address the reviewers' concerns regarding various types of bias in Vision-Language Models (VLM), especially in more complex cases involving multiple sensitive attributes, we conducted additional experiments focusing on racial bias with more than two sensitive attributes.
Firstly, we adopted the FairFace dataset for training the attribute classifier, as it contains seven racial attributes: East Asian, Indian, Black, White, Middle Eastern, Latino Hispanic, and Southeast Asian. Given that RandomForest can handle multiple classes, SFID is also applicable in this context. During the evaluation stage for zero-shot classification, we used the FACET dataset, which contains 'skin tone' labels instead of race. We categorized the skin tone attributes into three categories: 'lighter,' 'middle,' and 'darker.' In this setting, the biased zero-shot classifier tends to produce higher accuracy by associating certain skin tones with specific professions (e.g., bartender with lighter skin, trumpeter with darker skin).
To mitigate this bias, SFID demonstrates its effectiveness even though the attributes in the training set and test set do not exactly match, i.e. race in FairFace and skin tone in FACET. We adopted an evaluation metric inspired by [1] and [2], which measures the maximum discrepancy across the sensitive attributes for each class. This metric is defined as follows:
\begin{align*}
DP_c = \max_{i \in S} \max_{j \in S \setminus \{i\}} \Bigg( \bigl| P(y=c \mid a=i) - P(y=c \mid a=j) \bigr| \Bigg)
\end{align*}
where $i, j \in S$, $c$ is a class, and $S$ denotes the set of multiple sensitive attributes. We consider the mean and maximum values of $DP_c$ to evaluate the bias across the classes. By applying this metric, we can effectively measure and demonstrate the reduction in bias achieved by SFID, despite the differences in sensitive attributes between the training (debiasing) and test sets.
| Method | Mean Acc. | Mean DP | Max DP |
| -------- | -------- | -------- | -------- |
| CLIP-RN50 (Baseline) | 51.92 | 13.94 | 33.89 |
| CLIP-RN50 (SFID) | 51.35 | **13.27** | **32.56** |
| CLIP-ViT-B/32 (Baseline) | 52.48 | 13.54 | 44.62 |
| CLIP-ViT-B/32 (SFID) | 51.97 | **13.31** | **32.71** |
| XVLM (Baseline) | 56.61 | 14.85 | 45.30 |
| XVLM (SFID) | 56.51 | **14.59** | **43.00** |
The results show that even with limited data and racial bias present only in the image dataset, SFID can effectively mitigate the racial bias. We expect even more advanced results if we have access to race-profession related text datasets as well.
[1] Foulds, James R., et al. "An intersectional definition of fairness." 2020 IEEE 36th International Conference on Data Engineering (ICDE). IEEE, 2020.
[2] Denis, Christophe, et al. "Fairness guarantee in multi-class classification." arXiv preprint arXiv:2109.13642 (2021).
### Limitation and future work of SFID
In response to the reviewer's concerns, we have identified some limitations of SFID and potential areas for future exploration.
##### Dependence on validation set
SFID assumes that labels in the debiasing dataset (training and validation set) are available. If the validation set is unavailable, a subset of the training set could be used as a validation set. A potential limitation is that the quality of low-confidence samples depends on the representativeness of the validation set. For example, if the samples in the validation set are out-of-distribution, the imputed values will follow this distribution, potentially worsening performance.
##### Compound bias
SFID can be applied to various types of bias, as shown in our response above. However, there is room for improvement in mitigating more than one type of bias simultaneously, such as gender and race together. As future work, we plan to extend our framework to address compound bias.
##### Diversity in Low Confidence Imputation
In the Low Confidence Imputation (LCI) process, we take the average of all low-confidence samples for each feature index. Although the imputed value remains in-distribution, the diversity of the imputed value could be limited as they are projected into a single imputation value. We will explore how the distribution of the imputation value affects diversity and how to properly distribute the imputation values to ensure diversity in the generation task while maintaining debiased results.
We appreciate the reviewer's insights and will address these limitations in our future work.
Pdf: /pdf/ece41f33cbb9ddebff8a32eb71f6c022264b504d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Unleashing Region Understanding in Intermediate Layers for MLLM-based Referring Expression Generation | Accept (poster) | Summary: This paper explores the Multi-modal Large Language Model (MLLM) based Referring Expression Generation (REG) task, which aims to generate unambiguous text descriptions for specific objects or regions in images. MLLM-based REG models tend to suffer from hallucination issues, and there is a trade-off between detailed descriptions and accurate targeting. To address this, a training-free method called "unleash-then-eliminate" is proposed, which elicits latent information in intermediate layers and uses cycle-consistency decoding to reduce hallucinations. Extensive experiments on the RefCOCOg and PHD benchmarks show that this method outperforms existing approaches in both semantic and hallucination-related metrics.
Strengths: 1. The observation that the intermediate layers of the current region-level MLLMs sometimes hold more descriptive regional information than the final layer is interesting.
2. The writing throughout the paper is clear and easy to follow. The authors have done a good job in presenting their ideas and methodologies in a manner that is both logical and comprehensible.
3. The experimental results are compelling and demonstrate that the proposed method outperforms existing methods on the newly introduced evaluation metric.
Weaknesses: 1. Since the proposed method requires multiple layers for inference to be described, both REG and RES models are required, resulting in a very low efficiency of the method and requiring a lot of additional calculations and memory.
2. The ablation study is not sufficient. For example, subset used to select the optimal layer in Layer Importance Measurement. What is the Impact of subset size and quality on performance? What is the performance of using cycle-consistency-based candidate ranking process for whole dataset, i.e., not using Layer Importance Measurement.
3. What is effect of using different RES models for the proposed method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the effect of \alpha defined in Line 172
2. \mathcal{V} is not defined in Line 171.
3. The candidate sentence set A use n to represent the size of candidate layers, while use N-1 in Line 169. So are n and N-1 the same, or do they represent different things?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your attention to our work and your positive acknowledgment of our idea. We have open-sourced the code. We will address your concerns below.
**1. low efficiency.** We would like to hightlight more about probing-based estimation that is designed to allivate this issue. While the cycle-consistency-based quality ranking incorporates the RES model, adding to the computational load, our probing-based estimation method simplifies this process. By transforming the REG-RES cycle into a straightforward set of importance weights (please refer to Figure A in our common response), we eliminate the necessity for RES in decoding. The proposed method and its resulting importance prior weights can effectively reduces MLLM hallucinations by using combinations of intermediate layers. We have validated this approach in our experiments, as shown in Table 3 of the PHD benchmark. We hope this could partially address your concern.
**2. Ablation study.** Thanks for your precise and valuable feedback. We have reported some subset-related ablation experiments in the table below. The range of candidate layers is [0, 7]. As for the “full-D”, we calculate the layer importance weights on the full dataset, and then integrate these weights into decoding. “full-R” denotes cycle-consistency-based ranking on the full dataset. “top” and “avg” denotes the best/average results we tested on different sampled 1/8 subset.
| Size | METEOR | CHAIR_S | CHAIR_I | Recall | Len | nCHAIR_S | nCHAIR_I |
|--------|--------|---------|---------|--------|------|----------|----------|
| 1/8 (avg) | 172.0 | 42.25 | 30.95 | 0.821 | 22.9 | 1.840 | 1.348 |
| 1/8 (top) | 172.0 | 41.90 | 30.90 | 0.819 | 22.9 | 1.829 | 1.349 |
| 1/16 | 171.0 | 42.80 | 31.72 | 0.812 | 22.5 | 1.902 | 1.409 |
| full-D | 172.0 | **41.60** | **30.70** | 0.818 | 22.8 | **1.824** | **1.346** |
| full-R | **173.0** | 42.40 | 31.20 | **0.823** | 23.16| 1.830 | 1.347 |
In the first row, we report the average result of 10 different random seeds for 1/8 subset sampling, which is also the result shown in Table 1 of the main paper. The second row reports the best results for the 1/8 subset. The results indicate that the randomness of the sampled subsets affects the performance of our proposed decoding strategy in reducing hallucinations. In other words, the quality of the subset has a certain impact on the decoding outcome.
However, assessing the quality of subsets during probing is also an intractable problem. One possible solution is to use an unsupervised clustering method (e.g. K-means) to first cluster the multimodal features (extracted by CLIP or embedding layers of MLLM) of the entire dataset, and divide the subsets based on different centroids, then calculate and store the inter-layer weights in an "importance weight bank." During inference, we can compute the distance between the new query and these centroids, selecting the set of weights from the closest centroid for inter-layer combination during decoding. This strategy needs careful designing, and we consider it a future extension. Besides quality, by comparing the second and third rows, we also find that compared to the 1/8 subset, the 1/16 subset shows less stable de-hallucination and generation performance.
**3. Different RES model.**
We appreciate for the detailed comments. In the following Table , we ablate the impact of different RES models utilized in the proposed cycle-consistency-based quality ranking (CCR). We adopt another MLLM-based RES model LISA to score the sentence quality of intermediate layers.
| RES Model | METEOR | CHAIR_S | CHAIR_I | Recall | Len | nCHAIR_S | nCHAIR_I |
|-------------|--------|---------|---------|--------|-------|----------|----------|
| LISA (1/8) | 171.0 | 43.1 | 31.32 | 0.809 | 23.1 | 1.865 | 1.356 |
| LISA (full-R) | 172.0 | 42.60 | 31.40 | 0.811 | 22.9 | 1.860 | 1.371 |
| GlaMM (1/8) | 172.0 | **42.25** | **30.95** | 0.821 | 22.9 | 1.840 | 1.348 |
| GlaMM (full-R) | **173.0**| 42.40 | 31.20 | **0.823** | 23.16 | **1.830** | **1.347** |
From the table, we can observe that the choice of the RES model affects the CCR. The RES task itself involves transforming a language query into a pixel-level visual representation of an object. Given that GLaMM performs slightly better than LISA in RES, we can therefore infer that the more robust this transformation is completed, the better the performance of the CCR in quality ranking.
**4. Explanation of symbols.**
Thanks for the question and we apologize for the confusion. $\alpha$ is a hyperparameter in [0, 1] that truncates the next token distribution of $p(y_t|y_{<t})$. It helps split out the tokens whose score is lower than a proportion of the highest score. Larger $\alpha$ entails more aggressive truncation, keeping only high-probability tokens, whereas smaller $\alpha$ allows tokens of lower probabilities to be generated [Li et al., arXiv:2210.15097].
$\mathcal{V}$ in Line 171 denotes the vocabulary set.
$n$ and $N-1$ are different. $N$ denotes the number of layers of MLLM model, 32 in our case, and $N-1$ represents the number of remaining layers except the final layer. The lowercase $n$ is dynamic and refers to the number of candidate layers in experiments, which we chose as 8 in our implementation.
---
Rebuttal 2:
Comment: Thank you once again for your insightful comments. We deeply appreciate your guidance and found great resonance with your perspectives. With the discussion period nearing completion in less than two days, please feel free to share any final comments at your convenience.
---
Rebuttal Comment 2.1:
Comment: The author did address some of my concerns, and I keep my initial rating. | Summary: This paper presents an approach for improving the accuracy and richness of referring expression generation (REG) by leveraging the descriptive potential of intermediate layers in Multi-modal Large Language Models (MLLMs). The method employs a cycle-consistency-based decoding strategy to reduce hallucinations and improve descriptive quality. The proposed method is evaluated on the RefCOCOg and PHD benchmarks, demonstrating superior performance over existing methods.
Strengths: The "unleash-then-eliminate" strategy and the use of intermediate layers for generating more detailed descriptions is innovative to me.
Weaknesses: The proposed method introduces additional complexity, particularly in the decoding process. While effective, the cycle-consistency-based approach may increase computational overhead, which could limit its applicability in real-time or resource-constrained environments.
The related work section lacks depth in its analysis and could benefit from a more thorough review of recent advancements, particularly in hallucination mitigation techniques.
Some terms and notations, such as Q, H and W in Section 3.1 , are not clearly defined in the context of the paper, which can lead to confusion. Providing explicit definitions and clarifications would improve readability.
Technical Quality: 2
Clarity: 1
Questions for Authors: You use PCA for dimensionality reduction and Wasserstein distances for comparing features. Can you elaborate on why these specific techniques were chosen and how they contribute to the effectiveness of your method compared to other possible choices?
Your method aims to reduce hallucinations in generated descriptions. Can you provide more details on how you quantify hallucinations and the specific improvements observed with your method compared to baselines? Are there any cases where your method still struggles with hallucinations?
Is there any Transformer-based region encoder?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors have not adequately addressed the limitations and potential negative societal impacts of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your feedback. We have made the code open source at the link attached to the abstract. We will address your concerns below.
**1. Increase computational overhead.** Thanks for the comments.
In the cycle-consistency-based quality ranking, the RES model is incorporated as an auxiliary, leading to increased computational overhead. However, we need to emphasize that our proposed probing-based estimation method simplifies the cycle between REG and RES into a set of importance weights (refer to Figure A in common response). It removes the need for RES during decoding, and mitigates the MLLM hallucinations through combinations of intermediate layers. This result has been demonstrated in our experiments, on the Table 3 PHD benchmark.
**2. More related work.**
Thank you for your suggestion. Based on your advice, we would like to summarize some related works on inference-time decoding strategies for mitigating hallucination in large language models (LLMs). Specifically, our work represents an inference-time decoding strategy to mitigate hallucinations, which utilizes the latent knowledge within the intermediate layers without additional training. There are more details in the code link. We hope this could help you better understand the positioning of our work.
**3. Undefined terms.** Thanks for your feedback. HW denotes the flattened encoded visual token numbers. Q denotes the number of tokens that the model used to encode the region prompt.
**4. The choices used in visualization.**
Thanks for the question. We first observed through the early-exit method that outputs from different intermediate layers tend to vary, and some may even contain more descriptive content than the final layer. Subsequently, we attempted to quantify how much potential different intermediate layers have for better region understanding. PCA and Wasserstein distances are two common analytical methods used. PCA reduces the dimensionality of high-dimensional latent features while preserving the principal components, which not only speeds up the visualization process but also allows features from different dimensions to be reduced to the same dimension for distance calculation. We use Wasserstein distances as a measure of distance because it evaluates the distance between two distributions from the perspective of transportation cost. Compared to other methods used to measure distances between distributions, it requires fewer samples [M. Arjovsky et al., ICML2017].
**5. How to quantify hallucinations.**
Thanks for the question. According to your suggestion, we would like to provide a detailed elucidation of the hallucination metric calculation and summarize it as follows:
In our study, we utilized two approaches to quantify the severity of the hallucination:
1. The first approach is based on a widely adopted metric CHAIR, which directly counts the number of hallucinatory descriptions generated by the model. We use it for the sentences of REG task. It relies on a reference expert table, providing the scope of the explicit object, and quantifies the object hallucination by calculating the ratio of "the objects mentioned but not in the expert table" to "all objects mentioned in a description". The result is reported in Table 1 & 2.
2. The second approach we adopted is prompting MLLM and counting the average ratio of the number of answers that do not fall into the hallucinations to all answers. We utilize the PHD benchmark (a challenging extension of POPE) to achieve this. Similar to POPE, since prompted questions are all interrogative sentences in this benchmark, it can be directly concluded whether the description is hallucinatory or not just from calculating the accurate "yes" or "no" answers generated by the model. The result is reported in Table 3. For more details, please refer to the code link.
**6. The specific improvements and failure case.**
1. **specific improvements:**
By extracting information from the intermediate layers of MLLMs, our method generates sentences that are more descriptive while reducing hallucinations. Specifically, Table 1 shows an increase in the METEOR metric, indicating higher quality sentence generation, and an improvement in the CHAIR series metrics, showing a reduction in hallucinations. Table 3's improvements on the PHD benchmark further demonstrate that our decoding strategy effectively mitigates hallucination.
2. **failure case:**
Thank you for the question, and we provide a failure case as Figure C-(6) in the common response. In this case, the referred object is the black dog on the left. It can be seen that during the generation process, the model initially describes the referred object in detail, but eventually a hallucination occurs, mistaking the state of another dog (looking at the cake) for the state of the referred dog. This also illustrates the unique challenges of region-level understanding.
**7. Is there any Transformer-based region encoder?** Thank you for the question. In the context of our paper, the region encoder is used to encode region prompts for the MLLM. To our knowledge, MLLMs that perform REG have not yet employed transformer blocks for encoding region prompts. Shikra [K. Chen et al., arXiv:2306.15195] converts regions into coordinates, represented using natural language numbers. Ferret [H. You et al., ICLR2024] uses CNN blocks and point sampling for encoding. GPT4ROI [S. Zhang et al., arXiv:2307.03601] and GLAMM [H. Rasheed et al., CVPR2024] employ ROI pooling methods, and Osprey [Y. Yuan et al., CVPR2024] utilize CNN blocks for multi-scale mask encoding. We speculate that such designs may aim to reduce computational complexity, as transformer modules require a larger amount of data to fit. Given the vast knowledge underlying pre-trained visual encoders (CLIP) and the MLLM base (LLAVA), the region prompts need to align with the existing knowledge. A lightweight region encoder could reduce the optimization difficulty.
---
Rebuttal 2:
Comment: It is great to know that our responses are helpful, and thank you for your positive feedback and support for this work! Your suggestions have helped improve this work and we will incorporate them in the revised manuscript. | Summary: The paper aims to strike a balance between detailed description and precise captioning when using multimodal large language models (MLLM) in the task of referring expression generation (REG). A key observation is that the output of a Referring Expression Segmentation (RES) model should be consistent with the input of a REG model.
Based on this observation, a training-free method has been proposed, called 'unleash-then-eliminate,' which adopts an 'elicit-then-eliminate decoding' strategy. Captions are generated using contrastive decoding (Li et al., arXiv:2210.15097) and then fed into a RES model to produce corresponding masks. The Intersection over Union (IoU) between the masks tagged by the RES model and the input of the REG model is leveraged to select the candidate layer along with the generated caption. Additionally, a Probing-based Importance Estimation method is proposed to accelerate the decoding process.
The generation quality of the model is evaluated using the METEOR score on the RefCOCOg dataset. Additionally, the model's performance in avoiding hallucinations is assessed with the CHAIR and PHD metrics. The model proposed in this paper outperforms both Osprey and Dora not only in terms of generation quality but also in terms of adequacy.
Strengths: 1. The problem definition and motivation are clear. The paper indicates that a balance between detailed description and accurate targeting is necessary when using MLLM.
2. The model architecture is clear. The proposed model utilizes contrastive decoding to unleash the information in intermediate layers for generating captions. It indirectly evaluates the quality of these layers by assessing the masks generated by a RES model in relation to the captions. Additionally, a Probing-based Importance Estimation method is proposed to expedite the decoding process.
Weaknesses: 1. The writing is disorganized and lacks clarity. For example, in the first sentence "Referring expression generation (REG) Yu et al. [2016], Mao et al. [2016], Hu et al. [2016], Yu et al. [2017], Luo et al. [2020], Ding et al. [2021], Tanaka et al. [2019] is a task to", the reference is mixed with words, making it difficult to follow.
2. The proposed approach is technically simple. The architecture seems simply a combination of REG and RES models.
3. The motivation is to achieve a trade-off between the granularity and accuracy of captioning by selecting different intermediate layers. However, the experiments only show that the proposed model outperforms other models, as indicated in Table 1. Furthermore, Table 2 reveals that the first bucket of layers achieves the best scores in both METEOR, which relates to generation quality, and nCHAIR_I, which relates to the avoidance of hallucinations. This finding contrasts with the initial proposal.
4. The answer to `Open access to data and code` is [Yes], but there's only a `placeholder` found in the anonymous repo, noted as footprint in the first page of paper.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. What is the final architecture of the proposed model with `Probing-based Importance Estimation` ?
2. What are `the number of parameters` of the proposed models (with and without Probing-based Importance Estimation)?
3. What are the metrics corresponding to those types (i.e. Object Recognition...) in table 3? What is the difference between `neutral-question mode` and `misleading mode`?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: Yes.
The authors claim that since the model has not been tuned on a specific dataset, the generating performance is suboptimal compared to training-based methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and constructive comments. The code is accessible through the link indicated in the abstract. We noticed that there might have been some misunderstandings regarding to our proposed "Probing-based importance estimation" method, and we hope this response will better convey our decoding strategy to you.
We will address your concerns below.
**1. Reference format.** Thanks for the suggestion. We have changed it to numerical style in the revision. Additionally, we would like to summarize some related works on inference-time decoding strategies for mitigating hallucination in large language models (LLMs), hoping this can better help you position our work. Specifically, our work is in line with CCS [C. Burns et al., ICLR2022], ITI [K. Li et al., NeurIPS2023], and DoLa [Y.-S. Chuang et al., ICLR2024], which are all aiming to utilize the existing knowledge in trained models through representational operations to enhance the authenticity of the output. Compared to these works, we uncover the intermediate layers of multimodal LLMs and, as far as we know, are the first to propose using a combination of intermediate layers for region-level caption generation. There are more details in the code link.
**2. The architecture seems simply a combination of REG and RES models.** Structurally, the cycle-consistency-based ranking method we proposed indeed relies on the combination of REG and RES. RES serves as an auxiliary tool we utilize to filter the intermediate layer information unleashed afterward.
Additionally, we would like to highlight some key points beyond the architecture:
1. We designed an inference time decoding strategy, which can unleash the latent knowledge inside the intermediate layers without additional training.
2. The key observation is: compared with the output of the last layer, the sentences of intermediate layers can sometimes provide more discriminative descriptions for the referred object, which motivates us to design a cycle between REG and RES to localize the desired information.
3. Besides the REG and RES cycle process, we also proposed a probing-based estimation method that can obviate the need for RES, directly blending estimated weights into the existing model to enhance its anti-hallucination efficacy.
**3. The trade-off between granularity and accuracy.** Thank you for your detailed comment and we apologize for any confusion caused. We would like to clarify that the trade-off discussed is inherent to the REG task itself. The more extensively a model describes, the higher the likelihood of errors, as demonstrated by the CHAIR-related metrics in Table 1. As descriptions become lengthier, both $\text{CHAIR}_I$ and $\text{CHAIR}_S$ metrics deteriorate, indicating an increase in hallucinations during detailed description generation. To alleviate this issue, we unleash information from the intermediate latent knowledge to broaden the space of choice. Then a RES model as a listener is used to evaluate and select proper output. This "unleash-then-eliminate" approach helps us get rid of the intrinsic constraint of detailedness and accuracy. Table 1 demonstrates the efficacy of our proposed method. In Table 2, our goal is to investigate the effects of candidate layers from different zones using our method, and to illustrate the properties of various regions within a trained MLLM. We have not explored such a trade-off across different layers.
**4. Open source.** Thank you for your interest in our implementation. The code has been uploaded.
**5. Final architecture of Probing-based estimation, and the number of parameters.** We have noticed that there might be some misunderstandings about the approach we have proposed. We have introduced **a decoding strategy for inference-time**, not a new model that requires training, hence **no new parameters** have been added.
The Probing-based estimation method involves the following steps:
1. First, sample a subset as the probing set.
2. On this probing set, perform cycle-consistency-based ranking and calculate scores for each sample in each layer.
3. Calculate an inter-layer importance weight on this subset, which is mainly based on frequency counting of each candidate layer being the optimal layer.
4. Combine the importance weight prior with the J-S divergence to represent the probability of each layer being selected (the higher the importance, the more likely it is to be chosen).
5. Sample from this hybrid importance distribution to decide which layer to use for decoding the next token, until the generation finishes.
This approach compresses the “cycling” process into a set of importance weights, thus facilitating convenient decoding. We would like to emphasize that the experiments in Table 3 have demonstrated that such a set of weights can be directly applied to MLLM to alleviate hallucinations, without the need for RES models and the cycling process, enhancing the practicality of our method. We also provide a visualization of importance weights in **Figure A** (listed in common response).
**6. The metrics of Table 3.** These refer to the prompt types of PHD benchmark. For better illustration, we provide some prompt and generation examples in **Figure B** (in common response). This benchmark sets ten different types of questions, which are composed of five different tasks, with each task featuring two modes of questioning. We refer you to the 'Datasets and Metrics' section and the newly added Appendix C for details on the different tasks. The two modes of questioning include Neutral mode and Misleading mode. The prompts of the former only include the original question, while the prompts of the latter would accompanied by misleading descriptions. More details are available in the code link.
---
Rebuttal Comment 1.1:
Comment: The author did address some of my concerns and I'll raise my score from 3 (Reject) to 4 (Borderline reject).
---
Reply to Comment 1.1.1:
Comment: We really appreciate your acknowledgment of our responses to the initial comments, and the adjustment of the rating. We would be grateful for the opportunity to discuss any remaining concerns that could be addressed in our manuscript further. Thank you very much for your time!
---
Rebuttal 2:
Comment: Thank you once again for your valuable comments and thorough review. We have tried our best to clarify the concerns on the paper. Your detailed feedback has enhanced the clarity of our explanations. With the discussion period nearing completion in less than two days, we would appreciate it if you could let us know if any aspects remain unclear. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give us. | Summary: The paper addresses the Referring Expression Generation (REG) task using Multi-modal Large Language Models (MLLMs), identifying the key challenge as the trade-off between generating detailed descriptions and accurately targeting the referring objects, which often leads to hallucinations—the inclusion of incorrect or spurious information in the generated text. To address this issue, the authors propose a training-free "unleash-then-eliminate" method that leverages the intermediate layers of MLLMs and employs a cycle-consistency-based decoding strategy to mitigate hallucinations. The proposed approach is validated through extensive experiments on the RefCOCOg and PHD benchmarks, demonstrating superior performance compared to existing techniques.
Strengths: 1. The paper is in well-written, which makes it easy to understand.
2. The proposed "unleash - then - eliminate" method is training-free, which can avoid the need for additional data and training, reducing the complexity and cost of the model.
3. I like the idea of utilizing the latent information in the intermediate layers of the current region-level MLLMs, which is often overlooked but contains more descriptive regional information.
4. The cycle-consistency-based decoding method helps to alleviate the production of hallucinations in the generated sentences, improving the accuracy and reliability of the model's output. The hybrid layer importance measurement strategy not only increases the decoding speed but also maintains the ability to mitigate hallucinations, achieving a good balance between efficiency and performance.
5. The method shows superior performance compared to existing methods on both semantic and hallucination - related metrics in the experiments, demonstrating its effectiveness.
Weaknesses: 1. Without tuning in the specific dataset, the generating performance of the method might be suboptimal compared to the training - based methods.
2. The methods used in the paper, such as cycle ranking, may introduce additional computational load, which could affect the per-sample decoding speed. Although the proposed strategy helps to alleviate this issue, it may not completely solve it.
3. The method assumes that the RES model can accurately estimate the region-aware descriptive performance of the captions generated by the candidate layers. However, the RES model may also have its own limitations and errors, which could affect the final evaluation results.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: refer to weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive acknowledgment of our work. We are glad to notice your interest in the latent information of the intermediate layers. We hope our responses below will partially address your concerns.
**1. Suboptimal compared to the training-based methods.**
Yes, we proposed an inference-time decoding strategy that utilizes existing knowledge in trained models through representational operations to enhance the authenticity of the output. If the training is allowed, there are two potential areas within our framework where performance could be improved through training:
1. If more descriptive datasets are available, the selection among intermediate layers could depend not only on the RES model but also on the ground truth sentence.
2. A linear layer could be designed to learn the mapping from different scores across layers to the importance weights of layers.
We still need to emphasize that although training-based methods might yield better results, as you mentioned, the proposed "unleash-then-eliminate" method is training-free, which also avoids the need for additional data and training.
**2. Computation load.**
Thanks for the feedback. We agree that the computational intensity of the cycle ranking could potentially affect the decoding speed. This is an inherent challenge when integrating more complex algorithms to improve the accuracy of the generation. We would like to highlight that in probing-based estimation, we compress the cycle process into a set of layer importance weights which could directly be merged into the original decoding process, thus enhancing the practicality of our method.
**3. The RES model is imperfect.**
Thanks for the precise comments. We acknowledge that our method depends on the performance of the RES model. We hope that more powerful RES models in the future will help alleviate this limitation. In the table below, we have ablated the impact of the existing MLLM-based RES on our method. Given that GLaMM performs slightly better than LISA in RES, we can therefore infer that the more robust the RES model is, the better the performance of cycle-consistency-based in quality ranking. “full-R” denotes cycle-consistency-based ranking on the full dataset.
| RES Model | METEOR | CHAIR_S | CHAIR_I | Recall | Len | nCHAIR_S | nCHAIR_I |
|-------------|--------|---------|---------|--------|-------|----------|----------|
| LISA (1/8) | 171.0 | 43.1 | 31.32 | 0.809 | 23.1 | 1.865 | 1.356 |
| LISA (full-R) | 172.0 | 42.60 | 31.40 | 0.811 | 22.9 | 1.860 | 1.371 |
| GlaMM (1/8) | 172.0 | **42.25** | **30.95** | 0.821 | 22.9 | 1.840 | 1.348 |
| GlaMM (full-R) | **173.0**| 42.40 | 31.20 | **0.823** | 23.16 | **1.830** | **1.347** |
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Authors made a good rebuttal. After carefully reading it, I find all my concerns have been properly solved. I also would like to say, every research paper may have some imperfections, but I am more concerned with whether the paper offers new insights that can drive further exploration in the field. The presence of minor flaws does not diminish the value of a work that makes a meaningful contribution. From my perspective, the quality of this work has satisfactorily met the established requirements for acceptance. Thus, I would like to give STRONG ACCEPT as my final score.
---
Rebuttal 2:
Comment: Thank you once again for your insightful comments! We deeply appreciate your precise feedback and truly resonate with your perspectives, which have been instrumental in enhancing our work. With the discussion period nearing completion in less than two days, please feel free to share any final remarks at your convenience.
---
Rebuttal 3:
Comment: Thank you so much for your encouraging feedback and for upgrading your score to a "strong accept." We are truly grateful for your recognition of the efforts made in our rebuttal. It's great to hear that our responses have successfully addressed your concerns. We're committed to continuously improving our work, taking into account your insights and those from other reviewers. Thanks again for your supportive words! | Rebuttal 1:
Rebuttal: We appreciate the time and effort of all reviewers in reviewing our manuscript. Your insightful feedback has been essential in enhancing our work’s quality.
We have released our code, and it is available at the link listed in the abstract.
In this common response, we would like to explain the figures newly uploaded:
**1. Figure A:** We display the weights of different layers after probing-based importance estimation. These weights can be seen as a "compression" of the cycle-consistency-based ranking process, directly integrated into the decoding process without the need for a RES model, thus enhancing the practicality of our method. The effectiveness of the important weights is validated in Table 3 of our paper.
**2. Figure B:** We have listed some prompt-response examples from the PHD benchmark to illustrate the quantification of hallucinations. We hope this can help answer the questions of reviewers MMWB and 85oQ.
**3. Figure C (1)-(5):** Visualization of multi-modal alignment in intermediate layers . We delve deeper into showcasing the transition of multi-modal alignment across different layers of a well-trained MLLM, as well as the potential impact of this transition process on the region-level understanding capabilities of intermediate layers. We can observe the following phenomena:
- The degree of multi-modal alignment varies across different layers. More specifically, in the early layers, the relative distance between visual tokens and language tokens is bigger than what in the later layers.
- The shift in language tokens across layers is greater than that of other types of tokens.
- The distance between the last language token (used for next token prediction) and region-related tokens does not change monotonically.
Despite the limitations of the specific distance in measuring token alignment comprehensively, our observations suggest that the multi-modal alignment of intermediate layers of a well-trained MLLM undergoes a transitional phase, with these varied transitional states potentially providing better region understanding compared to the final layer. Reviewer MMWB and 85oQ might have interests in it.
**4. Figure C (6):** We provide a failure case according to the suggestion of reviewer 85oQ.
Pdf: /pdf/479819d2b61cffb1034e795b3f4c967d4901c0cf.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dense Associative Memory Through the Lens of Random Features | Accept (poster) | Summary: This paper introduce an iterative random feature Dense Associative Memory Model (DenseAM).
It is a kernel approximation of the standard DenseAMs using random feature decomposition and approximation.
The authors term this approach to DenseAMs as distributed representation/formulation of memory (DrDAM).
Theoretically, they analyze the time complexity of the proposed method (Thm1) and the approximation error (Thm2), both benchmarked against the standard DenseAM (MrDAM).
Experimentally, they evaluate the approximation accuracy of DrDAM's energies and gradients, defined as the MAE difference between DrDAM and MrDAM, and the retrieval accuracy of DrDAM.
Importantly, their analysis of DrDAM considers its iterative nature, distinguishing their contribution from prior works.
In sum, they address important research questions and their approach is natural and intuitive based on recent advances in the random feature kernel trick.
Strengths: * **Originality**: This paper expands our understanding of Dense Associative Memory Models (DenseAMs) by presenting a random feature DenseAM, termed DrDAM. The idea of DrDAM is novel and beneficial to the ML and associative memory community.
* **Significance**: While the idea of random feature DenseAM is not entirely new, this work delves deeper than most prior studies. For example, to the best of my knowledge, there is nothing as concrete as the iterative Algorithm 1 presented in prior works. This represents a significant and solid step forward for the community, and I appreciate the effort.
* **Theoretical Contribution**: This paper provides an efficiency and approximation analysis of DrDAM.
* **Experimental Contribution**: They explore some aspects of DrDAM, including the approximation of energies and gradients and the approximation ability of DrDAM. Reproducible code is also provided.
Weaknesses: ## Summary of Weaknesses
Overall, this paper presents a good idea, but it seems rushed, which affects the contribution of an otherwise very interesting work. Regardless of the final decision, I hope the authors find my comments helpful in further refining their draft.
There are a few areas that could benefit from further improvement and clarity:
* **Experiments:** The current experimental results would be strengthened by including comparisons with established baselines and more ablation studies.
* **Clarity:** Some of the theoretical results are difficult to interpret due to minor typos and a need for increased mathematical clarity.
* **Motivation:** The motivation and problem statement could be articulated more convincingly. Even after multiple readings of the intro, it remains unclear if distributed representation is truly parameter-efficient beyond the nonparametric sense.
* **Related Works:** Additionally, while not critical, the idea of Random Feature DenseAM (and Kernel DenseAM) has been discussed in prior works. The related discussion in this paper could be expanded to better position the contribution within the existing literature. For examples:
* ```Random Feature DenseAM``` Random Feature Hopfield Networks generalize retrieval to previously unseen examples https://openreview.net/forum?id=bv2szxARh2)
* ```Random Feature DenseAM``` Nonparametric Modern Hopfield Models https://arxiv.org/abs/2404.03900
* ```Kernel DenseAM``` Kernel Memory Networks: A Unifying Framework for Memory Modeling https://arxiv.org/abs/2208.09416
* ```Kernel DenseAM``` Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models https://arxiv.org/abs/2404.03827
* ```Kernel DenseAM``` Bridging Associative Memory and Probabilistic Modeling https://arxiv.org/abs/2402.10202
**Note:** I have tried my best to evaluate this paper. I spent much more time on this review than on my previous ones because I like this paper. Still, I might have missed something. Please correct me if I am wrong with fair justifications. I am open to changing my opinion and raising the score.
---
## Major: My main concern with this paper is its soundness and inadequate experimental evidence
### General
* ```line23:``` "Memory representation" is a strange term for me. If you are referring to the way of encoding memory patterns onto the energy landscape, isn't it "memory encoding"? Representation generally involves learning, and to my understanding, there is no learning here.
>Thus, in situations when the set of the memories needs to be expanded by introducing new patterns one must introduce additional weights.
A side comment: this statement implies nonparametric, as suggested in [17].
* ```distributed representation```: I find the description of distributed representation vague. With more memories, $K$ increases, resulting in more degrees of freedom in $\mathbf{T}$. You can say the total number of entries in $\mathbf{T}$ is fixed, but not the number of parameters, unless you clearly define "parameters" differently. The current draft makes it difficult to parse the soundness. I believe the idea of distributed representation deserves more elaboration. The current draft seems nonparametric to me.
In my current review, I assume what is stated in lines 29-30 is correct.
* ```Sec3```: Section 3 is hard to follow due to typos and the lack of clarity.
* ```Missing Broader Impacts```: The authors didn't discuss both potential positive and negative societal impacts except in the checklist. My understanding is that this discussion needs to be included in the main text or the appendix, as the checklist is not considered part of the paper.
---
### Theory
* ```line92```: Eqn. (6) is hard to follow without clearly defined $S,s$.
I suppose $S:\mathbb{R}^d\times \mathbb{R}^d\to \mathbb{R}$ which makes $s\in\mathbb{R}$ a scalar realization of $S$?
* ```No fixed point convergence results```: Per my understanding, DenseAMs are energy models with memories stored in their local minima. Memory retrieval is conducted through energy minimization. This setting requires convergence between local minima and fixed points of retrieval updates, such as the convergence results of [Ramsauer2020,Hu2023]. Otherwise, there may be unbounded retrieval error due to the mismatch between local minima and fixed points of retrieval updates. Specifically, I believe lines 106-110 are at best ambiguous.
* [Ramsauer2020] Hopfield Networks is All You Need. Ramsauer, H., Schäfl, B., Lehner, J., Seidl, P., Widrich, M., Adler, T., Gruber, L., Holzleitner, M., Pavlović, M., Sandve, G.K., and Greiff, V., 2020. arXiv preprint arXiv:2008.02217.
* [Hu2023] On Sparse Modern Hopfield Model. Hu, J.Y.C., Yang, D., Wu, D., Xu, C., Chen, B.Y., and Liu, H., 2023. Advances in Neural Information Processing Systems, 36.
* ```Theorem 1```: I feel Thm1 may be incorrect. Could you elaborate more on this?
* **My understanding:** Let $L$ be the number of updates, $K$ be the number of stored memories, $D$ be the memory dimension, and $Y$ be the feature space dimension. DrDAM improves time complexity from $O(LKD)$ to $O(D(Y K + L(Y + D))$, and improves peak memory from $O(KD)$ to $O(Y+D)$.
* **Comment1:** $O(D(Y K + L(Y + D))$ is clearly a typo and makes the result hard to parse.
* **Comment2:** Without specifying the relationships between $L$, $K$, $D$, and $Y$, it's hard to tell if this is truly an improvement. I can easily think of counterexamples.
Given these, can you kindly elaborate more on the significance of thm1?
* ```Theorem 2```: Eqn. (12) is strange. Is there any guarantee ensuring $\alpha L(1+2K\beta e^{\beta/2}) < 1$? If not, it doesn't make sense that more updates would result in larger error. Perhaps this is related to the missing fixed point convergence?
---
### Experiments
* ```No Comparison to existing works:``` there is no baseline compared.
* ```No efficiency evaluation1:``` Please include ablation studies on changing $L,K,D,Y$ to verify Thm1.
* ```No efficiency evaluation2:``` Please also compare the efficiency with other methods, including linear, top-K, random feature, and random masked from [17], as well as dense [Ramsauer2020], sparse [Hu2023], and generalized sparse [Wu2023] MHNs. For example, the current draft needs some figures similar to Fig. 4 of [17] to justify the efficiency. Please do whatever makes sense to you. For instance, you can pick just two out of [Ramsauer2020, Hu2023, Wu2023] since they share similar structures.
* [Ramsauer2020] Hopfield Networks is All You Need. Ramsauer, H., Schäfl, B., Lehner, J., Seidl, P., Widrich, M., Adler, T., Gruber, L., Holzleitner, M., Pavlović, M., Sandve, G.K., and Greiff, V., 2020. arXiv preprint arXiv:2008.02217.
* [Millidge2022] Universal Hopfield Networks: A General Framework for Single-Shot Associative Memory Models. Millidge, B., Salvatori, T., Song, Y., Lukasiewicz, T., and Bogacz, R., 2022, June. In International Conference on Machine Learning (pp. 15561-15583). PMLR.
* [Wu2023] Stanhop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction. Wu, D., Hu, J.Y.C., Li, W., Chen, B.Y., and Liu, H., 2023. arXiv preprint arXiv:2312.17346.
* [Hu2023] On Sparse Modern Hopfield Model. Hu, J.Y.C., Yang, D., Wu, D., Xu, C., Chen, B.Y., and Liu, H., 2023. Advances in Neural Information Processing Systems, 36.
* ```Compare with existing kernel DenseAMs.``` Also, a direct comparison with existing kernel methods is also necessary. For example,
* Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models. Wu, D., Hu, J.Y.C., Hsiao, T.Y., and Liu, H., 2024. arXiv preprint arXiv:2404.03827.
* Kernel Memory Networks: A Unifying Framework for Memory Modeling. Iatropoulos, G., Brea, J., & Gerstner, W., 2022. Advances in Neural Information Processing Systems, 35, 35326-35338.
* Bridging Associative Memory and Probabilistic Modeling. Schaeffer, R., Zahedi, N., Khona, M., Pai, D., Truong, S., Du, Y., Ostrow, M., Chandra, S., Carranza, A., Fiete, I.R., and Gromov, A., 2024. arXiv preprint arXiv:2402.10202.
---
## Minor
* ```Inadequate Related Work Discussion```: A paragraph discussing related works is given at the end of the introduction. However, I feel it could be made more comprehensive to provide more background on the evolution of ideas and existing works in this field. At least, I feel the current draft is hard for a non-expert in DenseAMs to follow. Please see the following points.
* ```line38:``` I believe here can be benefited by including more comprehensive references. There are other recent works showing exponentially large memory storage capacity in various settings and dense associative memory (or modern Hopfield) networks/models. For examples: (I am not sure of their exact relevance, so I will leave it to the authors to decide whether they should be included and discussed.)
* Hopfield Networks is All You Need. Ramsauer, H., Schäfl, B., Lehner, J., Seidl, P., Widrich, M., Adler, T., Gruber, L., Holzleitner, M., Pavlović, M., Sandve, G.K., and Greiff, V., 2020. arXiv preprint arXiv:2008.02217.
* Kernel Memory Networks: A Unifying Framework for Memory Modeling. Iatropoulos, G., Brea, J., & Gerstner, W., 2022. Advances in Neural Information Processing Systems, 35, 35326-35338.
* On Sparse Modern Hopfield Model. Hu, J.Y.C., Yang, D., Wu, D., Xu, C., Chen, B.Y., and Liu, H., 2023. Advances in Neural Information Processing Systems, 36.
* Stanhop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction. Wu, D., Hu, J.Y.C., Li, W., Chen, B.Y., and Liu, H., 2023. arXiv preprint arXiv:2312.17346.
* Bridging Associative Memory and Probabilistic Modeling. Schaeffer, R., Zahedi, N., Khona, M., Pai, D., Truong, S., Du, Y., Ostrow, M., Chandra, S., Carranza, A., Fiete, I.R., and Gromov, A., 2024. arXiv preprint arXiv:2402.10202.
* Sparse and Structured Hopfield Networks. Santos, S., Niculae, V., McNamee, D., and Martins, A.F., 2024. arXiv preprint arXiv:2402.13725.
* On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis. Hu, J.Y.C., Lin, T., Song, Z., and Liu, H., 2024. arXiv preprint arXiv:2402.04520.
* Outlier-Efficient Hopfield Layers for Large Transformer-Based Models. Hu, J.Y.C., Chang, P.H., Luo, R., Chen, H.Y., Li, W., Wang, W.P., and Liu, H., 2024. arXiv preprint arXiv:2404.03828.
* ```line46:``` Similarly, here can be benefited by including more recent references
> DenseAM family [...]
* ```line51```: A similar motivating question is asked and explored in [17] from a kernel regression perspective. I agree this paper contributes something different and even beyond [17]. However, given the similarity of aims between the two, the current draft should:
1. State the overlaps between the two works (both use a kernel approach to DenseAMs, and...).
2. State the clear distinctions between the two works (both use a kernel approach to DenseAMs, but...).
Such discussion will help readers parse the contributions of this work, which is currently lacking.
[17] Jerry Yao-Chieh Hu, Bo-Yu Chen, Dennis Wu, Feng Ruan, and Han Liu. Nonparametric modern hopfield models. arXiv preprint arXiv:2404.03900, 2024. URL https://arxiv.org/pdf/2404.03900.pdf.
* ```line92```:
> those results have been recently applied to associative memory [15].
I might have missed something, but it would be better to make the discussion of prior work more complete if it's presented in its current form (e.g., a paragraph highlighting the novelty and contributions benchmarked against existing works). The application of the kernel trick to associative memory is not new. While [16,17] are discussed right after [15], I feel it's better to cite them together as [15,16,17]. Additionally, there are two papers related to the kernel trick in associative memory, which I have attached below. Again, I am not sure of their exact relevance, so I will leave it to the authors to decide whether they should be included and discussed.
- Iatropoulos, G., Brea, J., & Gerstner, W. (2022). Kernel memory networks: A unifying framework for memory modeling. Advances in neural information processing systems, 35, 35326-35338.
- Schaeffer, R., Zahedi, N., Khona, M., Pai, D., Truong, S., Du, Y., Ostrow, M., Chandra, S., Carranza, A., Fiete, I.R. and Gromov, A., 2024. Bridging Associative Memory and Probabilistic Modeling. arXiv preprint arXiv:2402.10202.
* ```line97```:
> Another paper [17] uses kernels for studying the sparse modern Hopfield network.
This is incorrect.
As stated in their abstract, [17] presents a general framework (referred to as nonparametric) to analyze and design modern Hopfield networks. The sparse network is just a special case of their framework. Additionally, [17] has already introduced the construction of random feature HMNs. While this construction lacks detailed analysis, which is a limitation they acknowledged, it would be more accurate to say that the submitted draft serves as a strong companion to [17] with rigorous theoretical and empirical results. I understand that the authors are not obliged to treat arXiv preprints (unpublished papers) too seriously, but it is better to give accurate credit to existing works when possible. To clarify, I am very happy to see this draft improves the results/proposal of [17] with detailed analysis, including multiple retrieval updates.
* I did not check the proofs line by line. However, I had a quick skim through them and found some typos and areas for improvement. Please do another round of proofreading if possible.
---
### Update 2024/08/07: raise score from 4 -> 5 -> 6 -> 7
Technical Quality: 2
Clarity: 3
Questions for Authors: * ```line109```: why you call the number of updates as *layers*? Are you recurrently passing the output as input to the same network or the network really has many layers? For example, recurrent layer is RNN sense is the later.
* ```Proposition 1,2,3```: what is peak memory? why we should care about it?
* ```Time Complexity```: I appreciate the analysis of the computational complexity of the proposed method. From my view, this proposal is a special case of [Hu2024] with $L=1$ (where $L$ is the sequence length in their analysis). [Hu2024] provides a characterization of all sub-quadratic (efficient) variants of the modern Hopfield model. Essentially, your proposal is an efficient approximation to the softmax DenseAMs. Hence, the relevance holds. Can you discuss how your proposal fits into the results of [Hu2024]?
* [Hu2024] Hu, J.Y.C., Lin, T., Song, Z. and Liu, H., 2024. On Computational Limits of Modern Hopfield Models: A Fine-Grained Complexity Analysis. arXiv preprint arXiv:2402.04520.
* ```line255, Exp Observation 2```: This observation is a bit counterintuitive. Normally, large $\beta$ means low-temperature region and leads to more stable/accurate retrieval, e.g., infinity $\beta$ leads to argmax-retrieval. Why random DrDAM shows otherwise?
* ```Fig3```: Intuitively, why larger $\beta$ needs larger $Y$?
* ```line303```: What's the point of approximating energies and update dynamics? Shouldn't the significance lie in the approximation error of retrieval?
* I am wondering if you have explored the learning aspect of the proposed method. It is known that many DenseAMs are connected to transformer attention and can be used as a learning model or layer. Can you explore this part a bit as in [17]? If not, please explain why.
* Can you kindly remind me which part of this work supports the claim "making it possible to introduce new memories without increasing the number of weights"?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: There is a limitation section but no broader impact or impact statement expect in the checklist.
Appendix A of the draft acknowledges that the experimental explorations are limited. However, it would benefit from discussing comparisons with baselines and including some vital ablations. This would help clarify the contribution and soundness of the proposed method. See above for other limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough and insightful evaluation of our work. We are glad you found it interesting and beneficial to the ML community!
We will add a Related Work section in which we will review the results of
- Hu et al., Nonparametric Modern Hopfield Models
- Wu et al., Uniform Memory Retrieval
which are already referenced in our paper, in addition to
- Negri et al., Random Feature Hopfield Networks generalize…
- Hu et al., On Sparse Modern Hopfield Model
- Wu et al., Stanhop
- Iatropoulos et al., Kernel Memory Networks
and other suggested papers. We will also include a Broader Impact statement. Below, we do our best to address your comments within the character constraints of the rebuttal.
> …results would be strengthened by including comparisons with established baselines and more ablation studies.
>
We conducted numerical experiments to validate our theorems (see 1-page pdf). Note that our main goal is to approximate the energy and trajectory of a standard DenseAM (MrDAM) using Random Features. Thus, MrDAM is the “baseline” for our method.
> "Memory representation" is a strange term for me…
>
We use the word representation in a more colloquial sense - the description in terms of something. Similarly to Fourier representation, momentum representation in physics, etc.
> You can say the total number of entries in $T$ is fixed, but not the number of parameters, unless you clearly define "parameters" differently…
>
We define the number of parameters as the number of entries in $T$. We will include additional elaboration on these aspects in the revised paper.
> I feel Thm1 may be incorrect
>
(Re: Comment 1) We apologize for the typo. The bound should be $O(D(YK + L(Y+D)))$; the version in the paper is missing a closing bracket around $O(\cdot)$. Other than that, the bound is as stated, with the $DYK$ term coming from the `ProcMems` subroutine and the $LD(Y+D)$ term coming from the $L$ `GradComp` invocations. See the proof of Thm 1 in App E1.
(Re: Comment 2) Thm 1 quantifies the complexity of procedures in Alg 1. We do not claim improvement over standard MrDAM for the precise reason stated by the reviewer regarding the choices of $D, Y, K, L$.
These results highlight that energy descent with DrDAM requires computation and memory that only depends on $D$ and $Y$. Together with Thm 2 and Cor 1, we characterize where the energy descent divergence between MrDAM and DrDAM can be bounded with a choice of $Y$ that only depends on $D$ (and other parameters in the energy) but not $K$.
> Is there any guarantee ensuring $\alpha L (1 + 2K \beta e^{\beta/2}) < 1$?
>
There is no guarantee, as stated in Thm 2. Denoting this term with $\delta = \alpha L(1 + 2K \beta e^{\beta/2})$, the upper bound contains a term of the form $\left(\frac{1 - \delta^L}{1 - \delta}\right)$. If $\delta < 1$, it is clear that the numerator grows with $L$, providing a worse upper bound. However, if $\delta > 1$, $\frac{1 - \delta^L}{1 - \delta}$ can be equivalently written as $\frac{\delta^L - 1}{\delta - 1}$, where the $\delta^L$ term grows with $L$, still having the same effect of larger divergence upper bound with larger $L$.
> **Re: proof for “fixed point convergence results”**
>
The proof is standard in DenseAM literature (see appendices in [paper1](https://arxiv.org/abs/2008.06996) or [paper2](https://arxiv.org/abs/2107.06446)). Consider our Eq (11) for the energy as an example, whose dynamics is described by the following eqs ($i=1...D$)
$$
\tau \frac{dx_i}{dt} = - \frac{\partial E}{\partial x_i}.
$$
For this reason, the energy monotonically decreases on the dynamical trajectory
$$
\frac{dE}{dt} = \sum\limits_{i=1}^D \frac{\partial E}{\partial x_i} \frac{dx_i}{dt} = -\tau \sum\limits_{i=1}^D \Big( \frac{dx_i}{dt}\Big)^2\leq 0.
$$
The energy in Eq (11) is bounded from below, by $-\log(K)/\beta$, since the argument of the logarithm is bounded from above by $K$. Thus the energy descent dynamics has to stop sooner or later when $\frac{dx_i}{dt}=0$, which defines the fixed points. Thus, local minima of the energy correspond to the fixed points of the dynamics.
> …include ablation studies on changing $L,K,D,Y$ to verify Thm1.
>
This is a great suggestion. However, note that Thm 1 is a tight analysis of Alg 1, counting the $D$- and $Y$-dimensional vector initializations, additions, and dot-products. So we did not think such an ablation study would provide additional insight.
> Why larger $\beta$ needs larger $Y$?
>
The reviewer is correct that large $\beta$ (low-temperature) leads to more stable and accurate retrieval by approaching the argmax-retrieval in standard DenseAMs. However, this regime is specifically difficult for random features. Consider the L2 distance based energy in Eq (11):
$$
E(\mathbf{x}) = -\frac{1}{\beta} \log \sum_{\mu} \exp\left(-\frac{\beta}{2} \| \boldsymbol{\xi}_\mu - \mathbf{x} \|^2 \right).
$$
As $\beta$ increases, the $\exp(\cdot)$ term, which we approximate with random features, gets skewed to the extremes across the $K$ memories, with the $\exp(\cdot) \to 0$ for all but the closest memory. This makes the RF approximation harder, requiring larger $Y$ for better approximation.
> which part of this work supports the claim that "making it possible to introduce new memories without increasing the number of weights"?
>
Prop. 5 shows how to update weights $\mathbf{T}$ with a new memory $\boldsymbol{\xi}$ without increasing the number of weights:
$$
\mathbf{T} \gets \mathbf{T} + RF(\tau, \boldsymbol{\xi}),
$$
taking $O(DY)$ time and $O(D+Y)$ memory.
---
Rebuttal Comment 1.1:
Comment: Thanks for clarifications. Haven't finished going through them. Yet, I understand the rebuttal space is limited. I am ok if the authors use comments to clarify more. I will read them.
---
Rebuttal Comment 1.2:
Comment: > We conducted numerical experiments to validate our theorems (see 1-page pdf). Note that our main goal is to approximate the energy and trajectory of a standard DenseAM (MrDAM) using Random Features. Thus, MrDAM is the “baseline” for our method.
Thanks for the clarification.
Since it's approximation to single step of iterative algorithm, can you comment on the "iterative" error bound? Is it sth that you analyze or the current results are just for single step?
Thanks!
> We use the word representation in a more colloquial sense - the description in terms of something. Similarly to Fourier representation, momentum representation in physics, etc.
This is ok. Yet I suggest the authors to refine the wording to ensure preciseness. The examples you mentioned, Fourier & momentum, are mathematically meaningful, e.g., one of the domains connected by some transformation.
> (Re: Comment 2) Thm 1 quantifies the complexity of procedures in Alg 1. We do not claim improvement over standard MrDAM for the precise reason stated by the reviewer regarding the choices of $L,Y,D,K$
Thanks for clarification. Yet this confuses me. I suppose random feature is for efficiency by sacrificing some accuracy, but you don't really ensure DrDAM is more efficient here. Isn't this make the contribution vacuous?
> There is no guarantee, as stated in Thm 2...
This is concerning. The multiple updates of DrDAM lead to looser error bound. This contradicts to the fixed point convergence argument. It is not contraction map toward fixed point.
> This is a great suggestion. However, note that Thm 1 is a tight analysis of Alg 1, counting the .. and ...-dimensional vector initializations, additions, and dot-products. So we did not think such an ablation study would provide additional insight.
I am confused. What do you mean by tight analysis? Do you mean "everything is exact by counting"? Thm 1 is about # of operations needed, correct?
===
Thanks for your response. Please address above if possible. Thank you!
---
Reply to Comment 1.2.1:
Comment: > Since it's approximation to single step of iterative algorithm, can you comment on the "iterative" error bound? Is it sth that you analyze or the current results are just for single step?
>
Nothing in our work is about a single step update. We only consider fully recurrent DenseAMs with many iterative updates. Just to reiterate, the network is defined by the differential equation
$$
\tau \frac{dx_i}{dt} = -\frac{\partial E}{\partial x_i},
$$
which can be discretized to obtain Eq (5) in the submission. All the results in the paper pertain to studying this fully iterative system. The number of iterations can be as big as it is necessary in order to reach the fixed point.
The fixed points of this trajectory are the "memories" of the system. MrDAM will follow this trajectory by descending the gradient of the standard energy $-\frac{\partial E}{\partial x_i}$ in e.g., Eq (11). DrDAM will follow this trajectory by descending the gradient of the approximate energy $-\frac{\partial \hat{E}}{\partial x_i}$.
To be a "good" approximation, DrDAM must produce similar energies and fixed points as that of MrDAM (Fig 2 and 3). We do not specifically consider the case of single steps down the energy landscape for either MrDAM or DrDAM because we want to guarantee that we have reached the "end" (fixed point) of the dynamics. See the trace of energy over time in Fig 1 to see that a single update step is not guaranteed to converge to a fixed point.
This said, if you are interested in how well DrDAM approximates MrDAM in the single step regime, you can substitute $L=1$ Thm 2, taking 1 discrete step down the energy. The approximation error in this regime is dependent only on the error between the gradients $\nabla \hat{E}$ and $\nabla E$, which we empirically test in Fig 2 of our paper.
> I suppose random feature is for efficiency by sacrificing some accuracy.
>
We use Random Features to decouple the energy and dynamics of MrDAM from the number of stored patterns $K$. Efficiency is not our goal in this paper. Arguments for the "efficiency" of DrDAM over MrDAM depend on specific choices for $L,Y,D,K$. Thm 1 derives the computational complexity as a function of these hyperparams; Thm 2 derives the error bounds as a function of these hyperparams. In addition, Fig 2 and 3 empirically characterize where DrDAM is a good approximation i.e., when queries have low initial energy at or near the stored patterns, at higher values of $Y$, and at lower values of $K$ and $\beta$. Thus, a user can use the relationships described in our work to choose regimes where DrDAM is more or less efficient than MrDAM. This was our intent when we said "We do not claim improvement over standard MrDAM for the precise reason stated by the reviewer regarding the choices of $L,Y,D,K$”.
> What do you mean by tight analysis?... Thm 1 is about # of operations needed, correct?
>
Correct, Thm 1 is about the number of operations needed to perform memory retrieval using DrDAM. The complexity can be verified by counting the number of operations during retrieval. Thm 1 emphasizes that the gradient computation `GradComp` is independent of the value of $K$.
> The multiple updates of DrDAM lead to looser error bound. This contradicts to the fixed point convergence argument.
>
DenseAM in continuous time is a fully contracting system, if the energy decreases along the dynamical trajectory and the energy is bounded from below (please see our rebuttal for the derivation of this result). Both these conditions are satisfied for both MrDAM and DrDAM, thus both these networks are contracting. The bound derived in Eq 12 is for discretized system, Eq (5). You are correct that this bound is useless for $\delta>1$, as it grows to infinity as $L$ is increased. However, it is a very useful bound for sufficiently small $\alpha$, which corresponds to $\delta<1$ (please see Corollary 1). There is no contradiction here, it’s just the bound becomes uninformative for $\delta>1$. As a side note, in our empirical experiments $\alpha$ is always sufficiently small to make sure that $\delta<1$ and that the discretized network is sufficiently close to the network described by continuous time.
---
Rebuttal 2:
Comment: Thanks for the clarifications.
1. Maybe the authors should polish the draft regarding Thm1, specifying under which conditions it is efficient, as efficiency is mentioned in the draft. I still feel that it is necessary to have exps showing these efficient conditions are true in the final version. Please correct me if I am wrong. 2 settings (efficient v.s. inefficient) benchmarked against MrDAM should suffice.
2. I'd like to remind the authors that the convergence of fixed point and stable point of the energy landscape (local minima) are different concepts. It seems that your derivation is about the latter, given that it's based on gradient descent on $E$. I understand the intuition you want to convey, but I remind the authors that this physics style of derivation is not really fixed-point convergence in a mathematically rigorous way. For one, you don't provide a convergence rate; you just assume it converges perfectly at $\nabla_x E = \frac{dx}{dt} = 0$, whereas, in reality, you only converge to a region defined by the termination of the gradient descent algorithm. Moreover, can you clarify if $E$ is convex? If it is, such analysis should be easier to include.
3. If your derivation is about stable point of energy landscape (local minima), then how can you ensure fixed point convergence? It is acceptable that you don't have it or don't know how to prove it in this model (leave for future work), but it's important to be precise and accurate.
4. My main concern is that it's impossible to have $\delta <1$ given $\alpha,L,K$ are all positive constants greater than 1.
It's strange to have an **iterative** approximation algorithm with error exponentially scaling with its iteration number. This makes Thm2 vacuous. Please correct me if I am wrong.
I appreciate the authors' efforts making this work clearer to me so far. I learned a lot. For this, I raise my score from 4 to 5.
I am willing to further raise score if above concerns are addressed: exps for efficient criteria, fixed point convergence and Thm2.
Thank you!
---
Rebuttal Comment 2.1:
Comment: Thank you for your questions. We are happy to provide further clarity.
> Maybe the authors should polish the draft regarding Thm1, specifying under which conditions it is efficient, as efficiency is mentioned in the draft. I still feel that it is necessary to have exps showing these efficient conditions are true in the final version.
>
We understand that the reviewer is asking for us to choose configurations of the hyperparams in Thm 1 that cause DrDAM to be more/less efficient (in terms of memory and computational complexity) than the baseline MrDAM. Per your repeated requests, we will add experiments on select configurations to the final draft, though “efficiency” was not an emphasis of our original draft. On re-reading our submission, we find mention to efficiency in only one sentence, in our Conclusion: “We have demonstrated how this can be done efficiently…” We will modify this sentence to prevent future confusion. Thank you for identifying this. Just to reiterate, theoretically, the peak memory complexity of MrDAM is $O(KD)$ vs $O(Y+D)$ for DrDAM. Given that in the interesting regime $Y\sim D/\varepsilon^2$ (Corollary 1), DrDAM is more memory efficient than MrDAM if $K>const/\varepsilon^2$. For running time complexity (ignoring memory encoding), which is $O(LKD)$ for MrDAM vs. $O(LD(Y+D))$ for DrDAM, MrDAM is generally more efficient than DrDAM.
> …the convergence of fixed point and stable point of the energy landscape (local minima) are different concepts…
>
Actually, in the DenseAM energies studied in this work, there is a 1:1 correspondence between fixed points and local minima of the energy function. The argument we presented above is mathematically rigorous and correct. For intuition, note that the energy landscape of Eq (11) “looks like” a mixture of Gaussians that has been inverted s.t. local peaks in log-probability are now local minima in energy. Thus, our model is non-convex (i.e., there can be many local minima) and each local minimum is a “fixed point” (where $\nabla_{\mathbf{x}} E = \frac{dx}{dt} = 0$) of the dynamics.
Imagine for simplicity that $\beta$ is large and $x_i$ is sufficiently close to one of the memories, say memory 1 (i.e., $\xi^1_i$), then only one of the exponential terms in Eq 11 is appreciably different from 0. Thus, in this limit the energy can be simplified to $E \approx \frac{1}{2} \sum\limits_{i=1}^D (\xi^1_i - x_i)^2$, locally around memory 1. This is just a quadratic function. Optimization is very simple and the solution for the fixed point is $x_i=\xi^1_i$ both from solving the continuous time differential equation and from minimizing the energy. There are no any subtleties or ambiguities here.
> can you clarify if $E$ is convex?
>
The energies of DenseAMs are non-convex functions because they have many local minima.
> For one, you don't provide a convergence rate… you only converge to a region defined by the termination of the gradient descent algorithm
>
A convergence rate is not necessary to prove convergence. By the argument above, the fixed point “regions” where gradient descent terminates are in fact points, which are indeed local minima of the energy function. Maybe you refer to some numerical precision errors? If so, of course everything that we do is impacted by those errors, but they are small. So, in reality we always stop $10^{-6}$ or so away from the true fixed point, but this does not impact any of our methods.
> If your derivation is about stable point of energy landscape (local minima), then how can you ensure fixed point convergence?
>
We believe we have answered this question above.
> it's impossible to have $\delta < 1$ given $\alpha, L, K$ are all positive constants greater than 1.
>
This statement incorrectly assumes that $\alpha > 1$: $\alpha$ (our discrete gradient-descent step size) is a strictly positive scalar that approaches the continuous setting as $\alpha \rightarrow 0$. In all the results presented in our paper $\alpha<1$.
---
Rebuttal 3:
Comment: > Actually, in the DenseAM energies studied in this work, there is a 1:1 correspondence between fixed points and local minima of the energy function...
This is not entirely accurate. Your gradient descent (GD) can't reliably approach the point $\nabla_x E = 0$ given $E$ is nonconvex, at least not without strong assumptions. For example, if your local minima form a connected ring, what would be your fixed point?
There is a line of research dedicated to this subtlety. Please refer to the "Global Convergence Theory of Iterative Algorithms" in [1] for related discussions. In that work, global convergence refers to the convergence of "stable points of $E$" and "fixed points of iterative algorithms." Without proving your fixed-point convergence with similar mathematical rigor, the claim seems overreached.
For the same reason, the following statements are also unconvincing without supporting proofs:
* > For intuition, note that the energy landscape of Eq (11) “looks like” a mixture of Gaussians that has been inverted such that local peaks in log-probability are now local minima in energy. **Thus**, our model is non-convex (i.e., there can be many local minima) and each local minimum is a “fixed point”...
* > By the argument above, the fixed-point “regions” where gradient descent terminates are, in fact, points, which are indeed local minima of the energy function...
Still, I might be wrong, but for now, I am not convinced by the math presented in this paper.
[1] Sriperumbudur, Bharath K., and Gert RG Lanckriet. "On the Convergence of the Concave-Convex Procedure." NIPS. Vol. 9. 2009.
---
Regarding $\alpha$, please correct me if I am wrong. In `line 181` of the submitted draft, it states "$\alpha \in \{1,...,Y\}$". Aren't they the same $\alpha$? If I mistook two different $\alpha$, it would be better to change the repeated index notation to avoid confusion.
---
I appreciate the authors' efforts and trust they will include efficiency experiments (and also modify the draft as they promised). For this, I raise score from 4->5->6.
Thank you!
(Side comment: Even if it's not the main focus of this work, I still think it's important to show general readers where this method is an efficient approximation.)
---
Rebuttal Comment 3.1:
Comment: **Edited 15mins later:** After further consideration, the fixed-point convergence issue is just food for thought/discussion but not a fatal flaw in this work. The work is still refreshing overall, so I am raising my score to 7. This paper is one of those with real new ideas. I recommend it for acceptance.
However, I hope the authors can refine the related discussions based on my comments. Alternatively, if I am mistaken, please clarify where I went wrong. I'd really appreciate it.
---
Reply to Comment 3.1.1:
Comment: We thank the reviewer for seeing the value in our work and increasing their score! In the spirit of a healthy discussion period, we will continue to respond to your questions:
> Your gradient descent (GD) can't reliably approach the point $\nabla_x E = 0$ given $E$ is nonconvex, at least not without strong assumptions. For example, if your local minima form a connected ring, what would be your fixed point?
>
It would require a huge amount of fine-tuning for $\beta$ and the location of memories to engineer a connected ring, or any other type of flat directions in the energy defined by Eq (11). Here we are working in the dense regime of DenseAMs, when $\beta$ is sufficiently large, so that every memory is a point attractor. Flat directions are common in more advanced models of AM, such as [Energy Transformer](https://arxiv.org/abs/2302.07253), but these are not considered in this work.
> In `line 181` of the submitted draft, it states $\alpha \in \{1,...,Y\}$. Aren't they the same $\alpha$? If I mistook two different $\alpha$, it would be better to change the repeated index notation to avoid confusion.
>
You are absolutely justified in your confusion and we apologize for this clash of notation, where $\alpha$ can represent both the index into the distributed memory and discrete gradient-descent step size. The $\alpha$ in Eq (12) is the “gradient descent step size” first defined in Eq (5), and used in Cor 1 to constrain the divergence. We will refine this notation in the final draft — thank you for spotting this. | Summary: The paper offers a kernel-based approximation of Dense Associative Memory that allows for Hebbian encoding in a space of randomized basis functions. The main advantage when compared with the exact approach is that information concerning all patterns is shared in a single weight tensor, without requiring additional weights for each new pattern.
The authors provide some complexity results and bounds on the Euclidean deviation from the exact retrieved pattern in the case of exponential Dense Associative Networks (i.e. modern Hopfield networks). In a series of numerical experiments, the author show that the approximation breaks down for low values of the inverse temperature and for large number of patterns when compared with the number of basis functions.
Strengths: 1) The paper addresses perhaps the greatest limitation of Dense Associative Memories with non-quadratic energy, namely the fact that they cannot compress the patterns into a single shared weight matrix. This has potentially great practical relevance for the implementation of these models in practical applications. Furthermore, it connects the theory of non-quadratic Dense Associative Memories with a very large body of theoretical research on Hebbian learning and classical Hopfield models.
2) The proposed approximate algorithm is theoretically sound as it connects with important known results on kernel machines and feature expansions.
3) The exposition is clear and it is relatively easy to follow even for non-specialists.
4) The numerical analysis of the error profile of the approximation error is thorough and insightful, especially when interpreted in combination with the theoretical results.
Weaknesses: 1) The embedding of this paper in the existing literature is limited. Feature expansion methods have been studied for classical Hopfield and generalized models [1,2,3], and there are uncited existing papers connecting Dense Associative Memories and kernel methods [4]. It would also be useful to discuss the recent results connecting exponential Dense Associative Memories (i.e. modern Hopfield networks) with generative diffusion models [5], since as pointed out in the paper the connection provides an alternative way to learn the same energy function in a fixed synaptic structure. It could also be useful to discuss more theoretical results involving random features, for example [6] and [7].
It would be good to have this treatment organized in a proper Related Work section.
2) While the approximation is based on a sound idea, its derivation is very informal. It would be very useful to derive the formula using variational techniques or other known systematic methods for deriving approximations. It could also be interesting to connect it with similar approximations in Gaussian Processes, since kernel feature expansions are commonly used for GPs.
3) I am not convinced about the usefulness of the bound in Eq.12 in non-trivial regimes due to the exponential dependence on the energy.
4) Given the simplicity of the proposed method and its similarities with known approaches such as [TODO], I am not convicted that the contribution is relevant enough for this conference.
References:
[1] Liwanag, Arnold, and Suzanna Becker. "Improving associative memory capacity: one-shot learning in multilayer Hopfield networks." Proceedings of the 19th Annual Conference of the Cognitive Science Society. Vol. 442. 1997.
[2] Barra, Adriano, Matteo Beccaria, and Alberto Fachechi. "A new mechanical approach to handle generalized Hopfield neural networks." Neural Networks 106 (2018): 205-222.
[3] Yilmaz, Ozgur. "Machine Learning Using Cellular Automata Based Feature Expansion and Reservoir Computing." Journal of Cellular Automata 10 (2015).
[4] Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models
[5] Ambrogioni, Luca. "In search of dispersed memories: Generative diffusion models are associative memory networks." arXiv preprint arXiv:2309.17290 (2023).
[6] Negri, Matteo, et al. "Random Feature Hopfield Networks generalize retrieval to previously unseen examples." Associative Memory {\&} Hopfield Networks in 2023. 2023.
[7] Negri, Matteo, et al. "Storage and learning phase transitions in the random-features hopfield model." Physical Review Letters 131.25 (2023): 257301.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) I find intuitively strange that the bound in Eq.13 depends on the absolute value of the energy instead of on a relative energy. Can you provide some intuition for this result?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations of the proposed approximate method are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Improve embedding of this paper into existing literature
Thank you for your suggestions, we will update the paper with the Related Work section and will include the discussion of all the papers that you have suggested. As a side note, the reference [4] actually is cited in our submission, please see Ref [16]. Connection with diffusion, Ref [5] is a beautiful work, which we will be delighted to highlight. As well as the theoretical work on random features Refs [6], [7].
> It would be very useful to derive the formula using variational techniques… It could also be interesting to connect it with similar approximations in Gaussian Processes…
>
We thank the reviewer for these interesting suggestions and connections.
Our presentation of the approximation was motivated by the simplicity of the derivation that highlights how the energy can be approximated with random features in a cleanly straightforward manner; we were hoping to highlight how easily this distributed representation of memories simplifies the energy function. However, after the derivation, we formally and precisely present how the energy descent can be performed with these random features in Algorithm 1, carefully controlling the peak memory usage and computation. Naively, the peak memory usage would have been $O(YD)$, while our carefully crafted algorithm only requires $O(Y+D)$ peak memory with the same outcome.
However, we do think it is important to derive this approximation via other techniques, potentially highlighting other avenues for improvement.
We thank the reviewer for connecting our work to Gaussian processes. It is true that random features have been used for approximating Gaussian Processes, but the form of usage is targeted to a different application. Gaussian Processes are often used for supervising learning (either directly, or as part of a black-box derivative-free optimization), and the main bottleneck is the need to compute and invert the kernel gram matrix of the training points for any inference. With random features, we can instead perform Bayesian linear learning in the expanded feature space, obviating the kernel gram matrix computation and inversion.
DenseAMs are often utilized for memory retrieval, which is different than supervised learning (the main application of Gaussian Processes). However, this suggested connection by the reviewer can lead to interesting cross-contribution between the fields of Gaussian Processes and DenseAMs. This is very much in line with our goal in this paper to view DenseAMs through a lens of random features and kernel machines, bringing together two fields.
> I am not convinced about the usefulness of the bound in Eq.12 in non-trivial regimes due to the exponential dependence on the energy.
>
It is true that the divergence upper bound depends on the term $\exp(\beta E(\mathbf{x}))$. However, note that, for the energy function defined in Eq (11), assuming that all memories and the initial queries are in a ball of diameter $1$, $E(\mathbf{x}) \leq \frac{1}{2} - \frac{\log K}{\beta}$, implies that $\exp(\beta E(\mathbf{x})) \leq \exp(\beta/2) / K$.
An important aspect of our analysis is that the bound is query specific, and depends on the initial energy $E(\mathbf{x})$. As discussed above, this can be upper bounded uniformly, but our bound is more adaptive to the query $\mathbf{x}$.
For example, if the query is initialized near one of the memories, while being sufficiently far from the remaining $(K-1)$ memories, then $\exp (\beta E(\mathbf{x}))$ term can be relatively small.
More precisely, with all memories and queries lying in a ball of diameter $1$, let the query be at a distance $r < 1$ to its closest memories, and as far as possible from the remaining $(K-1)$ memories. In this case, the initial energy $E(\mathbf{x}) \approx -(1/\beta) \log [\exp(-\beta r / 2) + (K-1) \exp(-\beta/2)]$, implying that
$$
\exp(\beta E(\mathbf{x})) \approx \frac{\exp(\beta r / 2)}{\Big[1 + (K-1)\exp(-\beta(1 - r)/2)\Big]} \leq \exp(\beta r / 2)
$$
For sufficiently small $r < 1$, the above quantity can be relatively small. If, for example, $r \sim O(\log K)$, then $\exp(\beta E(\mathbf{x})) \sim O(K^{\beta/2})$, while $r \to 0$ gives us $\exp(\beta E(\mathbf{x})) \to O(1)$. This highlights the adaptive query-dependent nature of our analysis.
So it is not directly clear why this is not a “non-trivial regime”. We believe that this form of analysis is novel, and highlights the effect of the different quantities of interest in this formulation (such as the effect of the number of memories, energy descent steps, random features, initial particle energy). We accept that this upper bound might not be the tightest. However, that limitation does not necessarily make the analysis “trivial”.
> Given the simplicity of the proposed method and its similarities with known approaches such as [TODO]…
>
We would be happy to elaborate on this comment if the reviewer kindly specifies the “[TODO]” approaches. We do believe, however, that our approach is novel and that our contribution is relevant to this conference. We will add a Related Work section to better differentiate it from existing prior work.
> I find intuitively strange that the bound in Eq.13 depends on the absolute value of the energy instead of on a relative energy. Can you provide some intuition for this result?
>
This result uses the form of the energy, given by Eq (11), which assumes that the zero-energy state is chosen such that the sum of the exponents under the logarithm is equal to 1. Only the difference of the energies appears in Eqs (12,13), but this difference is implicit, given the (arbitrary) choice of the reference point made in Eq (11).
---
Rebuttal Comment 1.1:
Comment: With the discussion period soon drawing to a close, we wanted to check in to see if our rebuttal has satisfactorily addressed the concerns you raised in your initial review? If you have any further questions or require additional clarifications, we would be more than happy to engage in further discussion. | Summary: This paper studies a method to modify associative memory network weights when introducing new memories. The proposed uses random features and is shown to approximate the energy function of the conventional ones.
Strengths: 1. The proposed method and results are novel to me, understanding associative memories (AMs) through random feature seems to be a great connection between AMs and random feature transformers.
2. The paper is well written and easy to follow.
Weaknesses: 1. Why using hamming distance when calculating retrieval error when the theoretical results use L2?
2. To my understanding, the proposed method is an approximation to DAM. Thus the paper should at least compare the proposed method to DAM in thr experimental section.
3. The field of associative memories, Hopfield networks are getting increased attention recently, I think a related work section would benefit this paper a lot.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: This paper addresses its limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Why using hamming distance when calculating retrieval error when the theoretical results use L2?
>
We consider Hamming error since we are storing and retrieving binary memories, e.g. in Fig.3. Our theoretical results operate in the more general continuous $\mathbb{R}^d$ space, and thus bounds the L2 error. However, please note that, with binary vectors, Hamming error and L2 error are related by the square root operation. For this reason, our theoretical bound (proven for L2 error) is equal to the square root of the numerical evaluations with Hamming distance.
> To my understanding, the proposed method is an approximation to DAM. Thus the paper should at least compare the proposed method to DAM in the experimental section.
>
You are correct -- our proposed "distributed representation" for DenseAMs, which we call DrDAM, are approximations to the original DAM (what we refer to as the "memory representation" or MrDAM in our paper). The experiments in our paper are explicitly designed to compare against the original DAM a.k.a. MrDAM. E.g., Figs 2 and 3A plot DrDAM's approximation error when compared to the original DAM. Fig 3B shows how well, qualitatively, DrDAM approximates the original DAM (what we call the "Ground Truth" in that figure).
> The field of associative memories, Hopfield networks are getting increased attention recently, I think a related work section would benefit this paper a lot.
>
Thank you for your suggestion. This sentiment was echoed by other reviewers and we will update the paper with a pertinent related works section.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response.
I think the authors have addressed all my concerns, I will raise my score accordingly. | Summary: The paper proposes to interpret the energy function of Dense Associative Memory as relying on a kernel that can be approximated by kernel-specific random feature maps.
The approximation allows to condensate the stored patterns into a tensor whose size is independent of the number of patterns stored, similarly to what can be done in classical and polynomial Hopfield networks.
New patterns can then be incorporated without changing the dimensionality of the parameter space.
They provide bounds for the approximation error of the iterates in the gradient descent procedure that uses their energy approximation.
Strengths: - I appreciated the idea of trying to use the kernel approximation in order to overcome the computational limits of DAMs.
- The paper is well written and the idea is clearly explained.
Weaknesses: - The main limitation of the work in my opinion is that one has to go to very large values of the feature size Y in order to have a good approximation. In fact, looking at the error bound in eq. (12), it seems that Y has to be taken of the order of O(K^2*D) in order to have a good approximation, which is much larger than the size of the memory matrix (K*D). This not taking into account the exp(beta*E) term in the bound, which could penalize high-energy configurations a lot.
- Numerical Experiments are limited and not fully convincing. For instance, in Fig. 1 the 4x12288=49152 memory matrix has to be replaced by a several times larger Y-sized vector in order to obtain a good approximation.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the author show a plot similar to Fig. 2 but now showing the theoretichal prediction from eq. 12 (say for L=1 and alpha=1) compared to the actual error?
Probably this information could just be plotted in Fig. 2B.
- It would be nice to make a plot similar to Fig. 2 but now as a function of K.
- In order to help the reader and improve consistency, p in Algorithm 1 and at the beginning of Section 3 could denote the same thing.
- Assuming that the random feature version is considered as DAM per se instead of an approximation to another DAM, could the author comment on its capacity in presence of random memories? It is linear in Y?
- line 215. Usually, the EDP model also contains an x^2 term outside the logarithm.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors could discuss more the limitations of their work. In particular, the fact that the approximation requires a very large feature size Y in order to be accurate. This could limit the practical use of the method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are happy to hear that the main message of the paper got across well, and thank the reviewer for their insightful comments and feedback. Below we answer individual questions raised.
> … looking at the error bound in eq. (12), it seems that $Y$ has to be taken of the order of $O(K^2 D)$…
>
It is true that if we ignore all the terms on the right hand side of equation (12) except for the $K \sqrt{D/Y}$ term, it appears that $Y \sim O(K^2 D)$. However, it is important to note that the remainder terms in the upper bound on the right hand side of (12) also include the step-size $\alpha$ and $K$ itself. Ignoring the effect of $\alpha$ and $K$ in the remainder terms can give a wrong impression regarding the bound. For that very reason, we explicitly study a special case in Corollary 1, in which the update rate $\alpha$ is scaled in a $K, L$, and $\beta$ dependent manner. An important aspect of our analysis is that the bound is query specific, and depends on the initial energy $E(\mathbf{x})$. For example, if the query is initialized near one of the memories, while being sufficiently far from the remaining $(K-1)$ memories, then $\exp (\beta E(\mathbf{x}))$ term can be relatively small. More precisely, with all memories and queries lying in a ball of diameter $1$, let the query be at a distance $r < 1$ to its closest memory, and as far as possible from the remaining $(K-1)$ memories. In this case, the initial energy $E(\mathbf{x}) \approx -(1/\beta) \log [\exp(-\beta r / 2) + (K-1) \exp(-\beta/2)]$, implying that
$$
\exp(\beta E(\mathbf{x})) \approx \frac{\exp(\beta r / 2)}{\Big[1 + (K-1)\exp(-\beta(1 - r)/2)\Big]} \leq \exp(\beta r / 2)
$$
Thus, the exponential factor in our bound is not a problem. Overall, the divergence is bounded by $O(\sqrt{D / Y})$, at constant $\beta$, implying the need for $Y \sim O(D / \epsilon^2)$ for at most $\epsilon$ divergence. Please note, that this estimate is independent of $K$, which is the essence of our proposal.
> Numerical Experiments are limited… in Fig. 1 the 4x12288=49152 memory matrix has to be replaced by a several times larger Y-sized vector…
>
The configuration shown in Fig.1 was chosen to illustrate the main idea of the method. $Y$ does not have to be larger than the number of parameters required to describe the memory vectors. To explicitly demonstrate this point we have repeated the experiments shown in Fig.1 with 20 memories; please see Fig (a) in the 1 page PDF rebuttal. Now we have 20x64x64x3=246k parameters in the memory vectors encoded in the 200k-dimensional vector $T$. The method works well and all the conclusions presented in the paper remain valid.
> Can the author show a plot similar to Fig. 2 but now showing the theoretical prediction from eq. 12…?
>
We thank the reviewer for this great suggestion, which would highlight the tightness of our proposed upper bound. We have done it in Fig (c) of the 1 page PDF rebuttal. It is important to note that the upper bound also involves a quantity (precisely the quantity $C_1$ defined in assumption A2 in Theorem 2) which bounds the approximation introduced by the random feature maps (such as Rahimi and Recht (2007)), and this does not have an easily computable analytical form. For this reason, it is challenging to precisely compute this bound. However, it is possible to check the predicted scaling relationships. Specifically, the theoretically predicted $Y^{-1/2}$ dependence in the upper bound for a fixed $D$ appears in most of the results, please see Fig (c) in the 1 page PDF. This shows that the upper bound in equation (12) from Theorem 2 fairly characterizes the dependence on $Y$. Fig (c) from the PDF will be used to update Fig 2 (right side) in the current submission.
> It would be nice to make a plot similar to Fig. 2 but now as a function of K.
>
Thanks for the suggestion. We have generated this plot per your request and included it in Fig (b) of the 1 page PDF. The results are consistent with the theory developed in our paper.
> … $\mathbf{p}$ in Algorithm 1 and at the beginning of Section 3 could denote the same thing.
>
The reviewer is correct that we overload the variable $\mathbf{p}$ both as $\mathbf{p} = \mathbf{g}(\mathbf{x})$ and $\mathbf{p} = RF(\tau, \mathbf{g}(\mathbf{x}))$. We will fix the notation, and use a different term to denote $\mathbf{g}(\mathbf{x})$.
> line 215. Usually, the EDP model also contains an $x^2$ term…
>
The reviewer is correct, the complete EDP energy is
$$
E(\mathbf{x}) = -\frac{1}{\beta} \log \sum_\mu \exp(\beta \langle \boldsymbol{\xi}\mu , \mathbf{x} \rangle) + \frac{1}{2} \| \mathbf{x} \|^2.
$$
We were referring to the part of the energy which is relevant for the argument in line 215. We will write the complete energy. Our claim of validity of the proof technique of Theorem 2 still applies, since we just need to approximate the $\exp(\beta\langle \boldsymbol{\xi}_\mu, \mathbf{x} \rangle)$ term with the random features.
> Assuming that the random feature version is considered as DAM… could the author comment on its capacity in presence of random memories?
>
This is an interesting question, which may be investigated in the future. However, it remains beyond the scope of our current project. Our goal here is to answer the following question: how well the random feature description can represent the dynamics of DAM?
**General Comment**
Once again, we greatly appreciate your insightful comments and suggestions. We would be grateful if you could consider increasing the score for our work towards an accept.
---
Rebuttal Comment 1.1:
Comment: With the discussion period soon drawing to a close, we wanted to check in to see if our rebuttal has satisfactorily addressed the concerns you raised in your initial review? If you have any further questions or require additional clarifications, we would be more than happy to engage in further discussion. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their thoughtful feedback. We are honored that our work has been recognized as addressing “the greatest limitation of Dense Associative Memories with non-quadratic energy” [wQ3r] and as making a “significant and solid step forward for the community” [9NBh]. We are also grateful that reviewers have acknowledged the clarity and accessibility of our paper [xsUY, B4qp, wQ3r].
Reviewers have identified several areas in our presentation that need improvement:
1. **A more complete list of related works** [wQ3r, 9NBh, B4qp]**.** We appreciate the reviewers’ suggestions for additional relevant works. We have reviewed the related work suggested by reviewers and find them all relevant, and will enhance the “Related Works” section accordingly to more thoroughly embed our work into the context of existing studies.
2. **Improve the motivation for the method** [xsUY, 9NBh]. We understand that reviewers are looking for compelling empirical reasons to use the distributed representation for DenseAMs over the traditional memory representation, particularly in terms of parameter efficiency or time complexity. We hope that our additional experiments show that information compression is indeed possible using DrDAM; however, we want to emphasize that the main message of our paper is to show and characterize how we can uncouple the actual memory patterns from the definitions of (1) the energy and (2) the full memory-retrieval dynamics of DenseAMs.
3. **Additional experiments** [xsUY, 9NBh, B4qp] We have run additional experiments and included them in the attached 1-page PDF, further validating the theory and practicality of our method:
- Fig (a): We designed Fig 1 of the original submission as an illustration of the method, but Reviewer xsUY astutely pointed out that in this experiment our distributed representation method (DrDAM) used more parameters than the memory representation for DenseAMs (MrDAM). We have thus updated that experiment to store and retireve 20 images from TinyImagenet into both DrDAM and MrDAM configured at the same $\beta$, showing that DrDAM can successfully compress information in the stored patterns. Full details are provided in the caption and app. C of the orig. submission
- Fig (b): In Fig 2 of the original submission we analyzed the error between DrDAM and MrDAM as a function of $Y$ (the size of the distributed tensor $\mathbf{T}$), $D$ (the size of queries and stored patterns), and $\beta$ (the inverse temperature). We additionally performed this error analysis as a function of how many patterns $K$ are stored in the memory. See more details in the caption.
- Fig (c): Here we verify the tightness of the error bound (a function of $\frac{1}{\sqrt{Y}}$ ) in Thm. 2 on top of the empirical curves of Fig 2. We will update the right part of Fig 2 of the original submission with this version upon acceptance.
We have addressed each reviewer’s individual questions and concerns below. We respectfully ask each reviewer to consider increasing their score if concerns have been satisfactorily addressed.
Pdf: /pdf/a2ceace3f4bd7374ebcce55c08cb5246fd816c06.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
No-Regret Learning for Fair Multi-Agent Social Welfare Optimization | Accept (poster) | Summary: The paper considers a new fairness measure, say NSW, lacking Lipschitzness. Unlike previous measures, the multi-armed bandit problem cannot obtain common $O(\sqrt T)$ regret. For stochastic MAB, the authors find an algorithm achieving $\tilde{O}(T^{\frac{N-1}{N}})$ regret and show its tightness. For adversary scenarios, the authors show that no algorithm can achieve sublinear regret. Then, they consider an easier information structure, say full-information feedback and $\sqrt T$ regret is possible. Finally, they consider the situation when logarithmic regret is achievable.
Strengths: The model is clear and the algorithms proposed are easy to understand. Besides, the authors consider different information structures and models to depict the problem completely. And the construction of lower bounds is very nice.
Weaknesses: 1. The paper shows why considering $NSW$ is better than considering $NSW_{prod}$. However, why not consider simple average which is more intuitive? I recommend that the authors provide some realistic applications for NSW.
2. If I understand correctly, you consider an addictive utility, for example, you consider expected regret when showing lower bounds. However, for (2), why do you use $NSW(u^Tp_t)$ rather than $\sum_i p_{t,i}NSW(u_{t,i})$? The definition of regret seems strange.
3. The statement in Line 165 is not correct. We don't need to observe $u^T_t p_t$ when using BCO though we may need the dimension of $u$ to be 1, i.e. $N=1$. When you prove the lower bounds, I think the negative result, i.e., Thm 3.2, holds because NSW is concave but https://arxiv.org/pdf/2402.06535 needs convexity. However, in Appendix B. 1., you have convexity, so you can only construct a hard-to-learn example with at least two agents. I'm not sure whether my intuition is correct. If it is, some statement in the paper doesn't hold. If not, please explain more about the construction of your lower bounds to make it easy to understand.
4. You mention that NSW is not Lipschitz, but why not make a truncation? If an agent has a very small utility, like $\sigma$ in the paper, the regret will be small. Otherwise, we can use Lipschitzness. Is there any difference between your methods? If not, show the reason why NSW doesn't make the problem harder. Also, if you have time, it's meaningful to use experiments to compare these two ideas.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What will happen if the learning can only observe NSW rather than everyone's utility? Will it change the dependence of $K$ and $N$?
2. In Line 324, what is $f(p_t)$? I believe the input of $f$ is an $R^N$ vector. However, $p_t$ belongs to $R^K$.
3. Typo: In Line 338, "is not only convex" should be "concave".
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed and valuable comments to our paper. We address the reviewer's questions as follows.
**Q1: However, why not consider a simple average, which is more intuitive? I recommend that the authors provide some realistic applications for NSW.**
A: It is well-known that a simple average does not promote fairness. To see this, consider the following example mentioned by Hossain et al. [2021]: suppose there are 10 agents and 2 arms, where the first 4 agents receive reward 1 from arm A and reward 0 from arm B, and the other 6 agents receive reward 0 from arm A and reward 1 from arm B. If we use average as the measure, then the best policy is to always pull arm B, while if we use NSW as the measure, the distribution (0.4, 0.6) over the two arms is the best policy, which is clearly more reasonable from a fairness viewpoint.
For “some realistic applications for NSW”, we refer the reviewer to our first response to Reviewer t2CF.
**Q2: For (2), why do you use $𝑁𝑆𝑊(u^\top p_t)$ rather than $\sum_{i\in[N]}p_{t,i}𝑁𝑆𝑊(𝑢_{𝑡,𝑖})$? The definition of regret seems strange.**
A: This is because the former (geometric average of expected utilities) is arguably more meaningful as a fairness measure than the latter (expected geometric average of utilities). To see this, simply consider a setting with 2 agents and 2 arms, where the first agent always gets reward 1 from arm A and reward 0 from arm B, while the second agent is the opposite (reward 0 from arm A and reward 1 from arm B). Then, in terms of geometric average of expected utilities, the uniform distribution is the best policy (which makes sense from a fairness viewpoint); on the other hand, in terms of the expected geometric average of utilities, all distributions achieve the same value of 0, implying that all polices are as fair, which is clearly not what we want.
**Q3: The statement in Line 165 is not correct. We don't need to observe 𝑢𝑡𝑇𝑝𝑡 when using BCO though we may need the dimension of 𝑢 to be 1, i.e. 𝑁=1. When you prove the lower bounds, I think the negative result, i.e., Thm 3.2, holds because NSW is concave but https://arxiv.org/pdf/2402.06535 needs convexity. However, in Appendix B. 1., you have convexity, so you can only construct a hard-to-learn example with at least two agents. I'm not sure whether my intuition is correct. If it is, some statement in the paper doesn't hold. If not, please explain more about the construction of your lower bounds to make it easy to understand.**
A: We believe that there are some significant misunderstandings from the reviewer. First of all, when $N=1$, our problem simply reduces to the classic multi-armed bandit problem, whose minimax regret rate is well-studied, so our focus is on the case when $N\geq 2$. After all, what is the point of considering fairness when there is only one agent?
Second, you mentioned that our NSW is concave while standard BCO needs convexity, but note that our problem is about **utility maximization** while BCO is about **loss minimization**, so concavity in the former is just equivalent to convexity in the latter by simply taking negation of the objective.
Finally, your comment about Appendix B.1 also does not make sense to us. In Appendix B.1, we consider the product version of NSW used by previous works, which is neither convex nor concave.
**Q4: You mention that NSW is not Lipschitz, but why not make a truncation? If an agent has a very small utility, like 𝜎 in the paper, the regret will be small. Otherwise, we can use Lipschitzness. Is there any difference between your methods? If not, show the reason why NSW doesn't make the problem harder. Also, if you have time, it's meaningful to use experiments to compare these two ideas.**
A: We are not sure if we fully understand the reviewer’s idea, and will clarify based on our understanding below. If this does not address your question, please do follow up on this.
First, note that our truncation in the analysis is on the quantity $\langle p_t, u_{:,n}\rangle$ (Line 202), which is related to the unknown utility $u$ and is thus cannot be explicitly implemented in the algorithm itself.
Second, if what the reviewer meant is to truncate $\langle p_t, \hat{u}_{t,:,n}\rangle$ instead, that is, replace NSW in Algorithm 1 by
$$NSW_{\sigma}(\mu)=\prod_{i=1}^N \max(\sigma, \mu)^{1/N},$$
then, regardless whether the analysis works, it does not seem that the algorithm can be efficiently implemented since it is unclear how to compute $p_t$ given that $NSW_{\sigma}$ is not concave (and neither is $\log NSW_{\sigma}$).
**Q5: What will happen if the learning can only observe NSW rather than everyone's utility? Will it change the dependence of $𝐾$ and $𝑁$ ?**
A: As we discussed in Lines 161-171, in this case, the problem is just an instance of standard bandit convex minimization (or equivalently, bandit concave maximization). Therefore, applying the state-of-the-art algorithms from [Fokkema et al., 2024] achieves $O(K^{1.75}\sqrt{T})$ regret in the stochastic setting and $O(K^{3.5}\sqrt{T})$ in the adversarial setting (both without $N$ dependence).
**Q6: In Line 324, what is $𝑓(𝑝_𝑡)$? I believe the input of $𝑓$ is an $\mathbb{R}^𝑁$ vector. However, $𝑝_𝑡$ belongs to $\mathbb{R}^𝐾$.**'
A: It should be $f(u_t^\top p_t)$ instead. Thanks for pointing out the typo!
**Q7: In Line 338, "is not only convex" should be "concave".**
A: This is **not** a typo. Note that we are talking about -NSW here, which is indeed convex since NSW is concave.
**Reference**
[Fokkema et al., 2024] H. Fokkema, D. van der Hoeven, T. Lattimore, and J. Mayo. Online Newton method for bandit convex optimisation. In Conference on Learning Theory, 2024.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading our response. If it addresses your concerns, please consider reevaluating our paper. Otherwise, please follow up with more questions. Thank you again. | Summary: The authors address the problem of maximizing Nash Social Welfare (NSW) in a centralized multi-agent multi-armed bandit setting. In each round, a central entity selects a probability distribution $p_t$ over the $K$ actions, and draw an action $i_t \sim p_t$. Upon selecting an action, the central entity observes the $i$-th row of the $K\times N$ utility matrix $u_t$ which indicates the utility received by each of the $N$ agent when choosing one of the $K$ actions.The goal of the central entity is to maximize the sum of the Nash Social Welfares over the rounds, where the NSW for a given round is defined as the geometric mean of the expected utilities received by the agents: $$NSW(p_t, u_t) = \Pi_{j = 1}^{N} \langle p_t \vert u_t\rangle^{1/N}.$$
The authors note that, unlike the product of expected utilities, the Nash Social Welfare (NSW) is not Lipschitz continuous. Consequently, the analytical techniques used in previous works that study the product of expected utilities are not applicable here. In the scenario where the utility matrix $u_t$is drawn i.i.d. from some unknown distribution, the authors develop an algorithm for this problem and establish both upper and lower bounds that match in terms of scaling with the horizon $T$. Next, they consider the adversarial framework. They demonstrate that with bandit feedback, the worst-case regret must scale linearly. They propose two different algorithms for the adversarial case with full information and show that both achieve a regret of $\tilde{O(\sqrt{T}}$, with different dependence in $N$ and $K$. Finally, they show that in some special cases, the regret can be logarithmic in $T$.
Strengths: I am not very familiar with the literature on Nash Social Welfare; however this paper seem to address a well-motivated problem in multi-agent decision-making.
The authors propose a UCB algorithm for maximizing the sum of Nash Social Welfares (NSW), utilizing new confidence bounds to achieve optimal regret rates. They establish a matching lower bound on the regret, providing theoretical robustness to their approach. Additionally, they explore the adversarial case, a scenario that had not been previously addressed. Their results reveal that bandit feedback alone is insufficient to achieve sub-linear regret rates. To complete their investigation, they propose a Follow-the-Regularized-Leader (FTRL) algorithm. While FTRL is not a novel algorithm, its application here concludes a comprehensive analysis of the problem at hand.
The paper is very clear, with the key ideas of the proofs well outlined and the problem effectively contrasted with existing works.
Weaknesses: I see no major weaknesses. As a minor remark, the use of the Nash Social Welfare is not motivated. While it is a standard welfare measure in economics, it may be unfamiliar to those in the bandit literature. Therefore, it is worth introducing and discussing its significance and relevance.
Technical Quality: 4
Clarity: 4
Questions for Authors: - Could you please quickly comment on why the Nash Social Welfare is defined as the geometric average of the expected utility, instead of the expectation of the geometric averages of the utilities? Admittedly, the latter might be less interesting from a mathematical standpoint. Is this a choice you are making, or is it a commonly accepted definition?
- Could you please detail a bit more how you obtain the thrid line from the second line of the computations at the end of page 14?
- There is a typo at the last line of page 4, where a "O" is capitalized in the middle of a sentence.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's detailed and valuable comments to our paper. We address the reviewer's questions as follows.
**Q1: the use of the Nash Social Welfare is not motivated**
A: We will add more discussion on the significance and relevance of the use of NSW in the next version as suggested. We point out that Hossain et al. [2021], the main work we improve upon, has provided very good motivation on studying NSW, and we copy one such discussion below for reference:
*This problem of making a fair collective decision when the available alternatives affect multiple agents is well-studied in computational social choice. The literature offers a compelling fairness notion called the Nash social welfare, named after John Nash… Maximizing the Nash social welfare is often seen as a middle ground between maximizing the utilitarian social welfare (sum of utilities to the agents), which is unfair to minorities, and maximizing the egalitarian social welfare (minimum utility to any agent), which is considered too extreme. The solution maximizing the Nash social welfare is also known to satisfy other qualitative fairness desiderata across a wide variety of settings.*
**Q2: Could you please quickly comment on why the Nash Social Welfare is defined as the geometric average of the expected utility, instead of the expectation of the geometric averages of the utilities? Admittedly, the latter might be less interesting from a mathematical standpoint. Is this a choice you are making, or is it a commonly accepted definition?**
A: We point out that the former (geometric average of expected utilities, which we study) is arguably more meaningful as a fairness measure than the latter (expected geometric average of utilities). To see this, simply consider a setting with 2 agents and 2 arms, where the first agent always gets reward 1 from arm A and reward 0 from arm B, while the second agent is the opposite (reward 0 from arm A and reward 1 from arm B). Then, in terms of geometric average of expected utilities, the uniform distribution is the best policy (which makes sense from a fairness viewpoint); on the other hand, in terms of the expected geometric average of utilities, all distributions achieve the same value of 0, implying that all polices are as fair, which is clearly not what we want.
Therefore, both previous works (Hossain et al. [2021] and Jones et al. [2023]) and ours study the former notion (which is in fact also mathematically more interesting as the reviewer pointed out).
**Q3: Could you please detail a bit more how you obtain the third line from the second line of the computations at the end of page 14?**
A: We use the fact that $a^N-b^N=(a-b)\cdot (\sum_{k=0}^{N-1} a^kb^{N-1-k})$ with $a=<p_t, \hat{u}_{t,:,n}>^{1/N}$ and $b=<p_t, u_{:,n}>^{1/N}$.
Finally, thanks for pointing out the typo on Page 4! We will fix that in the next revision.
---
Rebuttal Comment 1.1:
Title: Rebuttal acknowledgment
Comment: I thank the authors for their rebuttal, which adequately answers my questions. | Summary: The paper studies the problem of online social welfare maximization. More precisely, the authors consider the online learning setting where the learner, at each round $t \in [T]$, picks an action $i \in [K]$ that then determines the utility of each of the $n$ agents. The utility of agent $j$ is given by the $(i,j)$ entry of a utility matrix $u_t \in \mathbb{R}^{K \times n}$, which can either change arbitrarily or be sampled from a predetermined probability distribution. The cost of the learner is given by the Nash Welfare, defined as the geometric mean of the agents' utilities: $NW(\mu) = (\Pi_{n \in [n] \mu_n})^{1/n}$. The goal of the learner is to pick a sequence of mixed actions to achieve social welfare comparable to that achieved by the best fixed mixed actions.
The authors consider both the stochastic and the deterministic versions of the problem, as well as both the bandit and full-information feedback models. For the stochastic case with bandit feedback (where the learner learns only the utilities of the agents for the randomly sampled action), the authors provide an $O(T^{N-1/N})$-regret online learning algorithm and show that this regret bound is tight for the stochastic setting. They then study the bandit feedback model with adversarial changes and establish an $\Omega(T)$ regret lower bound. In view of this negative result, the authors shift their attention to the full-information feedback and adversarial changes case, for which they provide $O(\sqrt{T})$-regret online learning algorithms using Follow the Regularized Leader with log-barrier regularization.
Strengths: The paper studies an interesting and technically challenging setting. The writing is relatively good, and the authors clearly explain their contributions and the crucial differences between their setting and classical online concave optimization with bandit feedback. I found it particularly interesting that there is a significant discrepancy in the possible regret bounds between online concave optimization and the bandit feedback case of the considered setting. Additionally, I appreciate that the authors present both upper and lower bounds for all the considered settings.
Weaknesses: Despite the paper's solid technical contribution, my only concern lies with the motivation for the setting. The authors briefly mention that the setting has applications in resource allocation but do not provide any concrete examples or a convincing discussion on why this setting is particularly interesting. While I do not doubt that the setting is indeed interesting, I believe a detailed discussion on the potential applications of the model would significantly enhance the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you elaborate more on the potential applications of the considered setting?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the reviewer's appreciation of our paper! We answer the reviewer's question as follows.
**Q1: A detailed discussion on the potential applications of the model.**
One important application of the model is fair repeated policy decision making. In fact, we believe that Hossain et al. [2021], the main work we improve upon, already provided very good justification for studying this model, and we copy one such discussion from their introduction below:
*This problem can model situations where the principal is deliberating a policy decision and the arms correspond to the different alternatives she can implement. However, in many real-life scenarios, making a policy decision affects not one, but several agents. For example, imagine a company making a decision that affects all its employees, or a conference deciding the structure of its review process, which affects various research communities…This problem of making a fair collective decision when the available alternatives affect multiple agents is well-studied in computational social choice. The literature offers a compelling fairness notion called the Nash social welfare, named after John Nash… Maximizing the Nash social welfare is often seen as a middle ground between maximizing the utilitarian social welfare (sum of utilities to the agents), which is unfair to minorities, and maximizing the egalitarian social welfare (minimum utility to any agent), which is considered too extreme. The solution maximizing the Nash social welfare is also known to satisfy other qualitative fairness desiderata across a wide variety of settings.*
---
Rebuttal Comment 1.1:
Title: Reviewer's Response
Comment: Thank you for the response. I maintain my positive opinion for the paper and I keep my score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners | Accept (poster) | Summary: The paper argues that while diversity among training partners is essential for developing robust generalist cooperative agents, specialization also plays a crucial role. The authors introduce a method for quantifying both diversity and specialization using mutual information. They highlight the limitations of the cross-play minimization (XP-min) technique, which generates diverse but overfitted partners. To address this issue, the authors propose reinforcement learning and supervised learning methods to extract beneficial behaviors while reducing overfitting. Empirical results demonstrate that these methods lead to more robust generalist agents.
Strengths: * The paper introduces a novel method to quantify partner diversity and specialization using mutual information.
* It effectively identifies the issue of overfitting in partners generated by the XP-min technique.
* The proposed methods are empirically validated, showing improvement in training robust generalist agents.
* The paper provides a thorough analysis of how diversity and specialization impact the robustness of generalist agents.
Weaknesses: * The experiments are conducted within a specific cooperative environment (multi-recipe Overcooked), which may limit the generalizability of the results.
* The overfitness measurement relies on an oracle generalist, which may not always be available or practical in real-world scenarios.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Can you elaborate on the computational requirements for implementing SpecTRL and SpecTRL DAgger in different environments?
* What strategies can be employed to identify or create an oracle generalist in environments where one is not readily available?
* I noticed that the hyperparameter selection process is detailed in the appendix. However, I am still concerned: is the algorithm sensitive to hyperparameters, and can minor changes in hyperparameters cause significant performance degradation?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitations in detail in the appendix. The discussion is reasonable, and it is impractical to completely address them in the current version of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall positive sentiment towards the paper and their thoughtful feedbacks.
Here, we address the questions raised by the reviewer
> Can you elaborate on the computational requirements for implementing SpecTRL and SpecTRL DAgger in different environments?
The computation cost of the distillation process of SpecTRL and SpecTRL DAgger is similar to that of self-play. One can approximate the training time of SpecTRL by referencing the training time of self-play in the same environment. Although the computation cost scales linearly with the number of the source partners, one could parallelize the distillation process (e.g., one distillation pair per CPU core). For reference, in the multi-recipe Overcooked, distilling 8 partners in parallel takes 12 hours.
> What strategies can be employed to identify or create an oracle generalist in environments where one is not readily available?
We believe that an efficient way to generate oracles is via reward shaping alongside usual self-play training, which we also employed in this work. This approach allows us to exert domain knowledge for the design of the oracles without requiring us to program the oracles’ behaviors.
> I noticed that the hyperparameter selection process is detailed in the appendix. However, I am still concerned: is the algorithm sensitive to hyperparameters, and can minor changes in hyperparameters cause significant performance degradation?
Our proposed distillation method only introduces one additional hyperparameter, $\lambda_{dagger}$, for the SpecTRL DAgger variant. We do agree that this hyperparameter could cause performance degradation if $\lambda_{dagger}$ is too big. We suggest that one should start using SpecTRL with the DAgger component (i.e. $\lambda_{dagger}=0$). Then, if the performance improvement is unsatisfactory, start introducing the DAgger component with a small coefficient (e.g., 0.01) and incrementally increase the coefficient value (e.g., 0.1 or 0.2).
_We have also included weaknesses mentioned by the reviewer in the general response. We hope that our responses will resolve the questions and concerns raised by the reviewer. We are happy to further discuss and clarify if the reviewer feel their comments are not addressed._
---
Rebuttal Comment 1.1:
Title: Response
Comment: The author addressed my concerns and I improved my score. | Summary: This work studies partner diversity in the context of training a generalist agent.
The authors observe that XP-min approaches, while capable of producing behavioral diversity in its teammates, generates “handshaking`` behaviors—a kind of overfitting.
While MP-reg aims to correct for this overfitting, the authors hypothesize this generates a “loss of specialization” (LOS) problem.
The authors empirically support this hypothesis in a controlled experiment, concluding “unspecialized or overfit partners are not good training partners”.
This study leverages mathematically grounded definitinos of diversity, specialization, and overfitness (each of which requires domain knowledge and/or a trained specialist agent).
The authors propose “SpecTRL” and “SpecTRL DAgger”, both of which operate on a pre-trained XP-min (and/or mutual information (MI)) based population, but aim to “reduce overfitness while maintaining the diversity and specialization” which already exists in the population.
The main idea of SpecTRL is for the distilled partners to learn to cooperate with the specialization of the XP-min agents, but not their sabotaging behaviors.
SpecTRL DAgger introduces supervision, which cain aid the distilled agent in learning to utilize complex handshakes that may not be discovered through random exploration.
The experimental results demonstrate that MP-reg / MI increase diversity, but lose specialization—in line with the LOS hypothesis.
On the other hand, the SpecTRL-based approaches successfully reduces overfitting while preserving the specialization in the XP-min population.
Notably, while SpecTRL’s distillation phase reduces overfitting, repeated distillation does not further reduce it.
Strengths: * Work appears to be novel and grounded in relevant literature.
* The paper is well-written and very well-formatted; easy to read.
* The results are significant, and will likely be of use to the MARL community.
Weaknesses: * The limitations section is currently in the Appendix, but as this works’ analyses contains notable limitations, the authors may consider putting this in the main body.
* The results are only validated on the Overcooked environment. While the results on this environment are thorough and promising, it is difficult to be confident in the generality of the findings without validation on more domains.’
* Table 2 is a bit hard to read; it’s nice for seeing the raw data, but it’s a lot of numbers to try to make sense of. The visual aspects of the Table are nice, but plots might have made these easier to interpret (and the raw data could have been placed in the Appendix).
Technical Quality: 4
Clarity: 4
Questions for Authors: * I think I have some intuitive understanding for why distillation instead of regularization during training may help prevent the overfitting issue while maintaining specialization, but could the authors elaborate on this aspect? A key component of the method appears to be this distillation component, but from what I see there’s not so much discussion about the motivation/intuition/inspiration for this.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors appear to adequately address all major limitations in Appendix G:
* The proposed measures are just one proposition—there may be other sets that better quantify the quality of training populations.
* Most notably, domain knowledge is necessary for the behavior characteristic function (f), and for training the oracle specialists.
* Evaluated on single domain (expressed as weakness above).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall positive sentiment towards the paper and their thoughtful feedbacks.
Here, we address the question raised by the reviewer
> I think I have some intuitive understanding for why distillation instead of regularization during training may help prevent the overfitting issue while maintaining specialization, but could the authors elaborate on this aspect? A key component of the method appears to be this distillation component, but from what I see there’s not so much discussion about the motivation/intuition/inspiration for this.
Our aim for the distilled population is for them to learn diverse and specialized behaviors of a source XP-min population while removing sabotaging behavior (i.e., overfitness). The reason we think reward maximization (RL) objective would achieve this is that the distilled partners are incentivized to nudge the source partners to perform cooperative behaviors (which gives high return) and away from their sabotaging behaviors (which gives low return). Therefore, the distilled partners would not learn the sabotaging behaviors. Additionally, when the source partners cooperate, they do so in specialized ways as they have already learned specialized behaviors with XP-min. This means that the distilled partners will also learn these specialized behaviors as well. We’ll include more of this intuition in section 5 in the camera ready version.
_We have also included weaknesses mentioned by the reviewer in the general response. We hope that our responses will resolve the questions and concerns raised by the reviewer. We are happy to further discuss and clarify if the reviewer feel their comments are not addressed._
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for addressing the weaknesses, and responding to my question, which has helped me better understand that aspect of the work. I maintain my positive opinion of the work, as well as my current score. | Summary: The submission positions itself within the problem of ad-hoc teamwork: learning to cooperate with teammates previously unseen during training. Indeed, an important aspect of ad-hoc teamwork is the rule of "no prior coordination". Previous work focused on developing a rich enough set of training partners to allow good test performance. Various diversity metrics were used, such as cross-play minimization. However, these approaches lead to self-sabotaging behaviours in training partners, where they develop secret handshakes (i.e. initial sequences of actions) that identify partner types, and if the other partner fails the handshake protocol, the agent can refuse to cooperate and sabotage. The authors correctly identify that methods that resolve this handshake problem actually lead to a loss of diversity in the training set. Thus a big question asked in the paper is: how to have _meaningful diversity_ in the training set without having the handshake problem.
They propose to measure the diversity of a population as the entropy of a _function of the induced trajectory distribution under the population._ The choice of function allows one to decide what kind of diversity the designer cares about. They propose to measure the _overfitness_ (a.k.a. handshake problem) by measuring if the members of the population can in fact cooperate with a generalist oracle, an oracle agent that is assumed to not have learned handshakes. Finally the specialization is measured by the negative entropy of a specific policy. They empirically demonstrate that overfitness and/or under-specialization are both bad for training.
Their proposed solution for training set generation is simple: take a population generated by cross-play minimization, and distill it to a more specialized population by making the population members cooperate with each other more efficiently, which is achieved via reinforcement learning within population. They empirically show that this reduces the overfitness of the population generated by cross-play minimization, and leads to better training partners, without creating the loss of specialization problem compared to previous work.
Strengths: **Originality**
- The proposed method for generating training partners for ad-hoc teamwork is novel.
- The loss of specialization, diversity, and overfitness is partially novel. Especially, perhaps the diversity metric is not novel per se as similar metrics were proposed for behavioural diversity, but the authors don't claim novelty here either. The loss of specialization definition feels like folk knowledge, yet I also have not seen this defined in this way before.
- Overall: the paper is sufficiently novel, and the proposed algorithm is original.
**Quality**
- The experimental evaluation of the proposed method is relatively extensive, the results are presented nicely, and the overall quality of the work appears high.
**Clarity**
- The presentation is clear, although a bit dense. See weaknesses.
**Significance**
- The paper has notable significance for cooperative AI and ad-hoc teamwork.
Weaknesses: - Personally, I am not a big fan of the Overcooked for studying ad-hoc teamwork. I believe it is too constrained. It would be interesting to see how this method performs/compares in a more open-ended task with more degrees of freedom. It might actually be easier to achieve diversity then, but harder to achieve specialisation.
- Table 2 with all the abbreviations and colours look extremely busy. I believe in some parts of the paper the amount of coloured text and abbreviations make it actually harder to parse things rather than easier. This is probably also a personal constraint on my side.
- Not a big deal per se but the related works section is incredibly sparse. It is literally 16 lines. I understand that the authors wanted to squeeze in a lot of content to the paper, but I would advise you to extend this a little bit. Perhaps not in the way of adding more citations but in the way of explaining the landscape a bit more. Although, you could also argue with this and say the stuff that is super relevant are already discussed in detail in the introduction, and left-overs are quickly explained in the Related Works section.
- The limitations and also a potential discussion on the future work is left to the appendix. Now I do not like this. Instead of a Conclusion section regurgitating the paper I just read already, I would rather see a discussion on what kind of future work this opens up, and what are the limitations, **in the main paper.**
Technical Quality: 4
Clarity: 3
Questions for Authors: At this point, I do not have any questions.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations discussion was left entirely to the appendix, which I do not like. However, in principle, they are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the overall positive sentiment towards the paper and their thoughtful feedbacks.
_We have included weaknesses mentioned by the reviewer in the general response. We hope that our responses will resolve the questions and concerns raised by the reviewer. We are happy to further discuss and clarify if the reviewer feel their comments are not addressed._
---
Rebuttal Comment 1.1:
Comment: My comments are addressed, thank you for your response.
I maintain my score. | null | null | Rebuttal 1:
Rebuttal: # General response to all reviewers
We thank all the reviewers for the overall positive sentiment towards the paper and their thoughtful feedbacks. Here, we address common concerns among the reviewers:
- __Only evaluating with multi-recipe Overcooked (reviewer sQwt, Vvdc, and G3wa)__
We agree with the reviewers that evaluation in different domains would be beneficial. While we focus on the multi-recipe Overcooked environment due to the scarcity of sufficiently complex cooperative settings designed for ad-hoc teamwork, future research could explore the generalizability of our findings to other domains as such environments become available since the algorithm is domain agnostic and should be readily applied to new domains. We believe that, given the current research landscape, multi-recipe Overcooked—a complex coordination task that covers various coordination challenges—provides a robust and suitable benchmark for this investigation.
- __Table 2 is hard to read (reviewer sQwt, Vvdc)__
We will improve legibility of the table in the camera-ready version.
- __Limitations and discussions are not in the main text and limited related work (reviewer sQwt, Vvdc)__
We’ll include the limitations and discussions sections (which are now located in the appendices) in the additional page given for the camera-ready version. We’ll also provide an extended related work section.
_We hope that our responses will resolve the questions and concerns raised by the reviewers. We are happy to further discuss and clarify if any reviewers feel their comments are not addressed._ | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing Domain Adaptation through Prompt Gradient Alignment | Accept (poster) | Summary: This paper aims to leverage prompt tuning of vision-language models for Unsupervised Domain Adaptation (UDA) tasks. The authors formulate UDA as a multi-objective problem where each objective is modeled by a domain loss. To resolve conflicts between domains, they manipulate gradients by maximizing their cosine similarity. Additionally, to stabilize the training procedure, they propose using the norm of the gradients as a regularization term. They also provide a generalization error bound for their method. Empirical results demonstrate the effectiveness of the proposed method.
Strengths: The paper is well-organized and the presentation is clear. To my knowledge, the idea of aligning gradients of different objectives by maximizing cosine similarity and regularizing the gradient norm is intuitive and novel. The authors also provide a generalization bound for the proposed method, which is a valuable contribution.
Weaknesses: Major:
(1) Recent empirical and theoretical studies [1] show that simply reweighting the loss of different objectives can match the performance of gradient surgery. When the model is under-parameterized (which may be the case for prompt learning, as the prompt parameters are relatively small compared to the CLIP parameters), simple reweighting is sufficient to fully explore the Pareto front. My question is why the proposed gradient manipulation is better. Can the authors provide theoretical analysis or empirical validation?
[1] Revisiting Scalarization in Multi-Task Learning: A Theoretical Perspective
(2) The backbone CLIP-ResNet-50 is a weak backbone. Please consider conducting some experiments on a CLIP-ViT based model.
Minor:
(1) Missing references. There have been some studies that use gradient alignment for prompt learning, e.g., [3,4]. Please consider adding them to the related works section.
[3] Prompt-aligned Gradient for Prompt Tuning
[4] Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models
(2) Table 1 in the Introduction is not well explained. It does not specify which dataset and CLIP backbone are used, and the symbols $\rightarrow C, I, P$ are not explained. In addition, Table 2 should be referenced as Figure 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses, and I am willing to raise the score if the major weaknesses are addressed.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations have been discussed and no negative societal impact has been identified.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >1. why the proposed gradient manipulation is better?:
Thank you for your insightful question. We would appreciate further clarification on your reference to the paper discussing the limitations of scalarization, particularly in under-parametrized setups, as it seems to suggest scalarization’s inherent inability to fully explore the solution space.
- Regarding your point of the efficiency of scalarization over multi-task learning (MTL) methods, we discuss this in lines 53-59 of the main text. In particular, we simply re-weight per-task gradients (similar to scalarization) instead of adopting multi task learning methods. In addition, the MTL methods in [33,92] are generic methods and are designed to optimize all objectives simultaneously. In contrast, we focus more on adapting our model to the target task rather than training a model that has universal performance across all domains or profiling the entire Pareto front. This further motivates our choice to use re-weighting source and target objectives' gradients.
- While studies [33, 92] demonstrate scalarization’s advantages over certain MTL methods like IMTL, MGDA, GradDrop, and PCGrad (ProGrad), they do not establish scalarization’s equivalence with all gradient manipulation techniques. In contrast, gradient alignment has been proven effective in learning invariant features [69], and penalizing gradient norms is widely recognized for enhancing model generalization [102, 18, 4, 90]. Empirically, we conduct some ablation studies on these two components in Table 6, the illustrative example, and gradient similarity experiments, to show that solely using scalarization is not enough.
2. We provide results of our methods using ViT-B/16, ViT-L/14 backbones on OfficeHome in Tables 3 and 5 in the attached PDF, following experimental setups in [R1, R2]. We can observe the superiority of our methods among all baselines while finetuning a small portion of the backbones.
3. Thank you for the suggested references. We will add them in the revision.
4. Caption for Table 1 and reference typos: Thank you for pointing these out, we will fix them in the revision. The experiment in Table 1 is on dataset ImageCLEF with I, P, C representing ImageNet ILSVRC 2012, Pascal VOC 2012, and Caltech-256, respectively, and ResNet50 backbone is adopted. The results in Table 1 in fact are from Table 3, we will add detailed discription for this experiment in the later revision.
We hope our response could help address your concerns, we are very open to further discussion.
[R1] Wang, Zhenbin, et al. "Landa: Language-guided multi-source domain adaptation.", arXiv 24
[R2] Singha, Mainak, et al. "Ad-clip: Adapting domains in prompt space using clip.", ICCV 23
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. All my concerns are resolved and I have raised my rating
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We sincerely thank you for your insightful review and supportive feedback to make the paper more complete. Your support is highly appreciated, and we are glad that our responses have addressed your concerns. | Summary: This paper proposed a novel domain adaptation method that tunes the text prompts based on self-training with pseudo labels. The proposed method treats the source training and target training as multi-objective optimization problems, and it introduces to alignment of the gradients from both training (Prompt Gradient Alignment) so as not to cause conflict between the source and target training. The authors validated the proposed method empirically and theoretically.
Strengths: + This paper is rich in technical novelty.
+ There are many methods in domain generalization research that aim to align gradients from multiple domains. However, requires heavy hessian calculations. The proposed method successfully avoids this issue by approximation with Taylor expansion.
+ The proposed method is empirically and theoretically justified.
Weaknesses: + It is questionable whether the comparison with existing methods is fair.
+ The old domain adaptation methods fail to show their effectiveness when they are applied to the CLIP model. I suspect that those domain adaptation methods were not properly applied to CLIP. Is there any reason why they fail?
+ Conversely, would it not be possible to evaluate the proposed method in accordance with existing domain adaptation benchmarks? For example, would it be possible to evaluate the proposed method by updating the model parameters instead of the prompts?
+ There are some additional related works that should be discussed in the paper.
+ Bose, Shirsha, et al. "Stylip: Multi-scale style-conditioned prompt learning for clip-based domain generalization." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024. This work proposes a prompt training method for the domain generalization tasks.
+ Zhu, Beier, et al. "Prompt-aligned gradient for prompt tuning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. This introduces a gradient matching idea into prompt tuning, which has a similar mind to this paper.
+ Some of the formulas are a little difficult to follow.
+ It is very difficult to understand which of eq 9 or eq 11 is the actual loss function. This goes for eq 16 and eq 18 as well. I recommend separating the formulations for easier understanding.
Technical Quality: 3
Clarity: 2
Questions for Authors: + As mentioned in the third bullet of weaknesses, I could not fully understand the formulations. Could you explain which is the actual loss function?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: + I did not found any additional limitation for this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > whether the comparison with existing methods is fair
As we follow the experimental settings in [6, 15, 22, R4, R5, R6], we use their reported results which we believe have been verified to be fair when comparing old UDA methods and prompt-based methods. Specifically, they share the same vision backbone as prompt-based ones, e.g. ResNet, and are optimized to reach their full potential. Only one advantage of prompt-based methods is the use of additional textual information about class names when making predictions. However, this comparison setup is not only used in UDA, but also adopted in many works in other domains such as spurious debiasing [R7], continual learning [R1, R2]. Therefore, we believe that old UDA baselines are fairly compared. Additionally, we include comparisons with more recent CLIP-based baselines in Table 5 of the attached PDF. Here we finetune only a small portion of the model compared to other baselines, but still obtain competitive performance.
>is there any reason why they fail?
Their underperformance could be attributed to the difficulty of using a single backbone to simultaneously learn domain-invariant and class discriminative features [22, 75]. This difficulty usually arises from the inherent entanglement between domain and semantic information [R3], which can be further amplified as more source domains are involved [6]. In contrast, we not only devote a set of shared prompts to learn domain-invariant features, but also domain-specific prompts to capture useful information for classification. By leveraging these prompts, we create a more meaningful representation, leading to more precise predictions for the target domain
> would it be possible to evaluate the proposed method by updating the model
Yes, it is possible. Our proposed algorithm and theory development are generic and could be extended to finetuning the entire model. However, this would still entail a large computation and space complexity due to the size of the whole model, despite the avoidance of Hessian matrix calculation.
Even so, adapting the full model on source/target domain can easily lead to overfitting and potentially inferior results in the presence of domain shift, even in the case of transformer models [1, 14, 34]. Therefore we opt for prompt learning which is a better way to adapt CLIP to downstream tasks (similar to other CLIP-based baselines) and is less likely to overfit.
> Related works
Thanks for the suggestion, we will add them in the revision. Briefly, the second work, ProGrad, manipulates gradient of the fine-tuned loss to preserve general knowledge of the pretrained model. Although it shares the intuition of gradient alignment with our work, there is a significant difference: ProGrad attempts to remove conflicts between per-objective gradients at each time step, similar other gradient-based MTL methods such as [96, 39]. In contrast, we aim to stimulate their inherent consensus throughout training by encouraging the same training trajectory for both domains, hence, the model can find commonly good regions for them. The first work appears unrelated to ours as it approaches UDA from a architecture-view, i.e. introducing additional modules to generate content- and style-aware prompts, whereas we tackle it from the view of model optimization.
> Difficulty of some formulas and the actual loss function
- Ideally, Eq.(9) and Eq.(16) would be the actual loss functions that we want to take derivative. The reason we approximate them to get Eq.(11) and Eq.(18) is to show that minimizing them can fulfill our purposes of (i) aligning prompt gradients and (ii) penalizing prompt gradient norms.
- Note that Eq.(16) indicates our full algorithm while Eq.(9) is only about gradient alignment.
- However, one difficulty in taking derivative of Eq.(9) or Eq.(16) is the computation of Hessian matrix, hence we devise a practical approximation as shown in Eq.(13) and Eq.(19), which allows us to only need to compute gradient of the original source/target loss, i.e. Eqs.(3,4), at the perturbed prompts.
- Please refer to the general response to see the actual loss function.
[R1] Wang, Yabin, et al. "S-prompts learning with pre-trained transformers: An occam’s razor for domain incremental learning.", NeurIPS 22
[R2] Yu, Jiazuo, et al. "Boosting continual learning of vision-language models via mixture-of-experts adapters.", CVPR 24
[R3] Cai, Ruichu, et al. "Learning disentangled semantic representation for domain adaptation." IJCAI 19
[R4] Bai, Shuanghao, et al. "Prompt-based distribution alignment for unsupervised domain adaptation." AAAI 24
[R5] Li, Xinyao, et al. "Split to Merge: Unifying Separated Modalities for Unsupervised Domain Adaptation." CVPR 24
[R6] Du, Zhekai, et al. "Domain-agnostic mutual prompting for unsupervised domain adaptation." CVPR 24
[R7] Phan, Hoang, et al. "Controllable Prompt Tuning For Balancing Group Distributional Robustness", ICML 24
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses. I think my concerns have mostly been addressed. I still have concerns about the performance of the old domain adaptation methods, but I understand that is not a unique weakness of this paper but applied to all prior studies. I acknowledge that the proposed method is superior to the newer domain adaptation methods (e.g., DAPL) on equal conditions, so this concern may not be so critical. In conclusion, I maintain my positive evaluation for this paper.
---
Rebuttal 2:
Title: Author response to Reviewer dhjM
Comment: We would like to thank the reviewer for acknowledging our effort and we are encouraged that our responses have addressed most of your concerns. Regarding your last concern about the fairness of our experiments with traditional UDA methods, we have examined DANN and CDAN using a CLIP backbone to provide more insight into the limitations of domain-invariant feature learning in adapting pretrained CLIP to new domains. We will run more experiments on other datasets and incorporate obtained results to the revision. It is important to note that our baseline results, so far, are taken directly from prior work, as we adhere to the recent protocols for adapting CLIP from prior work.
| Method | Backbone | →C | →I | →P | Average | # Param |
|-|-|-|-|-|-|-|
| SOURCE ONLY | RN50-FC | 94.7 | 90.2 | 79.3 | 88.1 | 38M |
| DANN | RN50-FC | 96.0 | 92.3 | 80.2 | 89.5 | 38M |
| CDAN | RN50-FC | 96.2 | 92.0 | 80.4 | 89.5 | 38M |
| SOURCE ONLY | RN50 | 93.3 | 88.5 | 78.8 | 86.9 | 38M |
| DANN | RN50 | 94.5 | 91.5 | 79.0 | 88.3 | 38M |
| CDAN | RN50 | 94.7 | 92.0 | 79.0 | 88.6 | 38M |
| SOURCE ONLY | CLIP-RN50 | 93.0 | 90.7 | 78.7 | 87.4 | 102M |
| DANN | CLIP-RN50 | 95.0 | 91.7 | 79.2 | 88.6 | 102M |
| CDAN | CLIP-RN50 | 93.7 | 93.0 | 80.0 | 88.9 | 102M |
| DAN | ResNet50 | 93.3 | 92.2 | 77.6 | 87.7 | 48.9M |
| D-CORAL | ResNet50 | 93.6 | 91.7 | 77.1 | 87.5 | 47.5M |
| DANN | ResNet50 | 95.7 | 91.8 | 77.9 | 87.8 | 48.9M |
| PGA | Prompt-tuning| 96.8 | 95.7 | 84.6 | 92.4 | 114k |
| MPGA | Prompt-tuning| 97.4 | 96.5 | 84.7 | 92.9 | 131k |
Specifically, we applied DANN and CDAN on CLIP’s ResNet50 configured in three different ways: with a randomly initialized fully connected classifier (RN50-FC), with a frozen text encoder (RN50), and using the entire CLIP backbone (CLIP-RN50). The results, presented in the table below, demonstrate that appropriately adapting prior UDA methods to different parts of the CLIP model can yield better results compared to traditional methods on a ResNet backbone (e.g., DAN, CORAL, DANN). Eventhough, PGA and MPGA still exhibit superior performance, even with significantly fewer parameters finetuned. We hypothesize that relying solely on source classification loss and another objective for invariant feature learning can degrade CLIP’s rich semantic representation [R8, R9, R10, R11], which is crucial for predicting target domain data. To counteract this, utilizing target pseudo data (similar to self-training baseline in Table 1) or adopting a more carefully-designed optimization procedure that better leverages information from both source and target data—similar to our proposed method—could enhance performance.
We hope this response provides further insights into why traditional UDA methods may not perform as well as other prompt-based baselines. If the reviewer found any further unaddressed concerns, we are always happy to provide further clarifications and improve our work based on the constructive feedback from the reviewers.
[R8] Kumar, Ananya, et al. "Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution." ICLR 22.
[R9] Zheng, Zangwei, et al. "Preventing zero-shot transfer degradation in continual learning of vision-language models." ICCV 23.
[R10] Lai, Zhengfeng, et al. "Padclip: Pseudo-labeling with adaptive debiasing in clip for unsupervised domain adaptation." ICCV 23.
[R11] Ding, Yuxuan, et al. "Don't stop learning: Towards continual learning for the clip model." arXiv 22.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response about the additional experiments with traditional UDA. These results fully addressed my last concern. I also appreciate the fact that the authors have done these experiments and reported the results that will be very meaningful to subsequent reseaches. Since all my concerns have now been addressed, I have raised my score to 7.
---
Rebuttal 3:
Title: Thank you
Comment: We sincerely thank the reviewer for the engaging discussion and greatly appreciate the valuable suggestions and feedback provided. We are honored by your appreciation of our paper and grateful for your support toward its acceptance. Thank you for your positive evaluation! | Summary: This paper proposes a novel approach called Prompt Gradient Alignment (PGA) for unsupervised domain adaptation (UDA). The key contributions are: (1) Formulating UDA as a multi-objective optimization problem with objectives for source and target domains; (2) Aligning gradients between objectives to encourage consensus; (3) Penalizing gradient norms to improve generalization; (4) Leveraging pre-trained vision-language models through prompt learning. The method is evaluated on standard UDA benchmarks and demonstrates consistent improvements over other prompt-based baselines. A theoretical generalization bound is also provided to justify the approach.
Strengths: The strong empirical results across multiple UDA benchmarks are particularly impressive. The consistent outperformance of other prompt-based methods on datasets like ImageCLEF, Office-Home, and DomainNet (as shown in Tables 3-5) provides robust evidence for the method's effectiveness. The performance gains are substantial in many cases, with improvements of up to 4% on average accuracy compared to state-of-the-art methods.
The theoretical analysis offering a generalization bound (Section 4.5) adds credibility to the approach, providing insights into why the method works and under what conditions it can be expected to perform well. This combination of empirical success and theoretical grounding is a significant strength of the paper.
The ablation studies (Section 5.4, Table 6) effectively demonstrate the contribution of each component of the proposed method, providing a clear understanding of how different elements (like gradient alignment and norm penalization) contribute to the overall performance. The visualization of gradient similarity evolution (Figure 7) offers additional insights into the method's behavior during training.
Weaknesses: The computational complexity and training time comparisons to baseline methods are notably absent. Without this information, it's difficult to assess the practical trade-offs of implementing PGA compared to existing methods. This is particularly important given the method's use of gradient manipulation, which could potentially increase computational requirements.
The paper's heavy reliance on pre-trained CLIP models raises questions about the method's applicability in scenarios where such pre-training is not available or suitable. While the use of pre-trained models is a strength in many cases, it could also be a limitation in domains significantly different from CLIP's training data.
The discussion of hyperparameter sensitivity is limited. Given the importance of hyperparameters like ρga and ρgn in the gradient alignment and norm penalization processes, a more thorough exploration of their impact on performance would be valuable.
While the method shows improvements over existing approaches, the degree of novelty is somewhat incremental. The core ideas build heavily on existing prompt-based and gradient manipulation techniques, which may limit the paper's impact.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does PGA's computational complexity compare to baseline methods? Are there significant differences in training time?
2. Have you explored performance on data types beyond image classification? How generalizable do you expect the method to be?
3. How sensitive is the method to hyperparameter choices, particularly for gradient alignment and norm penalization?
4. Given the reliance on CLIP pre-training, how well do you expect the method to perform in domains very different from CLIP's training data?
5. How does the method's performance change as the degree of domain shift varies? Is there a point where simpler approaches become competitive?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The reliance on CLIP pre-training potentially limits applicability to domains very different from CLIP's training data. It's unclear how well the method would perform on specialized domains (e.g., medical imaging, satellite imagery) that are not well-represented in CLIP's training set.
The paper does not thoroughly explore potential failure cases or scenarios where the method might struggle. For instance, how does the method perform when there are extreme label distribution shifts between source and target domains? Or when the domain shift is particularly severe?
The scalability of the method to very large datasets or high-dimensional data is not addressed. It's unclear how the computational requirements scale with dataset size or feature dimensionality.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. In Figure 1 of the attached PDF we provide the complexity for some comparative baselines. Accuracy curve (left): While DANN and CDAN obtain their best performance at approximately 77% after more than 1000s, PGA and MPGA achieve 84% within 100s. Besides, the first stage of pairwise source-target training of MPA takes 159s, followed by 35s for the second stage to actually train the final model. Number of Trainable Parameters (middle): PGA and MPGA, with fewer than 140k parameters, require significantly fewer parameters than MPA, DANN and CDAN, which have around 1M, 48.9M and 51.7M parameters, respectively. GPU Memory Usage (right)} PGA, MPGA, and MPA exhibit substantially lower memory footprints, around 1300MB compared to 7000MB of DANN and CDAN throughout training.
2. Thank you for the suggestion. Given the task-agnostic nature of our method, where PGA/MPGA utilizes only gradient information, and the successful application of CLIP-based methods in diverse tasks like image segmentation [R2] and object detection [R3], we believe our approach could be similarly effective in other tasks. For instance, applying our method to image segmentation, which requires predictions for each pixel, would necessitate training an additional decoder. Due to constraints in the rebuttal period, this extension is not straightforward, and is considered beyond the scope of our current experimental setup. We leave this for future work. Nonetheless, we believe that our method's superior results in image classification are sufficient to verify its benefits, as we adhere closely to the standard protocols of related UDA methods.
3. Figure 2 in the attached PDF shows that PGA is generally not sensitive to $\rho_{ga}$ and $\rho_{gn}$ within their acceptable range, i.e. 1e-2 to 10 for $\rho_{ga}$ and 1e-5 to 0.1 for $\rho_{gn}$. Specifically, (i) a too large value of $\rho_{gn}$ is less effective than smaller ones; (ii) ImageCLEF prefers larger values of $\rho_{ga}$ while OfficeHome prefers smaller ones, suggesting that source and target domains in the former dataset may be more similar than those in the latter, hence over-matching gradients in the latter dataset may be adverse.
4. It is true that when source and target domains are very different from CLIP's training data, prompt learning alone may not be sufficient to adapt well to the target domain. Yet, our method is still expected to perform better than prompt-based baselines. A failure case is discussed in our adaptation to the QuickDraw domain of DomainNet, which contains black and white sketches. Since this domain differs significantly from CLIP's training data, all prompt-based methods fail to achieve good accuracy and perform worse than some full-finetuning methods. This observation suggests integrating our methods with other invariant feature learning methods (lines 32-33 in our manuscript) or finetuning more parameters can better adapt to the target domain. Nonetheless, our PGA and MPGA still yield the highest results among prompt-based ones, indicating that our gradient alignment and norm penalization procedure is more helpful in closing domain gap than previous DAPL and MPA. Another example where prompt-based cannot surpass traditional UDA baselines is Rw→Cl from OfficeHome in Table 5.
To conclude, for specialized domains not well-represented by the pretrained CLIP model, a more effective strategy might involve updating additional parameters, such as the layers close to the output in both vision and text encoders, beyond just the prompts. This could lead to better domain adaptation at the expense of computational complexity increase. We acknowledge this limitation in Appendix D and leave it for future work.
5. We test PGA on the setting of label shift following [R1], where the source or target domains are down-sampled with only 30% of data from the first-half of the classes are taken (indicated by **s-** prefix). Results presented in Table 4 in the attached PDF, table below and Table 5 in main text show the effectiveness of PGA on different levels of label shift.
| Method | Ar→sCl | Ar→sPr | Ar→sRw | Cl→sAr | Cl→sPr | Cl→sRw | Pr→sAr | Pr→sCl | Pr→sRw | Rw→sAr | Rw→sCl | Rw→sPr | Avg |
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
| ResNet-50 | 41.5 | 65.8 | 73.6 | 52.2 | 59.5 | 63.6 | 51.5 | 36.4 | 71.3 | 65.2 | 42.8 | 75.4 | 58.2 |
| DANN | 47.8 | 55.9 | 66.0 | 45.3 | 54.8 | 56.8 | 49.4 | 48.0 | 70.2 | 65.4 | 55.5 | 72.7 | 58.3 |
| JAN | 45.8 | 69.7 | 74.9 | 53.9 | 63.2 | 65.0 | 56 | 42.5 | 74 | 65.9 | 47.4 | 78.8 | 61.4 |
| CDAN | 51.1 | 69.7 | 74.6 | 56.9 | 60.4 | 64.6 | 57.2 | 45.5 | 75.6 | 68.5 | 52.7 | 79.8 | 63.0 |
| IWDANN | 48.7 | 62.0 | 71.6 | 50.4 | 57.0 | 60.3 | 51.4 | 41.1 | 69.9 | 62.6 | 51.0 | 77.2 | 58.6 |
| IWJAN | 44.0 | 71.9 | 75.1 | 55.2 | 65.0 | 67.7 | 57.1 | 42.4 | 74.9 | 66.1 | 46.1 | 78.5 | 62.0 |
| IWCDAN | 52.3 | 72.2 | 76.3 | 56.9 | 67.3 | 67.7 | 57.2 | 44.8 | 77.8 | 67.3 | 53.3 | 80.6 | 64.6 |
| PCT| 57.5 | 78.2 | 80.5 | 66.7 | 74.3 | 75.4 | 64.6 | 50.7 | 81.3 | 72.9 | 57.3 | 83.5 | 70.2 |
| PGA (Ours) | 57.4 | 84.8 | 86.4 | 76.0 | 84.6 | 85.6 | 74.5 | 57.1 | 86.1 | 75.9 | 57.4 | 85.3 | 75.9 |
6. Scalability w.r.t dataset-size: Our experiments already included small (ImageCLEF), medium (OfficeHome), and large-scale (DomainNet, S2RDA) datasets. Furthermore, our proposed optimization procedure is not data-dependent, hence the increase in data size or dimension would not cause computational overhead for PGA and MPGA compared to other baselines.
[R1] Tanwisuth, Korawat, et al. "A prototype-oriented framework for unsupervised domain adaptation." NeurIPS 21
[R2] Wang, Zhaoqing, et al. "Cris: Clip-driven referring image segmentation." CVPR 22.
[R3] Wu, Xiaoshi, et al. "Cora: Adapting clip for open-vocabulary detection with region prompting and anchor pre-matching." ICCV 23.
---
Rebuttal 2:
Comment: Hi,
Could you take a look at the authors rebuttal and finalize your rating?
Thanks, AC | Summary: To enhance both transferability and discriminability for prompt learning based domain adaptation, this paper proposes a Prompt Gradient Alignment (PGA) method. PGA encompasses multiple domain-wise classification objectives, cosine similarity maximization regularizers between prompt gradients of different domains, and prompt gradient norm penalty for each classification objective. It can thus achieve both inter-domain gradient alignment and flat minima enforcement. To efficiently and effectively update the designed prompts, a practical gradient update procedure is devised and works under both single-source and multi-source UDA. Empirical results shown the superiority of the proposed method.
Strengths: 1. A novel method with an information-theoretic explanation. Aligning prompt gradients across domains moves the shared prompt towards the low-loss region of both domains, such that domain-invariant class-discriminative features can be captured, thus benifiting both domains.
2. A practical, flexible, and efficient gradient update procedure. It has a weighting term on the source signal to control how much emphasis should be put on the target domain and avoids the costly Hessian computation.
3. A well-written article. The paper has a smooth logic and is easy to understand.
Weaknesses: 1. The abstract should reflect the technical highlights. For example, UDA is essentially a multiple-objective optimization problem, which is not the technical novelty of this paper.
2. As indicated by Theorem 4.1, the proposed method minimizing source empirical loss, gradient norms, and gradient misalignment, model and reduce the upper bound of the established generalization error. Nevertheless, it is not intuitive why minimizing gradient norms and gradient misalignment are beneficial for the performance on the target domain.
3. In Theorem 4.1, g_t^src and g_t^T are the gradients w.r.t. P_t of source loss and target loss respectively. The two gradients should be derived from the same prompt or different prompts? Can the gradient alignment theory be extended to other spaces like image, feature, and output spaces?
4. Due to use of pseudo labels for the target domain, though they are improved by confidence filtering, the accuracy of pseudo labels may greatly affect the success of inter-domain gradient alignment. It is encouraged to investigate the relationship between them.
5. Some important related works are missing in the discussion of related works, e.g., [a].
[a] Tang et al. Unsupervised domain adaptation via distilled discriminative clustering. PR, 2022.
6. Validation on larger datasets like S2RDA [b] is important for three reasons. First, it is a more large-scale, realistic, challenging multi-domain benchmark dataset. Second, it is tailored for synthetic-to-real transfer, more in line with industrial needs. Third, this paper leverages the powerful generalization capability of large-scale pre-trained vision-language models and fine-tuning with a large amount of training data can avoid the potential risk of overfitting.
[b] Tang and Jia. A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation. CVPR, 2023.
7. What circumstances does the overfitting issue appear? Why does penalizing gradient norms alleviate the overfitting issue? Are there any theoretical reasons?
8. In Eq. (6), L_tgt should be L_T? In Eq. (20), L_s should be L_s^PGA? The authors may re-write the final overall objective for clarity.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: 1. Thanks for the suggestion, we will revise the abstract to reflect our technical contributions more clearly. Regarding the multiple objective optimization viewpoint, our MOO problem consists of per-domain objectives, motivated by the strong performance of the self-training baseline in Section 1. This empirical results show that optimizing target loss using pseudo data alone is already a strong baseline, which is not typically found in traditional UDA methods. They instead often combine source loss with auxiliary domain alignment objectives.
2.
- As can be seen from the first term on the R.H.S of Theorem 4.1, minimizing these two terms will reduce the generalization error, thereby closing the gap between target population risk and source empirical risk. Furthermore, as we also minimize the latter through the cross entropy loss on labeled source data, the former will be reduced. This is the ultimate goal an UDA method aims to achieve, as low population target risk indicates a model having a good generalizability on target data.
- Intuitively, maximizing gradient alignment encourages the optimization trajectories to be the same for all domains, and minimizing gradient norm helps find flat regions for both shared and domain-specific prompts, which alleviates overfitting and generally leads to better performance on source/target data [4, 18, 60]. Altogether, performance on target domain can be improved.
3.
- ${g_t^{src};g_t^{tgt}}$ should be derived from the same prompt, and in the theorem, to ease the proof, they refer to the whole prompt set $[P_{sh}, P_S, P_T]$ at time step $t$. Note that this does not contradict our gradient alignment strategy which is applied on the shared prompt only. Indeed, as we showed in Remark A.7, since $P_T$ is not involved when computing the source loss, we can write $g^{src}_t = [g^{sh,src}_t, g^S_t, 0]$. Similarly, $g^{tgt}_t = [g^{sh, tgt}_t, 0, g^T_t]$. Therefore, the inner product between $g^{src}_t$ and $g^{tgt}_t$ is equal to the inner product between $g^{sh,src}_t$ and $g^{sh,tgt}_T$.
- Extension of the theory to other spaces like image, feature, and output spaces: Please note that the second term in Theorem 4.1, which contains the KL-divergence between the two underlying source and target distributions $\mu, \mu'$, is the motivation for aligning these two distributions in image, feature or output space. Normally, this is done in feature space due to the rich representation captured by the feature extractor. For example, marginal feature alignment is adopted in [R1] and conditional feature alignment in [R2]. In the case of conditional alignment, one has to assign pseudo labels to target data using the learned source model [R3], which may lead to accumulation error. In our case, as we use zero-shot CLIP to predict pseudo labels, the error could be alleviated. Last but not least, as mentioned in the paper, incorporating our gradient matching and norm penalization into those methods fully reflects the two terms from the R.H.S of the bound, hence could further boost performance.
4. We provide an ablation varying the value of threshold $\tau$ in Table 2 of the attached PDF. Our methods are not sensitive to $\tau$, and the best result is obtained at a reasonable trade-off between the quantity and quality of pseudo data.
5. Thank you for your suggestion, we will add this paper to the related works and its results to the Office-Home experiment, where they obtain an average accuracy of 71.4 compared to 75.8 (ours).
6. We include the performance of CLIP-based models on two Synthetic-to-Real datasets S2RDA-49 and S2RDA-MS-39 in Table 1 of the attached PDF. PGA achieves the best performance among the baselines.
7.
- When does the overfitting issue appear: Since we are optimizing multiple objectives across domains, naively minimizing them might lead to overfitting on some particularly easy-to-optimize or limited-data objectives, while forcibly minimizing them could harm other objectives.
- Why does penalizing gradient norms help: Briefly, in standard supervised training, minimizing gradnorm will lower the population risk, hence reduce overfitting. Similarly, in UDA, applying this on source and target data will lead to lower population risks on respective domains. Specifically, followed Theorem 1 in [55], we can upper bound these risks as:
$[L_{\mu}(P), L_{\mu'}(P)] \leq \max_{||\epsilon_{sh}||\leq\rho_{gn}}[\max_{||\epsilon_S||\leq\rho_{gn}}L_S(P_{sh}+\epsilon_{sh},P_{S}+\epsilon_S)+f_S(||P||), \max_{||\epsilon_T||\leq\rho_{gn}}L_T(P_{sh}+\epsilon_{sh},P_{T}+\epsilon_T)+f_T(||P||)],$
where $f_S,f_T$ are strictly increasing functions.
Furthermore, the worst-case source loss can be approximated as
$\max_{||\epsilon_{sh}||\leq\rho_{gn}}\max_{||\epsilon_S||\leq\rho_{gn}}L_S(P_{sh}+\epsilon_{sh},P_{S}+\epsilon_S) \approx L_S(P)+\rho_{gn}(||g_{sh,S}||+||g_S||),$ and similarly for the worst-case target loss.
8.
- "In Eq. (6)". Thanks for pointing this out. Yes, $L_T$ is the correct notation. Also in the definition of the gradient w.r.t target loss (in line 224), the correct notation should be $g_t^{tgt}$. We will fix these in the revision.
- In Eq. (20): In fact, it should be $L_S$, which is similar to $L_T$ in the approximation of Eq.(19).
Informally, the PGA gradients corresponding to the source objective are the derivative of $L_S^{\text{PGA}}$ w.r.t $P$, which are then approximated by the derivative of the source loss at $L_S$ the perturbed prompts: $P_{sh}=P_{sh}-\rho_{ga}{b}+\rho_{gn}\frac{g_{sh,S}}{||g_{sh,S}||}, P_S =P_S+\rho_{gn}\frac{g_S}{||g_S||}$. (This approximation follows the derivation of Eq.(13)).
- Overall objective: please refer to the general response.
[R1] Nguyen, A. Tuan, et al. "KL guided domain adaptation.", ICLR 22
[R2] Nguyen, A. Tuan, et al. "Domain invariant representation learning with domain density transformations.", NeurIPS 21
[R3] Long, M., et al. "Conditional adversarial domain adaptation", NeurIPS 18
---
Rebuttal Comment 1.1:
Comment: I'd like to acknowledge receipt of your response, e.g., the supplementary experiments involving traditional UDA and larger datasets. The results effectively alleviate my previous apprehensions. I commend the authors for conducting these experiments and sharing results that will significantly impact future research. Given that all my concerns have been satisfactorily addressed, I have revised my rating to 8.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you for raising your concerns, and for your detailed review and insightful suggestions. Addressing these points will undoubtedly enhance the paper's presentation and more thoroughly examine our proposed method. We will further follow your valuable recommendation. | Rebuttal 1:
Rebuttal: We thank all reviewers for the valuable and supportive feedback. We appreciate that our paper is recognized for having strong **empirical results** (reviewers JYym, dhjM), a **generalization bound** which adds credibility to our approach (all reviewers), a **novel** method with **clear intuitions** (reviewer aPJr, dhjM, YFTB) and efficiency (reviewer YFTB, dhjM), and a **clear and coherent structure** (reviewer aPJr, YFTB).
In this paper, we tackle the unsupervised domain adaptation (UDA) problem by framing it as a multi-objective optimization task, which is motivated by the strong performance of self-training on the target domain, i.e. minimizing target domain loss on pseudo-labeled data. Furthermore, to promote inherent consensus between domains and encourage model flatness, we propose to align their gradients and penalize their gradnorms with an efficient gradient update procedure, supported by a backup generalization error bound.
Inspired by the reviewers' comments, we have enhanced our manuscript by including additional experimental results, responses, and clarifications, which are briefly summarized as follows:
- Results on S2RDA datasets (reviewer YFTB), ViT backbones (reviewer aPJr), and label-shift setting (reviewer JYym)
- Analysis on hyper-parameter sensitivity (reviewers JYym, YFTB) & computational complexity (reviewer JYym)
- Clarification on the final loss function (reviewers YFTB, dhjM)
- Discussion on failure cases (reviewer JYym) and on the difference between our method and gradient-surgery ones (reviewer aPJr).
We also attach a **PDF** file that contains our main additional results, which we will incorporate in the revision. Except for the complexity benchmark, results for all baselines are directly reported from prior work for a fair comparison. We hope that the new results will help address reviewers' concerns.
Here we would like to clarify the formula for final loss functions, and then present detailed responses separately for each reviewer.
**Final loss function**
As we cast UDA as a MOO problem, the ideal final objectives, in the case of single UDA, would be $[L_S^{PGA}(P), L_T^{PGA}(P)]$, where:
$L_T^{PGA}(P) := L_T(P_{sh} - \rho_{ga}\frac{g_{sh,S}}{||g_{sh,S}||.||g_{sh,T}||} + \rho_{gn}\frac{g_{sh,T}}{||g_{sh,T}||}, P_T + \rho_{gn}\frac{g_T}{||g_T||}),$
$L_S^{PGA}(P) := L_{S}(P_{sh} - \rho_{ga}\frac{g_{sh,T}}{||g_{sh,S}||.||g_{sh,T}||} + \rho_{gn}\frac{g_{sh,S}}{||g_{sh,S}||}, P_S + \rho_{gn}\frac{g_S}{||g_S||}).$
As mentioned in the paper (e.g. line 58-59, 172-174), we use scalarization method [30], i.e. simply reweighting loss functions with $\lambda$ put on the PGA source objective. As a result, the gradient updates for prompts are
$g_{sh,T}^{PGA},g_T^{PGA} = \nabla_P L_T^{PGA}(P), \quad g_{sh,S}^{PGA}, g_S^{PGA} = \nabla_P L_S^{PGA}(P),$
$P_S = P_S-\eta g ^{PGA}_S, P_T = P_T-\eta g^{PGA}_T,$
$P_{sh} = P_{sh} - \eta (g_{sh,T}^{PGA}+\lambda g_{sh,S}^{PGA}).$
However, computing these PGA gradients will trigger the computation of Hessian matrix. Hence, we implicitly calculate them via a practical algorithm that is shown in Eq.(19, 20) in the main text.
Pdf: /pdf/334567e2e17e08296c7e47cea90d3dbcf8573e58.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Bayesian-guided Label Mapping for Visual Reprogramming | Accept (oral) | Summary: This paper focuses on a technique called label mapping (LM) which is used in visual reprogramming. It finds the relation between pre-trained labels and downstream labels. In conventional fine-tuning, LM is just the backpropogation using the loss function. However, this paper and previous similar papers aim to develop a gradient-free way to find the weights in the last layer in the conventional fine-tuning. The whole paper is based on Bayes' rules (using it on the output of a pre-trained model, which is interesting), which is solid and theoretically justified.
Strengths: 1. Good paper flow. The flow of this paper is good and appreciated. Directly analyzing the loss using the Bayes' rule is clear.
2. Extensive experiments. Except for the good performance, the authors also try to understand the good performance, which is interesting. Especially on the visualization of top relationship between pre-trained labels and downstream labels. The result is convincing, as the selected top pre-trained labels are indeed related to downstream tasks.
3. Timing topic. I can see some applications which might need this kind of techniques. Keeping the integrity of a pre-trained model has many advantages, e.g., not lose the generalization of the pre-trained model.
4. Solid analysis. Two theoretical contributions are included in this paper. One is how to embed a pre-trained model into p(y^T|x^T), another one is how BLM has a higher accuracy.
Weaknesses: 1. Benchmark datasets follow the previous papers, which is good. However, these tasks seem not the most relevant to what you propose in this paper. Are there some scenarios where the fine-tuning the last layer is not feasible?
2. The paragraph between 253-266 is confusing, which can be moved to other section. Appendix perhaps.
3. One major drawback of the presentation is the algorithm tables. The equations are still not the best way for practitioners. Main algorithm table should be moved in Section 4. After looking at the algorithms, it is easy to find that the proposed method is easy to be implemented. However, I cannot feel that after I read the section 4.
4. How to choose Padding or Watermarking in practice?
5. Experiments to increase n are not necessary. Could the authors explain why it matters? As long as the method is valid, the accuracy will be improved when n increase.
Technical Quality: 4
Clarity: 4
Questions for Authors: See the weaknesses above.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: No concerns for this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Thank you for your question. This paper falls in the scope of Visual Reprogramming (VR). Thus, the experimental setting of this paper aligns with those used in previous VR studies, particularly ILM [1], to ensure comparability and consistency in the field. This setting serves as an established benchmark for evaluating VR techniques.
Regarding when fine-tuning the last layer is not feasible, we consider the following cases:
1. copyright and legal constraints: modifying some pretrained models may violate licensing agreements or intellectual property rights.
2. prevent catastrophic forgetting: keeping the pretrained model intact helps to maintain its general knowledge across tasks
3. black-box optimization: in addition, our method is useful when only the predicted logits of input are available. It does not need access to the model's internal parameters which fine-tuning clearly struggles to handle.
**W2:** Thank you for your question. Due to space limitations, lines 253-266 of the original text are not fully described. We will move this section to the appendix in the next version and include detailed information.
**W3:** We appreciate your feedback regarding the presentation of Sec. 4 and algorithm tables. We acknowledge your point about the placement of the algorithm tables. Due to space constraints, we had initially placed them in the Appendix (lines 455 and 462). However, we recognize the importance of making these algorithms more readily accessible to readers. We plan to move the main algorithm table to Sec. 4 as you suggested in the final version.
We understand your concern regarding the equations in Sec. 4. While the mathematical formulations are important for a rigorous presentation, we will smooth out the section with more practical insights. We aim to strike a balance between mathematical precision and practical applicability.
**W4:** Thank you for your question. The use of Padding or Watermarking depends on the specific downstream tasks, and we believe the selection may partly depend on the relationship between the pretrained model's input size and the downstream task's image dimensions.
For instance, let’s consider the results of BLM+ using ResNet-18 as the pre-trained model:
| | GTSRB | SVHN | CIFAR10 | CIFAR100 | Flowers102 | EuroSAT | OxfordPets | SUN397 | UCF101 | Food101 | DTD |
| -------- | -------- | -------- | -------- | -------- | -------- | ------- | -------- | -------- | -------- | -------- | -------- |
| Image Size | 32 | 32 | 32 | 32 | 128 | 128 | 128 | 128 | 128 | 128 | 128 |
| Padding | 54.3 | 74.2 | 66.8 | 30.6 | 50.1 | 86.7 | 70.6 | 18.7 | 32.0 | 25.1 | 43.9 |
| Watermarking | 82.0 | 78.8 | 75.7 | 41.6 | 44.1 | 84.8 | 73.3 | 19.4 | 35.4 | 22.9 | 43.0 |
Our observations:
- watermarking performs better when there is a significant size disparity: for tasks with much smaller images (32x32, e.g., GTSRB, SVHN, CIFAR10, CIFAR100) compared to the pre-trained model's input (224x224), watermarking often outperforms padding. It prevents introducing too many parameters around the image and significantly downscaling it.
- padding may perform better with larger downstream images in some cases: for tasks using larger images (128x128, e.g., Flowers102, EuroSAT), padding tends to perform better. It maintains the original image integrity by avoiding pixel value alterations.
In general, the choice impacts how we adapt pretrained model to new tasks. Therefore, the goal for input VR is to maximize the transfer of learned features while accommodating the new task's visual characteristics - the optimal choice may depend on factors beyond just image size and deserves future explorations.
**W5:** Thank you! Indeed, the accuracy typically improves with larger training sets. Our focus on smaller $n$ values serves as a purpose in evaluating the robustness of our proposed methods, BLM/BLM+, compared to the baseline ILM under limited data availability conditions. Our rationale for this evaluation is:
1. Practical implications: VR was proposed for adapting pretrained model to data-limited tasks. In this sense, obtaining large-scale labeled downstream task data can be challenging. By demonstrating robust performance with smaller $n$ (25%, 50%, and 75% of the original size), we highlight the practical advantages of BLM/BLM+ in data-limited scenarios, which better aligns with real-world applications.
2. Overfitting risk: Smaller training sets inherently carry a higher risk of overfitting. By testing our methods under these conditions, we can better assess their generalization capabilities compared to the baseline ILM.
Thus, these experiments were conducted to validate the effectiveness and reliability of BLM/BLM+ across various data regimes. While increasing $n$ may not be that meaningful, our evaluation aims to demonstrate that BLM/BLM+ maintains competitive performance even with limited data, thus providing a more comprehensive evaluation of the practical utility.
[1] Chen et al. Understanding and improving visual prompting: A label-mapping perspective. In CVPR, 2023. | Summary: Visual reprogramming is an interesting way to reuse a pre-trained classifier or an VLM. In previous methods, the way to change the output interface is basically gradient-free and one-on-one mapping. In this paper, the authors found that the previous way is suboptimal and ignores information. Then, from a theoretical perspective, a new objective is derived using Bayes' theorem. By optimizing this objective, the performance of VR methods is significantly improved, which is demonstrated via extensive experiments. A theoretical study is provided as well.
Strengths: 1. The research topic is of significance. VR is useful in practice. This paper understands and extends the label mapping from a mathematical perspective, which steps much further compared to previous methods.
2. The derivation regarding w is could generally cover previous methods (which is probably the first). The drawback of previous methods is manifested based on the derivation.
3. Math expressions are easy to follow.
4. Abundant experiments are provided, including large datasets, large models, and VLM. From the experimental results, we can see the improvement is universal, justifying the general effectiveness of the proposed method.
Weaknesses: 1. In Eq. (1), what is the real gap between right and left terms? It seems not easy to get this gap. I suggest using right one directly, which is easy to be acceptable.
2. Above the Eq. (6), one might misunderstand the way to calculate the frequency. Using () might be better.
3. Figure 2 is confusing. It looks like step 3 follows step 1, rather than step 2 following step 1. What is the actual sequence between step 1~3?
4. In section 4.2, the calculation of accuracy is confusing. $x$ is missing.
5. What does $\bar{r}$ mean in l.262?
6. Training time is potentially a concern. How many training epochs are needed for BLM?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to "Weaknesses".
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No further concerns regarding limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Thanks! The gap between left-hand side (LHS) and right-hand side (RHS) comes from the relationship between Maximum Likelihood Estimation (LHS) and Empirical Risk Minimization (RHS) in statistical learning theory.
- LHS: represents the true objective of VR is to maximize the conditional probability of downstream label given downstream input image.
- RHS: provides a practical and empirical approximation using a finite training set and a chosen loss function.
This formulation aligns with fundamental principles in learning theory:
- We use empirical risk as a proxy for the expected risk.
- A loss function is chosen to approximate the negative log-likelihood.
- This approximation connects to generalization bounds. The gap between the expected and empirical risks can be bounded using techniques like VC-dimension.
- The law of large numbers ensures convergence of empirical to expected risk as the training set size increases.
However, we agree that using RHS directly is more straightforward. We plan to incorporate your suggestion in the next version for easier understanding.
**W2:** Thanks for the suggestion, we will include () in the next version.
**W3:** Thank you for your question. In fact, step 3 (applying $\omega$) is performed after step 2 (calculating $\omega$, and the blue dotted line indicates the flow of $\omega$). However, after calculating $\omega$, the logits output from step 1 will also be used to obtain the final prediction result in step 3 (as shown by the purple solid line). We will revise Fig. 2 to more clearly illustrate this process in the next version of the paper.
**W4:** Thanks for pointing it out. The omission of $x$ in Eq. 10 was intentional because:
- Our analysis in Sec 4.2 (and Appendix Sec. C) focuses on output label mapping (LM) and specifically compares different LM methods, provided that the pretrained model $f_{\rm pre}$, input $x$, and input transformations $f_{\rm in}$ are the same across different LM methods (details in Appendix Sec. C.1).
- The expected accuracy calculation is based on conditional probabilities where $x$ is implicitly part of the condition. As all LM methods operate on the same $x$, we can safely omit it, allowing us to concisely highlight the differences of LM themselves.
We will also add a note in the main text to clarify this omission, emphasizing our focus on comparing LM methods and the scope of this analysis.
**W5:** Thank you for your question. We will clarify it in the next version. $y_r^{\rm S}$ means a class that is more relevant to $y_{i}^{\rm T}$ (line 256), while $y_{\bar r}^{\rm S}$ is a class that is less relevant to $y_{i}^{\rm T}$ (line 257). Here we assume two classes, $y_{r}^{\rm S}$ and $y_{\bar r}^{\rm S}$, in the output space of the pre-trained model, which satisfy $p(Y^{\rm T}=y_i^{\rm T}|Y^{\rm S}=y_r^{\rm S}, X^{\rm T})>p(Y^{\rm T}=y_i^{\rm T}|Y^{\rm S}=y_{\bar r}^{\rm S}, X^{\rm T})$.
**W6:** Thank you for your question. Please refer to our reply to **Common Question 1**.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thank you very much for your review. My concerns are addressed, and I have raised my score.
---
Reply to Comment 1.1.1:
Title: Thanks for your support!
Comment: Dear Reviewer zxuh,
Many thanks for your support and increasing your score to 8! We will merge all comments into the updated version.
Best regards,
Authors of Submission7993 | Summary: Another type of transfer learning approach is considered in this paper: model reprogramming. Different from classical transfer learning approaches, model reprogramming only changes models via changing the input space and output space, which is more efficient to fit a pretrained model to some downstream tasks. Specifically, this paper focuses on the output space, label mapping. The motivation is strong as it is clear that single map does not work well. Then, new methods are proposed based on Bayes' theorem. Experiments are enough and solid, in terms of model size or dataset size.
Strengths: 1. Transfer learning is even more important in the current era. This paper focuses on an interesting direction, which is significant to the field.
2. The main result of this paper looks general, compared to previous label mapping methods. It is novel to first consider a pretrained model's output into the P(y|x) and then analyze what we should do.
3. I enjoy reading the experiment section, which is quite comprehensive. I did not see some necessary experiment missing.
Weaknesses: 1. Transfer learning literature is missing in this paper. Although model reprogramming is not considered as a transfer learning method in the literature, it is one of transfer learning techniques. This review can help position this paper better.
2. Can we transfer a pretrained model to any possible task? Are there standards to help choose which model should be used for a specific downstream task?
3. For example, if we can use a model trained with cars to help recognize images of animals? It looks impossible. How to tell when we can use a pretrained model?
4. Computing w in BLM also needs time. Is there efficient advantage of VR anymore? What is the performance of finetuning? The gap should be reported in this paper.
5. Main algorithm table should be moved to the main paper. It is not very easy to implement BLM or BLM+ based on the formula, but the algorithms are very helpful.
6. It would be better to explain meaning of some conditional probabilities.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Transfer learning literature is missing in this paper. Although model reprogramming is not considered as a transfer learning method in the literature, it is one of transfer learning techniques. This review can help position this paper better.
2. Can we transfer a pretrained model to any possible task? Are there standards to help choose which model should be used for a specific downstream task?
3. For example, if we can use a model trained with cars to help recognize images of animals? It looks impossible. How to tell when we can use a pretrained model?
4. Computing w in BLM also needs time. Is there efficient advantage of VR anymore? What is the performance of finetuning? The gap should be reported in this paper.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: No concerns for this paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1/Q1:** Thank you very much for your suggestion. In the next version, we will expand our literature review to include relevant transfer learning concepts, particularly focusing on how Visual Reprogramming relates to and differs from traditional transfer learning methods. To better position this paper, we'll also discuss the spectrum of transfer learning techniques, connect and compare Visual Reprogramming with other parameter fine-tuning methods, highlighting its merits in scenarios where pretrained model preservation is crucial.
**W2/Q2/W3/Q3:** Thanks for the question. To make our reply relevant, we will discuss the transferability in the context of VR.
**Transferability in VR** Drawing on theoretical foundations from [1], the transferability of a pretrained model to a downstream task can be bounded by:
$$\mathcal{L}^{\rm{T}} \leq \mathcal{L}^{\rm{S}} + \mathcal{W}(\mu^\rm{S}, \mu^\rm{T}) + \epsilon$$
where $\mathcal{L}^{\rm{T}}$ and $\mathcal{L}^{\rm{S}}$ denotes error for downstream and pretrained tasks. $\mathcal{W}(\mu^\rm{S}, \mu^\rm{T})$ is the Wasserstein distance between the logit distributions of pretrained input $\mu^\rm{S} = f_{\rm pre} (x^{\rm S})$ and the reprogrammed downstream input $\mu^\rm{T} = (f_{\rm pre} \circ f_{\text{in}}) (x^{\rm T})$, $\epsilon$ is a small constant. We therefore know that the performance on downstream tasks can be related to the pretrained task performance and the alignment between pretrained task and downstream task.
**Model selection** This bound suggests some insights on selecting models:
- Pretrained model with higher capacity may lead to lower $\mathcal{L}^{\rm{S}}$,
- Pretrained feature is relevant to the downstream task; or input VR and the output LM that effectively transform downstream input and output (both potentially contribute to lower $\mathcal{W}(\mu^\rm{S}, \mu^\rm{T})$ distance, indicating better alignment)
For example, a model with large capacity pretrained on general object recognition (e.g., ResNet on ImageNet) may transfer better to another recognition task than a model pretrained on a highly specialized dataset.
**Transferring between dissimilar domains (e.g., car-animal)** The feasibility depends on how is the Wasserstein distance minimized - this is theoretically possible even for seemingly unrelated domains, but the practical difficulty varies because of the challenge of measuring domain similarity and feature relevance.
**When to use a pretrained model** Currently, there lacks theoretical tools to accurately measure the above Wasserstein distance BEFORE training, which makes it difficult to directly tell when and whether a pretrained model can be used. Therefore, we look forward to future work that focuses on effective ad-hoc estimation of such distance and techniques that minimize it through optimized VR and LM methods.
In short, while VR offers a flexible way of transfer learning, the choice of pretrained model and its adaptability to a specific downstream task depends on both model capacity and the ability to align the feature distributions.
**W4/Q4:** Thank you for your questions. For details on training time, please refer to **Common Question 1**. Regarding the fine-tuning results, since we follow the settings outlined in [2], we will directly quote these results from [2] in the next version of our paper.
**W5:** Thank you for your suggestion. Due to space limitations, we have placed the algorithm tables in the appendix (lines 455 and 462). In the next version, we will prioritize incorporating them into the main text to enhance readability and understanding for our readers.
**W6:** Thank you for your suggestion. We have added a detailed explanation in **Common Questions 2** and introduced a simple example to illustrate. We will include this revision in the final version.
[1] Yang et al. Voice2Series: Reprogramming Acoustic Models for Time Series Classification. in ICML 2021.
[2] Chen et al. Understanding and improving visual prompting: A label-mapping perspective. In CVPR, 2023.
---
Rebuttal Comment 1.1:
Comment: My concerns have been addressed. I will support this paper with the positive score.
---
Reply to Comment 1.1.1:
Title: Many thanks for your support!
Comment: Dear Reviewer 885t,
We are glad to hear that your concerns are addressed. Thanks for your support.
Best,
Authors of Submission7993 | Summary: Pretrained models play a crucial role in current machine learning and computer vision tasks, and effectively leveraging them in downstream tasks has become increasingly important. This paper explores the research area of visual reprogramming (VR), which diverges from traditional fine-tuning by adjusting the input space rather than the parameter space of pretrained models. Additionally, VR necessitates establishing a mapping from pretrained labels to downstream labels, which is the primary focus of this paper. The proposed method for mapping pretrained labels to downstream labels is well-motivated and convincing. The experimental results are robust and demonstrate the efficacy of the approach.
Strengths: 1. The experiments are solid and impressive, covering larger models compared to previous works in this field.
2. The motivation is very clear, adhering to basic probability rules.
3. Two methods are proposed based on different estimation approaches for the key component, demonstrating sufficient technical contribution.
4. Some experiments provide insightful explanations on why the mapping learned from the proposed method is effective, which I found particularly interesting.
Weaknesses: 1. Although the motivation is clear, it feels somewhat dry. More detailed explanations are needed for Section 4.1. Specifically, some of the conditional probabilities require further clarification (e.g., Eq. (6)).
2. It is unclear how Section 4.2 can be extended to the multi-class case. The accuracy metric (Acc) used here appears to be suited only for binary cases.
3. While it is commendable that a universal hyperparameter performs well in your experiments, it raises the question of whether these hyperparameters can be tuned using a validation set. How do all methods perform when evaluated on the same validation set?
4. I am not entirely convinced by Figure 4. Why is pineapple selected as well? This needs further explanation.
5. The absence of algorithm tables in the main content is a significant omission and should be addressed. Similar to the first weakness, the current method presentation feels somewhat dry.
6. Is BLM+ without Top-K and Bayes the second column in Table 3? If so, additional descriptions should be included for clarity.
7. What are the fine-tuning results in this context? It is essential to establish a standard to determine when VR is needed. Additionally, what is the total time cost compared to fine-tuning?
Technical Quality: 4
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors have adequately discussed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** Thank you. We have added a detailed explanation in **Common Question 2** and provided a simple example to illustrate the conditional probabilities.
**W2:** Thanks. We want to clarify how our analysis can be extended to multi-class cases.
**Expected Accuracy Definition** The formula (Eq. 10) evaluates the expected probability of correctly mapping each $y^{\rm S} \in \mathcal{Y}^{\rm S}$ to the corresponding $y^{\rm T} \in \mathcal{Y}^{\rm T}$, remaining valid even for multi-class settings. It is agnostic to the number of classes in both $\mathcal{Y}^{\rm S}$ and $\mathcal{Y}^{\rm T}$.
**Mapping Function Revision** Recall the definition of PLM (Definition C.1) and DLM (Definition C.2) are
- PLM: $\mathrm{Acc}(f_{\rm plm}) = \sum_{y^{\rm T} \in \mathcal{Y}^{\rm T}} p(y^{\rm T}) \cdot \sum_{y^{\rm S} \in \mathcal{Y}^{\rm S}} p(y^{\rm S}) \cdot \omega_{y^{\rm S}, y^{\rm T}}$
- DLM: $\mathrm{Acc}(f_{\rm dlm}) = \sum_{y^{\rm T} \in \mathcal{Y}^{\rm T}} p(y^{\rm T}) \cdot \sum_{y^{\rm S} \in \mathcal{Y}^{\rm S}} p(y^{\rm S}) \cdot \delta_{y^{\rm S}, g(y^{\rm S})}$
For multi-class cases, we need to revise the mapping function from $f_{\rm lm}(y^{\rm S}) \in \lbrace y^{\rm T}, 1 - y^{\rm T} \rbrace$ in binary label spaces to align with $\mathcal{Y}^{\rm S} = \lbrace1, 2, ..., k_{\rm S} \rbrace$, $\mathcal{Y}^{\rm T} = \lbrace1, 2, ..., k_{\rm T} \rbrace$. Thus, we need to expand previously used identity/flip mapping rule $g(y^{\rm S})$ to cover a broader range of possible mappings.
**Proof Sketch** PLM can achieve at least the accuracy of the optimal DLM by constructing $\omega_{y^{\rm S}, y^{\rm T}}$ to match the optimal deterministic mapping rule $g^*(y^{\rm S})$, which ensures $\sum_{y^{\rm S} \in \mathcal{Y}^{\rm S}} p(y^{\rm S}) \cdot \omega_{y^{\rm S}, y^{\rm T}} \geq \sum_{y^{\rm S} \in \mathcal{Y}^{\rm S}} p(y^{\rm S}) \cdot \mathbb{I}[g^*(y^{\rm S}) = y^{\rm T}], \forall y^{\rm T} \in \mathcal{Y}^{\rm T}$. Intuitively, this inequality holds, especially when $g^*(y^{\rm S}) \neq y^{\rm T}$, $\mathbb{I}[g^*(y^{\rm S}) = y^{\rm T}]=0$, while $\omega_{y^{\rm S}, y^{\rm T}}$ can be greater than 0 in such cases, showing better flexibility.
Due to the character limit, we leave the complete proof to future work. We hope this clarification addresses your concern.
**W3:** Thank you. In Appendix Sec. E, we have acknowledged that optimal values may vary across datasets (Fig. 8-9). Our initial use of universal hyperparameters was intended to show that BLM/BLM+'s performance gains are not sensitive to hyperparameters.
Following your advice, we quickly run additional experiments using a 70\%/30\% train/validation split of the original training set to find optimal hyperparameters for each dataset, shown as
| | Flowers102 | UCF101 | DTD | OxfordPets | CIFAR10 |
|------------|------------|--------|-------|------------|---------|
| Optimal $\alpha$ | 0.15 | 0.15 | 0.5 | 0.5 | 0.5 |
| Optimal $\lambda$ | 1 | 1 | 1 | 10 | 10 |
| Accuracy (Trained on 70% Samples) | 45.82 | 31.84 | **43.75** | **72.27** | **66.54** |
| Shared $\alpha$ | 0.15 | 0.15 | 0.15 | 0.15 | 0.15 |
| Shared $\lambda$ | 1 | 1 | 1 | 1 | 1 |
| Accuracy (Trained on 70% Samples) | 45.82 | 31.84 | 42.31 | 70.52 | 66.04 |
Observe that dataset-specific tuning indeed yields better performance compared to shared hyperparameters, suggesting that optimal hyperparameters tailored to each dataset are desired. We plan to include these in the revision.
**W4:** Thanks for your question. This observation stems from the statistical nature of our BLM+ algorithm, which computes label mappings based on the co-occurrence of predicted pretrained labels and ground truth downstream labels throughout the VR learning iterations (Fig. 4 visualizes the top-3 predicted pretrained label at each iteration).
In the later stages of VR learning, we observe that for marigold images $X^\mathrm{T}$, the conditional probabilities shifted:
$p(Y^{\rm T}={\tt Marigold}|Y^{\rm S}={\tt Pineapple}, X^{\rm T})>p(Y^{\rm T}={\tt Marigold}|Y^{\rm S}={\tt Teddy}, X^{\rm T})$
$p(Y^{\rm T}={\tt Marigold}|Y^{\rm S}={\tt Pineapple}, X^{\rm T})>p(Y^{\rm T}={\tt Marigold}|Y^{\rm S}={\tt Broccoli}, X^{\rm T})$
This indicates that the feature of marigold images, after processing by input VR and the pretrained model, share more similarities with pineapple than with teddy bears or broccoli. Thus, pineapple replaced these other labels among the top-k predicted pretrained labels.
Visually, this replacement is intuitive as well: The colors of a pineapple — primarily yellow, orange, and gold — are similar to those of Marigold. Teddy bear and Airedale are predominantly brown, while guacamole and broccoli are mainly green. Additionally, the shape of a pineapple, which is typically oval, resembles that of Marigold.
**W5:** Thank you for your suggestion. While space limitations led us to initially place the algorithm tables in the appendix (lines 455 and 462), we recognize the importance of making this information more accessible. In the revision, we will prioritize incorporating them into the main text. We will also work on enriching and smoothing out the method presentation to make it more engaging.
**W6:** Yes, BLM+ without Top-K and Bayes is the same as BLM without Bayes. Thanks for the reminder, we will explain it clearly.
**W7:** Thank you for your questions. As we follow the settings in [1], the fine-tuning results can be directly quoted from [1], and we will include this in the next version of our paper. Regarding training time, please refer to **Common Question 1** for a detailed answer. Regarding the standard to determine when VR is needed, we believe that (1) when issues such as copyright or avoiding catastrophic forgetting exist, the pre-trained model needs to be kept unchanged; (2) when the resources for training downstream tasks are limited, VR can be used.
[1] Chen et al. Understanding and improving visual prompting: A label-mapping perspective. In CVPR, 2023. | Rebuttal 1:
Rebuttal: **Common Question 1:** Concerns about the required number of epochs and training time for BLM/BLM+
**Response 1:** Regarding the number of epochs, we initially used 200 epochs as with the original papers to ensure a fair comparison with the baseline methods. However, during the rebuttal stage, we conducted additional experiments to assess the impact of different epoch numbers (60, 100, 150) on our BLM/BLM+ model, using ResNet-18 as the pretrained model. The results are shown in Table 1.
**Table 1. Epoch Numbers and Testing Accuracy of Different Methods**
| | |BLM| (ours)| | | BLM+ | (ours) | || ILM | FLM |
|---------------------|------|------|------|------|--------------|------|------|------|---|------|------|
| Epochs | 60 | 100 | 150 | 200 | 60 | 100 | 150 | 200 || 200 | 200 |
| Average on 12 Tasks (%) | 44.5 | 45.2 | 45.5 | 45.3 | 45.8 | 46.4 | 46.9 | 46.7 || 40.6 | 37.2 |
We found that running 100 epochs yields results comparable to those achieved with 200 epochs. This demonstrates that BLM and BLM+ require **less convergence time**, highlighting their efficiency.
Training time for one epoch is listed in Table 6 of our paper submission. Additionally, the total training time for one task, in comparison with baseline visual reprogramming methods and fine-tuning methods, is calculated below:
**Table 2. Time Consumption of Different Methods on Flowers102 Dataset (Single A100 GPU)**
| | | VR: Baselines | | VR: Ours | | VR: Ours | | Finetuning Methods | |
|-------------|------------------|:-------------:|:-----:|:-----------------:|:------------------:|:-----------------:|:------------------:|:---------------------:|:----------------:|
| | | FLM | ILM | BLM(200 epochs) | BLM+(200 epochs) | **BLM(100 epochs)** | **BLM+(100 epochs)** | Finetuning Last Layer | Fully Finetuning |
| ResNet-18 | Parameter Number | 0.10M | 0.10M | 0.10M | 0.10M | 0.10M | 0.10M | 0.51M | 11.7M |
| | Training Time (min) | 11.97 | 12.04 | 11.95 | 13.06 | **6.03** | **6.52** | 14.03 | 15.28 |
| ResNeXt-101 | Parameter Number | 0.10M | 0.10M | 0.10M | 0.10M | 0.10M | 0.10M | 2.0M | 88.8M |
| | Training Time (min) | 24.68 | 24.81 | 24.51 | 24.71 | **12.33** | **12.44** | 24.49 | 35.07 |
Combined with the results in the table, we analyze the efficiency of BLM and BLM+ from three perspectives:
(1) **Extra Consumption of Calculating Mapping Matrix $\omega$ Compared with One-to-One Mapping:** Compared to the baseline method ILM, the additional cost for BLM/BLM+ primarily involves the gradient-free multiplication and division within the mapping matrix (which is sized according to the source and target label spaces, 1000 × 102 in this case). This additional cost is minimal, as shown by the 4th-6th columns in Table 2.
(2) **Time Consumption of Updating Mapping Matrix $\omega$ per Epoch:** The baseline method FLM calculates the mapping $\omega$ once and keeps it fixed, while ILM and our methods update $\omega$ at each step. However, updating $\omega$ does not require running the model to obtain current predictions each epoch. Instead, predictions from the most recent epoch can be reused. As a result, there is no noticeable time overhead for updating $\omega$ per epoch, as indicated by the 3rd-6th columns in Table 2.
(3) **Time Consumption Compared with Finetuning Methods:** Since BLM/BLM+ use **fewer parameters** and **converge in fewer epochs**, they are significantly faster than finetuning the last layer or the entire model. This is demonstrated in the 7th-10th columns of Table 2.
---
**Common Question 2:** Detailed Explanations of Conditional Probabilities in Section 4.1
**Response 2:** Eq. (6) and (7) aim to estimate $p(Y^{\rm T} = y^{\rm T}, Y^{\rm S} = y^{\rm S} \mid X^{\rm T})$ and $p(Y^{\rm S} = y^{\rm S} \mid X^{\rm T})$ respectively. Here, $X^{\rm T} \in \mathcal{X}^{\rm T}$ represents a variable from the downstream task input space, while $Y^{\rm T} \in \mathcal{Y}^{\rm T}$ and $Y^{\rm S} \in \mathcal{Y}^{\rm S}$ are variables from the target and source label spaces, respectively.
The conditional probability $p(Y^{\rm T} = y^{\rm T}, Y^{\rm S} = y^{\rm S} \mid X^{\rm T})$ represents the joint distribution of $Y^{\rm T}$ and $Y^{\rm S}$, given the input reprogramming $f_{\rm in}$, the pretrained model $f_{\rm pre}$, and the variable $X^{\rm T}$ of the downstream task. Similarly, $p(Y^{\rm S} = y^{\rm S} \mid X^{\rm T})$ represents the distribution of $Y^{\rm S}$ under these conditions.
For example, consider the following setup:
- $\mathcal{Y}^{\rm T} = \lbrace\tt Cat, \tt Dog\rbrace$,
- $\mathcal{Y}^{\rm S} = \lbrace\tt CockerSpaniel, \tt EnglishSpringer, \tt EgyptianCat\rbrace$,
- Downstream samples are $ \lbrace(x_1, {\tt Dog}), (x_2, {\tt Dog}), (x_3, {\tt Dog}), (x_4, {\tt Cat})\rbrace$.
If the reprogrammed predictions calculated by $f_{\rm pre}(f_{\rm in}(x_i \mid \theta))$ are $\lbrace x_1: {\tt CockerSpaniel}, x_2: {\tt CockerSpaniel}, x_3: {\tt EnglishSpringer}, x_4: {\tt EgyptianCat}\rbrace$, then $p(Y^{\rm T} = y^{\rm T}, Y^{\rm S} = y^{\rm S} \mid X^{\rm T})$ can be estimated as a 2 \* 3 matrix with the following nonzero values:
- $p(Y^{\rm T} = {\tt Dog}, Y^{\rm S} = {\tt CockerSpaniel} \mid X^{\rm T}) = \frac{1}{2}$,
- $p(Y^{\rm T} = {\tt Dog}, Y^{\rm S} = {\tt EnglishSpringer} \mid X^{\rm T}) = \frac{1}{4}$,
- $p(Y^{\rm T} = {\tt Cat}, Y^{\rm S} = {\tt EgyptianCat} \mid X^{\rm T}) = \frac{1}{4}$. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Are Large-scale Soft Labels Necessary for Large-scale Dataset Distillation? | Accept (poster) | Summary: This paper addresses the challenge of large-scale dataset distillation, specifically focusing on reducing the storage requirements for auxiliary soft labels in ImageNet-level condensation. The authors propose Label Pruning for Large-scale Distillation (LPLD), which aims to achieve state-of-the-art performance while significantly reducing the storage needed for soft labels. The main contributions include:
1. Identifying that high within-class similarity in condensed datasets necessitates large-scale soft labels.
2. Introducing class-wise supervision during image synthesis to increase within-class diversity.
3. Demonstrating that simple random pruning of soft labels is effective when data diversity is improved.
4. Achieving SOTA performance with significantly reduced label storage, validated across various networks and datasets.
Strengths: 1. The paper addresses an often overlooked problem in dataset distillation: the large storage requirements for auxiliary data. Their class-wise supervision approach offers a novel solution to this issue.
2. The analysis is thorough, using feature cosine similarity and MMD to demonstrate increased diversity in synthetic data. Experiments across Tiny-ImageNet, ImageNet-1K, and ImageNet-21K show the method's effectiveness on various datasets.
3. The paper is well-organized and written. The authors clearly explain their motivations, provide detailed methodology, and offer comprehensive analysis of their results. Their figures and tables effectively illustrate key points.
4. By significantly reducing storage needs (40x compression of soft labels) while maintaining or improving performance, this work makes dataset distillation more practical for large-scale applications.
Weaknesses: Limited theoretical analysis: While the paper provides comprehensive empirical evidence, a more rigorous theoretical analysis could strengthen it. I know this is a common challenge in dataset distillation research. However, the detailed empirical study presented here lays valuable groundwork for potential future theoretical analyses in this direction, particularly regarding the relationship between class-wise supervision and increased diversity.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How sensitive is the method to the choice of hyperparameters, particularly in the label pruning process? Is there a way to automatically determine the optimal pruning rate?
2. Have you explored the potential of combining your class-wise supervision approach with other dataset distillation techniques? Could this lead to further improvements?
3. How does the performance of your method scale with the number of classes? Is there a point where the benefits of class-wise supervision diminish?
4. Have you investigated the impact of your method on the training time of the distilled dataset compared to previous approaches?
5. In Table 6(c), why the performance on Swin-V2-Tiny is the worst even if the architecture has the largest size (28.4M)?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes. Limitations are provided in Appendix E.5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your questions and feedbacks. We want to address them one by one.
> 1. How sensitive is the method to the choice of hyperparameters, particularly in the label pruning process? Is there a way to automatically determine the optimal pruning rate?
Thank you for bringing up this point. Regarding the label pruning process, we adopt two random processes: **(1) random soft label pruning** and **(2) random resampling for training**, as shown in Figure 5. The pruning ratio is the only hyperparameter introduced in this process.
It's important to note that there is no universally optimal pruning rate in this scenario, as the ideal rate depends on the trade-offs between accuracy and storage. From our observations, datasets with higher Images-Per-Class (IPCs) tend to be more robust to label pruning. This indicates that such datasets can better maintain accuracy even when labels are pruned, offering more flexibility in balancing the trade-offs.
> 2. Have you explored the potential of combining your class-wise supervision approach with other dataset distillation techniques? Could this lead to further improvements?
Thank you for the question. We explored applying our method to CDA, and it shows improvements over the pruning results, making CDA more robust to label pruning, as shown in **Table A**.
**Table A**: Directly applying our method on CDA. ImageNet-1K, IPC50.
| | 0x | 10x | 20x | 40x |
| ----------- | ---- | ---- | ---- | ---- |
| CDA | 53.5 | 50.3 | 46.1 | 38.0 |
| Ours on CDA | 55.2 | 53.4 | 51.0 | 46.0 |
Additionally, we believe that initializing with images from RDED [a] could eventually be beneficial. RDED enhances image diversity through an optimization-free approach, and we can further boost image diversity through class-wise supervision.
> 3. How does the performance of your method scale with the number of classes? Is there a point where the benefits of class-wise supervision diminish?
Thank you for the question. We do not observe a diminishing performance with an increasing number of classes. **Table B** provides the results of the ImageNet-21K-P dataset (10,450 classes). Despite the increasing challenges of the dataset itself, our method outperforms SRe2L and even CDA by a noticeable margin at 0x baseline. Notably, for IPC20 under a 40x pruning ratio, the performance suffers only a 0.9% drop on ImageNet-21K-P.
**Table B**: Performance of different methods on ImageNet-21K-P. The dataset contains 10,450 classes. This is a simplified version of Table 7 from our paper.
| ImageNet-21K-P | SRe2L (1x) | CDA (1x) | Ours (1x) | Ours (40x) |
| -------------- | ---------- | -------- | --------- | ---------- |
| IPC10 | 18.5 | 22.6 | 25.4 | 21.3 |
| IPC20 | 20.5 | 26.4 | 30.3 | 29.4 |
> 4. Have you investigated the impact of your method on the training time of the distilled dataset compared to previous approaches?
Thank you for the insightful question; this is worth exploring. Currently, we have not changed the total training epochs (e.g., 300 epochs for ImageNet-1K), so there is no apparent difference in training the distilled dataset. The only difference is that SRe2L uses the full set of labels while we reuse a subset of the labels. The reuse of labels has almost no impact on the training time.
> 5. In Table 6(c), why the performance on Swin-V2-Tiny is the worst even if the architecture has the largest size (28.4M)?
Table 6(c) presents the results of cross-architecture performance. Specifically, synthetic data are recovered using ResNet18 and evaluated on other networks. Poor performance on Swin-V2-Tiny may be due to its different network structure (i.e., transformer-based).
1. Transformer-based networks may adapt less effectively to CNNs, especially since all the information is obtained from ResNet.
2. Additionally, transformer-based networks are well-known for being data-hungry, and distilled datasets contain even less data. Therefore, Swin-V2-Tiny may exhibit the poorest performance despite having the largest model size.
We note that a similar pattern is observed in RDED [a], as shown in **Table C**.
**Table C**: Cross-architecture performance. Data is obtained from Table 5 of RDED [a].
| Recover: ResNet-18 | Model Size | SRe2L | RDED |
| ------------------ | ---------- | ---------- | ---------- |
| ResNet-18 | 11.7M | 21.7 ± 0.6 | 42.3 ± 0.6 |
| EfficientNet-B0 | 5.3M | 25.2 ± 0.2 | 42.8 ± 0.5 |
| MobileNet-V2 | 3.5M | 19.7 ± 0.1 | 34.4 ± 0.2 |
| Swin-V2-Tiny | 28.4M | 9.6 ± 0.3 | 17.8 ± 0.1 |
I hope the responses address your concerns. Thank you!
[a] Sun, Peng, et al. "On the diversity and realism of distilled dataset: An efficient dataset distillation paradigm." *CVPR, 2024.*
---
Rebuttal Comment 1.1:
Title: After rebuttal
Comment: Thanks for providing the detailed rebuttal. All of my concerns have been properly addressed. In addition, I have also read the other reviews’ comments and corresponding rebuttals. These efforts are appreciated.
After careful consideration, I believe this is a high-quality paper that merits acceptance. The proposed soft label compression technique is valuable and contributes significantly to the field.
I will adjust my score accordingly once the discussions between other reviewers and the authors have concluded.
---
Reply to Comment 1.1.1:
Title: Thank you for your positive feedback
Comment: Thank you sincerely for your positive feedback on our rebuttal. We find it very encouraging that our paper is considered high-quality and that our soft label compression technique is recognized as a significant contribution to the field.
As we are approaching the discussion deadline, we wanted to provide a brief update: one reviewer has already raised their score after reading the rebuttal, while another has not yet responded. Given your intention to adjust the score and your positive assessment, we were wondering if you might consider updating your score at this time. Your support could be crucial in the final evaluation of our paper.
We deeply appreciate your time and expertise throughout this review process. Thank you again for your thoughtful consideration. | Summary: The paper focuses on reducing the size of soft-label storage in large-scale dataset condensation. The authors discussed why the labels are large-scaled and then proposed to prune the labels by increasing the diversity of synthetic images. Extensive experiments are conducted to validate the effectiveness of the proposed method.
Strengths: 1. The paper tackles the problem of large storage space consumed by soft-labels, and discover that this is attributed to the lack of image diversity within classes.
2. Extensive experiments demonstrate the effectiveness of the proposed method, achieving comparable or even better performance than previous methods with reduced label storage.
Weaknesses: The motivation of the work is well-grounded. I don’t observe major drawbacks but do have some minor concerns towards its technical contributions.
1. The proposed method seems to be a class-wise version of SRe^2L, with a soft label-reuse mechanism during training. Thus the technical contributions seem to be weak.
2. How is Equation (8) used in the proposed method? Or is it just a theoretical result?
3. I have doubts on the computational cost (memory and time) of the proposed method (not storage), doing class-wise optimization may increase the distillation time, especially on ImageNet.
4. How is the random label pruning done? E.g. Is the same number of soft-labels removed for each epoch?
5. Proposition 2 is quite confusing, is “higher MMD” a typo?
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see Weaknesses.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes, the limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for bring up all these values feedbacks.
> 1. The proposed method seems to be a class-wise version of SRe^2L, with a soft label-reuse mechanism during training. Thus the technical contributions seem to be weak.
Thank you for your important question.
Our approach is not merely a variant of existing methods but a comprehensive redesign that addresses key limitations. We'd like to elaborate on our key contributions and the novelty of our work:
1. **Novel Motivation and Insight**. We recognized the critical importance of image diversity in large-scale distillation tasks. This insight led us to rethink the fundamental approach to distillation tasks.
2. **Unified Framework for Diversity and Pruning**. We uniquely combine image diversity enhancement and label pruning synergistically.
3. **Simplicity**. The strength of our approach lies in its simplicity. We demonstrate that **random** **pruning** within our class-wise framework is sufficient, eliminating the need for complex pruning metrics.
4. **Effectiveness**. Our method enhances performance with full labels. Additionally, our method can maintain comparable results at a 40x compression ratio, demonstrating its robustness across various data scenarios. This shows that rethinking the problem from a class-wise angle can lead to significant improvements.
> 2. How is Equation (8) used in the proposed method? Or is it just a theoretical result?
Thanks for your thoughtful question. Our experiments are grounded in a careful analysis of the number of updates required for stable BN statistics.
First, for Equation 8:
$$n \geq \max \left(\frac{-2 \ln (T)}{\delta^2 \min \left(p_c\right)}, \frac{\ln (T)}{(1-\delta) \ln (1-\varepsilon) \cdot \min \left(p_c\right)}\right)$$
Let’s substitute the values and compute the two parts one by one,
1. $T=0.05$ (95% confidence)
2. $\delta=0.2$ (moderate sigma to handling class imbalance)
3. $\epsilon=0.1$ (default value in BN)
4. $p_c=(732/1,281,167)= 0.0005711$ = least number of images in a class / total images
$$n \geq \max \left( \frac{-2 \times \ln(0.05)}{0.2^2 \times 0.0005711}, \frac{\ln(0.05)}{(1 - 0.2) \ln(1 - 0.1) \cdot 0.0005711}\right) = \max (262234, 62234) = 262,234$$
Next, let us elaborate on how we use the theoretical result to guide the experiment design:
1. $n \geq 262,234$ means that the theoretical number of updates needed for stable BN statistics is $262,234$.
2. In ResNet training on ImageNet-1K, the standard setting [a] uses a batch size of 256 and trains for 90 epochs. The total number of updates is $(1,281,167/256) \times 90 = 450,411$. This significantly exceeds our theoretical requirement of 262,234 updates.
3. This observation gave us a key insight: pretrained models have already undergone sufficient updates to achieve stable BN statistics. Therefore, we can recompute class-wise BN statistics using a pretrained model for only **one epoch** (5,005 updates).
[a] He, Kaiming, et al. "Deep residual learning for image recognition." *CVPR*. 2016.
> 3. I have doubts on the computational cost (memory and time) of the proposed method (not storage), doing class-wise optimization may increase the distillation time, especially on ImageNet.
Thank you for bringing up your concern. We acknowledge that our method can indeed increase computational costs in some cases. However, the impact varies depending on the Images-Per-Class (IPC) setting. Based on our experiments in **Table A**, we have:
- **IPC 50**: 19.95% slower than SRe2L.
- **IPC 100**: 7.28% slower than SRe2L.
- **IPC 200**: 4.83% **faster** than SRe2L.
On average, the computational time increase is 7.47%. This suggests that while there is indeed a computational cost increase in some scenarios, the impact is not uniform and our method can even be more efficient in certain configurations.
We believe this trade-off is exceptionally favorable: a minimal **7.47%** increase in processing time unlocks a dramatic **40x** reduction in data volume. Such powerful compression capability far outweighs the marginal computational cost, making our approach particularly valuable for large-scale applications where data efficiency is paramount.
**Table** **A**: Real computation cost of SRe2L and our method on ImageNet-1K. The time spent on inner loops is averaged from 3 loops. The optimization process contains two loops: the outer loop and the inner loop. Total time = (Time for Inner Loop * Outer Loop Numbers).
| | IPC 50 | IPC 100 | IPC 200 |
|---|---|---|---|
| SRe2L - Breakdown | 37m5s $\times$ 50 | 37m5s $\times$ 100 | 37m5s $\times$ 200 |
| SRe2L - Total | 30h55m | 61h49m | 123h37m |
| Ours - Breakdown | 2m13s$\times$ 100 | 3m59s $\times$ 1000 | 7m4s $\times$ 1000 |
| Ours - Total | 37h5m | 66h19m | 117h39m |
> 4. How is the random label pruning done? E.g. Is the same number of soft-labels removed for each epoch?
Thank you for this important question.
Yes, the number of soft labels removed for each epoch is the **same**. The detailed process for random label pruning is shown in Figure 5. There are two different levels for random pruning:
1. First-level random pruning for the soft label pool: **9 samples** from **12 samples**.
2. Second-level random pruning for training: **3 samples** from **9 samples**.
For first-level random pruning, we introduce two different schemes:
1. Epoch-level random pruning: if 1st sample is removed, 2nd 3rd must be removed.
2. Batch-level random pruning: even if 1st is removed, 2nd 3rd can be randomly removed or kept.
In our paper, for first-level random pruning, we stick to batch-level random pruning as it introduces more randomness. For second-level random pruning, we use normal randomness.
> 5. Proposition 2 is quite confusing, is “higher MMD” a typo?
Thank you so much for bringing out this typo. We have corrected this error in the original manuscript. Also, we have conducted a thorough review of the entire document to ensure no similar errors exist.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for the detailed response and my concerns are addressed. I now vote for acceptance of this paper. However, I strongly recommend the authors to include the discussions on time cost and detailed pruning process into the main paper/appendix. These will be insightful for the community.
---
Reply to Comment 1.1.1:
Title: Thank you for the acceptance
Comment: We are deeply grateful for your acceptance of our paper and truly appreciate your insightful feedback. Your comment that our discussion "will be insightful for the community" is particularly encouraging to us.
We are committed to improving our manuscript based on your thoughtful suggestions. | Summary: This paper discovers that the conventional method generates images with high similarity. To solve this, the authors introduce class-wise supervision during the image-synthesizing process by batching the samples within classes. Thanks to the increase in diversity, the soft labels can be pruned to reduce the storage size. Extensive experiments are performed on ImageNet with different compression ratios of labels to validate the effectiveness of the method.
Strengths: 1. Improving the diversity in condensed images is crucial for obtaining high performance.
2. Label pruning is necessary to reduce the storage size for soft labels.
Weaknesses: 1. There are confusing sentences. The authors mention that ”The high similarity of images within the same class requires extensive data augmentation to provide different supervision” and then “To address this issue, we propose Label Pruning for Large-scale Distillation”. How does label pruning address the issue? I see that increasing the diversity of synthetic samples does improve the performance and label pruning is proposed to reduce the storage size. These are not highly related.
2. The authors should compare the performance of the proposed Label pruning with other compression methods (e.g., Marginal Smoothing/Re-Norm with Top-K, using different K when targeting at 10x/20x…) proposed in FKD. Without comparing these methods, it is unclear whether the proposed approach is better than previous pruning methods.
FKD: A Fast Knowledge Distillation Framework for Visual Recognition, Zhiqiang Shen et al
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for raising these questions and concerns. We want to address them one by one.
> 1. There are confusing sentences. The authors mention that ”The high similarity of images within the same class requires extensive data augmentation to provide different supervision” and then “To address this issue, we propose Label Pruning for Large-scale Distillation”. How does label pruning address the issue? I see that increasing the diversity of synthetic samples does improve the performance and label pruning is proposed to reduce the storage size. These are not highly related.
Thank you for your insightful question.
```
Previous: High Similarity -> More Augmentation -> More Label with
High Similarity
Naive Pruning: High Similarity -> More Augmentation -> More Label with ------> Fewer Label with
High Similarity High Similarity
Ours LPLD: Low Similarity -> Less Augmentation -> Fewer Label with
Low Similarity
```
We draw the above diagram to clarify the relationship between image similarity and our proposed Label Pruning for Large-scale Distillation (LPLD) method.
First, it's important to distinguish between naive pruning and our LPLD pruning:
1. **Naive pruning**: This method directly prunes soft labels generated by previous approaches like SRe2L and CDA, starting with **high-similarity** images. It does not address the issue of image diversity.
2. **Our LPLD** pruning: We begin with **low-similarity** images and then apply label pruning. This two-step process addresses both image diversity and storage efficiency.
When the reviewer mentions the lack of **direct relation** between pruning and image similarity, we believe this observation is most applicable to naive pruning methods, not our LPLD. Our approach, however, takes a fundamentally different direction:
1. We first enhance image diversity by creating **low-similarity** images. This step creates a prerequisite for effective **pruning**.
2. We then apply label pruning to this diverse set of images, which allows for more effective compression while maintaining performance.
To illustrate the effectiveness of **connecting image similarity and pruning**, we conducted a comparison between naive pruning (**NOT related** to similarity) and our LPLD (**related** to similarity) for ImageNet-1K at IPC50, as shown in **Table A**. The results demonstrate that our method outperforms naive pruning, especially at higher compression rates.
**Table A**: Comparison between naive pruning on previous methods and our LPLD pruning for ImageNet-1K at IPC50.
| Compression Rate | 0x | 10x | 20x | 30x | 40x |
|---|---|---|---|---|---|
| SRe2L | 41.10% | 40.30% | 39.00% | 34.60% | 29.80% |
| CDA | 48.70% | 45.00% | 41.20% | 35.80% | 30.90% |
| Ours | **48.80%** | **46.70%** | **44.30%** | **40.20**% | **38.40%** |
In summary, our LPLD method addresses the issue of high image similarity within classes by creating diverse images first and then applying label pruning. This approach improves both the diversity of synthetic samples and reduces storage size, addressing both concerns at the same time.
> 2. The authors should compare the performance of the proposed Label pruning with other compression methods (e.g., Marginal Smoothing/Re-Norm with Top-K, using different K when targeting at 10x/20x…) proposed in FKD. Without comparing these methods, it is unclear whether the proposed approach is better than previous pruning methods.
Thank you for your question.
Although FKD's approach is orthogonal to our method (reasons for the orthogonality are listed in another response comment), we compared different target components in **Table B** and conducted a comparative analysis presented in **Table C**. Table C follows the definition of the components in Table B. Please note that FKD only compresses component 6, with the compression rate related to hyper-parameter $K$. Components 1-5 remain uncompressed (1x rate). We achieve better performance:
1. Higher Accuracy at Comparable Compression Rates: For IPC10, our method achieves 32.70% accuracy at 10x compression, while FKD only reaches 18.10% at 8.2x compression.
2. Better Compression at Similar Accuracy Levels: On IPC10, our method attains 20.20% accuracy at 40x compression, whereas FKD achieves 19.04% at just 4.5x compression.
We hope this response addresses your concern.
**Table B**: Different target components between FKD and ours. FKD, originally for **model** distillation, requires storage only for components 1, 2, 6. Adapting it to **dataset** distillation requires additional storage for components 3, 4, 5.
| | FKD | Ours |
|---|---|---|
| 1. coordinates of crops | ❌ | ✔ |
| 2. flip status | ❌ | ✔ |
| 3. index of cutmix images | ❌ | ✔ |
| 4. strength of cutmix | ❌ | ✔ |
| 5. coordinates of cutmix bounding box | ❌ | ✔ |
| 6. prediction logits | ✔ | ✔ |
**Table C**: Comparison between FKD's two label quantization strategies (Marginal Smoothing and Marginal Re-Norm) and ours. FKD’s quantized logits store both values and indices, so their actual storage is doubled, and their compression rate is halved.
| Method | Compression rate of component 1-5 | Compression rate of component 6 | Full compression rate | Accuracy (%) on IPC10 |
|---|---|---|---|---|
| Baseline (no compression) | 1x | 1x | **1x** | 34.60 |
| FKD (Smoothing, K=100) | 1x | (10/2)=5x | **4.5x** | 18.70 |
| FKD (Smoothing, K=50) | 1x | (20/2)=10x | **8.2x** | 15.53 |
| FKD (Smoothing, K=10) | 1x | (100/2)=50x | **23.0x** | 9.20 |
| FKD (Re-Norm, K=100) | 1x | (10/2)=5x | **4.5x** | 19.04 |
| FKD (Re-Norm, K=50) | 1x | (20/2)=10x | **8.2x** | 18.10 |
| FKD (Re-Norm, K=10) | 1x | (100/2)=50x | **23.0x** | 15.52 |
| Ours (10x) | 10x | 10x | **10x** | **32.70** |
| Ours (20x) | 20x | 20x | **20x** | 28.60 |
| Ours (40x) | 40x | 40x | **40x** | 20.20 |
---
Rebuttal 2:
Title: The reasons why label quantization in FKD is orthogonal to our method
Comment: Thank you for bringing up the questions related to FKD.
We would like to emphasize that label quantization mentioned in FKD is orthogonal to our method for the following reasons:
1. **We consider more** **components** **as shown in Table B.** There are **six** **components** related to soft labels. FKD only compresses the prediction logits (component 6), while our method addresses all six components.
2. **We have different compression targets, as shown in Table D**: Even for the overlapping storage component (component 6: prediction logits), our compression target differs from FKD's. The total stored prediction logits can be approximated by the `number_of_condensed_images × number_of_augmentations × dimension_of_logits`.
- FKD's **Label Quantization** focuses on compressing the `dimension_of_logits`.
- Our **Label Pruning** method focuses on compressing the `number_of_augmentations`.
**Table** **B**: Different storage components between FKD and ours. FKD, originally for **model** distillation, requires storage only for components 1, 2, 6. Adapting it to **dataset** distillation requires additional storage for components 3, 4, 5.
| | FKD | Ours |
|---|---|---|
| 1. coordinates of crops | ❌ | ✔ |
| 2. flip status | ❌ | ✔ |
| 3. index of cutmix images | ❌ | ✔ |
| 4. strength of cutmix | ❌ | ✔ |
| 5. coordinates of cutmix bounding box | ❌ | ✔ |
| 6. prediction logits | ✔ | ✔ |
**Table D:** Breakdown explanation for component 6 (prediction logits) storage between FKD’s label quantization and our label pruning. The number of condensed images is computed by `N = IPC x number_of_classes`. FKD's compression target is `dimension_of_logits`, while our target is `number_of_augmentations`.
| | Number of condensed images | Number of augmentations per image | Dimension of logits per augmentation | Total storage for prediction logits |
| ------------------------- | ------------------------ | ------------------------------- | ------------------------------- | ----------------------------------- |
| Baseline (no compression) | N | 300 | 1,000 | N $\times$ 300 $\times$ 1000 |
| Label Quantization (FKD) | N | 300 | 10 | N $\times$ 300 $\times$ 10 |
| Label Pruning (Ours) | N | 3 | 1,000 | N $\times$ 3 $\times$ 1000 |
---
Rebuttal Comment 2.1:
Title: Grateful for any feedback
Comment: Dear Reviewer S5vb,
We greatly appreciate the time and effort you've invested in reviewing our work. As we are now two days away from the discussion deadline, we wanted to reach out regarding our rebuttal, which we submitted earlier.
If possible, we would be grateful for any feedback you might be able to provide, as this would give us the opportunity to engage in a productive discussion before the deadline. We're eager to address any remaining questions or concerns you may have about our paper.
We understand that you likely have a busy schedule, and we truly appreciate your valuable insights and expertise in this process.
Thank you for your time and consideration.
---
Reply to Comment 2.1.1:
Title: Grateful for any feedback
Comment: Dear Reviewer S5vb,
I apologize for reaching out again after my previous message. With the discussion deadline now just one day away, I wanted to very gently follow up on our earlier correspondence regarding our paper and rebuttal.
We completely understand if you haven't had the opportunity to review our rebuttal yet, given the many demands on your time. However, if you have had the chance to look it over, we would be immensely grateful for any feedback you could provide. Your insights, no matter how concise, would be valuable in helping us engage in a constructive discussion before the deadline closes.
We are sincerely thankful for your continued involvement in this process, and we respect your time and professional commitments. Thank you for your understanding and consideration. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Data Distribution Valuation | Accept (poster) | Summary: This paper starts by highlighting the importance of accurately assessing the value of data distributions, especially in the growing data economy. To address the problem, this paper introduces a data distribution valuation method based on Maximum Mean Discrepancy (MMD) for comparing the value of data distributions from samples. This paper assumes that each vendor $i$ holds a distribution $P_i$ which follows a Huber model, and is a mixture of ground truth distribution $P^*$ and arbitrary distribution $Q_i$. Based on this assumption, the authors discuss the theoretical foundations and assumptions, providing detailed proofs and derivations for the proposed methods. The study addresses heterogeneity in data distributions and the challenges of combining multiple data vendors' datasets. Experimental results demonstrate the sample efficiency and effectiveness of the proposed methods in ranking data distributions.
Strengths: 1. This paper provides a detailed explanation of the theoretical foundations for the valuation of data distributions, including assumptions and proofs, ensuring the rigor and completeness of the theory.
2. This paper studies a meaningful problem: how to compare the values of data distributions from their samples, which can help evaluate the value of data provided by different vendors.
3. This paper is well-organized and easy to follow. The paper introduces a novel method based on maximum mean discrepancy (MMD) for data distribution valuation, offering a fresh perspective in the field.
Weaknesses: 1. This paper relies on certain assumptions, such as the Huber model of data heterogeneity, which may not always hold in real-world scenarios.
2. The experiment for ranking data distributions lacks generalizability.
3. Using data samples to represent data distribution can cause issues, such as dealing with malicious data vendors.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In real-world scenarios, data distributions often encompass various complex heterogeneity factors that the Huber model may not accurately capture. Data collected from different sources often exhibit significant variability and may not follow the same distribution.
2. The experiment in the article for ranking data distributions involves mixing two similar datasets, which fails to adequately represent the heterogeneity between datasets provided by different vendors, thus affecting the generalizability of the results.
3. In real-world scenarios, some vendors may provide data samples that do not accurately reflect their true data distribution. Some malicious vendors might even forge samples (for example, using some real data as samples while the rest of dataset is synthetic or useless data). The article does not take this into account.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Sj3P for reviewing our paper, and appreciating the theoretical rigor of our work, the meaningfulness of our studied problem and the novelty and fresh perspective of our method.
We would like to address the comments and feedback as follows.
> This paper relies on certain assumptions, such as the Huber model of data heterogeneity, which may not always hold in real-world scenarios. In real-world scenarios, data distributions often encompass various complex heterogeneity factors that the Huber model may not accurately capture. Data collected from different sources often exhibit significant variability and may not follow the same distribution.
We acknowledge that the assumption on the Huber model presents a theoretical limitation (line 404), and we have investigated settings where the Huber model is _not_ satisfied (i.e., additive Gaussian noise, and a class imbalance setting) where our method continues to perform well, and this is mentioned in Section 3 (lines 138-139) and Section 6 (lines 405-406). These results are deferred to the Appendix D.3.3 due to page constraints. In particular, the class imbalance setting represents the observation that "Data collected from different sources often exhibit significant variability and may not follow the same distribution."
Nevertheless, we wish to point out that this assumption of Huber model is to ensure the theoretical tractability of analysis (e.g., Proposition 1, Equation 2), which has not been explored previously in the sense that prior works have not considered a precise form of data distribution or characterization of data heterogeneity (elaborated in Appendix B.3 in lines 909-930). Additionally, we discuss the considerations of extending beyond the Huber model (lines 1110-1114) and the opportunities for future research.
> The experiment for ranking data distributions lacks generalizability. The experiment in the article for ranking data distributions involves mixing two similar datasets, which fails to adequately represent the heterogeneity between datasets provided by different vendors, thus affecting the generalizability of the results.
In our main paper, the settings for the empirical results follow the Huber model, to verify our theoretical results and method that are based on the Huber model. We acknowledge that a single formalization of data heterogeneity cannot generalize to all settings (lines 58-60), so we have also performed experiments under _non-Huber_ settings in Appendix D.3.3. To elaborate, we investigate two specific non-Huber settings for data heterogeneity: (1) with additive Gaussian noise and (2) class imbalance setting. For (1), the datasets are "perturbed" with additive Gaussian noise in the features. For (2), different data vendors are restricted to observe different supports of the full data distribution, which is a commonly adopted setting for data heterogeneity in distributed machine learning [68]. We also consider a continuous interpolation of (2). In these settings, our method continues to perform well, as shown in Tables 8,9 & 10 in Appendix D.3.3. Nevertheless, we acknowledge that these settings do not cover all cases of data heterogeneity in practice; extending the method and results to more settings is an important direction for future work.
> Using data samples to represent data distribution can cause issues, such as dealing with malicious data vendors. In real-world scenarios, some vendors may provide data samples that do not accurately reflect their true data distribution. Some malicious vendors might even forge samples (for example, using some real data as samples while the rest of dataset is synthetic or useless data). The article does not take this into account.
Indeed the consideration of incentivizing the data vendors to truthfully report their samples is of great practical importance, and it has difficulties especially in our setting which does _not_ assume to have access to a ground truth reference distribution. Formally, the notion of _incentive compatibility_ of a valuation metric can ensure that the vendors perform truthful reporting. In other words, it deters malicious vendors or discourages vendors from reporting forged samples. In our setting of data distribution valuation, if the valuation is accurate (e.g., with access to the ground truth reference distribution such as in Section 4.1), it can be shown to satisfy incentive compatibility; and if the valuation has a bounded error (e.g., without the ground truth reference distribution, but with some approximated reference distribution such as in Section 4.2), then the valuation can be said to satisfy an approximate (i.e., weaker) version of incentive compatibility. We make these precise in Appendix A.4.3 (lines 764-795), and provide preliminary empirical results in Appendix D.3.4. In our empirical investigation, the data vendor who mis-reports the samples receives a significantly lower value (Figures 6,7,8 & 9 in Appendix D.3.4). These results demonstrate that our proposed method can mitigate the issue of malicious data vendors to some extent. This discussion and the results are deferred to appendix due to page constraints and that these are not the primary focus of our work. We point out that our discussion and results are preliminary in this very important direction, which may require a separate and in-depth treatment, and hope that our preliminary results can serve as starting points for future research.
We wish to thank Reviewer Sj3P for the positive feedback and questions. We hope our response has clarified your questions and helped improve your opinion of our work. | Summary: The valuation of data is crucial in data marketplaces. Instead of assessing the value of a specific dataset, this paper focuses on the valuation of data distribution behind the dataset itself. For example, several vendors are trying to sell different or even the same datasets, what is the best distribution to purchase when we only observe a sampled dataset? The authors model the problem of data distribution valuation and use the MMD-based method as a metric to evaluate the valuation. They provide theoretically guaranteed policies for buyers to take action and empirically demonstrate it using real-world datasets. The results indicate that the method is sample-efficient and outperforms other valuation metrics.
Strengths: * Originality: This paper is interesting as it evaluates the distribution behind a sampled dataset rather than the dataset itself. This approach is new and can be valuable when dealing with partially sampled datasets.
* Quality: The results seem promising and support the use of MMD-based metrics for assessing data distribution.
* Clarity: The writing and formulation are clear and easy to understand.
* Significance: In data marketplaces, most data is only available for preview and is often sampled. Understanding the value of data distribution is beneficial for the field and users.
Weaknesses: 1. The paper's motivation is interesting and contributes to the field. However, I believe there is a missing experiment regarding ranking different error levels of distributions. Suppose we have five distributions ranging from 100% correct to 0% correct. Can the valuation score accurately rank these distributions or reveal their actual error levels? This experiment differs from directly comparing the valuation of the dataset itself. We should see a regression line, where the x-axis is the error level of distribution, while the y-axis is the valuation score.
2. Why do the correlation scores drop when we move the dataset from CIFAR10 to CIFAR100? The description about this is not clear to me. More clarification on this will be helpful.
3. The method of sampling from a distribution to create a dataset can influence its evaluation. Have there been any empirical findings or methods to address sampling bias? This missing experiment can justify the robustness of the valuation function.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. The method used to sample from a distribution to construct a dataset can affect its valuation. Are there any empirical results or methods to overcome sampling bias?
2. How does the accuracy of valuation change when a large number of vendors contribute to the mixed reference distribution? Also, does the valuation score reveal the relative level of two distributions or their absolute values?
3. In Equation 1, what is the reason for giving the value function a negative term instead of taking the reciprocal?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: I didn't see any potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer s86g for reviewing our paper, and for the positive feedback on the novelty of our approach, the quality of our methodology and solution, the clarity of our writing and significance of our work.
We wish to provide the following clarifications. The requested experimental results (and detailed settings) are in `response.pdf` in the global comment.
> ... We should see a regression line, where the x-axis is the error level of distribution, while the y-axis is the valuation score.
Figure 11 in `response.pdf` shows a relatively clear negative correlation.
> Can the valuation score accurately rank these distributions or reveal their actual error levels?"
The high $r^2$ coefficient and the $p$-value (of a fitted linear regression) indicate that the valuation scores correlate with and thus can reveal the error levels. Table 22 in `response.pdf` shows more extensive and quantitative results.
Regarding settings, we follow the reviewer's suggestions as closely as possible:
- We consider up to $1000$ vendors for generalizability.
- The error level is by $d(P_i,P^*)$ where a low (resp. high) MMD means a low (resp. high) error, instead of the percentage-type error levels since it is not precisely defined for a distribution to be 60\% correct. Also, our main paper (e.g., Table 2) already shows a high correlation between valuation score and machine learning accuracy.
- We obtain the value from distributions directly, using discrete distributions and an analytic expression of MMD (from lines 547-575). This is in general not possible due to not knowing the exact analytic pdfs of data distributions (e.g., MNIST).
> Why do the correlation scores drop when we move the dataset from CIFAR10 to CIFAR100? The description about this is not clear to me. More clarification on this will be helpful.
We clarify that the first two columns of Table 2 are _not_ showing results of "moving from CIFAR10 to CIFAR100". In our setting for Table 2, CIFAR10 is designated as $P^*$ and CIFAR100 is designated as $Q$ (lines 311-315, or Table 4 in Appendix D.2). The first (resp. second) column of Table 2 shows the correlation when a clean hold-out validation set $D_{\text{val}}\sim P^*$ is available (resp. unavailable). The results (Tables 2 & 3) show that the effectiveness of a valuation metric decreases without $D_{\text{val}}$. Importantly, Ours (and MMD$^2$) experience a smaller decrease/"drop" and are thus preferrable in practice where $D_{\text{val}}$ is unavailable.
> The method used to sample from a distribution ... can affect its valuation. Are there any empirical results ... to overcome sampling bias?
For a fixed distribution, different sampling methods yield different sampled datasets, and thus different valuations. This is an expected and correct behavior, and in particular, the "negative" sampling bias can be mitigated by our proposed valuation.
A "positive" sampling bias is when a data vendor, using knowlegde of $P^*$, to prioritize the sampling from $P^*$ in their $D_i$. This will result in a higher valuation. It is a correct behavior as $D_i$ contains more information about $P^*$ (i.e., the ground truth). A "negative" sampling bias is when the vendor performs sampling in a way that "moves away" from $P^*$ (e.g., injecting noise), then this will result in a lower valuation, as $D_i$ contains less information about $P^*$.
In practice, the positive bias is rare due to not knowing $P^*$. However, the negative bias is possible, can deteriorate the quality of the data and should be discouraged. This is precisely formalized via the so-called incentive compatibility (from mechanism design), which we investigate w.r.t. our valuation, both theoretically (Appendix A.4.3, lines 764-795) and empirically (Appendix D.3.4, lines 1235-1281). In particular, the empirical results (e.g., Table 6) verify that a vendor who injects such a negative bias by misreporting (noisy data) receives a lower valuation, demonstrating that the negative sampling bias can be mitigated.
> How does the accuracy of valuation change when a large number of vendors ... ?
Table 22 shows that the accuracy of valuation (measured as the Pearson correlation between obtained values and the unknown true values) remains high for up to $1000$ data vendors.
> Also, does the valuation score reveal the relative level of two distributions or their absolute values?
The valuation score reveals a relative level. It can be combined with additional transformations (e.g., translation): Let $d(P_i,P^*)=0.1$, $d(P_j,P^*)=0.2$, so their values are $\Upsilon(P_i) = -0.1$ and $\Upsilon(P_j)=-0.2$. To obtain non-negative values, we can linearly translate by $+1$ gives $\Upsilon(P_i)+1 = -0.1+1 = 0.9$ and $\Upsilon(P_j)+1=-0.2+1=0.8$. Note the absolute value of MMD is bounded by the upper bound $K$ of the kernel $k$ (e.g., $K=1$ for the RBF kernel), so a translation by $+K$ ensures that all values are non-negative. While additional transformations are possible (e.g., positive scaling after translation), the transformed values should be interpreted in the specific use case that motivates the transformation.
> In Equation 1, what is the reason for giving the value function a negative term instead of taking the reciprocal?
Taking the negation has theoretical and practical reasons. Theoretically, it (i) preserves the triangle inequality of MMD (e.g., used in Prop. 1), and (ii) enables the precise characterization of heterogeneity in Equation 2 (lines 191-194). Practically, (i) the uniform convergence of the MMD estimator (Lemma 1 in Appendix A.2) continues to apply (which may not for the reciprocal), making the value easy to estimate, while (ii) taking the reciprocal may lead to complications (e.g., division-by-zero errors).
We thank Reviewer s86g for the comments and questions, and will incorporate the discussion and additional results in our revision. We hope that our response has clarified your questions and helped raise your opinion of our work.
---
Rebuttal Comment 1.1:
Comment: Thanks for conducting additional experiments and clearing my concerns on sampling, more vendors, and different error levels' valuations. Regression lines in your attachment have a negative correlation and enhance the quality of this work. Thanks for clarifying my questions on cifar10 and cifar100. I have increased my rating and support this paper. Thanks for your response!
---
Reply to Comment 1.1.1:
Title: Thank you for the acknowledgement and increasing your score
Comment: We wish to thank Reviewer s86g for acknowledging our rebuttal and increasing the score. We really appreciate your support! | Summary: This paper addresses the problem of data distribution valuation in data markets, where buyers need to evaluate the quality of data distributions to make informed purchasing decisions. The authors formulate the problem and identify three technical challenges: heterogeneity modeling, defining the value of a sampling distribution, and choosing a reference data distribution. They make three key design choices: assuming a Huber model for data heterogeneity, using negative maximum mean discrepancy (MMD) as the value metric, and considering a class of convex mixtures of vendor distributions as the reference. The paper derives an error guarantee and comparison policy for the proposed method and demonstrates its effectiveness on real-world classification and regression tasks. Overall, this work provides a novel framework for data distribution valuation, enabling buyers to make informed decisions in data markets.
Strengths: 1. The paper introduces a novel approach to data valuation by focusing on the value of the underlying data distribution from a small sample. This addresses a gap in existing methods, which typically do not formalize the value of sampling distributions or provide actionable policies for comparing them.
2. The paper employs a Huber model to capture data heterogeneity and utilizes the maximum mean discrepancy (MMD) for evaluating sampling distributions. This combination allows for precise, theoretically grounded comparisons of data distributions and allows for sample efficient assessment of the valuation. Assuming a convex combination of the distribution as the reference they provide an error guarantee without making the common assumption of knowing the reference distribution.
3. The authors validate their method through real-world classification and regression tasks. The demonstrated sample efficiency and effectiveness of their MMD-based valuation method, particularly its superior performance in most classification settings compared to existing metrics highlights the practical relevance and robustness of their approach.
Weaknesses: 1. Based on Theorem 1 the valuation of $D$ boils down to the samples available from it and the averaged out heterogeneity $d(Q_\omega, P^*)$. In practice, a small sample that looks cleaner could be preferred over a bigger but noisy sample. It looks like the current results do not account for individual noise (heterogeneity), I see it in the Huber model but the theoretical results are averaging out as in Observation 1.
2. The sample complexity of $O(\frac{1}{\sqrt{m}})$ to estimate MMD, might even be sufficient for the learning task at hand, so why would vendors be willing to show such a big sample of the dataset and by seeing samples from each vendor won’t one be able to accomplish the learning objective without even selecting a vendor and buying a larger sample from them.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer YGYg for reviewing our paper, and for appreciating the novelty of our approach and acknowledging our theoretical and empirical results.
We would like to respond to the feedback and comments as follows,
> Based on Theorem 1 the valuation of $D$ boils down to the samples available from it and the averaged out heterogeneity $d(Q_\omega, P^*)$. In practice, a small sample that looks cleaner could be preferred over a bigger but noisy sample. It looks like the current results do not account for individual noise (heterogeneity), I see it in the Huber model but the theoretical results are averaging out as in Observation 1.
We point out that our definition (Equation 1) enables a precise characterization of the individual noise (w.r.t. $\epsilon_i, Q_i$) as in Equation 2 and in lines 190-196. Nevertheless, we acknowledge that our method leveraging the "averaging" and Observation 1 is not guaranteed to precisely identify the individual noise (e.g., by directly obtaining $\epsilon_i, Q_i$). We wish to highlight that this may be theoretically difficult to do (i.e., precisely and quantitatively identifying the individual noise), because we do _not_ make the assumption of knowing the reference distribution (as recognized by the reviewer). Intuitively, without knowing what the ground truth is (i.e., the reference distribution), one cannot reliably identify the individual noise, because it is difficult to distinguish noise from signal. We provide a more technical elaboration on this perspective and highlight the theoretical difficulties in Appendix C (see Q6 in lines 1054-1087) with corroborating observations and results from other fields (i.e., robust statistics and mechanism design). Informally, our discussion implies that additional assumptions are needed in order to identify, account for or manage individual noise precisely and quantitatively, so exploring the exact form of such assumptions and the methods to do so is an important future direction (in Appendix B.3 in lines 951-977 and also in Appendix C in lines 1084-1087).
> The sample complexity of $\mathcal{O}(1/\sqrt{m})$ to estimate MMD, might even be sufficient for the learning task at hand, so why would vendors be willing to show such a big sample of the dataset and by seeing samples from each vendor won’t one be able to accomplish the learning objective without even selecting a vendor and buying a larger sample from them.
The vendors may _not_ need to show a big sample of the dataset. As an example, the actual data need not be disclosed (in plaintext) by utilizing cryptographic approaches, such as computing MMD in a trusted execution environment (TEE). This provides a verifiable computed MMD _without_ disclosing the data. To expand on this, the vendors may each provide a relatively small public preview sample (e.g., a few hundred data points) and the interested buyers can request a "quotation" computed from a much larger sample dataset from the vendor in a cryptographically secure way: (i) buyer does not see the dataset; (ii) buyer can verify that the computation is correct. We note that our work aims to provide the theoretical foundations for an MMD-based data valuation, so the design for such cryptographic computations is beyond the scope of our current work.
To a prospective buyer, the computed MMD (e.g., in a TEE) with the sample size, can provide useful information to gauge the error margin for estimating the value of the data, using the sample complexity results as a guideline. Concretely, suppose that vendor $i$'s value is estimated to be $0.8$ with an error margin of $0.2$ while vendor $j$'s value is estimated to be $0.7$ with an error margin of $0.05$ because vendor $j$ supplied a larger sample. A risk-averse buyer (who cares about the worst-case scenario) may pick vendor $j$ over $i$ since the lower bound of the value for $j$ is $0.7-0.05=0.65$, greater than that ($0.8-0.2=0.6$) for vendor $i$.
That being said, in many operational data marketplaces and suppliers (e.g., Datarade, Snowflake, or the Bloomberg terminal), their business models may already account for this possible scenario and factor it into their pricing mechanism, since after all, not every buyer who expresses interest in the preview sample would eventually purchase the product (i.e., data distribution). We think that the buyers who end up purchasing the data would have stricter error requirements than what the preview sample can meet. We will incorporate the above discussion in our revision.
We thank Reviewer YGYg for the detailed feedback and hope that we have addressed your comments and improved your opinion of our work.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal and congratulations for the nice work! I have increased my score.
I presumed that MMD will be computed on the sample shown to the buyer. It would be nice to have some discussion in the paper to avoid this confusion.
Minor stuff: please improve the presentation of results in the tables.
You don't have to respond to this. I am curious about, how vendors can create the samples without revealing too much. In particular in your setting, as a buyer I can collect the freely visible samples from multiple vendors and train a model. If the vendor distributions are diverse enough then a buyer would still be able to get a very good model from those free samples (freeloader, lol). I am curious about when such freeloading is possible and when it is not, theoretically and can the vendors do something to prevent it. If it requires them to co-ordinate that might introduce conflicts.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the score
Comment: We would like to thank Reviewer YGYg for acknowledging our rebuttal and increasing the score. We will definitely take note of your feedback and incorporate it into our revision.
As for your question on if/"how vendors can create the samples without revealing too much", and if it requires coordination among vendors and its implications, we believe it to be an interesting avenue to explore an algorithmic solution intersecting statistics/machine learning and game theory and will definitely note this in our revision.
Thanks again for your encouraging feedback and helpful comments. | null | null | Rebuttal 1:
Rebuttal: We wish to thank the reviewers for reviewing our paper and providing the detailed feedback. We especially appreciate the positive feedback on the __novelty of our work__ (all reviewers), the __clarity of presentation__ (Reviewers `s86g`, `Sj3P`), and the __quality of our results__ (all reviewers).
---
We include some additional experimental results (and the detailed settings) mentioned by Reviewer `s86g` in `response.pdf`:
- Figure 11 plots the valuation scores vs. error levels of distributions and fitted linear regression, for different number of vendors.
- Table 22 tabulates the "accuracy" (measured by Pearson coefficient) of our proposed valuation scores ($\hat{\Upsilon}$ in Equation 5) with mean and standard error over $10$ independent trials for up to $1000$ vendors.
Pdf: /pdf/faad39f8d4667901eab7bdd9cf5ae26e646c57be.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SegVol: Universal and Interactive Volumetric Medical Image Segmentation | Accept (spotlight) | Summary: In this paper, the authors proposed SegVol, a 3D foundation segmentation model for medical images. This model supports universal and interactive volumetric medical image segmentation of more than 200 anatomical categories. This model is also well-designed with spatial and semantic prompts.
In the inference stage, the authors designed a zoom-out-zoom-in mechanism to enable efficient and precise segmentation results
Their extensive experiments show their method surpasses other SAM-like interactive segmentation methods.
Strengths: Interesting concepts. The idea of integrating spatial and semantic prompts is interesting and achieves prospective results.
Paper clarity. The paper is overall well-written and structured. I also enjoyed the quality of the figures which help understanding the method.
Good results. The method achieves SoTA results compared with the SAM-like interactive segmentation methods.
Weaknesses: I had a hard time finding weaknesses in the paper. Those I find are either nitpick or more directions for future work.
I put the weaknesses in the questions section.
Technical Quality: 3
Clarity: 4
Questions for Authors: I would appreciate it if the authors could answer my questions.
1. Figure 2, the Violin plots seem confused and it is not cited in the paper. Can the authors explain more about this figure?
2. Will some of the 25 public volumetric medical segmentation datasets share the same data? Can the authors provide more detailed information about the datasets?
3. How do the authors process the collected data with inconsistent annotations? For example, different hospitals will have different annotations of the Aorta. Will this influence the results?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed their limitations in the manuscript as well as the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable comments and kind words. We answer your questions as follows.
> **Q1: Figure 2, the violin plots seem confused and it is not cited in the paper.**
Figure 2 is cited on Page 7 Line 205: *‘We visualize the Dice score distributions of all methods in all the tasks as violin plots, depicted in Figure 2.’*
Figure 2 on Page 5 and Table 2 on Page 6 describe the results of the same experiments from two different perspectives. Table 2 presents the precise quantitative results (the median value), while the violin plots in Figure 2 illustrate the distributions of the results for intuitive comparison.
> **Q2: Will some of the 25 public volumetric medical segmentation datasets share the same data? Can the authors provide more detailed information about the datasets?**
There are no duplicate cases in our processed 25 public datasets. We used the hash code to validate all cases (image-label pair) during the data processing phase. It was found that the LiTS dataset contains many duplicate cases which also exist in other datasets, thus the whole LiTS dataset was removed. Besides, one duplicate case was found and removed in the remaining 25 public datasets.
For more detailed information about these open-source datasets, please refer to their official websites which are provided in Page 17 Table 5.
> **Q3:** **How do the authors process the collected data with inconsistent annotations?**
It is an important advantage of SegVol that the model is compatible with inconsistent but reasonable annotations. There is no need for additional data processing. For example, the whole ‘kidney’ is annotated in one hospital, while more specifically ‘left kidney’ and ‘right kidney’ are annotated in other hospitals. SegVol can learn to comprehend such inconsistent annotations and generalize well to users’ rough or precise semantic+spatial prompts in the testing phase.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's detailed response. Most of my concerns have been addressed. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We will polish the final version based on your suggestions. | Summary: This paper proposes a 3D foundation segmentation model, named SegVol, supporting universal and interactive volumetric medical image segmentation. By scaling up training data to 90K unlabeled Computed Tomography (CT) volumes and 6K labeled CT volumes, this foundation model supports the segmentation of over 200 anatomical categories using semantic and spatial prompts. Besides, a zoom-out-zoom-in mechanism is designed to facilitate efficient and precise inference on volumetric images.
Strengths: (1) Collect and process 25 public volumetric medical segmentation datasets, encompassing over 200 anatomical categories. The pseudo label is introduced to relieve the spurious correlation in the training data.
(2) Implement massive 3D pre-training on 96K CT volumes and supervised fine-tuning on the 6k labeled datasets.
(3) Support spatial-prompt, semantic-prompt, and combined-prompt segmentation, achieving high-precision segmentation and semantic disambiguation.
(4) Design a zoom-out-zoom-in mechanism that significantly reduces the computational cost, meanwhile preserving precise segmentation.
Weaknesses: (1) As shown in Appendix Table 5, the numbers of trainset volumes is very dunbalanced, which will affect the performance of the proposed method. How the author address the above issue? They should give further analysis and validation.
(2) For unseen anatomical categories, what is the segmentation effect of SegVol? Provide relevant experiments for further explanation.
(3) What are the complexity、number of parameters and running speed of the SegVol, and provide a comparison with existing SOTA methods.
(4) Minor: The reference format should be unified,for example,[8]、[43]
Technical Quality: 3
Clarity: 3
Questions for Authors: Answer the questions in Weakness Section
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Not Applicable
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments. Below we answer the specific questions.
> **Q1: As shown in Appendix Table 5, the number of train-set volumes is very unbalanced, which will affect the performance of the proposed method.**
It is supposed that the unbalanced training set may cause poor performance on the minor categories/domains in some tasks. However, this problem does not matter in our work. We studied this problem by training SegVol on those small sub-datasets and comparing its performance to that trained on the joint of all datasets. Specifically, BTCV, MSD-spleen, and FLARE22 are used as the minor sub-datasets (domains), which have 24, 32, and 40 training samples respectively, and account for only 0.5%, 0.7%, and 0.9% of the whole training set. The following table shows that SegVol trained on the whole dataset, which consists of 25 sub-datasets, achieves much better performances than those trained on the small sub-datasets respectively. The results indicate that SegVol, as a foundation model, performs well on minor sub-datasets (domains) and does not suffer from the problem of unbalanced training data. We will provide more analysis in the final version.
| **Avg. Dice Score** | BTCV | MSD-spleen | FLARE22 |
| :---------------------------------- | :--------: | :--------: | :--------: |
| SegVol trained on small sub-dataset | 0.5195 | 0.9471 | 0.5567 |
| SegVol trained on the whole dataset | **0.8058** | **0.9597** | **0.8822** |
> **Q2: For unseen anatomical categories, what is the segmentation effect of SegVol?**
In Page 6 Table 2 and Page 7 Line 202, we presented the experiments on ULS23, where most lesion categories are unseen for SegVol. The unseen categories include most abdominal lesions, all bone lesions, some lung lesions, and all mediastinal lesions. For unseen categories, we use spatial prompts and general semantic prompts, e.g., ‘lesion’ or ‘tumor’, to drive the model. The results show that SegVol can achieve good segmentation performance (Dice: 0.7046) on these unseen categories.
After the review phase, we will host an online running model for users to test various cases from unseen categories.
> **Q3: What are the complexity, number of parameters and running speed of the SegVol?**
The following table shows the Total Parameters, the average Multiply-Accumulates(MACs), and the average Time required to process a case of the different methods. The comparison indicates that our method takes less computational cost while achieving much better performance.
Note that when calculating MACs Per Case and Time Per Case, the slice-by-slice calculation of the 2D method and the scanning process of the 3D method are accumulated respectively. SAM-MED3D only processes volume with a size of $128\times128\times128$. The experiments are implemented on the validation set of AMOS22, and the setting is the same as that in Sec. 3.2.
| **Method** | **Total Parameters** | **Avg. MACs Per Case↓** | **Avg. Time Per Case (s)↓** | **Avg. Dice Score↑** |
| :--------- | :------------------: | :---------------------: | :-------------------------: | :------------------: |
| SAM | 94M | 1.3e+13 | 2.1764 | 0.5271 |
| MedSAM | 94M | 1.3e+13 | 2.1886 | 0.5371 |
| SAM-MED2D | 271M | 2.3e+12 | 3.5547 | *0.6668* |
| SAM-MED3D | 101M | **1.0e+11** | **0.1768** | 0.6200 |
| SegVol | 181M | *6.7e+11* | *0.3283* | **0.8593** |
> **Q4: Minor: The reference format should be unified.**
Thanks for your reminder. We will unify the reference format in the final version.
---
Rebuttal 2:
Comment: Dear Reviewer rczM,
We hope our response addresses your questions. Please feel free to reach out if you have any further inquiries. We would greatly appreciate it if you could consider raising your rating.
Thank you very much! | Summary: The paper describes the utilization of SegVol, a deep learning model, to segment any organ/tumor/lesion on 3D CT data.
**objectives**
Create a universal segmentation model that can segment with :
- any type of labeling
- good performances on complex tasks
- low computational cost (i.e., no sliding window)
**contributions**
The authors contribute to the state of the art in 3 ways :
- first model for segmenting over 200 anatomical categories
- support spatial (points/bbox) and semantic prompt (text describing each category)
- introduction of a zoom-out-zoom-in mechanism to improve the model's efficiency and accuracy
**datasets**
- 25 open-source segmentation CT datasets + background filtering
- generation of pseudo-labels with Felzenswalb-Huttenlocher to remove spurious correlation between the datasets
**architecture**
The whole architectures is divided into four blocks:
- image encoder: 3D ViT pretrained using SimMIM on 96K unlabeled CTs + fine-tuning on 6K labeled CTs
- text encoder: the CLIP model is used to generate a text embedding from the text prompt. The text prompt only consists in the name of the anatomical region to be segmented.
- prompt encoder: text embedding and spatial prompt (bbox+points) are given to generate the prompt embedding.
- mask decoder: from the prompt embedding, the segmentation is generated.
**training algorithm**
- the encoder is pretrained using SimMIM
- the model is fine-tuned using an algorithm described in appendix B. For each epoch, an iteration of training is made using the ground truth, and a second one using the pseudo masks.
**results**
Experimental setup is given, as well as a description of the three external datasets used for the evaluation.
- comparison with SAM-like methods : Table 2 shows the superiority of the method with Dice score > 0.7 on each task. Figure 2 helps to visualize the dice score distribution for each model, on each task. An additional comparison to standard fully supervised segmentation models such as nnUNet, SwinUNetR is provided. Superiority of SegVol is also shown compared to this model on Figure 8 and Table 6.
- ablation studies
- Zoom-out-zoom-in mechanism : Table 3 shows the interest of the method, improving performances and reducing duration compared to sliding window
- Scaling up training data : Fig 3.(a) shows that the model performances improve as the number of training data increases.
- Prompt impact : Fig 3 (a and b) shows that point prompt is inferior to text prompt, and best prompt combination is bbox+text
- Case studies
- Disambiguation via semantic-prompt: Figure 4 shows that when the spatial prompt is not sufficient to describe the segmenting task, the text prompt adds the complementary information to locate the good anatomical region.
- spatial-prompt impact: Figure 5 shows how the points prompt are related to the results. When choosing two points, the category becomes clear, whereas with one point can identify two categories.
**discussion**
- Scalability : the authors believe that the model have a strong model scalability.
- Generalizability : Generalization of SegVol to MRI is discussed, with an example given in appendix C showing a strong dice score of 80%.
- Limitations : only limitation is that text prompt only consider a category, and not a full sentence with logical reasoning.
- Broader impact : the authors believe that their method is universal and doesn't have any negative impact.
Strengths: - **originality** : the concept of SegVol is not original itself. However, the addition of the zoom-in-zoom-out principle and the combination of bbox + text + points prompts is new in this kind of models.
- **quality** : the work is done rigorously, respecting a scientific reasoning all along the article.
- **clarity** : the paper lacks clarity as there are two parts difficult to understand.
- 1) The description of the training algorithm is not made in the text, only the steps are written in appendix B.
- 2) The hyperparameters regarding the model's dimension (number of layer, ViT dimension) is not explicited.
- **significance** : The results are impactful for the community of medical image segmentation, and could have plenty of applications. The property of segmenting multiple organs, from multiple entirely different datasets is remarkable.
Weaknesses: The only weakness regards the clarity of the training algorithm, which is hard to understand even with appendix B and training steps.
This point could be improved by describing the steps in the appendix with a paragraph, or maybe with another diagram that follows the exact same steps.
Technical Quality: 4
Clarity: 2
Questions for Authors: **datasets**
l. 83 : Why is Felzenswalb-Huttenlocher (FH) algorithm to generate pseudo masks ?
l. 81 : How is it supposed to remove spurious correlation between datasets ?
**architecture**
l. 96 : The architecture principle is described, but some details are lacking such as the number of layers of each network, the number of heads in the ViT, etc.
l. 133 : why is the computational cost reduced using the zoom-in-zoom-out principle compared to sliding window? clarify this point.
l. 140 : how are prompts generated from the coarse segmentation masks? clarify if the prompts are generated thanks to the prompt encoder as a bbox, or points.
**results**
l. 169 : Could you indicate the training time (SimMIM and finetuning) of the model?
l. 169 : Why did you choose these 3 specific datasets as external test sets?
**discussion**
l. 257 : Why is data salability a good thing ? Why is this part in the discussion? The data scalability of the model is not discussed or compared with other models. If not discussed, move to the conclusion.
Confidence: 4
Soundness: 4
Presentation: 2
Contribution: 4
Limitations: Only one limitation is addressed, considering the text prompt which cannot be a logical sentence at the moment.
However, the amount of data required to train the model is not discussed. Would a few-shot finetuning method work on small datasets (10 samples)?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your high recognition of our paper and the detailed feedback. We answer your questions as follows.
> **Q1: Clarity of the training algorithm. Describing the steps with a paragraph or diagram.**
Thank you for the suggestion. We will provide the detailed text description of the Training Algorithm in the final version. Here, we give a simple diagram, Figure 1 in the attached PDF, and a brief description for your reference. Specifically, each case (training sample) consists of an Image *x*, a Ground Truth(GT) Mask Set *Y*, and a Pseudo Mask Set *Z*. The training loss of each sample consists of the ground-truth loss and the pseudo loss. The ground-truth loss is computed by inputting the image, the ground-truth mask (label) and the sampled prompt into the model, while the pseudo loss is computed by inputting the image, the pseudo label and the fixed prompt into the model. Finally, the model is updated by minimizing the weighted sum of the two losses.
> **Q2: Why is FH algorithm to generate pseudo masks? How is it supposed to remove spurious correlation between datasets?**
The FH algorithm is an unsupervised segmentation method that can generate segmentation masks purely based on the voxel similarity of regions in CT scans. In the beginning, we also tried several candidates, including *morphological_chan_vese*, *morphological_geodesic_active_contour*, *quickshift*, *slic*, etc. Finally, we found the FH algorithm performs best, and thus chose it to generate pseudo masks.
Since in most datasets, the 3D CTs are annotated with only a few segmentation categories, which causes a spurious correlation between the labeled categories and the specific dataset. Using pseudo masks can supplement unlabeled categories in a dataset, therefore relieving the spurious correlation.
> **Q3: Some details are lacking such as the number of layers of each network, the number of heads in the ViT, etc.**
Thanks for reminding, we will add the network details in the paper. Here are main hyper-parameters of the ViT in SegVol:
patch_size=(4, 16, 16), num_layers=12, num_heads=12, hidden_size=768.
The code and model weights of SegVol will be released after the review period.
> **Q4: Why is the computational cost reduced using the zoom-out-zoom-in principle compared to the sliding window?**
The traditional sliding window method requires scanning the entire 3D-CT, processing *hundreds* of windows. In contrast, the proposed zoom-out-zoom-in mechanism only requires one global inference of 3D-CT and scanning the ROI with *dozens* of windows.
> **Q5: How are prompts generated from the coarse segmentation masks?**
The process of prompts generated from the coarse segmentation masks is the same as that of prompts generated from ground truth masks, where point prompts are created in or beside the targets, bbox prompts are formed near the target boundaries, and text prompts are derived from category names. The detailed strategies are presented in Page 4 Sec 2.3.
> **Q6: Could you indicate the training time (SimMIM and finetuning)?**
SimMIM pre-training time: ~20$\times$8 GPU hours on NVIDIA A100-SXM4-40GB.
Finetuning time: ~300$\times$8 GPU hours on NVIDIA A100-SXM4-40GB.
> **Q7:** **Why did you choose these 3 specific datasets as external test sets?**
AMOS22 is a popular dataset for medical image segmentation. Many works are trained and evaluated on it. The validation set of AMOS22, as a representative dataset of abdominal organs, has a large number of samples and includes 15 major abdominal organs.
The ULS23 dataset is a novel large-scale lesion segmentation dataset with thousands of cases covering various lesions, which is a challenging benchmark of lesion segmentation.
SegTHOR dataset focuses on the thorax, which can be a supplement of AMOS22 (abdomen) and ULS23 (lesions). On the whole, the three datasets cover most of the segmentation tasks of organs and lesions in the thorax and abdomen.
> **Q8: Why is data salability a good thing? Why is this part in the discussion?**
The scaling law of foundation models has been verified in multiple CV and NLP tasks. We achieved the success of scaling law in 3D medical segmentation by the design of universal prompts and pseudo masks for joint learning on datasets with inconsistent annotations. We provided a preliminary experiment of the data scaling law in Figure 3 (a) Page 7, which shows that 1) the performance improves significantly with more training data, 2) and SegVol has not yet reached its ceiling if more training data is provided. We will provide more detailed discussion and comparison to other methods in the final version.
> **Q9: The amount of data required to train the model (SegVol) is not discussed. Would a few-shot finetuning method work on small datasets (10 samples)?**
Figure 3 (a) in Page 7 demonstrates the relationship between the model performance and the amount of finetuning training data. Generally speaking, more training data leads to better performance.
We conducted the few-shot finetuning experiment on small datasets, FLARE22 (40 training cases) and MSD-spleen (32 training cases), to study the few-shot learning ability of SegVol. The table below demonstrates that 1) finetuning SegVol on dozens of samples works well on easy datasets such as MSD-spleen, in which the few-shot performance is close to the joint finetuning on all datasets; 2) for challenging datasets such as FLARE22, finetuning on all datasets can achieves much better performance than few-shot finetuning.
| Avg. Dice Score | 100 epochs | 200 epochs | 300 epochs | 400 epochs | 500 epochs | SegVol* |
| :-------------- | :---------: | :--------: | :--------: | :--------: | :--------: | :-----: |
| FLARE22 | 0.0463 | 0.4028 | 0.4926 | 0.5617 | 0.5567 | 0.8822 |
| MSD-spleen | 0.7566 | 0.7866 | 0.9433 | 0.9454 | 0.9471 | 0.9597 |
*Note: **SegVol\*** represents the model finetuned on all datasets.*
---
Rebuttal 2:
Comment: Thank you for your detailed answers to each of my interrogations.
All the questions have been answered by the authors.
Q1 : the algorithm is clearer with this small text, it should be included in the article.
Q2 : the choice of FH is then arbitrary, it should be specified in the article.
Q3 : For the network details, maybe a results of a torchinfo summary, or something in this idea could be added in appendix so that every hyperparameters could be seen visually (it would be be a bonus, but optional).
Q9 : The results of joint finetuning compared to few-shot finetuning is relevent, and could be at least mentionned.
I will maintain my grading, considering the clarity (presentation) will be improved, but is not yet done.
---
Rebuttal Comment 2.1:
Comment: Thank you for the further feedback. We will polish the final version based on your suggestions. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewers’ constructive comments and valuable suggestions.
We are glad to see that our paper received high praise from all reviewers. Especially, **Reviewer 8irr** finds that ‘*the results are impactful for the community of medical image segmentation, and could have plenty of applications.*’ **Reviewer Ykum** thinks that ‘*I had a hard time finding weaknesses in the paper. Those I find are either nitpick or more directions for future work.*’
We address the specific questions from reviewers as follows.
Pdf: /pdf/ad17020a8a7c435b4b35fb762df152a165c66c83.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AdaNovo: Towards Robust \emph{De Novo} Peptide Sequencing in Proteomics against Data Biases | Accept (poster) | Summary: The authors introduce a novel approach for the robust de novo sequencing of peptides from mass spectrometry experiments. The novelty on the application side is the focus on post-translationally modified peptides that are commonly ignored in pure de novo sequencing approaches, however, biologically of highest relevance in many applications. Methodologically this is achieved by computing the Conditional Mutual Information between mass spectra and amino acid sequences. Evaluation is carried out in comparison against commonly used learning based tools in the field on a commonly used benchmark dataset for de novo sequencing.
Strengths: • The authors identify a highly relevant biological question, the robust identification of post-translational peptides that is only very rarely studied in the context of de novo sequencing
• The proposed solution based on conditional mutual information well fits the problem at hand
• The authors benchmark against state-of-the-art-tool on a commonly used data set and can show improvements there
Weaknesses: • The initial hypothesis (Post-Translational Modifications due to their lower frequency in training data compared to canonical amino acids, further resulting in unsatisfactory peptide sequencing) is somewhat flawed. PTMs are rare: even most commonly occurring ones such as phosphorylation (while common on the protein level) end up being extremely sparsely occuring (<1%) on the peptide level under natural conditions. Missing PTMs is suboptimal, but the effect on the peptide sequencing level overall has to be ignorable.
• What is commonly done (e.g. https://www.nature.com/articles/s42256-022-00467-7) is to use machine learning models to identify the location of PTMs without necessarily requiring the sequence. This is a different problem than the de novo sequencing and the authors may want to acknowledge those two different schools of thought and also may want to consider benchmarking accordingly.
• The benchmark data set is likely not adequate for learning tasks and is potentially subject to major data leakage. A high proportion of peptides between highly related organisms such as mouse and human is shared (e.g. https://www.researchgate.net/figure/Human-and-Mouse-Proteins-with-Identical-Amino-Acid-Sequences_tbl1_14316492. This is even more pronounced if you consider that highly expressed proteins (such as house keeping proteins) tend to be even better conserved. Thus, many peptides of one species are contained in the training set, but then also as part of another related species, part of the test set. This is not covering the problem of identifying novel proteins, particularly in the settings where a user would like to apply de novo sequencing (usually underexplored organisms without closely related ones with available genomes)
• The lack of error bars (due to computational complexity) makes it hard to judge the claim that the method provided by the authors outperforms other approaches. The standard deviations given actually indicate that this may not be the case and that there may be no statistically significant advance.
Technical Quality: 2
Clarity: 3
Questions for Authors: * While I understand that computing standard deviations for all experiments can be computationally prohibitive, could you also provide them at least for Mouse and Human for all tools and indicate why you believe that AdaNovo outperforms other tools?
* Can you quantify how many of the peptides in the 9 species datasets are actually identical between species or how you protect against data leakage?
* Can you indicate absolute numbers for peptides with/without PTMs in the 9 species benchmark
* Can you delineate your approach from existing approaches to identify PTMs and argue why it is beneficial to include this step in de novo sequencing rather than running it separately.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors indicate limitations in identifying never observed PTMs (they may want toexplicitly acknowledge that some open search tools actually are able to do that).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Response to Reviewer DL85 (1/2)
Comment: > *Missing PTMs' the effect on the peptide sequencing overall has to be ignorable. Can you indicate absolute numbers for peptides with/without PTMs in the 9 species benchmark?*
Thanks for your insightful and to-the-point reviews! We provide the absolute numbers for peptides with/without PTMs of the 9 species benchmark in **Table Re1**. As can be observed, peptides with PTMs account for 11.8% to 31.1% across different species. **These proportions are relatively large and have significant impacts on the overall performance of peptide sequencing. This is because if there is a single incorrect prediction of a PTM within a peptide, the entire predicted peptide will be erroneous.**
Furthermore, we remove peptides with PTMs from the test set and compared the results with those obtained using the model before the removal of PTMs. As shown in **Table Re2**, the model's peptide-level precision significantly improves after removing the peptides with PTMs in the test set, which verifies that the **PTMs have significant impacts on the overall de novo peptide sequencing performance**.
**Table Re1**: The absolute numbers for peptides with/without PTMs in 9 species datasets.
| Species|Rice bean |Honeybee |Bacillus|Clam bacteria| Human| Mouse| M. mazei| Yeast| Tomato|
|----|----|----|----|----|----|----|----|----|----|
| Peptides with PTMs|10512|64488|34474 |46846 |36008| 10725 |36472 |24849| 76620 |
| Peptides without PTMs |27263 |250083 |257309 |103765 |94575| 26296 |127949 |86463| 213430 |
| The Ratios of Peptides with PTMs |27.8% |20.5% |11.8% |31.1% |27.6%|29.0% |22.2% |22.3%| 26.4% |
**Table Re2**: The peptide-level precision of models before and after removing peptides with PTMs in test set.
| Species|Bacillus |M. mazei |Clam bacteria|
|----|----|----|----|
| Casanovo (before removing peptides with PTMs)|0.513|0.474|0.347 |
| Casanovo (after removing peptides with PTMs)|0.572|0.520| 0.385|
| AdaNovo (before removing peptides with PTMs)|0.561|0.523|0.397|
| AdaNovo (after removing peptides with PTMs)|0.598|0.545|0.423|
> *Can you delineate your approach from existing approaches to identify PTMs and argue why it is beneficial to include this step in de novo sequencing rather than running it separately.*
Thanks for your comments! AdaNovo can predict both PTMs' locations and types of the PTMs while previous methods can only predict the location of one specific PTMs type such as phosphorylation [1] or ubiquitination [2]. Therefore, the task studied here is more challenging. If we conduct PTMs identification and de novo sequencing separately, we need to train distinct models to predict various types of PTMs one by one, which is computationally complex and prone to errors. More importantly, as verfied in **Table Re2**, PTMs have significant impacts on the overall performance of peptide sequencing. **Therefore, it is necessary to design novel methods to enhance PTMs identification in de novo peptide sequencing**.
> *The benchmark data set is potentially subject to data leakage. Can you quantify how many of the peptides in the 9 species datasets are actually identical between species or how you protect against data leakage?*
Another good point! The nine species datasets are widely used for de novo sequencing methods [3,4,5] and we follow the same experimental settings as them. Specifically, we employ a leave-one-out cross validation framework where the peptides in the training set are almost completely disjoint from the peptides of the held-out species. To illustrate this point, among the ∼26,000 unique peptide labels associated with the human spectra in the test data, only 7% overlap with the ∼250,000 unique peptide labels associated with spectra from the other eight species. The majority (93%) of peptides are not present in the training set, ensuring that the dataset is suitable for evaluating de novo sequencing methods. To further mitigate your concerns, we remove the training samples whose peptide labels overlap with the training set. The results in **Table Re3** verify that AdaNovo also exhibits significant advantages on such datasets.
More importantly, in de novo sequencing, we can regard the mass spectra as samples and peptides as labels. The same peptide labels will yield different mass spectra under different experimental conditions. Data leakage occurs when test samples are present in the training set. In our experiments, the training and test spectra come from different experiments, making it impossible for test spectra to be present in the training set. **This segregation ensures that there is no data leakage in the benchmark data**.
**Table Re3**: The peptide-level precision on test set without peptides overlap with the training set.
| Species|Bacillus |M. mazei |Clam bacteria|
|----|----|----|----|
| Casanovo |0.492|0.465|0.328|
| AdaNovo |0.549|0.515|0.376|
**The second part of our response can be found in the next block.**
---
Rebuttal 2:
Rebuttal: # Response to Reviewer DL85 (2/2)
> *Could you provide standard deviations for Mouse and Human for all tools?*
Thanks for your helpful reviews! Following your valuable advice, we reproduce the performance of DeepNovo and PointNovo with standard deviations over 5 different random initializations. The results shown in **Table Re4** indicate that AdaNovo outperforms previous methods with statistically significant margins.
**Table Re4**: Empirical comparison of previous models on Mouse and Human datasets in terms of **amino acid-level/peptide-level precision (with standard deviations)**.
| Models| DeepNovo |PointNovo |CasaNovo |AdaNovo|
|----|----|----|----|----|
| Mouse| 0.627 ± 0.009/0.293 ± 0.012 |0.625 ± 0.010/0.352 ± 0.014 | 0.612 ± 0.015/0.449 ± 0.010 |**0.667** ± 0.018/**0.493** ± 0.015
| Human| 0.603 ± 0.012/0.290 ± 0.019| 0.596 ± 0.014/0.342 ± 0.010| 0.585 ± 0.010/0.343 ± 0.016 | **0.618** ± 0.013/**0.373** ± 0.012
> *The authors indicate limitations in identifying never observed PTMs.*
Thank you for your professional reviews! **The open search tools such as MSFragger [6] can identify the PTMs the human has discovered, as included in the UniMod database [7]. However, what we mean are the PTMs that have not been found by humans.** We will further clarify the distinction in the revised version.
[1] Ad hoc learning of peptide fragmentation from mass spectra enables an interpretable detection of phosphorylated and cross-linked peptides (Altenburg et al., Nature Machine Intelligence)
[2] Large-scale prediction of protein ubiquitination sites using a multimodal deep architecture (He at al., BMC Systems Biology)
[3] De novo peptide sequencing by deep learning (Tran et al., PNAS)
[4] Computationally instrument-resolutionindependent de novo peptide sequencing for high-resolution devices (Qiao et al., Nature Machine Intelligence)
[5] De Novo Mass Spectrometry Peptide Sequencing with a Transformer Model (Yilmaz et al., ICML 2022)
[6] MSFragger: ultrafast and comprehensive peptide identification in mass spectrometry–based proteomics. (Kong et al., Nature Methods)
[7] Unimod: Protein modifications for mass spectrometry. (Creasy et al., Nature Methods)
---
We greatly appreciate your insightful and helpful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified any ambiguities, we respectfully hope that you consider raising the score. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript.
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed explanations. This is very helpful.
However, I wonder if we have a misunderstanding regarding PTMs here. The motivation of the paper and this answers given seem to imply that the authors focus on biological meaningful PTMs (such as phosphorylation), however the numbers of frequency of occurrence imply rather that the authors may find modifications such as Carbamidomethylation or oxidations that result from the mass spectrometry acquisition process (but do not carry any biological relevance and are not actual PTMs in the biological sense).
The frequency numbers the authors describe is significantly above what is commonly described in literature for biological modification.
This is relevant for the discussion here since
I usually filter out fixed oder very frequent modificatons differently than I search true PTMs. Also frequency matters a lot for the discussion of training specific models or not and also for the leakage discussion I wonder whether this was considered (so are only the directly 1:1 identical peptides removed or also those that may have an Ox(M).
Thanks for clarifying
---
Rebuttal 3:
Title: Clarification regarding the misunderstanding between us
Comment: Thanks very much for your professional, insightful, and helpful reviews! We greatly enjoy such in-depth discussions and believe they will significantly enhance the quality of our work.
>***Q1**: I wonder if we have a misunderstanding regarding PTMs here. The motivation of the paper seem to imply that the authors only focus on biological meaningful PTMs ... The frequency numbers the authors describe is significantly above what is commonly described in literature for biological modification...*
**The motivation of our work lies in that all the PTM types (instead of one specific PTM type) have significant impacts on the overall de novo sequencing performance**, which we have explained and experimentally verified in our original manuscript and response above.
Following your previous advice, we provide **the ratios/frequency of peptides with PTMs** in **Table Re1**, which can be formulated as, $$\frac{\text{The number of the peptides with any PTMs type}}{\text{The number of the peptides}}$$. However, **the frequency numbers of PTMs** you mentioned now can be formulated as $$\frac{\text{The number of a specific type of PTMs}}{\text{The number of amino acids}}$$. We show the statistics of the frequency of 3 PTM types in **Table Re5**, from which we can observe that **the frequency of PTMs is low (0.051% - 0.914%) in the datasets, just like what is commonly described in literature for biological modification.** **However, the frequency of peptide with PTMs is relatatively high (see **Table Re1**)**, resulting in low peptide-level precision of previous de novo sequencing methods because if a PTM in a peptide sequence is predicted incorrectly, the entire predicted peptide sequence is incorrect in the de novo sequencing task. In other words, the PTMs have significant impacts on the overall peptide-level performance of de novo sequencing even if their frequency is lower than canonical amino acids.
**Table Re5**: The frequency of 3 PTM types in 9 species datasets.
| PTM types|Rice bean |Honeybee |Bacillus|Clam bacteria| Human| Mouse| M. mazei| Yeast| Tomato|
|----|----|----|----|----|----|----|----|----|----|
|M(+15.99)|0.698%|0.077%|0.218%|0.914%|0.238%|0.617%|0.871%|0.389%|0.900%|
|N(+.98)|0.304%|0.217%|0.536%|0.441%|0.424%|0.328%|0.462%|0.385%|0.356%|
|Q(+.98)|0.051%|0.118%|0.112%|0.090%|0.143%|0.103%|0.061%|0.134%|0.065%|
>***Q2**: Frequency matters a lot for the discussion of training specific models or not and also for the leakage discussion I wonder whether this was considered ...*
In some biological applications, we may only care about one specific type of PTMs. For example, in PTMs prediction task, previous methods usually train one model for one specific PTM type. However, to evaluate de novo sequencing models, it is unreasonable to only keep one specific PTM type and remove the other PTM types (such as Ox(M)) in the datasets because all the PTM types have significant impacts on the overall performance of de novo sequencing models, as we have explained and experimentally verified in our manuscript or response above. Additionally, de novo sequencing is primaryly applied to identify the proteomic "darker matters" caused by PTMs and other alterations. It would be better that the de novo sequencing models can identify more PTM types. **Therefore, it is reasonable that the widely-used 9 species datasets for evaluating de novo sequencing models contain multiple PTM types. There is NO data leakage in these datasets as we have throughly claried before.**
Furthermore, regarding train specific models for each PTM type in PTMs prediction task, some recent works in PTMs prediction also develop for the prediction of multiple types of PTMs. One is CapsNet_PTM which uses a CapsNet for seven types of PTMs prediction [1]. More recently, MusiteDeep has been extended to incorporate the CapsNet with ensemble techniques for the prediction of more types of PTMs [2].
[1] Capsule network for protein post-translational modification site prediction (Wang et al., Bioinformatics)
[2] MusiteDeep: a deep-learning based webserver for protein post-translational modification site prediction and visualization (Wang et al., Nucleic Acids Research)
---
Thank you once again for your thorough, insightful, and constructive reviews! We truly appreciate these detailed discussions and believe they will help resolve any misunderstandings between us. If our responses have addressed your concerns fairly, we respectfully hope you might consider raising your score to support our work. If you have any other questions or concerns, we are very willing to engage in further discussion. Thank you for the time you have dedicated to our work, which will undoubtedly help us improve the quality of our manuscript.
---
Rebuttal Comment 3.1:
Comment: Thank you for clarifying and resolving our misunderstanding. However, please note that there is a significant difference between „natural“ PTMs such as phosphorylation which carry biological meaning and accidental modifications such as those that you list in table 1, Oxidiation of M or deamidation of N and Q are artifacts of suboptimal sample handling in mass spectrometry acquisition. Missing them means missing a peptide, I agree, but it does not mean the same as missing a natural ptm, such as
missing a phosphorylation which means missing an important biological signal.
There are many tools out there already for cleaning mass spectrometry artifacts and that is easier (since a very limited space) than actually finding ptms.
To me, the story telling of this contribution is rather misleading since it appears to solve a holy grail in MS proteomics (finding true ptms), but the evidence provided appears geared towards technical artifact removal. Don’t get me wrong. This is also valuable, but it is not the same as finding natural ptms.
I have adjusted my score here, but still have doubts regarding this contribution.
---
Rebuttal 4:
Title: (Clarification on the contribution) Our model AdaNovo is also applicable to the identification of natural or biologically meaningful PTMs
Comment: Another good point! **Actually, our model AdaNovo is also applicable to the identification of biologically meaningful PTMs, depending on the training dataset used. If the dataset contains various "natural" or biologically meaningful PTMs (such as phosphorylation), AdaNovo can also be used to identify these important biological signals.** We conducted some experiments to support our claims. Specifically, we utilized a dataset that encompasses 21 distinct PTMs, referred to as the 21PTMs dataset, as detailed in [1]. For illustrative purposes, we selected data for 5 "natural" or biologically meaningful PTMs—tyrosine phosphorylation, tyrosine nitration, proline hydroxylation, lysine methylation, and arginine methylation—to train and test Casanova and AdaNovo (train:test = 9:1). As shown in **Table Re 6**, AdaNovo outperforms Casanovo in identifying these ("natural" or biologically meaningful) PTMs, which further lead to AdaNovo’s superiority in overall de novo sequencing performance. We will emphasize this good point in the final version.
**Table Re6**: The models' performance on the datasets with natural PTMs.
| Models|PTM-level precision |Amino acid-level precision | Peptide-level precision|
|----|----|----|----|
|Casanovo| 0.413|0.604 | 0.336 |
|AdaNovo| 0.518| 0.620| 0.377|
[1] Proteometools: Systematic characterization of 21 post-translational protein modifications by liquid chromatography-tandem mass spectrometry (LC-MS/MS) using synthetic peptides. (Zolg, D. P., et al., Molecular & Cellular Proteomics)
---
We sincerely appreciate the points you have raised. They have helped us identify issues we had not previously considered, particularly regarding AdaNovo’s capability to identify natural or biologically meaningful PTMs.
Sincerely hope that our response can address your concerns and further enhance your positive recommendation of our manuscript. If you have any additional questions or concerns, we would be more than happy to engage in a deeper and more thorough discussion. Thank you once again!
---
Rebuttal 5:
Title: Look forward to post-rebuttal feedback!
Comment: Dear Reviewer DL85,
We have provided detailed responses to your reviews. Considering that the deadline for the discussion phase is approaching, we would like to know if our responses have adequately addressed your concerns. If you have any additional concerns or questions, we are more than happy to engage in further discussion. Thank you again for your time and effort in reviewing our manuscript. Your feedback has been instrumental in improving our research:)
Best,
Authors
---
Rebuttal 6:
Title: Have our responses addressed your concerns to your satisfaction?
Comment: Dear Reviewer DL85,
We would like to express our sincere gratitude for dedicating your time to reviewing our paper. **Your insightful comments on the difference between natural PTMs and accidental PTMs are particularly valuable, inspiring us to apply AdaNovo in the identification of natural or biologically meaningful PTMs.**
We have thoroughly considered your feedback and carefully responded to each of your questions with extensive experiments. We would greatly appreciate your feedback on whether our responses have addressed your concerns to your satisfaction.
Once again, we sincerely thank you for your invaluable contribution to our paper. As the deadline is approaching, we eagerly await your post-rebuttal feedback.
Best regards,
Authors. | Summary: The paper proposed a new method in protein sequencing for tandem mass spectra, especially in solving post-translational modifications.
Specifically, two (say conditional and unconditional) decoders are designed for protein sequences generation, from which conditional mutual information is calculated and further used to weigh the training samples and calculate losses.
Strengths: 1. The proposed CMI weighting framework is interesting and seemingly fits the scenario.
2. Good empirical evidence for model effectiveness.
3. The method may be generalized to other scenarios with imbalanced data distribution.
Weaknesses: 1. The studied problem is interesting, while may not be of interest for the major audience in NeurIPS. It is also hard for readers who are not familiar with the domain to evaluate this work.
2. Casanovo appears to be the most important baseline in this paper. The authors would better articulate the relation between AdaNovo and Casanovo.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is "AdaNovo w/o decoder #2 and any reweighting" actually Casanovo? If not, please include this as a part of the ablation in Table 2-5.
2. Is 72% accuracy on AA level enough in real applications? Maybe some comparisons with database-searching or physics-based methods help answer this question.
3. Section 3.3.1 and 3.3.2 have confusing names. Consider changing them into amino-acid-level / peptide-level weighting.
4. Eq (5) and (7) seem somewhat arbitrary. Is there any evidence that the setup is optimal?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are discussed in Section 5 which is somewhat formalistic.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Q1: Is "AdaNovo w/o decoder #2 and any reweighting" actually Casanovo? *
Thanks for your helpful reviews! Yes, "AdaNovo w/o decoder #2 and any reweighting" is Casanovo indeed. Therefore, **we have performed the ablition study in the original version (Table 2-5).**
> *Q2: Is 72% accuracy on AA level enough in real applications? Maybe some comparisons with database-searching methods help answer this question.*
The accuracy of AdaNovo is sufficient for practical applications. In fact, the earliest deep learning model for de novo peptide sequencing, DeepNovo[1], has already been integrated into the commercial software PEAKS. Additionally, what we want to emphasize is that the database search method can only identify proteins already present in the database, whereas de novo peptide sequencing can identify new proteins that are out of the database. **Therefore, database searching methods are not suited for our scenarios where the peptide sequences in test set do not exist in the database.**
> *Q3: Section 3.3.1 and 3.3.2 have confusing titles. Consider changing them into amino-acid-level / peptide-level weighting.*
Thanks for your valuable advice! We will mitigate this issue in the revised version.
> *Q4: The normalizations in Eq (5) and (7) seem somewhat arbitrary. Is there any evidence that the setup is optimal?*
Thanks for your insightful reviews! The normalization in Eq (5) and (7) is carefully designed. Specifically, we employ the widely-used Z-Score Normalization to standardize variables by transforming them into a standard normal distribution with a mean of 0 and a standard deviation of 1. The cofficients $s_1$ and $s_2$ are designed to control the effects of amino acid-level and peptide-level adaptive training, respectively. Additionally, the inclusion of +1 in the formula typically ensures that the resulting weight is non-negative and that each PSM is fully utlized for training.
>*Q5: The authors would better articulate the relation between AdaNovo and Casanovo*
Casanovo [2] initially employed Transformer encoder-decoder architecture to predict the peptide sequence for the observed spectra. **In comparison to Casanovo, AdaNovo's innovation lies in its training strategy specifically tailored for spectrum-peptide matching data to mitigate the data biases (various noise and missing peaks in mass spectra and low-frequency PTMs), rather than the Transformer encoder-decoder architecture**. Actually, the de novo peptide sequencing task we have explored, transitioning from mass spectra to amino acid sequences, shares similarities with fields like image captioning [5] (translating images into descriptive texts) and protein inverse folding [6] (deriving amino acid sequences from protein structures), where the encoder-decoder architecture is widely adopted. In these domains, innovation often stems from training strategies rather than the model architectures themselves.
>*Q6: The studied problem is interesting, while may not be of interest for the major audience in NeurIPS.*
Many works [2,3,4] in computational mass spectrum including Casanovo [2] have been published in top-tier machine learning conference including NeurIPS and ICML recently because the tasks in this field can be easily formulated as machine learning problem.
Moreover, the initial phase of drug discovery involves pinpointing disease biomarker proteins or drug target proteins. De novo sequencing serves as a pivotal step in identifying these proteins within the realm of proteomics. Despite remarkable progress in AI for drug design under known targets in top-tier AI conference, the primary bottleneck in the drug discovery pipeline persists in uncovering these crucial protein targets. Our objective is to encourage researchers in the AI community focus more attention on this task, thereby advancing the development of AI-driven drug discovery and development (AIDD).
[1] De novo peptide sequencing by deep learning (Tran et al., PNAS 2017)
[2] De Novo Mass Spectrometry Peptide Sequencing with a Transformer Model (Yilmaz et al., ICML 2022)
[3] Efficiently predicting high resolution mass spectra with graph neural networks (Murphy et al., ICML 2023)
[4] Prefix-tree decoding for predicting mass spectra from molecules (Goldman, Samuel, et al., NeurIPS 2023)
[5] From Show to Tell: A Survey on Deep Learning-Based Image Captioning (Stefanini et al., TPAMI 2023)
[6] ProteinInvBench: Benchmarking Protein Inverse Folding on Diverse Tasks, Models, and Metrics (Gao et al., NeurIPS 2023)
---
We greatly appreciate your insightful and helpful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified any ambiguities, we respectfully hope that you consider raising the score. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research.
---
Rebuttal Comment 1.1:
Title: Reply to authors' rebuttal
Comment: Thank you for the rebuttal. As my original scoring is optimistic, I retain my scoring to this paper.
---
Reply to Comment 1.1.1:
Title: Thanks for the efforts and time paid in our work!
Comment: Dear Reviewer xGNa,
Thanks for your swift response! Your feedbeck has been invaluable in improving our research. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion to enhance your positive impression or confidence in our manuscript.
Once again, we sincerely appreciate your time and effort in reviewing our manuscript!
Best,
Authors | Summary: In the field of proteomics, tandem mass spectrometry has been crucial for analyzing protein composition in biological tissues. However, existing methods struggle to identify amino acids with Post-Translational Modifications (PTMs) due to their lower frequency in training data compared to canonical amino acids, leading to suboptimal peptide sequencing performance. Additionally, noise and missing peaks in mass spectra reduce the reliability of training data (Peptide-Spectrum Matches, PSMs. To address these challenges, the authors introduce AdaNovo, a novel framework that uses conditional mutual information to mitigate these biases. AdaNovo outperforms existing methods on a widely-used benchmark, showing significant improvements in identifying PTMs.
Strengths: 1. The motivation behind this work is strong, addressing the unique challenges of high noise levels in mass spectrometry data and the difficulty in identifying PTMs. The method is novel and simultaneously alleviates two key issues.
2. The experimental results are promising, particularly in accurately identifying PTMs. Moreover, the authors provide the code for reproducing these results.
3. The article is well-presented and easy to follow. For instance, the authors first introduce the process of protein identification based on mass spectrometry, which aids understanding for researchers in the AI field.
Weaknesses: 1.In the inference stage, the decoder predicts the highest-scoring amino acid for each peptide sequence position. However, the beam search has been verified to be more effective way to decode text sequence. Have the authors tried this decoding strategy in AdaNovo?
2.What is the meaning of FDR = 1% in line 176? How to calculate the FDR?
3.Minor issue: The space between the citation and main text is not consistent across the manuscript.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations and potential societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Q1: Have the authors tried beam search in AdaNovo?*
Thanks for your helpful reviews! Following your valuable advice, we apply the beam search (beam size = 5) in AdaNovo and observe consistent improvements over greedy search in **Table Re3** (peptide-level) and **Table Re4** (amino acid-level). Thanks again for such helpful advice!
**Table Re3**: Comparison between greedy search and beam search in terms of **peptide-level precision**.
| Models| Mouse |Human |Yeast|
|----|----|----|----|
| AdaNovo (greedy search)|0.493|0.373 | 0.612 |
| AdaNovo (beam search) |0.523|0.395 | 0.637 |
**Table Re4**: Comparison between greedy search and beam search in terms of **amino acid-level precision**.
| Models| Mouse |Human |Yeast|
|----|----|----|----|
| AdaNovo (greedy search)|0.667 |0.618 | 0.825 |
| AdaNovo (beam search) |0.690 |0.643 | 0.836 |
> *Q2: What is the meaning of FDR = 1% in line 176? How to calculate the FDR?*
FDR (False Discovery Rate) is a measure used to assess the reliability of peptide or protein identifications in database search. The typical method to calculate FDR involves using a decoy database. Here’s a simplified overview of the process:
**1. Database Search**: Peptide sequences are identified by matching observed mass spectra to theoretical mass spectra derived from a protein sequence database.
**2. Decoy Database**: A decoy database is created by reversing or shuffling the protein sequences in the original database. This decoy database contains sequences that mimic the characteristics of the real database but do not correspond to actual proteins.
**3. Scoring**: Peptide identifications from the real and decoy databases are scored based on parameters such as peptide mass accuracy, retention time, and fragmentation pattern.
**4.FDR Calculation**: The FDR is then calculated based on the number of accepted identifications from the decoy database compared to the accepted identifications from the real database. This ratio provides an estimate of the proportion of false identifications among all accepted identifications. The FDR calculation can be expressed as:
$$
\text{FDR} = \frac{\text{Number of decoy identifications}}{\text{Number of target identifications}}
$$
A common approach is to set a threshold for the accepted FDR, such as 1% or 5%, to control the rate of false identifications. This method allows researchers to estimate the proportion of false identifications among their identifications, providing a measure of the reliability of their results.
> *Q3: Minor issue: The space between the citation and main text is not consistent across the manuscript.*
Thanks for your careful reviews! We will mitigate this issue in the revised version.
---
We greatly appreciate your insightful and helpful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified any ambiguities, we respectfully hope that you consider raising the score. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research.
---
Rebuttal Comment 1.1:
Title: Response
Comment: Thanks. My issues has been addressed, so I raise the score to 7
---
Rebuttal 2:
Title: Thanks for your helpful and insightful feedback!
Comment: Dear Reviewer #sJoV,
Thank you for your insightful and constructive review! **Your feedback has been instrumental in enhancing our research, particularly the beam search method you suggested, which promises to enhance the overall performance of the AdaNovo model.**
Best regards,
Authors | Summary: This work introduces AdaNovo, a framework for improving peptide sequencing by addressing biases in training data. It calculates conditional mutual information (CMI) between mass spectra and amino acids/peptides, enhancing robustness against noise and improving PTM identification. Besides, the model consists of a Transformer-based architecture and employs adaptive training strategies. Extensive experiments show AdaNovo outperforms existing methods on a 9-species benchmark, with significant gains in PTM identification.
Strengths: 1. The introduction of AdaNovo, which leverages conditional mutual information (CMI) to tackle biases in training data, is an effective approach demonstrated by the superior performance in experiments.
2. The paper is well-written and clearly structured.
Weaknesses: 1. The extra cost of memory compared to Casanovo is significant (~40%) as also indicated by the authors, which could limit the max length of predicted peptide sequences.
2. AdaNovo is built on the Transformer encoder-decoder architecture, which was initially employed by Casanovo to predict the peptide sequence for the observed spectra. Thus I feel the novelty of the proposed model is limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As shown in Table 1, PointNovo performs on par with Casanovo, what are the costs of computing and storage of PointNovo? The authors only compare the costs of computing and storage of Casanovo and their AdaNovo in Table 5.
2. For the results in Table 4, why using focal loss would give worse results compared to cross-entropy loss? Besides, what is the detailed number of amino acids in each category in the dataset? It would be better to provide more statistical details of the dataset in the Appendix.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Q1: The extra cost of memory compared to Casanovo is significant, which could limit the max length of predicted peptide sequences.*
Thanks for your insightful reciews! **In mass spectrometry, proteins are enzymatically broken down into peptides for analysis, with peptide lengths typically ranging from 2 to 50 amino acids. This is considerably shorter than full protein sequences, which can contain hundreds or thousands of amino acid residues.** Therefore, the memory constraints mentioned might not be a limiting factor for AdaNovo in predicting peptide sequences.
Moreover, **we can reduce memory consumption by scaling down Decoder #2**. In the rebuttal process, we scale down Decoder #2 to transformer with 3 layers and 6 layers. The experiments shown in **Table Re1** indicate a marginal performance drop while significantly reducing memory consumption. Thank you very much for your comments! They have helped us identify issues we hadn't thought of before, significantly improving the efficiency of AdaNovo. We will update the experimental results in the revised version.
**Table Re1**: The performance of scaling down Decoder #2 from 9 layers to 3 layers on Clam bacteria dataset.
| Models| #Params (M) |Peptide-level precision | Amino acid-level precision|
|----|----|----|----|
| Casanovo |47.35 | 0.347| 0.617 |
| AdaNovo with 3-leyer Decoder #2 |53.67 |0.389 | 0.642 |
| AdaNovo with 6-leyer Decoder #2 |59.99 |0.392 |0.648 |
| AdaNovo with 9-leyer Decoder #2 |66.31 |0.397 | 0.656 |
> *Q2: What are the costs of computing and storage of PointNovo?*
Thanks for your valuable advice! We compare the costs of computing and storage of PointNovo, Casanovo and AdaNovo in **Table Re2**. As can be observed, PointNovo contains less parameters than Casanovo and AdaNovo with similar training and inference speed. However, as shown in Table 1 of the paper, its performance is significantly inferior to Casanovo and AdaNovo.
**Table Re2**: Costs of Computing and Storage on the same device (Nvidia A100 GPU).
| Models| #Params (M) |Training time (h) |Inference time (h)|
|----|----|----|----|
| PointNovo | 4.78| 57.49|7.56|
| Casanovo | 47.35|56.52 |7.14 |
| AdaNovo | 66.31|60.17 | 7.09 |
> *Q3: What is the detailed number of amino acids in each category in the dataset? In Table 4, why using focal loss would give worse results than cross-entropy loss? *
Thanks for your insightful reviews! We provide the detailed number of amino acids in each category in **Table Re2** (Due to the limited sapce, we only show the number of 6 canonical amino acids + 3 PTMs and we will show all the numbers in the revised version). Additionally, focal loss is designed to address moderate class imbalance. However, in our case, canonical amnio acids vastly outnumbers PTMs, focal loss may not be able to effectively handle the imbalance, leading to suboptimal performance. Also, a previous work [3] also report that focal loss is inferior to cross-entropy loss in de novo sequencing. Thanks once again for your valuable advice!
**Table Re2**: The detailed number of amino acids in each category.
| Amnino acids | G | A | S | P | V | T | M(+15.99) | N(+. 98) | Q(+.98) |
|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| Rice bean | 45578 | 41453 | 21550 | 34618 | 34586 | 26844 | 3604 | 1572 | 265 |
| Honey bee | 318184 | 330329 | 285195 | 269162 | 330057 | 252258 | 3567 | 10096 | 5460
|
> *Q4: AdaNovo is built on the Transformer encoder-decoder architecture. Thus I feel the novelty of the proposed model is limited.*
**AdaNovo's innovation lies in its training strategy (conditional mutual information-based adaptive training) specifically tailored for spectrum-peptide matching data to mitigate the data biases (various noise and missing peaks in mass spectra and low-frequency PTMs), rather than the Transformer encoder-decoder model architecture**. The de novo peptide sequencing task we have explored, transitioning from mass spectra to amino acid sequences, shares similarities with fields like image captioning [1] (translating images into descriptive texts) and protein inverse folding [2] (deriving amino acid sequences from protein structures), where the encoder-decoder architecture is widely adopted. In these domains, innovation often stems from training strategies rather than the model architectures themselves [1,2].
[1] From Show to Tell: A Survey on Deep Learning-Based Image Captioning (Stefanini et al., TPAMI 2023)
[2] ProteinInvBench: Benchmarking Protein Inverse Folding on Diverse Tasks, Models, and Metrics (Gao et al., NeurIPS 2023)
[3] De Novo Mass Spectrometry Peptide Sequencing with a Transformer Model (Yilmaz et al., ICML 2022)
---
We greatly appreciate your insightful and helpful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified any ambiguities, we respectfully hope that you consider raising the score. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research.
---
Rebuttal 2:
Title: Has our response resolved your concerns?
Comment: Dear Reviewer XCqV,
We appreciate your insightful and helpful reviews! **Your points on memory consumption are particularly valuable, inspiring us to reduce the size of Peptide Decoder #2 to further enhance AdaNovo's efficiency.** If our response has resolved your concerns and clarified any ambiguities, we respectfully hope you might consider raising the score. Should you have further questions or need additional clarification, we would be happy to discuss them. Thank you again for your time and effort in reviewing our manuscript. Your feedback has been instrumental in improving our research.
Best,
Authors
---
Rebuttal 3:
Title: Has our response adequately addressed your concerns?
Comment: Dear Reviewer XCqV,
We appreciate your insightful and helpful reviews! **Your points on memory consumption are particularly valuable, inspiring us to reduce the size of Peptide Decoder #2 to further enhance AdaNovo's efficiency.** Considering that the deadline for the discussion phase is approaching, we would like to know if our responses have adequately addressed your concerns. If our response has resolved your concerns and clarified any ambiguities, we respectfully hope you might consider raising the score. Should you have further questions or need additional clarification, we would be happy to discuss them. We truly appreciate your time and effort, which have been crucial in refining our research.
Best regards,
Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents AdaNovo, a novel framework designed to address the biases in training data used for de novo peptide sequencing in proteomics. The main contribution is the calculation of Conditional Mutual Information (CMI) between mass spectra and amino acids, enabling robust training that mitigates the negative impacts of Post-Translational Modifications (PTMs) frequency bias and spectral noise. Extensive experiments demonstrate that AdaNovo outperforms existing methods, especially in identifying amino acids with PTMs, thus enhancing peptide sequencing performance.
The math seems clearly described to me, and well motivated. The results in table 1 highlights the improved performance. In particular,
For this review, please note that I'm not familiar with bioinformatics, and in particular the field of predicting protein sequences from mass spec.
Strengths: The use of Conditional Mutual Information (CMI) for addressing training data biases in de novo peptide sequencing is novel.
The experimental design is thorough, with extensive benchmarks across multiple species datasets, demonstrating significant improvements over state-of-the-art methods.
The paper is well-organized, with clear sections on background, methodology, experiments, and conclusions.
The results show substantial improvements in PTM identification, which is critical for proteomics research and applications in drug discovery and precision medicine.
Weaknesses: This is a narrow field of proteomics, there are other problems that could be considered to truly showcase the power of the proposed methodology.
Technical Quality: 3
Clarity: 3
Questions for Authors: For Figure 4, can you supply the dataset size and the performance? Or perhaps include performance in table 1.
How does the amount of PTMs impact the performance vs other classifiers? Can you train models where you gradually remove the amount of PTMs and showcase the performance dropoff of your model vs. casanova?
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I'm not familiar enough with the topic to understand if there are ethical concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Q1: For Figure 4, can you supply the dataset size and the performance? How does the amount of PTMs impact the performance of Casanovo and AdaNovo?*
Thanks for your helpful comments! We provide the dataset size and performance in **Table Re1**. We would add these results in Figure 4 of the revised version.
Additionally, following your valuable advice, we remove 50% and 100% peptides with PTMs and report the results in **Table Re2**, from which we can observe that the performance (peptide-level precision) of Casanovo and AdaNovo significantly improves after removing the peptides with PTMs in the test set. These results indicate that the amounts of PTMs have a significant impact on the performance of de novo sequencing models and AdaNovo is less sensitive to the amounts compared with Casanovo.
**Table Re1**: The absolute numbers for peptides with/without PTMs in 9 species datasets.
| Species|Rice bean |Honeybee |Bacillus|Clam bacteria| Human| Mouse| M. mazei| Yeast| Tomato|
|----|----|----|----|----|----|----|----|----|----|
| The number of PSMs |37775 |314571 |291783 |150611 |130583|37021 |164421 |111312| 290050 |
| The Ratios of Peptide with PTMs |27.8% |20.5% |11.8% |31.1% |27.6%|29.0% |22.2% |22.3%| 26.4% |
| PTMs precision of AdaNovo| 69.2% |52.4% |56.4% |57.8% |48.1% |54.8% |54.6% |66.6%| 59.7% |
**Table Re2**: The peptide-level precision of Casanovo and AdaNovo before and after removing peptides with PTMs in test set.
| Species|Bacillus |M. mazei |Clam bacteria|
|----|----|----|----|
| Casanovo (before removing peptides with PTMs)|0.513|0.474|0.347 |
| Casanovo (after removing 50% peptides with PTMs)|0.536|0.491| 0.356|
| Casanovo (after removing 100% peptides with PTMs)|0.572|0.520| 0.385|
| AdaNovo (before removing peptides with PTMs)|0.561|0.523|0.397|
| AdaNovo (before removing 50% peptides with PTMs)|0.573|0.529|0.415|
| AdaNovo (after removing 100% peptides with PTMs)|0.598|0.545|0.423|
> *Q2: This is a narrow field of proteomics, there are other problems that could be considered to truly showcase the power of the proposed methodology.*
The initial phase of drug discovery involves pinpointing disease biomarker proteins or drug target proteins. De novo sequencing serves as a pivotal step in identifying these proteins within the realm of proteomics. **Despite remarkable progress in AI for drug design under known targets, the primary bottleneck in the drug discovery pipeline persists in uncovering these crucial protein targets**. Our objective is to encourage researchers in the AI community focus more attention this task, thereby advancing the development of AI-driven drug discovery and development (AIDD).
**Moreover, many works [1,2,3] in computational mass spectra including Casanovo [2] have been published in top-tier machine learning conference including NeurIPS and ICML because the tasks in this field can be easily formulated as machine learning problem.**
[1] Prefix-tree decoding for predicting mass spectra from molecules (Goldman, Samuel, et al., NeurIPS 2023)
[2] De Novo Mass Spectrometry Peptide Sequencing with a Transformer Model (Yilmaz et al., ICML 2022)
[3] Efficiently predicting high resolution mass spectra with graph neural networks (Murphy et al., ICML 2023)
---
We greatly appreciate your insightful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified any ambiguities, we respectfully hope that you consider raising the score/confidence. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research.
---
Rebuttal Comment 1.1:
Title: Has our response resolved your concerns?
Comment: Dear Reviewer xCSd,
We appreciate your insightful and helpful reviews! **Your points on the impact of the number of PTMs on the performance of Casanovo and AdaNovo were particularly enlightening, and it’s an aspect we had not previously considered. We have conducted a detailed experimental analysis and discussion on this issue in our response above.**
If our response has resolved your concerns and clarified any ambiguities, we respectfully hope you might consider raising the (confidence) score. Should you have further questions or need additional clarification, we would be happy to discuss them. Thank you again for your time and effort in reviewing our manuscript. Your feedback has been instrumental in improving our research!
Best regards,
Authors
---
Reply to Comment 1.1.1:
Title: Look forward to post-rebuttal feedback
Comment: Dear Reviewer xCSd,
We have provided detailed responses to your reviews. Considering that the deadline for the discussion phase is approaching, we would like to know if our responses have adequately addressed your concerns. If you have any additional concerns or questions, we are more than happy to engage in further discussion. Thank you again for your time and effort in reviewing our manuscript. Your feedback has been instrumental in improving our research!
Best,
Authors
---
Rebuttal 2:
Title: We have progressively removed PTMs and show the plot below
Comment: Dear Reviewer xCSd,
Thank you for your concise and insightful feedback! Based on your valuable advice, we have progressively removed peptides with PTMs in the test dataset, ranging from 0% to 100% (kindly note that the earlier range of 5% - 10% might not accurately reflect the trend). The updated results, available at this anonymous link (https://anonymous.4open.science/r/Remove_PTM/Figure_Re1.png), show that the peptide-level precision of both Casanovo and AdaNovo improves significantly as the proportion of removed peptides with PTMs is increased. This demonstrates that the presence of PTMs notably affects the performance of de novo sequencing models, with AdaNovo showing less sensitivity compared to Casanovo.
If our response addresses your concerns and clarifies any uncertainties, we would greatly appreciate your consideration in adjusting the score or confidence. Should you have any further questions or require additional information, please do not hesitate to reach out. Thank you once again for your time and constructive review, which has been invaluable in enhancing our research!
Best regards,
The Authors | Summary: The paper introduces AdaNovo, a novel framework for de novo peptide sequencing that significantly improves the identification of post-translational modifications (PTMs) and enhances robustness against data biases in proteomics. AdaNovo utilizes Conditional Mutual Information (CMI) to reweight training losses based on the dependence of target amino acids on mass spectrum data, addressing common issues like noise and missing peaks in mass spectra. The framework shows superior performance on a 9-species benchmark, especially in identifying amino acids with PTMs, compared to previous methods. The paper also discusses the computational efficiency and scalability of AdaNovo, suggesting it as a potent tool for advancing proteomic research.
Strengths: 1. AdaNovo significantly improves the identification of Post-Translational Modifications (PTMs), crucial for detailed proteomic analysis.
2. It effectively handles noise and biases in mass spectra data, ensuring reliable peptide sequencing results.
Weaknesses: 1. In eq.3 use MI (X , Z; Yj | Y<j ) but in model it is CMI(X , Z; Yj | Y<j ), and should not distinguish between italic rv and bold rv.
2. CMI(X , Z; Yj ) = MI (X , Z; Yj | Y<j )? ?Please provide a more detailed process.
3. There is a lack of verification whether the Transformer's predictions align with the model's hypothesized outputs,Additionally, although CMI can technically be calculated, the inputs used in your model do not conform well to standard data formats.
4. The paper does not provide comparisons with other methods that utilize strategies for handling repetitive long-tail data,
Technical Quality: 2
Clarity: 1
Questions for Authors: see weakness
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > *Q1: In eq.3 use MI (X , Z; Yj | Y<j ) but in model it is CMI(X , Z; Yj | Y<j ), and should not distinguish between italic rv and bold rv.*
Thanks for your careful review! **Kindly note that the conditioned mutual information in model is formulated as CMI(X , Z; Yj) or MI (X , Z; Yj | Y<j ) rather than CMI(X , Z; Yj | Y<j )**. In eq.3, we can rewrite CMI(X , Z; Yj) as MI (X , Z; Yj | Y<j ) because condition mutual information is defined as the mutual information between two random variables (X , Z) and Yj conditioned on a third Y<j. In other words, CMI is a kind of mutual information. Here, we use the well-known concept of mutual information to help readers understand the calculation process of conditional mutual information.
Additionally, the bold rv denotes specific values while the italic rv denotes the variables. **It is necessary to distinguish between values (bold rv) and variables (italic rv) because the conditional mutual information is defined as a measure of the mutual dependence between two variables instead of specific values**.
> *Q2: CMI(X , Z; Yj ) = MI (X , Z; Yj | Y<j )? Please provide a more detailed process.*
**Yes, CMI(X , Z; Yj ) = MI (X , Z; Yj | Y<j )**. Here, CMI(X , Z; Yj) denotes the conditional mutual information between (X, Z) and Yj (conditioned on Y<j). This step can be derived using the definition of Conditional Mutual Information, which is the mutual information of two random variables (X , Z) and Yj given the value of a third Y<j. In other words, CMI is a kind of mutual information. Here, we use the well-known concept of mutual information to help readers understand the calculation process of conditional mutual information.
> *Q3: The paper does not provide comparisons with other methods that utilize strategies for handling repetitive long-tail data.*
**We have compared with such methods in section 4.6 of the original paper**.
> *Q4: There is a lack of verification whether the Transformer's predictions align with the model's hypothesized outputs. Additionally, although CMI can technically be calculated, the inputs used in your model do not conform well to standard data formats.*
We have elborated on how we feed the mass spectrum to the transformer in Appendix A. More specifically, we regard each mass spectrum peak (m_i, I_i) as a word/token in natural language processing and obtain its embedding by individually encoding its m/z value m_i and intensity value I_i before combining them through summation. The entire mass spectrum can be regarded as a sentence. The precursor can also be encoded using similar way. More details can be found in Appendix A.
Additionally, we feed mass spectrum ($\mathbf{x}$), precuosor ($\mathbf{z}$) and previous sequence ( $\mathbf{y_{<j}}$ ) to the MS Encoder and Peptide Decoder #1 to predict the next amino acid ($y_j$). Therefore, the output of Peptide Decoder #1 is $p(y_{j}|\mathbf{x}, \mathbf{z}, \mathbf{y_{<j}})$. Similarly, Peptide Decoder #2 takes previous sequence ($\mathbf{y_{<j}}$) as input and outputs $p(y_{j}|\mathbf{y_{<j}})$. And then, we can use $p(y_{j}|\mathbf{x}, \mathbf{z}, \mathbf{y_{<j}})$ and $p(y_{j}|\mathbf{y_{< j}})$ to calculate the conditional mutual information using Eq. (4).
---
We greatly appreciate your helpful comments, as they will undoubtedly help us improve the quality of our article. If our response has successfully addressed your concerns and clarified the ambiguities, we respectfully hope that you consider raising the score. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research.
---
Rebuttal 2:
Comment: It is recommended that you write your formula according to the format provided in https://www.jmlr.org/papers/volume24/21-0482/21-0482.pdf.
Regarding the misunderstanding between us, you directly denote $ CMI(X, Z; Y_j) $ as $ I(X, Z, Yj | Y_{<j})$. However, CMI involves the mutual information between two variables given a third variable or set of variables.This could potentially lead to a misinterpretation as $I(X, Z | Y_j)$, particularly given that $X, Y_j, Z$ are the only variables under consideration.
In terms of formula notation, there remains significant room for enhancement. Therefore, the original score will be retained, with the expectation that the author will make further refinements.
---
Rebuttal 3:
Title: Clarification regarding the misunderstandings in terms of formula notation.
Comment: Dear Reviewer XeYm,
Thanks for your prompt and insightful response! We agree that the misunderstanding between us lies in the notation of the conditional mutual information. Following your valuable advice, we would update $CMI(X, Z; Y_{j})$ as $I(X,Z; Y_{j}|Y_{<j})$ in Eq. (3) and Eq. (4) to avoid misunderstandings.
Furthermore, we would like to inquire if our response adequately addresses your concerns. Should you have any further questions or require additional clarification, we would be delighted to engage in further discussion. Once again, we sincerely appreciate your time and effort in reviewing our manuscript. Your feedback has been invaluable in improving our research!
Best,
Authors
---
Rebuttal 4:
Title: We have updated the formula notations following your valuable advice
Comment: Dear Reviewer XeYm,
Thanks very much for your feedback on the formula notation! **Your suggestions help standardize the mathematical notations in our manuscript and avoid potential misunderstandings.** Following your valuable advice, we have updated the notations and uploaded the revised manuscript to the anonymous link: https://anonymous.4open.science/r/NeurIPS24_AdaNovo_Revision-7C65/AdaNovo_NeurIPS2024_Revision.pdf.
Specifically, we replace $CMI(X, Z; Y)$ and $MI(X, Z; Y)$ with $I(X, Z; Y|Y_{<j})$ and $I(X, Z; Y)$, respectively. Also, $\mathcal{X}$, $\mathcal{Y}$, $\mathcal{Z}$, $\mathcal{Y_{<j}}$ are replaced with $X, Y, Z, Y_{<j}$.
**These changes do not require substantial modifications to the original manuscript. Sincerely hope that they would resolve your concerns regarding the formula notations.** Please let us know if our response has adequately addressed your concerns or if further clarification is needed. We appreciate your time and invaluable feedback in improving our research!
Best,
Authors
---
Rebuttal 5:
Title: Have our responses adequately addressed your concerns?
Comment: Dear Reviewer XeYm,
We have provided detailed responses to your reviews. Considering that the deadline for the discussion phase is approaching, we would like to know if our responses have adequately addressed your concerns. If you have any additional concerns or questions, we are more than happy to engage in further discussion. Thank you again for your time and effort in reviewing our manuscript. Your feedback has been instrumental in improving our research!
Best,
Authors
---
Rebuttal 6:
Title: Have our responses addressed your concerns to your satisfaction?
Comment: Dear Reviewer XeYm,
We would like to express our sincere gratitude for dedicating your time to reviewing our paper. **Your suggestions help standardize the mathematical notations in our manuscript and avoid potential misunderstandings.**
We have thoroughly considered your feedback and carefully responded to each of your questions. We would greatly appreciate your feedback on whether our responses have addressed your concerns to your satisfaction.
Once again, we sincerely thank you for your invaluable contribution to our paper. As the deadline is approaching, we eagerly await your post-rebuttal feedback.
Best regards,
Authors. | null | null | null | null |
Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents | Accept (poster) | Summary: The paper design a generative simulation platform for strategic interactions and cooperative decision-making in LLM agents. Three similar economics scenarios where all agents exploit a common pool resource with sustaining it for the future are tested. They design a LLM-based agent architecture and test them on open and clossed LLMs. Extensive evaluations includes the benchmarking, norm rebustness given a new greedy newcomer, the sustainability improvement from universalization reasoning, ablation of communication and analysis of agent dialogues.
Strengths: 1. Well-written, easy to understand by a reader out of this field.
2. propose a new multi-agent platform to test LLM agents on sustainable behavior in three simplified economic scenarios.
3. Evaluations are comprehensive and insightful.
Weaknesses: Are these three scenarios too similar? they seems to be one task but with different name. Can you test on more settings with a shared common resources?
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 1
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work commenting that it is “well-written, easy to understand”, “propose a new multi-agent platform”, and “evaluations are comprehensive and insightful.”
Below we address your one comment raised in the review:
> Are these three scenarios too similar? they seems to be one task but with different name. Can you test on more settings with a shared common resources?
In our study, we need to have both similarity and variation between tasks. The shared common underlying structure behind the three scenarios enable us to aggregate the analyses and show sustainability results across different instances. However, the variations enable us to ensure robustness such that our results are general to any particular set of prompts and situations. Therefore we must balance diversity with interpretability. The above three variations were inspired by prominent examples from the economics literature to illustrate the generalizability of the same underlying phenomena.
To explain existing diversity more directly we can think of two dimensions of variation in the simulations reported.
Dimensions 1) Our three scenarios differ in their framing and the number of quantities the agents must reason about:
- In the Fishing scenario, agents only need to consider one variable: the number of fish.
- In the Sheep Grazing scenario, agents need to consider two variables: the number of sheep and available grass.
- The River Pollution scenario also requires two variables: widget production and its impact on pollution levels.
While still maintaining the same underlying causal model of resource extraction and replication, these differences created differences in LLM performance as can be seen in Table 1. The sustainability/gain/efficiency scores differ across the three scenarios (although the overall trends remain systematic).
Dimension 2) Perturbations to the base simulation results in more complex dynamics that the LLM based agents must face. These reflect more substantive variation in the causal dynamics.
1. The “Newcomer” perturbation scenario (Section 3.3) checks whether LLM agents are robust against newcomers with a different behavior from the existing agents who have established a norm.
2. The “Absence of Communication” scenario (Section 3.5) tests how cooperation decays if agents cannot communicate in an open ended fashion with each other.
3. The “Universalization Reasoning” scenario (Section 3.4) probes whether making the long-term group outcome more salient improves cooperative behavior.
**We will update Section 5 (Future Work) to more explicitly describe additional ways in which the scenarios could be varied to more deeply study LLM cooperative decision making and make clear why we chose the three scenarios studied in this work.**
Please do not hesitate to let us know if you have further questions.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
Thank you again for kindly reviewing our paper. We have provided a detailed response to your question, and highlighted the variety in our experiments. Should our response address your concerns, would you considering raising your score, perhaps to a clear acceptance?
Or if you have further questions, we would be more than happy to provide additional details and engage in the discussion! Thank you in advance for your kind consideration. | Summary: The paper introduces a generative simulation platform to investigate the dynamics of resource sharing among multiple large language model (LLM) agents. Specifically the authors construct a common pool resource problem where the classic social science problem, tragedy of the commons, can be demonstrated. Authors show that most LLM agents fail to achieve a sustainable equilibrium. The paper also highlights that "Universalization"-based reasoning significantly enhances sustainability.
Strengths: * The introduction of GovSim provides a novel platform to study the cooperative behavior of LLM agents in resource-sharing scenarios. The design bridges the development of LLM-agents with classic social science theories, offering a unique perspective into developing rational LLMs.
* The paper offers thorough analyses and a variety of experiments. Experiments are conducted over most state-of-the-art LLMs. I particularly like the introduction of "newcomers" which consolidates the experiments.
* The evaluation metrics proposed are well-motivated and theoretically grounded.
Weaknesses: * While the GovSim environment is novel in its exploration of social science concepts, it is somewhat limited in its ability to accurately represent and visualize the complexity of actual agents' behavior. A platform where agents' actions can be grounded would further enhance the argument
* The paper would be strengthened if the authors incorporated human participants in the study, allowing LLMs to coordinate with humans in real-time. This would provide valuable insights into how AI agents interact and cooperate with human decision-makers.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Could the authors provide insights into how the platform could be further scaled up to handle more complex scenarios and larger agent populations?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A, see weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work, especially your comments acknowledging that our GovSim “provides a novel platform,” “bridges the development of LLM-agents with classic social science theories,” has “a unique perspective,” “offers thorough analyses and a variety of experiments”, and has
“well-motivated and theoretically grounded” “evaluation metrics.”
Below we provide some additional information to your comments:
> While the GovSim environment is novel in its exploration of social science concepts, it is somewhat limited in its ability to accurately represent and visualize the complexity of actual agents' behavior. A platform where agents' actions can be grounded would further enhance the argument
In this first version the main grounding comes for the decision to extract resources from the environment that interacts in a causal way with the simulation environment. As such we are able to provide graphs of resource levels over time and statistics on agent actions and their distributional properties. **We also provide in our uploaded code package (to be open-sourced upon acceptance) an interactive dashboard to visualize the agent interactions with the environment** (see Figure 7, Appendix B). Our Appendix B provides a detailed technical description of our setup, and B.3 makes an effort to build the web interface to ground the agent actions.
> The paper would be strengthened if the authors incorporated human participants in the study, allowing LLMs to coordinate with humans in real-time. This would provide valuable insights into how AI agents interact and cooperate with human decision-makers.
Thank you for the excellent suggestion for future work. Incorporating human participants will provide valuable insights into human-AI cooperation in these scenarios. Potential benefits include understanding how humans interpret and respond to LLM communication, identifying areas where LLMs excel or struggle compared to humans, and exploring the emergence of human-AI cooperative norms. We believe that there is enough new work to explore human-AI cooperation in a manuscript focused on those big questions. **In the camera ready version of the manuscript we will add the following text: “Using the GovSim platform, a promising next step is to incorporate humans into the simulation. These human-AI interactions will challenge LLM based agents to cooperate with humans using open ended communication and we can see whether the norms that develop are either more or less effective than those created by LLMs alone.”**
> Could the authors provide insights into how the platform could be further scaled up to handle more complex scenarios and larger agent populations?
Great question! **We will add the following to the future work section to engage directly with the question of scaling in Section 5.** To scale up GovSim, we would like to explore:
- _A larger agent population:_ Our current simulation can easily generalize to more agents and a diversity of types. Adding agents will increase the simulation runtime, as each agent needs to condition their own behavior and dialogue on the other agents' actions and dialogs. Perhaps fine-tuned smaller LLMs can act as efficient simulators in this context without a loss in performance.
- _Coordinated adaptation:_ People can flexibly adapt to sudden changes in game dynamics. For example, when the resource suddenly shrinks (a temporary shock) or changes in the reproduction rate require agents to rapidly adjust their cooperative norms in a coordinated way. GovSim enables these kinds of experiments as the simulation environment is modular such that resource dynamics, agents, and other elements are easily changeable for different runs of the simulation.
- _Challenging tradeoffs:_ We are also interested in understanding exceptions to norms. For instance, one agent may need to handle a one-off choice of serious personal harm and group sustainability, e.g., one agent will experience negative welfare unless they take more resources than allowed by the emergent cooperative norm – do other agents adapt and allow for such one-offs?
Thank you again for the constructive comments!
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I will keep my current score. | Summary: This paper proposes GOVSIM, a simulation platform for studying cooperative decision-making in Large Language Model (LLM) agents. The authors test various LLMs in three resource-sharing scenarios, finding that only a few instances (2 out of 45) achieve sustainable outcomes. They demonstrate that communication between agents is crucial for cooperation, with negotiation being the primary form of interaction. The study also conducts a robustness test of the effect of introducing a greedy newcomer into an established norm. The authors provide insights into improving agent performance through universalization reasoning and offer a comprehensive analysis of the factors contributing to sustainable cooperation in AI systems.
Strengths: - This paper is clearly written and easy to follow.
- Discussions about the cooperative abilities of agents and potential safety issues are both interesting and important. Additionally, the scenarios used in this article to discuss these issues are very engaging.
- Experiments on different models are adequate.
Weaknesses: 1. I regard the primary flaw of this article as it does not test the performance of the GPT-4 model. If my understanding is correct, Table 3 in Appendix D indicates that the paper uses GPT-4 Turbo (as the GPT-4 in the paper). GPT-4-turbo and GPT-4 are entirely different models, and many existing studies suggest that GPT-4 performs better on certain tasks. Additional testing using the GPT-4 model could confirm whether LLMs have the ability to cooperate. Note that recent work has not only found that some LLMs possess cooperative abilities but also exhibit spontaneous cooperation.
2. The discussion on robustness testing is inadequate, failing to explain why the system can be stable in certain circumstances. In the experiment, instead of introducing a large disturbance by the "LLM newcomer" in just one round, it would be more effective to introduce disturbances in different rounds to see if other LLM agents can adapt to these disturbances and maintain environmental sustainability by reducing their share in certain rounds where disturbances take place.
3. The paper claims that the three proposed scenarios have similar mathematical structures, but the performance of the third scenario (River Pollution) is significantly worse than the other two. Although the article attributes this to the need to consider more factors, there is no data analysis or log analysis to support this claim. This weakens the generality of the research.
4. The paper does not demonstrate the generalizability of its conclusions through prompt sensitivity analysis (e.g., paraphrasing). We cannot be sure whether specific prompts influenced the simulation results or if there is any data leakage involved.
5. The research lacks clear contributions. While I believe that discussing the cooperative modes of agents is very meaningful, the article does not profoundly demonstrate how to enhance cooperation or explain why cooperation cannot be achieved. Many existing studies have already discussed how agents' communication abilities can enhance cooperation, which is not a distinct contribution point.
Technical Quality: 3
Clarity: 4
Questions for Authors: In addition to the issues raised above, the author may further explore the following questions:
1. Lack of explanation and discussion on the performance differences between models. Although many experiments demonstrate the differences between models, there are no tests attempting to explain how they differ. The author could address this issue by showing the differences in dialogue and actual behavior between models, even if only qualitatively.
2. Similar to question 1, the interpretation of experimental results could be richer. For example, the author's interpretation of Figures 5c and 5d only explains why some models that can understand the beliefs and numerical implications of other agents perform well, but does not explain why llama3-70b performs poorly, even though it has relatively high accuracy in Figure 5c. A more profound explanation could help us understand the sources of performance differences between models and the underlying reasons for agents to cooperate.
3. Ablation experiments could be added. For instance, instead of having the agent remember only the key information for planning, providing all information for the round could be tested to see if the agent's performance improves. There are many summarizations and simplifications in the study, which might potentially impact the decision-making performance of the models.
Minor issues:
- P.2 Line 72: pertubation -> perturbation?
- P.4 Line 138: The sudden appearance of the word "policy" is very abrupt in reading.
- P.34 There appears to be a duplication of GPT-4 in Table 20.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive feedback on our work.
## Addressing Weaknesses
> I regard the primary flaw of this article as it does not test the performance of GPT-4
Although some studies suggested enhanced performance of GPT-4, whose latest version was released on 13 Jun 2023, this is no longer the case with the recent GTP-4-Turbo model released on 09 Apr 2024 and of GPT-4o on 13 May 2024. According to OpenAI's evals, these two models exceed the capabilities of the original GPT-4 (see openai/simple-evals benchmark results). Moreover, both GPT-4-Turbo and GPT4o have a higher rank on LmSys's Arena. **We conducted two actions: (1) updated the model names in the paper as GPT-4-Turbo to avoid any confusion, and (2) added new results for the original GPT4 in our rebuttal PDF, which confirms its less cooperative skills than the other two.**
> recent work has not only found that some LLMs possess cooperative abilities but also exhibit spontaneous cooperation
We agree that this is a potentially confusing result. Recent work on cooperation with LLMs (see related work section) has mostly been limited to simple social dilemma (e.g., prisoners dilemma) and other matrix-form games played by two players without communication. In contrast, we study a common pool resource problem played with 5 agents over 12 iterations interleaved with open-ended dialogue. To effectively reason how to sustain the shared resource into the future remains a significant challenge as we demonstrate empirically. This is not inconsistent with prior work that shows that LLM based agents have a proclivity to cooperate. The complexity of GovSim and the shift from prompting LLMs to generative agents are both novel features to our understanding of cooperation in LLM agents. **We have updated the related work section to draw this contrast more clearly.**
We hope clarification of the above two points can resolve your main concern of our work.
> it would be more effective to introduce disturbances in different rounds
In the newcomer scenario, the round number is not a key variable, since a shared trend is that once the agents reach a stable equilibrium, the stability remains in the rest of the rounds. To explain further how models reach stability, we inspect their reasoning steps. We find that, during the discussions, once the agents agree on an upper limit, this is then kept and the agents conform to it in multiple rounds. This also explains why agents (in Figure 3b) do not recover to the max-efficiency resource consumption, but keep a low consumption after the disturbance.
> Many existing studies [...], which is not a distinct contribution point.
While open-ended communication in the context of cooperation has been studied in behavioral economics and social psychology, we believe we are the first (or among the first) to study the role of open-ended communication in aiding LLMs to solve common pool resource problems. **We’d be happy to add to the related work section if we miss important citations.**
Furthermore, characterizing the role of communication was just one of our contributions. In addition to our communication ablation study we also emphasize the following key interdisciplinary novelties:
- Long-term cooperation for sustainability: We are the first to combine the idea in Governing the Commons with LLMs, i.e., whether LLMs can reliably sustain cooperation over many time periods with complex dynamics.
- Two-player vs. Multi-player interactions: Most existing studies focus on two-player cooperation, e.g., prisoner’s dilemma (e.g., in GTBench), whereas our work involves up to five multiple agents and the resulting complex dynamics, leading to more complex group behaviors. It only takes one non-cooperative player to harm the entire group and so with five players, sustainable cooperation is more fragile and requires more robust agents.
- Universalization: We are the first to study how Universalization (a cognitive model of moral thinking) impacts LLM behavior. Inspired by work on human subjects (Levine et al. 2022), giving LLMs the ability to Universalize increases LLM survival time by 4 months.
## Addressing Questions
> The author could address this issue by showing the differences in dialogue and actual behavior between models, even if only qualitatively.
We appreciate your feedback. We have conducted several quantitative analyses to compare the differences of models on sub-skills and the breakdown of their dialogs:
- Section 3.6 Analysis of Agent Dialogues: We quantitatively analyze the conversations produced by the LLM during the discussion phase. Figure 4b shows that GPT-4o has the largest portion of negotiation-related discussions, which may explain why it is the best at sustainability in most evaluations. **As the reviewer suggests, we provide qualitative examples of such dialogs in Appendix G and will add more examples for a selection of the LLMs in the camera ready.**
- Section 3.7 Subskill Analysis & Appendix F.2: We investigate several sub-skills to explain their overall sustainability results. For example, Figure 5c and 5d confirm that only GPT-4o, GPT-4-turbo, and Claude-3 Opus can formulate beliefs about other agents independently and calculate their numerical implications. This explains their higher sustainability rates (Pearson correlation of 0.83 for test case d). See Lines 266-284 for more analyses, and more supplementary figures for each scenario in F.2.
> the interpretation of experimental results could be richer. [...] does not explain why llama3-70b performs poorly
For a model to perform well in GovSim, it requires all of the underlying sub-skills. As in Figure 5a & 5b, Llama3-70B performs poorly on a subset of these sub-skills, e.g., failing to handle prompts like “If each fisherman catches M tons, how many tons of fish will there be next month?”. **In the camera ready, we will update Figure 5 with our rebuttal PDF's Figure 1 to better highlight this reasoning.**
---
Rebuttal Comment 1.1:
Comment: > prompt sensitivity analysis
To address this worry, we developed two paraphrased versions of the fishing scenario prompt, which were 61% and 81% different from the original, respectively. We tested these paraphrased prompts using GPT-4-Turbo on 5 seeds, achieving average survival times of 9 months and 12 months. Aggregating these results with our original setup yields an average survival time of 11 months across the variants. This analysis demonstrates that while there is some heterogeneity in performance across prompt variations, our core findings remain robust.
Our intention in developing the three scenarios (lake, pasture, pollution) was to better characterize variability and robustness of LLM behavior. Average across 5 seeds for each run – while we observe some heterogeneity – our results are robust.
> any data leakage involved.
We don’t believe data leakage is possible as all scenarios are developed specifically for this project. All prompts are available in Appendix C for inspection and are not otherwise available on the internet in any form.
Thank you for the insightful feedback. We hope that if we have sufficiently addressed your concerns you will consider raising your score. If any outstanding questions remain, please let us know and we will promptly respond.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you again for kindly reviewing our paper. We have read your reviews very seriously, conducted additional experiments, and clarified confusion points. Since the deadline of the discussion period is approaching in 2 days, could you let us know if our answers and additional experiments help address your original concerns?
We really cherish your feedback, and would be more than happy to provide additional details per your request! | Summary: This paper presents GOVernance of the Commons SIMulation (GOVSIM), a generative simulation platform to study strategic interactions and cooperative decision-making among large language model (LLM) agents. The authors investigate sustainable resource sharing in a society of AI agents using different LLMs to determine their ability to achieve cooperative outcomes. The study finds that most LLMs fail to maintain sustainable cooperation, largely due to communication deficits and the inability to consider long-term consequences. The paper also introduces "Universalization"-based reasoning, which significantly improves the sustainability of the agents' actions.
Strengths: **Novelty:** The introduction of GOVSIM as a platform for studying cooperative behavior in LLMs is innovative and addresses a critical gap in the literature on AI safety and multi-agent interactions.
**Comprehensive Analysis:** The paper thoroughly evaluates different LLMs across multiple resource-sharing scenarios, providing a broad view of their cooperative capabilities.
**Open Source Contribution:** The authors promise to provide the full suite of their research results, including the simulation environment, agent prompts, and a web interface, which can foster further research and development in this field.
**Ethical Considerations:** The study integrates ethical reasoning (Universalization) into the agents' decision-making process, demonstrating a forward-thinking approach to embedding moral principles in AI behavior.
Weaknesses: **Limited Scenario Complexity:** The resource-sharing scenarios in GOVSIM are relatively simplistic and may not capture the full range of complexities found in real-world resource management.
**Generalizability:** The findings might not generalize well to more complex or heterogeneous environments, especially those involving mixed human-AI interactions.
**Communication Limitations:** The study highlights the importance of communication but does not provide a detailed analysis of how different communication strategies or protocols might improve cooperative outcomes.
**Over-reliance on LLMs:** The study assumes that current LLMs can approximate human-like strategic reasoning and negotiation skills, which might be an overestimation of their current capabilities.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How does the performance of LLM agents in GOVSIM compare to human performance in similar resource-sharing scenarios?
- What specific communication strategies or enhancements could be implemented to improve cooperative outcomes among LLM agents?
- How would the introduction of more complex, real-world variables (e.g., variable resource regeneration rates, multiple resource types) impact the agents' ability to cooperate sustainably?
- What measures can be taken to improve the generalizability of the findings to more diverse and heterogeneous environments?
- How do different LLMs handle the introduction of multiple adversarial agents or more sophisticated strategic manipulations?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper introduces a novel and valuable simulation platform for studying cooperative behavior among LLMs, contributing to the field of AI safety and multi-agent systems. However, the scenarios presented are a bit simplistic to provide meaningful insights into real-world applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful review and your recognition of its four strengths in terms of novelty, comprehensive analysis, open-source contribution, and ethical considerations.
We aim to address your concerns and demonstrate the robustness and impact of this research.
## Addressing Weaknesses
*Re “Limited Scenario Complexity”:* While the scenarios in GovSim are simplified to some extent, the complex open-ended nature of our simulation is a significant step towards realism compared to the highly simplified paradigms leveraged from behavioral game theory which have largely been the focus of prior work. Furthermore, while more complex variants are possible, our goal is to first establish a framework that can serve as a foundation that can be flexibly extended by ourselves and others in the community. The design choices made balance complexity and interpretability as simpler scenarios allow us to study cooperative principles with greater systematicity. Moreover, our current scenarios and dynamics already present significant challenges for current LLMs. **We have added a discussion of these considerations to the manuscript.**
*Re “Generalizability”:* Because our framework is open-ended by design, it is not possible to study the full diversity and heterogeneity of settings that GovSim can support in a single paper. **To address this gap we have added an additional discussion on future work (as in Section 5) to incorporate more complex variables (such as variable regeneration rates and multiple resource types) and discuss more heterogeneous agent pairings (heterogenous mixtures of weak and strong LLMs, and mixed human-AI interactions).** Since GovSim is open-source it will enable researchers to contribute additional scenarios and environments, enhancing the generalizability.
*Re “Communication Limitations”:* The idea that AI can coordinate cooperation through open-ended communication is a key novelty of the GovSim environment. We show that ablating communication causes a significant reduction in sustainability and that the most effective models mostly use the communication period to negotiate and persuade (Figure 4). **We have improved the presentation of these results with an update of Figure 4, which is shown in our rebuttal PDF as Figure 2.** We hope to study a larger set of protocols (peer punishment, voting, binding agreements, and coalition formation), or even allowing private communications between subsets of agents in future work.
*Re “Over-reliance on LLMs”*: We have edited the text to remove these assumptions. We agree with the reviewer that while LLMs are a powerful new technology they do not possess human-like strategic reasoning and negotiation skills in many cases. **See new data that includes a single-turn comparison with the human subjects (below)**.
## Addressing Questions
> How does the performance of LLM agents in GOVSIM compare to human performance?
We have conducted preliminary single-turn comparisons with human subjects in the fishing scenario, following a setup similar to Figure 5b. While there is significant variation in both human and LLM performance the top performing LLMs (e.g., GPT-4-turbo, GPT4o) exceed the human performance in sustainable resource management (66% vs. 22% respectively). Interestingly, both human participants and LLMs benefit greatly from the Universalization prompting scheme (99% vs. 66%).
> What specific communication strategies or enhancements could be implemented to improve cooperative outcomes among LLM agents?
See response to “Communication Limitations” above.
> How would the introduction of more complex, real-world variables (e.g., variable resource regeneration rates, multiple resource types) impact the agents' ability to cooperate sustainably?
We anticipate that these variables will make sustainable cooperation more challenging for agents, requiring better negotiation skills and more sophisticated long-term planning and reasoning. For instance, with multiple resource types, agents would need to balance their preferences and negotiate trade-offs. Variable regeneration rates would require adaptive strategies and potentially more frequent communication. These complexities would test the LLMs' ability to reason about interconnected systems and make increasingly complex decisions under uncertainty. As LLM capabilities improve, GovSim can be flexibly extended with these challenges. Based on our current results, LLM capabilities are not yet sufficient to handle such scenarios effectively. Our experiments already reveal limitations in long-term planning and multi-variable reasoning for many models.
> What measures can be taken to improve the generalizability of the findings to more diverse and heterogeneous environments?
See response to “Generalizability” above.
> How do different LLMs handle the introduction of multiple adversarial agents or more sophisticated strategic manipulations?
We found that even a single aggressive agent significantly disrupted cooperation leading to lower equality scores (98.05 in the default setting to 85.78 with the newcomer). More sophisticated or multiple adversaries, especially those using deceptive strategies, are likely to lead to an even greater reduction in cooperation. How LLMs cope with these agents can be flexibly studied within the GovSim framework.
We hope that our responses and updates to the text address your concerns and demonstrate the potential for the broader impact of our research. Please follow up with any remaining questions or if our responses have been insufficient in any way. If you believe we have sufficiently addressed your requests we kindly request you to reconsider your score.
---
Rebuttal Comment 1.1:
Title: Encouraging Discussions
Comment: Dear Reviewer,
Thank you again for your efforts in reviewing our work. Should you have a moment, could you read over our response? We would be happy to address if you have further questions! Please don't hesitate to let us know if our answers look good to you. | Rebuttal 1:
Rebuttal: Firstly, we would like to thank all reviewers for the valuable feedback. Three out of four reviewers recommended acceptance (with ratings of 7, 6, and 6) and we believe we have addressed the key concerns of Reviewer 2 directly. We are very encouraged by the large number and diversity of positive comments:
1. **Novelty**: The introduction of GovSim as a platform for studying cooperative behavior in LLMs is recognized as innovative and addressing a critical gap in the literature on AI safety and multi-agent interactions (Reviewer 9ZQN: “GovSim as a platform” “is innovative and addresses a critical gap”).
2. **Comprehensive Experiments**: The experiments on different models are acknowledged as adequate and insightful (Reviewer KPE1: “Experiments on different models are adequate”). The extensive evaluations, including benchmarking, norm robustness testing, and sustainability improvements, are also praised (Reviewer dG5Q: “evaluations are comprehensive and insightful”).
3. **Insightful Analysis**: Our work thoroughly evaluates different LLMs across multiple resource-sharing scenarios, providing a broad view of their cooperative capabilities (Reviewer 9ZQN: “The paper thoroughly evaluates different LLMs across multiple resource-sharing scenarios” and provides “a broad view”). Additionally, the scenarios used in this article are noted for being very engaging (Reviewer KPE1: “the scenarios” “are very engaging”).
4. **Method innovation of model’s ethical behavior**: The integration of ethical reasoning (Universalization) into the agents' decision-making process demonstrates a forward-thinking approach to embedding moral principles in AI behavior (Reviewer 9ZQN: “integrates ethical reasoning”, “demonstrating a forward-thinking approach”).
5. **Open Source Contribution**: Our commitment to providing the full suite of our research results, including the simulation environment, agent prompts, and a web interface, is highlighted as a strength that can foster further research and development in this field (Reviewer 9ZQN: “can foster further research and development in this field”).
6. **Well-Written and Accessible**: Our paper is noted for being clearly written and easy to follow, making it accessible to a broad audience (Reviewer KPE1: “clearly written and easy to follow”).
The main requests and critical feedback raised by reviewers include running our results on more models (the original GPT-4 model), conducting prompt sensitivity analysis, addressing potential future extensions, and explaining the varied settings we tested on. Our rebuttal addresses these concerns comprehensively: (1) As requested by Reviewer 2, we tested the performance with the original GPT-4 (0613) and report that it does not surpass GPT-4-Turbo and GPT-4o (see rebuttal PDF) or change the results of our study. We have provided new graphical analyses demonstrating the stability of performance and consistency across the different scenarios, and will conduct a paraphrasing test to ensure prompt robustness. Following the suggestion of Reviewer 1, we have compared human performance with LLM performance on the GovSim task. (2) As discussed in the individual replies, we will include in the camera ready (with the additional page allowed) a richer discussion of the scalability and adaptability of GovSim: including more complex, real-world variables and human-AI interaction tests. (3) We also used reviewer feedback to better reference some results and data analyses from the Appendix that can help answer some of the outstanding questions (e.g., variations in our tests, subskill analysis, and dialog analysis to explain different model behaviors).
We believe our manuscript presents a contribution to the field of AI safety and cooperative multi-agent systems by introducing the GovSim platform, which bridges the development of LLM-agents with classic social science theories and offers extensive evaluations across state-of-the-art models. Our simulators are easy to use and open source and we expect that researchers will continue to test LLM performance on our platform.
Pdf: /pdf/68b7fc7e859657a7f6e9010788aa81d128ac6252.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mixture of Tokens: Continuous MoE through Cross-Example Aggregation | Accept (poster) | Summary: This paper proposes a new MoE architecture called Mixture of Tokens (MoT). The motivation for this architecture is twofold: first it is the training instabilities incurred (among other things) by low precision training in standard sparse MoEs; secondly, it is the discontinuous nature of sparse MoEs that makes them hard to train. This work seem to be highly inspired by [1].
In contrast to MoEs, which select top-k subset of experts for each token, MoT aggregates tokens within groups created by subsetting examples across the batch dimension. The cross-example token aggregation weights are produced by a router conditioned on the tokens in the corresponding group. The aggregated group representations are then passed through all the experts. The output token representation is produced by a weighted average of all experts' outputs for a group, where the weights are the original aggregation weights.
The paper measures validation perplexity of MoT comparing it to compute matched dense transformer and sparse MoEs demonstrating its superiority over the prior and on par performance with the later. MoT shows better training stability in low precision training regime as compared to sparse MoEs.
[1] Puigcerver, Joan, et al. "From sparse to soft mixtures of experts." arXiv preprint arXiv:2308.00951 (2023).
Strengths: Overall, the paper proposes in interesting and novel extension of soft MoE to autoregressive problems. The idea of mixing tokens across examples in the batch is very interesting, given that it does not break the autoregressive property.
Originality: good. The idea of soft MoEs is not new, but its adoption to autoregressive problems is novel to the best of my knowledge.
Quality: The claims seem to be supported by evidences, yet experimental evaluation might be insufficient (see questions/weaknesses section).
Clarity: the paper is well organized. In terms of writing, several formulations might need to revisited (see weaknesses/questions).
Significance: the paper makes a moderate contribution to the field and can potentially impacting future research.
Weaknesses: One weakness of this paper is a lack of intuitive explanation of why mixing across examples makes sense? I understand that it results in more stable training due to lack of discreet operations, but why does it make sense from the modelling perspective?
If sentences in the batches are consecutive interrelated text passages, then MoT effectively increases the context length by letting the model to also rely on cross-example information. Can this have an effect on generalization ability? e.g. in instruction fine-tuning, examples in the batches are usually not related to each other, hence MoT type of pre-training may be detrimental for downstream task adaptation of these models, where downstream tasks examples are completely unrelated.
Therefore, another potential weakness for me is a lack of generalization/downstream adaptation analysis. Does the training stability advantage of MoT improves generalization and adaptability of the model to downstream tasks? Or does cross-example information sharing actually weakens downstream task generalization?
Technical Quality: 3
Clarity: 3
Questions for Authors: - ll. 52 - 61: I am uncertain if points 2 and 4 qualify as contributions of this work. These appear to be properties of the proposed MoT method rather than contributions in themselves.
- ll. 55-57: If I understand correctly, the key comparison should be with other sparse MoE methods rather than dense transformers. Therefore, why emphasize a speedup over dense transformers as a contribution when the more relevant comparison is the speedup over other sparse MoE architectures?
- l. 104: "Mixture of Experts, introduced into deep learning by [24],..." does not seem to be correct. MoEs have a much longer history, please consider works [2,3];
- l. 190: why half of feed-forward layers are replaced with MoT layers?
- ll. 167 - 169: an important difference to [15] that might be mentioned explicitly here, is that MoT aggregates tokens across examples in a batch, while [15] aggregates patches within each example.
- caption Figure 6: if I am not mistaken, MoT here not only matches but outperforms best sMoE
- since one of the main advantages of the proposed MoT is that it does not require additional load balancing tricks etc., it would be interesting to include a empirical comparison (or at least a discussion) to existing sparse MoEs that also do not require such tricks, see e.g. [4]
- how are examples in the batches created? Are these consecutive interrelated text passages from the same underlying text?
- MoT's weighted token grouping is reminiscent of an attention operation, but across examples in a batch. Is there any intuition for why aggregating information across examples can be useful? One potential way it can be useful, is that if sentences in the batches are related (e.g. maybe these are consecutive sentences from the underlying training text) then of course incorporating information from previous/future examples can help reducing loss (training and validation), since the examples (samples in a batch) are interrelated.
Overall, I believe this paper has significant potential. However, I have concerns regarding downstream task generalization and the relationship between the examples in the training batches. I am willing to reconsider my rating upon rebuttal.
[2] Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79–87, 1991b.
[3] Robert A Jacobs, Michael I Jordan, and Andrew G Barto. Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks. Cognitive science, 15(2):219–250, 1991a
[4] Csordás, Róbert, Kazuki Irie, and Jürgen Schmidhuber. "Approximating two-layer feedforward networks for efficient transformers." arXiv preprint arXiv:2310.10837 (2023).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are addressed in Sec. 5.
I think one missing limitation of this work is the fact that no downstream task generalization is evaluated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their questions and suggestions and appreciate the recognition of our paper's strengths. Below, we address the weaknesses and questions mentioned in the review. If our answers address the reviewer's concerns, we would like to kindly ask for the reconsideration of the rating.
**Regarding previous related work.** We would like to refer to our reply in the general comment.
## Regarding weaknesses:
**W1:** We agree that, apart from experimental results, an intuition behind mixing tokens across examples should be provided. We will add such an explanation to the camera-ready version.
As we mix tokens from multiple unrelated sequences, we do not expect the model to meaningfully use the information from one sequence to improve prediction in a different sequence. However, we argue that such mixing (1) provides richer feedback (gradients) to train the model, especially the router, and (2) provides a smoother loss landscape, resistant to small perturbations in inputs/weights.
Intuitively, for a given expert, **from the perspective of each token**, the token gets some amount of update to its representation (in the residual stream) based on:
1. Itself, which produces a proper signal expected to improve the token representation.
2. Tokens other than itself, which are essentially random tokens from unrelated sequences. As those sequences are randomly sampled from the dataset, the impact from those tokens will point in random directions and, essentially, just add some amount of noise to the token update.
We generally expect neural networks to be resistant to some amount of noise added to them (e.g. dropout doesn't hurt). Moreover, while the signal-to-noise ratio worsens for tokens with low expert weight, the expert weight also modulates the magnitude of the update. Therefore, the amount of noise added to the representation is limited.
We stipulate that MoT experts will learn to focus on a single token or a small number of tokens - minimizing the noise and approximating sparse MoE when optimal. Still, other tokens will be assigned nonzero weight, enabling some information to flow to the router for each and every token-expert pair (in contrast to sparse MoE). Moreover, the output of MoT is more continuous, with small perturbations of input/weights corresponding to small changes in the output instead of large discrete jumps with sparse MoE.
**W2:** During the training and evaluation, we essentially put random, unrelated examples into the same group. For reasons explained in **W1** not much cross-example information transfer occurs. Therefore, it does not have any effect on generalization ability.
**W3:** We believe that comparing perplexity generally predicts model improvements more reliably, especially when trying to predict how particular changes to the model architecture will impact extremely large-scale models. However, during the short rebuttal period, we measured performance on a few benchmarks relevant at this model scale, comparing MoT-Medium to Transformer-Medium, without fine-tuning, zero-shot. In these evaluations, for MoT, just a single evaluation query is put into a batch of 32, with the rest of the batch being comprised of random sequences from the C4 training dataset - ensuring this is still zero-shot.
| Metric | Dense Transformer | MoT/32E/1 | MoT/32E/16 |
| ------ | ----------------- | ----------------- | ------- |
| PIQA | 60.2 | 62.4 | **65.8** |
| HellaSwag | 27.3 | 31.1 | **33.3** |
| ARC-e | 35.5 | 37.3 | **39.6** |
## Regarding questions:
**Q1:** After some consideration, we agree with the reviewer, and we will move the results/properties of MoT itself (in particular, speed-up against the dense model and improved stability compared to MoE) from the contributions bullet points to the main introduction part.
**Q2:** We think it is important to compare MoT to both kinds of models—dense models, where the main benefit we show is a significant speedup (point 2), and MoE models, where the main benefit we focus on is increased stability of the training (point 4). We will make those points clearer while moving them into the main text.
**Q3:** Thank you for bringing this to our attention. This is an oversight on our part. While we already cite [5] (in line 75), we will change the citation in line 104 as well (to both [5] and [6]), to ensure proper credit is given.
**Q4:** In the literature on MoE (e.g. [1][7][8]), it is quite standard for MoE layers to replace only half of FFN layers - this keeps the majority of MoE advantages while requiring significantly fewer total parameters.
**Q5:** We will revise this paragraph to highlight the differences and similarities between SoftMoE and MoT more clearly, as suggested.
**Q6:** While MoT indeed outperforms sparse MoE here, the differences between the compared approaches are relatively minor. Given these results, we think it would be an exaggeration to claim that our method outperformed the others. Instead, we focus on showing other advantages of MoT, like increased training stability.
**Q7:** Please note that one of the methods we compare our approach to is the expert-choice variant of MoE that operates without load-balancing. However, it employs a sparse router, which requires higher precision and is not fully differentiable, leading to unstable training. By replacing the sparse router with our fully differentiable counterpart, we achieve stable training and enable training entirely in lower precision.
**Q8:** Training batches are created in the same manner as for regular training. We sample random passages from the training dataset until the desired number of examples in the batch is reached. Consequently, it is unlikely that any two passages within a batch are related.
**Q9:** See our explanation of intuition behind MoT in the previous section **W1** of this response.
---
Rebuttal Comment 1.1:
Comment: I want to thank the authors for their elaborate replies! I will increase my score.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's engagement and we are thankful for the update of the score. | Summary: In this paper, the authors propose a novel method called Mixture of Tokens (MOT), an expert-based architecture that addresses the drawbacks of existing Mixture of Experts (MoE) approaches, such as reduced training efficiency, instability, and the necessity of using load balancing losses and elaborate training schedules. These issues are often caused by the top-k operation used to select the active experts. The authors' approach combines tokens into groups (subsets of several tokens) and uses a weighted sum of tokens within these groups as input to each expert, with each expert using its own weights for the sum. This allows each token within the group to be processed by each expert, resolving the issues associated with top-k routing. The authors validate their method through extensive experiments on an autoregressive language modeling task, demonstrating significant improvements in training stability, reduced training costs (compute budget), and improved scaling with regard to model size.
Strengths: * The paper is clearly written and easy to follow. The idea is intuitive and easy to grasp. The related work section provides an adequate discussion of existing approaches (MoE and variations), highlights their problems, and explains the way to resolve them
* Based on the weaknesses of existing MoE approaches, the authors develop an efficient MoT approach that is relatively simple to integrate into existing architectures, does not increase computational cost, and provides important advantages such as more stable training and better convergence
* The authors provided an extensive evaluation of their approach, showing a 3x training speedup compared to common MoE architectures and improved performance in several important setups (such as low-precision training). Therefore, I find the contribution and value of the proposed approach to be clear and evident.
Weaknesses: * One of the weaknesses the authors pointed out in the limitations section is the necessity to use batched inference for the approach, which could limit the applicability scope of the method. While there are still important applications for batched inference, I believe that resolving this issue could significantly increase the influence of the method
* From the perspective of the experimental evaluation, I would be curious to see evidence that the behavior demonstrated in the paper would hold in other domains, such as images. Additionally, it seems that the proposed approach could resemble the method in [1] a lot.
* In terms of experiments, it would also be interesting to see comparisons with other MoE approaches mentioned in the related work, beyond the reported Token Choice and Expert Choice, or more novel approaches such as [2] and similar methods.
[1] Puigcerver, Joan, et al. "From sparse to soft mixtures of experts." ICLR 2024
[2] Anthony, Quentin, et al. "Blackmamba: Mixture of experts for state-space models." ICLR 2024
Technical Quality: 3
Clarity: 2
Questions for Authors: * As stated in the weaknesses section, is it possible to demonstrate the effectiveness of the approach beyond the autoregressive language modeling task? I believe it could significantly strengthen the paper.
* In addition to the previous question, could you elaborate on what the major differences are between the proposed approach and the [1] method? Is it the fact that in MoT mixing happens within different batch inputs and in [1] we mix tokens within the same image? If so, then the novelty of MoT (compared to [1]) comes from the grouping and the application to the autoregressive task, which could undermine the contribution of the paper.
[1] Puigcerver, Joan, et al. "From sparse to soft mixtures of experts." ICLR 2024
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: No significant limitations of the paper, though the authors' discussion on limitations is much appreciated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We want to thank the reviewer for their comments and questions. We also appreciate the mention of the simplicity of integrating our method into existing approaches. If the reviewer's concerns have been addressed, we would like to kindly ask for the reconsideration of the rating.
**Regarding batched inference and applicability.** We agree that the necessity of using batched inference, while fine for many industry applications, is a limitation for others. We would like to highlight, however, that transition tuning (introduced in Section 4.4 of our work) can lift this limitation by enabling many benefits of MoT during training (e.g. increased stability) while converting the model to sparse MoE for unbatched inference.
**Regarding other domains and tasks other than autoregressive language modeling.** We agree that testing the method on other domains is an interesting avenue of research. With limited time and budget, we focused on the domain with, arguably, the most pressing problem of efficiency because of recent scaling efforts across the whole industry. Apart from text the important domain to test MoT on would be vision - but those experiments would heavily overlap with a concurrent paper of SoftMoE. We will properly mention that in the Limitations section of our work.
**Regarding the differences with SoftMoE.** As the reviewer noted, the main difference between MoT and SoftMoE is that MoT mixing happens between different sequences and not within a sequence/image, enabling its use on autoregressive tasks. We would like to refer to our reply in the general comment for further information about concurrency and the novelty of our work compared to SoftMoE.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed rebuttal and for addressing the questions raised. I appreciate the clarifications provided, particularly regarding batched inference and the differences between MoT and SoftMoE. Given the solid contributions and your thorough responses, I am maintaining my original rating.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. We are glad that we were able to clarify the points raised. | Summary: This paper proposes a new routing algorithms for MoEs: the mixture of tokens. The context is the following: routing in MoEs is tricky because one token gets usually assigned to one or a few experts and so the gradient feedback to update the router is not great. Therefore, several recent papers proposed to either average the experts parameters [1,2] or mixing the tokens [3] to overcome this issue. This paper focuses on this research direction. Prior work [3] did it in the case of computer vision where causality is not an issue. This paper tries to adapt mixture of tokens to the case of autoregressive models. They first explain how they do the mixture of tokens in section 3.1 and then detail how they manage to ensure causality in section 3.2. Lastly in section 4, they show the benefits of their method over other routing algorithms: in particular, they show that Mixture of Tokens (MoT) minimizes the eval loss much faster than dense Transformers and the performance is slightly better than standard MoEs with token choice and expert choice routings.
[1] Zhong, Z., Xia, M., Chen, D., & Lewis, M. (2024). Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training. arXiv preprint arXiv:2405.03133.
[2] Muqeeth, M., Liu, H., & Raffel, C. (2023). Soft merging of experts with adaptive routing. arXiv preprint arXiv:2306.03745.
[3] Puigcerver, J., Riquelme, C., Mustafa, B., & Houlsby, N. (2023). From sparse to soft mixtures of experts. arXiv preprint arXiv:2308.00951.
Strengths: I think that mixture of tokens is an idea that has been around for a while [1] and i was curious to know how it behaves in the context of autoregressive models. So in this aspect, I think the paper is interesting. Also, I found the method well-presented.
[1] Puigcerver, J., Riquelme, C., Mustafa, B., & Houlsby, N. (2023). From sparse to soft mixtures of experts. arXiv preprint arXiv:2308.00951.
Weaknesses: In general, I am a bit doubtful about the methods that attempt to merge either tokens or parameters in order to solve the ill-posed problem of routing in MoEs. For me, MoEs are primarily introduced for efficiency in that we can increase the size of the model while keeping the same number of FLOPs. In my opinion, the approaches that merge tokens/parameters lose the efficiency aspect. I think the authors tried to alleviate this issue by using smaller experts (as done in [1]) but I don't think this totally solves the problem. [1] also shows that when the granularity is too high, MoEs performance drop. In any case, I am happy that some researchers tried mixing tokens in the context of autoregressive models but I do not fully believe in it. Here are some more precise questions I would like to ask to the authors:
- **Computational cost**:Can the authors analyze the additional number of FLOPs that their method incurs compared to standard token choice? Or to other merging methods like the parameter merging in [2]?
- **Any advantage over expert choice/token choice?**: do you think there is any benefit from using mixture of tokens over other existing routing algorithms? It is not obvious from Figure 6. I understand the primary motivation and it makes sense to me. But in practice, it does not seem that the merging methods yield any significant benefits over token or expert choice.
- **Preserving the causal structure**: This is always a challenge for routing algorithms like expert choice or Lory that do not naturally preserve causality. Have you tried other schemes to preserve the causal structure? Would the one that is used when running expert choice on autoregressive models fail? Can you clarify why you believe there is no information leakage, sorry I may have missed this point?
[1] Krajewski, J., Ludziejewski, J., Adamczewski, K., Pióro, M., Krutul, M., Antoniak, S., ... & Jaszczur, S. (2024). Scaling laws for fine-grained mixture of experts. arXiv preprint arXiv:2402.07871.
[2] Zhong, Z., Xia, M., Chen, D., & Lewis, M. (2024). Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training. arXiv preprint arXiv:2405.03133.
Technical Quality: 3
Clarity: 2
Questions for Authors: I have written my questions above.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have clearly mentioned the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their feedback and questions. We also appreciate the recognition of the value of the research problem and the presentation of our work. We hope that the answers below adequately answered the reviewer's questions. If that is the case, we kindly ask for a reconsideration of the paper score.
**Regarding SoftMoE.**
We would like to refer to our reply in the general comment.
**Regarding computational cost.** The time complexity of our approach doesn't change compared to standard token choice or expert choice methods. The cost of computing routing logits itself takes $O(d_{model}*N_{experts}*N_{tokens})$ FLOPs, and this time complexity is the same for all MoE variants. The mixing and unmixing operations in MoT have the same time complexity. Regarding granularity, our models use values around compute optimal for this model size as calculated in [3]; therefore, according to their experiments, performance should not drop, as in the case of extreme granularity. We will clarify this in the camera-ready version and add a citation to their work.
**Regarding the benefits of MoT over expert choice/token choice.** We believe that MoT and techniques developed in the future based on MoT will provide better training stability (shown in our work), and therefore enable improved training performance, especially at scale. Moreover, within a continuous setting, each expert is trained on every token, which might be beneficial once we scale to higher expansion rates.
**Regarding the causal structure.** The original expert choice routing did not try to preserve the causal structure by default, with the Limitations section in [4] stating, "The expert choice method might not immediately apply to auto-regressive text generation as our current implementation takes in the past and future tokens to perform the top-k selection." The approach used in our paper is, we think, the only natural adaptation of expert choice to autoregressive models preventing information leaks. This method is used both in our expert-choice and MoT models. The approach of preventing the causal information leak used in [2] is not applicable to MoT. In MoT, we can never mix tokens from the same sequence, and routing based on the previous tokens/segment (rather than the current tokens/segment) doesn't mitigate the issue.
**Regarding no information leakage in MoT.** In MoT, there is no information leakage, as we never mix tokens from the same sequence together. Each mixture is based on a specific position in a sequence. So, a token at position $i$ in a sequence $s$ can, in a MoT layer, see tokens from any sequence at position $i$. Combined with the causal attention layer, token at position $i$ can see any tokens at positions $0:i$ in a sequence $s$ (as in standard causal Transformer), and also any tokens at positions $0:i$ in any sequence in a group/batch (through a combination of MoT layer being able to look at a different sequence, and causal attention being able to look at any previous position). However, looking at different sequences in a batch doesn't constitute an information leak since those other sequences are IID examples randomly drawn from the dataset.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the reviewers for their reply. I appreciate their clarification regarding the computational cost and I believe that this should be added to the final version of the paper. I increase my score by 1 point.
---
Reply to Comment 1.1.1:
Comment: We appreciate the reviewer's suggestion and are grateful for the improved score. We will include a clarification on the computational cost in the camera-ready version of the paper. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for assessing our paper. We appreciate all the positive comments and will consider all the feedback to improve our work.
We address each review's comments and questions in individual responses. In this general reply, we want to address just the shared concerns about comparison, concurrency, and novelty between MoT and SoftMoE[1]. Finally, we also provide references, which are shared among all of our replies.
**Regarding SoftMoE[1].** We would like to highlight that, as we stated in Section 3.3, our method was developed independently and concurrently with SoftMoE [1]. Many of our experiments were conducted and MoT code was publicly accessible before SoftMoE was published (due to anonymization we cannot provide links here; we can verify with AC if requested).
While we think that the concurrency of our work and SoftMoE is enough to defend our contribution - we also would like to note the importance of developing this kind of method for language modeling. This is clear to see in the ICLR 2024 review process of SoftMoE, where the most common criticism is its limitation to only non-autoregressive tasks. This weakness is mentioned in 3 out of 4 reviews of their work, and it is the only "Justification For Why Not Higher Score" identified in a meta-review. This shows the importance of our work even without consideration of the concurrence of [1] and MoT.
# References (common for all replies):
[1] Puigcerver, Joan, et al. "From sparse to soft mixtures of experts." arXiv preprint arXiv:2308.00951 (2023).
[2] Zhong, Zexuan, et al. "Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training" arXiv preprint arXiv:2405.03133 (2024)
[3] Krajewski, Jakub, et al. (2024). "Scaling laws for fine-grained mixture of experts." arXiv preprint arXiv:2402.07871 (2024)
[4] Zhou, Yanqi, et al. "Mixture-of-Experts with Expert Choice Routing" arXiv preprint arXiv:2202.09368 (2022)
[5] Jacobs, Robert A, et al. "Adaptive mixtures of local experts." Neural computation, 3(1):79–87, 1991b
[6] Jacobs, Robert A, et al. "Task decomposition through competition in a modular connectionist architecture: The what and where vision tasks." Cognitive science, 15(2):219–250, 1991a
[7] Fedus, William, et al. "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity." arXiv preprint arXiv:2101.03961 (2022).
[8] Lepikhin, Dmitry, et al. "GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding" arXiv preprint arXiv:2006.16668 (2020). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models | Accept (poster) | Summary: This paper proposes BAdam, a novel optimization method for memory-efficient full parameter finetuning of large language models. BAdam leverages the block coordinate descent framework with Adam as the inner solver. It partitions the model parameters into blocks and updates one block at a time using Adam steps.
Strengths: 1. The paper addresses the highly relevant and important problem of enabling full parameter finetuning of large language models under memory constraints. BAdam provides an original and creative solution by combining block coordinate descent with Adam.
2. The theoretical convergence analysis lends credibility to the proposed method, even if limited to the deterministic case. The proof seems sound and complete.
3. The experiments are thorough and convincing, clearly demonstrating BAdam's advantages in memory efficiency, running time, convergence behavior, and downstream performance compared to strong baselines like LoRA and LOMO across multiple models and datasets.
Weaknesses: 1. While sufficient for an initial proposal, the theoretical analysis is limited to the deterministic case. Extending the convergence results to the stochastic setting would further strengthen the paper.
2. The paper focuses on applying BAdam to the finetuning stage of large language models. It would be interesting to explore and discuss whether the proposed method could also be applied during the pretraining phase. While I understand that conducting experiments on pretraining may be prohibitive due to time constraints, providing some conceptual discussion on the feasibility, potential benefits, and challenges of using BAdam for pretraining could broaden the scope and impact of the work. For example, the authors could comment on whether the block-wise update scheme of BAdam would remain effective and efficient when dealing with the larger datasets and longer training horizons typically involved in pretraining. Addressing this aspect, even briefly, would give readers a more comprehensive view of BAdam's potential across different stages of the model development pipeline.
Technical Quality: 3
Clarity: 3
Questions for Authors: See my weakness part.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed key limitations such as the theoretical analysis being restricted to deterministic gradients and comparisons with Adam being limited to medium-sized models due to resource constraints. Suggestions are provided to address these limitations in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive and helpful comments. We address the reviewer's concerns in a point-by-point manner below. *All additional experiment results (figures and tables) are put in the one page supplementary PDF of the global rebuttal.*
**A. Convergence result under stochastic setting.** Our approach of obtaining complexity under the deterministic setting consists of two main steps: (1) establishing the descent property for the inner solver within one block and (2) aggregating the descent inequalities across all blocks for one block-epoch. Step (2) is a standard technique in the complexity analysis of BCD-type methods under reasonable conditions. To extend to stochastic setting, the main difficulty lies in extending step (1). The complete proof is beyond the scope of this rebuttal stage; we provide a proof sketch here. We adopt the same assumptions on the smoothness of the objective function and stochastic gradient errors as in [1].
*Notation modification.* We distinguish the true gradient $g_{i}^{t,k}$ and the estimated stochastic gradient $\tilde{g}\_i^{t,k}$. The difference term $e_{i}^{t}$ between the true gradient $g_{i}^{t,k}$ and the scaled momentum $\hat{m}_{i}^{t,k}$ now includes stochastic gradient errors:
$$
e_{i}^{t, k} = \hat{m}_{i}^{t, k} - g\_{i}^{t, k},
$$
where the momentum is updated with stochastic gradients $m\_{i}^{t,k} = \beta\_{1} m\_{i} ^{t,k-1} + (1- \beta\_{1} ) \tilde{g}\_{i} ^{t,k} $.
Assuming stochastic gradients with a uniform almost sure bound $\sigma$, similar to [1], $||e_{i} ^{t,k}||$ can be bounded by the norms of the true gradients with probability at least $1- \delta$:
$$\sum_{k=1}^{K} ||e_{i} ^{t,k} ||^2 \le \Theta (\lambda) \sum_{k=1}^{K} ||g_i^{t,k} || ^2 + \mathcal{O}\left(\sigma ^2 \log (\delta ^{-1} ) ( 1/ \beta_1 + \beta_1 K) \right).$$
*Approximate descent inequality (Inner solver).* Substituting the above probabilistic bound of error terms into the following approximate descent inequality for each step of the inner solver:
$$ \mathcal{L}( \theta _{i}^ {t,k}) - \mathcal{L}(\theta _{i} ^{t,k-1} ) \le - \Theta (\alpha ) ||\nabla _{i} \mathcal{L} (\theta _{i} ^{t,k-1} ) || ^2 + \Theta (\alpha) || e\_{i} ^{t,k-1} || ^2.$$
Then combining a telescoping argument with our technique of bounding gradient differences in the inner loop $||g_{i} ^{t,k} - g_{i} ^{t,1} ||, \forall k \le K $ by $||g_{i} ^{t,1} || $:
$$ \sum_{k=1}^{K} || g _{i} ^{t,k} - g _{i} ^{t,1} || \le \mathcal{O} \left(\alpha K^2 \right) || g _{i} ^{t,1} || + \mathcal{O} \left(\alpha K^{3 /2} \sigma \sqrt{(1 / \beta _{1} + \beta _{1} K) \log (\delta ^{-1} ) } \right). $$
We obtain the following approximate descent inequality for each block under the stochastic setting:
$$\mathcal{L} (\theta\_{i} ^{t} ) - \mathcal{L}(\theta\_{i-1}^{t} ) \le - \Theta ( \alpha K) || \nabla_i \mathcal{L}(\theta_i^{t}) ||^2 +\mathcal{O}\left(\alpha ^3 K ^3 \sigma ^2 \log (\delta^{-1} ) ( 1/ \beta_1 + \beta_1 K) \right).$$
*Approximate descent inequality (BCD epoch).* By aggregating the above approximate descent inequality across different blocks, similar to our arguments in the submitted manuscript, we derive the approximate descent inequality for one block-epoch:
$$\mathcal{L}(\theta ^{t+1} ) - \mathcal{L}(\theta ^{t} ) \le - \Theta (\alpha K) ||\nabla \mathcal{L}(\theta ^{t} )|| ^2 + \mathcal{O}\left(\alpha ^3 K ^3 D\sigma ^2 \log (\delta ^{-1} ) ( 1/ \beta_1 + \beta_1 K) \right).$$
Through standard manipulations, we should obtain a complexity result for finding an $\varepsilon$-stationary point.
[1] Li, H., Rakhlin, A., \& Jadbabaie, A. (2023). Convergence of Adam Under Relaxed Assumptions. NeurIPS.
**B. Initial continual pretraining experiment using BAdam.** Pretraining from scratch is out of scope of this work; we have conducted an initial continual pretraining (CPT) experiment for Llama 3-8B-Instruct on the StarCoder dataset using BAdam. We evaluate its performance using the online CPT loss; see Figure (f). We are only able to finish about 0.1 epoch (1.5B tokens) training due to time constraints. The online CPT loss shows that the model is effectively learning new domain specific (code-related) knowledge. In the following, we list several possible reasons why BAdam may become a candidate optimizer for LLM continual pretraining.
*BAdam exhibits as high rank update as that of Adam.* In the pretraining stage, model acquires massive knowledge over different domains by training on billions of tokens. Intuitively, new factual information (not appeared in the pretraining corpus) might not be encoded by low rank update. As shown in Figure (e), BAdam achieves almost the same high rank update as Adam through all modules of different transformer layers, partially ensuring BAdam’s learnability.
*Possible ability of avoiding general knowledge forgetting.* Perhaps one of the most challenging tasks in CPT is to avoid forgetting of general knowledge. We suspect that BAdam might be good at avoding forgetting compared to Adam. This conjecture stems form the fact that the BAdam uses each data batch to update only one block of parameters, thereby better preserving the general knowledge of the model during CPT. This is partly confirmed by that Llama 3-8B-Instruct has a 67.7 MMLU score (0-shot), while the MMLU score of our CPTed checkpoint after training on 1.5B tokens only decreases to 66.7 (without using any model merging techniques).
We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. We will properly add the above continual pretraining experiments (with additional comparisons with LoRA and Adam) and try to add convergence result under stochastic setting (if our sketch does not contain vital error) to our next manuscript version. If the reviewer has any additional concerns, please let us know during the reviewer-author discussion period.
---
Rebuttal 2:
Comment: Thanks for the author's detailed response. It addressed most of my concerns. However, I share the same issue as Reviewer XVND regarding the GSM8K and MATH scores. I hope this can be addressed properly.
---
Rebuttal Comment 2.1:
Title: Authors' Response
Comment: Thank you for thoroughly reviewing our response and tracking the feedback of other reviewers. We are pleased to know that most of your concerns have been addressed. We have provided further clarification regarding the score gap of GSM8K and MATH, please see our response to reviewer XVND.
We would like to thank the reviewer once again for supporting our work. Should you have any additional questions, please let us know.
---
Reply to Comment 2.1.1:
Title: Comment on Scores of GSM8K and MATH
Comment: Dear reviewer JX1Q,
Thank you for your thoughtful participation in discussion. We would like to provide additional clarifications regarding the few-shot benchmark scores for GSM8K and MATH, as previously elaborated in our response to Reviewer XVND.
We recognized that these benchmarks are highly sensitive to the chain-of-thought (CoT) prompting technique, where minor variations in the prompts can lead to heavy fluctuations in the score. Consequently, the impact of the optimization algorithms diminishes when CoT samples are employed, as we have empirically verified. Based on this observation, we believe that evaluating the efficacy of BAdam solely based on these specific few-shot scores—without accounting for the CoT's impact—may not provide a convincing assessment of its performance. Additionally, we remark that it would be inappropriate to disregard BAdam's performance on the zero-shot setting, as it provides a measure that excludes the CoT's effects.
We hope that our clarification addresses your concern regarding the benchmark scores. Should you have any additional question, please let us know.
Best,
Authors
---
Rebuttal 3:
Title: Thank you
Comment: Thank you for following up and the detailed response. I appreciate the effort authors put into addressing this issue. Your explanation has increased my confidence in my comment, and I have raised my confidence level from 3 to 4 and score from 6 to 8. My expertise lies more in pretraining language models, and our team also struggles with dealing with the memory issues brought about by the Adam optimizer. Being able to optimize Adam is a good design. From a practical perspective, I believe this paper can bring relatively significant benefits to the field. I hope the authors can open-source their code in a high-quality manner to facilitate language model community use.
---
Rebuttal 4:
Comment: The authors would like to express their deepest gratitude to the reviewer for acknowledging our work and responses. We also greatly appreciate the reviewer for increasing the confidence level from 3 to 4 and the score from 6 to 8.
For pretraining, we will complete the continual pretraining experiment to show the efficiency of BAdam in this pretraining-related setting. Regarding open-sourcing, we promise that our implementation code that can reproduce our paper's results will be made publicly available and include the following features:
**1. Easy to use.** The integration of the BAdam and BCD framework into user's own codebase will be straightforward, necessitating minimal changes to the existing code.
**2. Distributed training support.** Our code will support both data-parallel and model-parallel training, based on DeepSpeed ZeRO-3, allowing for the efficient finetuning / training of truly large-scale models (e.g., 70B). We will also make the implementation of distributed training straightforward.
**3. Memory efficiency.** We will ensure that actual memory usage is consistent with the values reported in our paper. For instance, one will be able to train a Llama 3-8B model using a single RTX3090 and a Llama 3-70B model with just $3\times$ A100-80GB GPUs (based on our distributed training implementation).
Once again, we deeply thank the reviewer for your kind words and support of our work.
Title: Thank You | Summary: The paper introduces BAdam, a memory-efficient optimization method for fine-tuning large language models (LLMs) by leveraging block coordinate descent (BCD) with Adam as the inner solver. BAdam aims to reduce memory consumption while maintaining or improving performance. The paper presents theoretical convergence analysis and experimental results showing BAdam's effectiveness compared to existing methods like LoRA and LOMO, particularly in terms of memory efficiency and downstream performance.
Strengths: - **Extensive Theoretical Proof of Convergence**: The paper provides substantial theoretical evidence to support the convergence of the proposed method.
- **Detailed Method Analysis**: The analysis of the method is thorough, covering aspects like memory consumption and computation time.
Weaknesses: - **Need for More Quantitative Results**: The evaluation of 7B and 8B models requires more quantitative results, such as mathematical and world knowledge benchmarks (e.g., GSM8K and MMLU). Relying solely on MT-bench, which is scored by GPT-4, is not sufficiently objective.
- **Block-wise vs. Layer-wise Updates**: The paper's core discussion revolves around block-wise updates, but the actual implementation uses layer-wise updates. Other formats of block-wise updates should be explored.
- **Similarity to Existing Work**: The motivation of this paper is similar to "LIFT: Efficient Layer-wise Fine-tuning for Large Model Models"[1] which also discusses "learning one layer/block at a time" in Section 3.2. This similarity needs to be addressed and differentiated.
- **Verbose Section 3.1.2**: The discussion in Section 3.1.2 is overly verbose. Experiments with learning rates and other hyperparameters could be moved to the ablation studies or the appendix, while the main text should focus on the core experimental results.
[1] https://openreview.net/forum?id=u0INlprg3U
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive and helpful comments. We address the reviewer's concerns in a point-by-point manner below. All additional experiment results (figures and tables) are put in the one page supplementary PDF of the global rebuttal.
**A. More quantitative results on MMLU and math benchmarks.** Following the reviewer's suggestion, we have tested the MMLU scores of the instruction-tuned models; see Table 3. We can observe that BAdam performs as good as Adam and outperforms LoRA.
In Table 2, we have also conducted math-instruction tuning for Llama 3.1-8B on the MathInstruct dataset using BAdam and LoRA, and tested several math benchmarks including GSM8K, MATH, NumGLUE, SVAMP, MMLU-Math, and SAT-Math. The results demonstrate that BAdam clearly outperforms LoRA. We have not conducted the same experiments for Adam due to time constraints, but we will add such experiments in the next version.
We believe that these additional quantitative results, together with the MT-bench results, illustrate the capability of BAdam for LLM finetuning.
**B. More block partition schemes.** As requested by the reviewer, we have tested the ratio-based block partition scheme, where one block is formed by selecting a small part of parameters from each matrix module. We can observe that ratio partition performs nearly as well as the layer partition in terms of optimization ability as shown in Figure (c). In addition to this ratio partition, we can also treat each attention / mlp module as one block, and our test shows that this scheme also has similar optimization ability to the layer partition. We will add ablation study on these additional block partition schemes in the next version.
**C. Fundamental differences from LIFT.** Let us first mention that LIFT is properly cited in our Section 4. Though LIFT utilizes a layer-wise update of Adam, it differs from BAdam fundamentally in the following aspects.
*Different principles in algorithm foundation.* LIFT has two types of iteration policies as stated in their Appendix D, including cyclic and grouped iteration policies. The cyclic policy updates a layer for one iteration and then offloads Adam's optimizer states to the CPU (see the last paragraph of their Section 3.2). This is not a valid BCD with Adam scheme because the gradient of the same layer in the next round of update does not match the offloaded Adam's optimizer states (since other layers' parameters have been updated). On the other hand, their grouped policy updates one layer for the total number of scheduled iterations and does not revisit it, and hence there is no BCD loop at all. This makes their grouped policy a special type of block-restricted implementation of Adam rather than a general BCD scheme.
By contrast, BAdam is built on the BCD optimization framework, resulting in a fundamentally different algorithm. BAdam includes an outer BCD loop and properly accumulates the first and second momentum for updating a block ($K$ in our algorithm description). Importantly, when BAdam revisits a block, *its first and second momentum start at 0* (no offloads), which is crucial for ensuring both theoretical and empirical convergence.
*Theoretical convergence guarantee.* Thanks to BAdam's foundation in the BCD framework, we can provide a convergence guarantee, adding reliability and predictability to its performance.
*Practical code implementation.* We have made significant efforts to develop a compatible and robust code implementation for BAdam, with special attention to memory management. For example, our submitted code allows efficient finetuning of Llama 3-8B using a single RTX3090-24GB GPU, whereas LIFT requires several A100 GPUs (according to their experiment description since they have not open-sourced their implementation). Indeed, implementing a direct block-restricted update of Adam could be trivial with sufficient resources, as one can simply set a certain layer to require gradients in the original Adam optimizer without needing complicated memory management and optimizer coding. Our practical implementation ensures broader applicability and benefits for practitioners.
**D. Verbose experiment section.** We fully agree with the reviewer's concern. We have conducted additional experiments, including more quantitative results (MMLU and math benchmarks), ablation experiments comparing Adam, SGD, BAdam, and BSGD (BCD with SGD), comparison with Galore, and continual pretraining; see the one page supplementary PDF of the global rebuttal. We will incorporate these experiments into our next manuscript version. Specifically, our plan is to move a large part of Section 3.1.1, the convergence verification in Section 3.1.2, and the entire Section 3.2 to the Appendix. Instead, we will add more downstream evaluations (such as MMLU and math tests), create an ablation study section, and a continual pretraining section.
We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. If the reviewer has any additional concerns, please let us know during the reviewer-author discussion period.
---
Rebuttal Comment 1.1:
Comment: Thank you for the authors' response.
We have reviewed your feedback and noted that most of our concerns have been addressed. However, we observed an issue with your Table 2 and Table 3 in the supplementary PDF. According to the official reports for LLaMA 3 and LLaMA 3.1 [1,2], the baseline performance that you report is lower than the official figures, especially for GSM8K and MATH in Table 2. The official tech report [1] indicates that LLaMA 3.1 8B achieves **84.5** on GSM8K (8-shot, CoT), whereas your report shows **17.8**. Similarly, the official MATH (0-shot, CoT) score is **51.9**, while your report indicates **8.6**. For Table 3, the tech report [2] shows that LLaMA 3 8B scores **66.7** on MMLU (5-shot). Based on this information, I believe your supplementary PDF lacks credibility and may negatively impact your paper.
> Reference:
> [1] https://ai.meta.com/blog/meta-llama-3-1/
> [2] https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct
---
Reply to Comment 1.1.1:
Title: Authors' Reponse
Comment: Thank you for reading our rebuttal and confirming that most of your concerns have been addressed. We now provide clarifications in response to your further questions regarding the gap in evaluation scores.
**A. Lower baseline GSM8K and MATH (in Table 2) than Meta's official values.** In the sequel, [1] and [2] refer to the two references the reviewer provided. The values reported in [1] are based on the finetuned model, i.e., Llama 3.1-8B-Instruct. This can be confirmed by comparing several benchmark scores in [1] with those of the instruct version in [2]. Namely, Meta just uses Llama 3.1-8B to represent Llama 3.1-8B-Instruct in [1]. In our provided Table 2 in the supplementary PDF, the "Base model" refers to the *pretrained base model*, i.e., Llama 3.1-8B. In addition, all of the reported scores in our Table 2 are obtained using *0-shot* prompting, which can also be different from that of 8-shot. Hence, we have different baseline performance from Meta's official values. In our Table 2, we use the evaluation code open-sourced by the MathInstruct paper [3] and use the same evaluation setup for all the models (base and finetuned models), ensuring a fair comparison between different optimization methods.
**B. Lower baseline MMLU (in Table 3) than Meta's official value.** The few-shot MMLU score heavily depends on the *prompt engineering* and *evaluation approach*, as revealed by the Hugging Face report [4]. In this report, it explains in detail why open LLM leaderboard has a much lower MMLU score of the Llama model compared to the official one released by Meta. Since Meta does not open-source their prompts and evaluation methodology that yields their reported MMLU score, the open-source evaluation code may produce a lower MMLU score than Meta's official value. In our provided Table 3 in the supplementary PDF, we use the open-sourced MMLU evaluation code from Llama-Factory [5] and use the same evaluation setup for all the models (base and finetuned models).
We would like to thank the reviewer once again for reading our rebuttal and raising further questions. We hope that the above response clarifies our results. If the reviewer has any additional questions, please let us know.
**References:**
[3] Yue et al. "Mammoth: Building math generalist models through hybrid instruction tuning", ICLR 2024.
[4] Fourrier et al. "What's going on with the Open LLM Leaderboard?", Hugging Face Blog.
[5] Zheng et al. "Llamafactory: Unified efficient fine-tuning of 100+ language models", ACL 2024. | Summary: The paper introduces memory-efficient optimizer BAdam, which combines the concepts of block coordinate descent (BCD) and Adam's update rule. BAdam demonstrates that, with moderate memory consumption—more than LOMO—it can surpass LoRA and significantly outperform LOMO in fine-tuning Llama 2-7B and Llama 3-8B. Additionally, BAdam shows similar fine-tuning performance to Adam when applied to the medium-sized masked language model RoBERTa-large.
Strengths: The paper's contributions and strengths are as follows:
1. It proposes using the well-known optimization technique BCD for the task of fine-tuning large language models while being memory efficient.
2. Empirical evidence highlights BAdam's potential in fine-tuning language models. It can outperform LoRA and significantly exceed LOMO in instruction-tuning Llama 2-7B and Llama 3-8B. Moreover, BAdam demonstrates comparable fine-tuning performance to Adam when used with the medium-sized masked language model RoBERTa-large.
3. It provides theoretical convergence analysis for the deterministic case.
Weaknesses: The paper's weaknesses are summarised as follows:
1. In extremely memory-limited settings, such as when there is only enough memory for inference, where MeZO or LOMO can apply, BAdam cannot apply due to the additional requirement of storing block-wise optimizer states.
2. The paper does not provide theoretical or practical insights into why BCD is effective for language model fine-tuning.
3. Compared to LoRA, BAdam requires storing full parameter checkpoints instead of a small number of adapters. As the scale of language models continues to grow, the issue of storing full parameters for fine-tuning becomes much more intolerable in practice.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Table 1, why LOMO update precision is only limited to Float16. Can't it be Float32?
2. In Table 5, using the same learning rate for SGD and AdamW is not convincing. Typically, SGD requires a larger learning rate than AdamW to achieve good performance. Have the authors tried conducting a grid search to determine the optimal learning rate for LOMO?
3. When presenting the results in Table 5, what parameters were fixed? Batch size, memory, etc.? I noticed that the batch sizes are not consistent across methods; for example, LOMO uses a batch size of 8, while LoRA and BAdam use a batch size of 16. Comparing different optimization methods with varying batch sizes is not ideal.
4. I believe more ablation studies could be conducted. For example, why is Adam used for the intermediate steps instead of SGD? How much performance loss would occur if SGD were used instead?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The author has addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive and helpful comments. We address the reviewer's concerns in a point-by-point manner below. *All additional experiment results (figures and tables) are put in the one page supplementary PDF of the global rebuttal.*
**A. Ablation study & effectiveness of BCD for LLM finetuning.** We have conducted ablation experiments on Adam, SGD (LOMO), BAdam, and BSGD (BCD with SGD) for finetuning Llama 3-8B. All optimization methods use grid-searched learning rates based on training and validation losses.
*Convergence ability.* In Figure (b), it can be observed that BCD variants converge slightly slower but soon exhibit similar convergence behavior in terms of running time compared to their full counterparts. It is worth mentioning that, unlike the full counterparts, BCD variants only update a block of parameters per data batch, demonstrating the strong optimization ability of BCD for LLM finetuning.
*Downstream performance.* In Table 1, we also test the MT-bench scores of the finetuned models. It is quite interesting to see that BSGD significantly outperforms SGD (almost as good as BAdam), even though they have almost the same optimization convergence behavior. The superiority of BCD variants over their full counterparts possibly arises from the fact that BCD uses each data batch to update only one block of parameters, thereby better preserving the general knowledge of the pretrained model during finetuning. These strong downstream performances of BCD further illustrate its suitability for LLM finetuning.
*Learnability interpreted by high rankness.* Unlike LoRA and Galore, BCD's memory efficiency is achieved without restricting its updates to a low rank space. In Figure (d), we display the cumulative explained variance of BAdam's update for the up-proj matrix, which is defined as sum of the $k$ largest squared singular value over the sum of all squared singular value. This result shows that BAdam's update has a heavy tailed singular values distribution and is far away from a low rank update. In Figure (e), we show that BAdam achieves almost the same high rank update as Adam through all modules of different transformer layers, partially ensuring BCD's learnability.
We believe that these experiment results demonstrate the effectiveness of BCD for LLM finetuning.
**B. Handle the extremely memory-limited setting with BAdam.** In the extremely memory-limited setting, we have the following two choices to apply BAdam.
*Smaller block partition.* Our BCD optimization framework (as well as our code implementation) provides a flexible block partition. Hence, we can choose a smaller block partition, further reducing the memory consumption as we only need to store the gradient and optimizer states for a smaller block of parameters. We have tested such a smaller block setting of BAdam (each layer is further partitioned to attention and mlp blocks), and it achieves a MT-bench score of 6.50, which is just slightly worse than the score of 6.67 reported in the manuscript.
*Cheap CPU update.* We can also store the optimizer states in CPU memory, and performs the cheap block update purely on CPU. In this case, only the block gradient needs to be communicated between CPU and GPU during update. We have tested this approach under the Llama 3-8B experiment setting and found that the operation only induces a 0.8-second overhead per update, resulting in roughly 16 minutes of additional time cost over 3 epochs of training. We will upload this part of new code implementation when uploading is allowed.
We remark that MeZO and LOMO also maintain the gradient of a certain block in memory when updating this block. Thus, our BCD scheme can be applied using one of the above schemes whenever MeZO and LOMO are applicable.
**C. BAdam needs to store the full checkpoint.** Storing the full parameter checkpoint appears to be inevitable when using full parameter optimization methods such as Adam and SGD. While LoRA has the advantage of storing different type of checkpoints, full parameter finetuning might achieve higher performance, as we have shown.
**D. LOMO update precision issue.** LOMO's update precision depends on the precision of the model weights. Consequently, updating the model using Float32 will double the memory consumption to $4M$. We follow the reported update precision (Float16) used in the LOMO paper.
**E. LOMO's performance under grid searched step size.** We have conducted a grid search for SGD (LOMO)'s learning rate among (5e-2, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6) based on loss, and identified 1e-2 as the optimal one for MT-bench evaluation. In Table 1, it can be observed that SGD (LOMO) with learning rate 1e-2 achieves an MT-bench score of 5.82, which aligns with the score of 5.83 obtained by using learning rate 1e-6 in our manuscript.
**F. Batch size is not consistent across different methods.** Since LOMO performs update on the fly and does not store the full gradient, it does not support gradient accumulation. We choose the largest batch size under the memory constraint of RTX3090-24GB for LOMO. We also ensure each algorithm uses the same amount of data. Hence, we count twice updates of LOMO as one update in our manuscript, ensuring a fair comparison to LoRA and BAdam.
We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. We will properly add the above experiments to our next manuscript version. If the reviewer has any additional concerns, please let us know during the reviewer-author discussion period.
---
Rebuttal Comment 1.1:
Comment: The reviewer thanks the authors for the discussion. I decide to keep my score.
---
Reply to Comment 1.1.1:
Title: Authors' Response
Comment: Thank you for your careful review of our paper and your positive assessment. If you have any further questions, please let us know. | Summary: The authors proposed fine-tuning of LLMs with a block coordinate descent based Adam optimizer.
They presented results on convergence analysis, memory and run time profiling, and the quality of resulting fine-tuned models.
Strengths: + The proposed idea is clearly stated.
+ The memory usage analysis is comprehensive.
+ The authors conducted experiments on recent LLMs.
Weaknesses: - The novelty of the proposed technique is BCD optimization for fine-tuning LLMs, using the Adam optimizer. In this sense, the comparison against LOMO is justified, but the empirical advantage is not very significant. The author did not perform ablation study showing the necessity of the two ingredients: BCD and Adam--another important condition to compare with LOMO and BAdam would be BCD with SGD.
- The comparison against PEFT such as LoRA is not scientifically justified, on the other hand. Because BCD and parameter-efficient reparameterization are not mutually exclusive, and could be complementary.
- In addition to optimizer (e.g. LOMO and this work) and PEFT, there is a third class of memory-efficient LLM fine-tuning techniques: gradient compression (e.g. Galore, arXiv:2403.03507). Like LoRA, this is orthogonal and potentially complementary to BCD optimization as well. There is no comparison here.
Technical Quality: 2
Clarity: 2
Questions for Authors: * See above weaknesses on empirical results.
* A crucial motivation of the proposed method, is a conjectured unique suitability of BCD in LLM fine-tuning: as the authors put it, "the finetuning process boils down to an optimization problem that needs to handle a huge number of trainable model parameters, while the number of training data points are relatively small. This setting matches exactly the advantage of the BCD scheme". Unfortunately, however, this remained a conjecture. There is a lack of adequate theoretic or empirical support to this interesting and important open question.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for the constructive and helpful comments. We address the reviewer's concerns in a point-by-point manner below. *All additional experiment results (figures and tables) are put in the one page supplementary PDF of the global rebuttal.*
**A. Ablation study on BCD, Adam, and SGD.** As requested by the reviewer, we have conducted experiments on Adam, SGD (LOMO), BAdam, and BSGD (BCD with SGD as inner solver) for finetuning Llama 3-8B. All optimization methods use grid-searched learning rates based on training and validation losses.
*Convergence ability.* In Figure (b), it can be observed that BCD variants converge slightly slower but soon exhibit similar convergence behavior in terms of running time compared to their full counterparts. It is worth mentioning that, unlike the full counterparts, BCD variants only update one block of parameters per data batch, demonstrating the strong optimization ability of BCD for LLM finetuning.
*Downstream performance.* In Table 1, we also test the MT-bench scores of the finetuned models. It is quite interesting to see that BSGD significantly outperforms SGD (almost as good as BAdam), even though they have almost the same optimization convergence behavior. The superiority of BCD variants over their full counterparts possibly stems from the fact that BCD uses each data batch to update only one block of parameters, thereby better preserving the general knowledge of the pretrained model during finetuning. These strong downstream performances of BCD further illustrate its suitability for LLM finetuning.
These ablation results partly confirm the suitability of BCD for LLM finetuning and reveal that choosing either Adam or SGD as the inner solver has similar performance in terms of our downstream performance evaluation (i.e., MT-bench score for model trained on Alpaca-GPT4 dataset). We will add BSGD to our BCD framework and leave the comprehensive study of different inner solvers as future work.
**B. The suitability of BCD in LLM finetuning.** Let us elaborate on the suitability of BCD for LLM finetuning from the following three perspectives.
*Performance.* Based on the results of the above ablation study, we believe that we have demonstrated the strong performance achieved by BCD compared to its full counterpart. This directly indicates that BCD is suitable for LLM finetuning in terms of both optimization ability and downstream performance.
*Low memory consumption.* BCD not only achieves strong performance but is also memory efficient; it only stores the gradient and optimizer states of the active block, which is substantially lower than Adam's $18M$ cost. In particular, BSGD requires little additional memory beyond just storing the $2M$ LLM, while still exhibiting strong downstream performance. Given that memory is usually the bottleneck for finetuning LLM, the memory efficiency of BCD further demonstrates its suitability for this task.
*Learnability interpreted by high rankness.* Unlike LoRA and Galore, BCD's memory efficiency is achieved without restricting its updates to a low rank space. In Figure (d), we display the cumulative explained variance of BAdam's update for the up-proj matrix, which is defined as sum of the $k$ largest squared singular values over the sum of all squared singular values. This result shows that BAdam's update has a heavy tailed singular values distribution and is far away from a low rank update. In Figure (e), we show that BAdam achieves almost the same high rank update as Adam through all modules of different transformer layers, partially ensuring BCD's learnability.
**C. BCD + LoRA.** We first note that the high rank update of BAdam shown in Figures (d) and (e) partly illustrates the scientific difference between BCD and LoRA. Following the reviewer's suggestion, we also combined BCD with LoRA (B-LoRA) for LLM finetuning. Due to the advantages of BCD, B-LoRA can have a higher rank configuration under the same amount of memory budget and save computation time, compared to LoRA. B-LoRA has a slightly worse MT-bench score than LoRA (6.12 vs. 6.28) as shown in Table 1, while achieves a better 5-shot MMLU score than LoRA as shown in Table 3.
**D. Comparison with Galore.** In Figures (a)-(b) and Table 1, we report the results of Galore with rank 1024 and 8-bit Adam optimizer for finetuning Llama 3-8B. We set other hyperparameters according to their paper's suggestions. It can be seen that BAdam and BSGD outperform Galore in terms of MT-bench downstream performance, demonstrating the ability and suitability of BCD for LLM finetuning. In terms of convergence, BAdam also exhibits better convergence ability than Galore.
We have not conducted experiments on combining BCD and Galore due to time constraints, but we will add such experiments in our next manuscript version, as suggested by the reviewer.
**E. Significance of advantage over LOMO.** We note that the MT-bench score difference between GPT-3.5-turbo and GPT-4 on the LMSYS leaderboard is 1.05, which is close to the gap of 0.96 presented in our manuscript between the Llama 3-8B models tuned by LOMO and BAdam (5.69 versus 6.65). Therefore, the advantage appears to be significant.
We hope that our response is satisfactory to the reviewer and that all concerns have been addressed appropriately. We will properly add the above experiments to our next manuscript version. If the reviewer has any additional concerns, please let us know during the reviewer-author discussion period.
---
Rebuttal 2:
Title: Looking forward to your feedback (if any)
Comment: Dear reviewer FWuu,
We hope that our response justifies the design of BAdam and addresses most of your concerns. Since the deadline of the discussion phase is approaching, we would highly appreciate to receive feedback from you. If you have any further questions or remarks, please let us know so that we will have enough time to address your concerns. We will be more than happy to provide additional clarifications and details.
Best,
Authors. | Rebuttal 1:
Rebuttal: Dear ACs and Reviewers,
This global response contains our one-page supplementary PDF of the rebuttal. All additional figures and tables are included in this file.
Best regards,
Authors.
Pdf: /pdf/ce03c044f3f088e07245f917a6c364e116e94680.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Maximizing utility in multi-agent environments by anticipating the behavior of other learners | Accept (poster) | Summary: - The authors consider the problem of optimally exploiting a learning agent in zero-sum games and general-sum games.
- For zero-sum games, they look at a specific continuous time learner, replicator dynamics, and at an analogous discrete time learner, multiplicative weights updates. They provide an explicit expression for optimal utility of the exploiter agent in the continuous case, as well as a construction for a polynomial-time algorithm that achieves that utility.
- In the discrete case, they show that the discretized optimal strategy from the continuous case achieves at least as much utility. Moreover, they show that this utility can be exceeded, and they bound this difference between the continuous and discrete cases from below and from above.
- In the context of general-sum games, the paper shows that determining the optimal utility of the exploiter against a Best Response Algorithm (in discrete time) is NP-hard.
- The paper concludes by discussing open problems, in particular whether there are some learners and classes of general-sum games where one can say more about the optimal utilities for polynomial-time optimizers.
Strengths: - This paper makes a significant contribution by analyzing the general setting in which an optimizer is trying to exploit a mean-based learner. In my view, both the impossibility result in the general case as well as the characterization of the zero-sum case are conceptually interesting.
- The mathematical analysis appears sound and rigorous. The paper discusses an interesting connection to optimal control. (Though I have not checked the proofs in the appendix and this is not my main area of expertise).
- I found the paper was well written and well motivated.
- I appreciated the discussion of societal impact. I agree with the authors that this work is relevant since it is important to study how exploitable commonly used learning algorithms are.
Weaknesses: - The paper doesn't include any empirical simulations. It would have been interesting to see some example numerical computations of the optimal strategy in a game and resulting simulated utilities. How much better does the optimal strategy do compared to simple baselines, in some canonical games?
- Focusing on MWU and related algorithms makes sense to make the analysis tractable, but it also limits the applicability of the results. Moreover, it would be especially interesting to have an impossibility result for a no regret algorithm.
- While the computational hardness result is interesting, it does not appear very surprising given typical hardness results in game theory.
- While I found that the authors generally explained their approach well, I thought it might have been interesting to give some more explanation, potentially via a concrete example (in the zero-sum case). If this takes up too much space, one could include an example in the appendix.
Technical Quality: 4
Clarity: 3
Questions for Authors: - I wonder whether it would be possible to give a description of the constant optimal strategy in the continuous-time case, in words. How does it relate to the minimax strategy? I assume that even as $T\rightarrow \infty$, learners play suboptimally if the temperature $\eta<\infty$, and so the optimizer would deviate from the minimax strategy somewhat to exploit this. However, is it right that for $\eta\rightarrow \infty$, the optimizer strategy would eventually converge to the minimax strategy?
- Line 106: Should it say "$\eta\rightarrow \infty$ as $T\rightarrow \infty$"?
- This work relates to empirical ML work in which learners are gradient based and exploitative strategies are found via meta optimization (see e.g. model-free opponent shaping, https://arxiv.org/abs/2205.01447, and follow up work)
- Could you say under what conditions the constant $C_A$ in Proposition 4 is small? Does this depend on the inequality in Assumption 1? (i.e., if the two best responses lead to very different utilities, this leads to more possibility of exploitation?)
- Typo on line 350: "The details the force the learner"
- In the related work section, it would be nice to better contrast the discussed related work (especially on no regret learners) to the work in this paper. E.g., MWU is also a no regret learner, but best response isn't? How exactly does this paper extend prior work on exploiting these different learners? E.g., by analyzing computational complexity, or by providing tighter bounds, etc.?
Confidence: 2
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors adequately address limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the useful comments. We answer the main questions below, and move a few minor questions to an official comment, for space considerations.
``It would have been interesting to see some example numerical computations of the optimal strategy in a game and resulting simulated utilities. How much better does the optimal strategy do compared to simple baselines, in some canonical games?''
We are happy to include examples of games and the optimal rewards and strategies of the optimizer for different time horizons $T$ in the new revision of the paper.
``Focusing on MWU and related algorithms makes sense to make the analysis tractable, but it also limits the applicability of the results. Moreover, it would be especially interesting to have an impossibility result for a no regret algorithm.''
Even understanding what happens against specific types of algorithms such as MWU is an elusive open problem (look at paragraph 2 of page 3 of Is learning in games good for the learners), but also understanding what the optimal strategies are against these algorithms will be helpful for the analysis against other no-regret learners. You are right, an impossibility result for no-regret learning algorithms would be extremely interesting, and we leave it open for future work.
``While the computational hardness result is interesting, it does not appear very surprising given typical hardness results in game theory.''
A lot of results in game theory are impossibility results (such as for example computation of several types of equilibria in different games). However, our impossibility result is the first of its kind, as far as we know. That is, our result is the first that states that computing optimal strategies against learners with specific dynamics can be computationally hard, and none of the previous work has talked about any computational issues thus far.
``I wonder whether it would be possible to give a description of the constant optimal strategy in the continuous-time case, in words. How does it relate to the minimax strategy?''
As $\eta T \to \infty$ then the optimal constant strategy will approach a min-max strategy. However, for finite or small $T$ the optimal strategy might be different, depending on the game.
``I assume that even as $T\to \infty$, learners play suboptimally if the temperature $\eta \to \infty$, and so the optimizer would deviate from the minimax strategy somewhat to exploit this. However, is it right that for $\eta \to \infty$, the optimizer strategy would eventually converge to the minimax strategy?''
If $\eta \to \infty$ then the learner is essentially employing fictitious play (best responding to the history). When $\eta \to \infty$, the optimizer is able to take advantage of that by significantly deviating from the min-max strategy. This can also be seen in the way we analyze the computational lower bound and specifically you can view it in the example we provide in the last pages of the paper (p22-23). There, optimizer constantly changes the actions they are playing. The phenomenon where the optimizer has to constantly switch actions can also be seen in the matching pennies example that we also discuss in the paper (Proposition 9). It certainly is the case for matching pennies that the average-time strategy for the optimizer in the scenario where $\eta$ is large will converge to the min-max strategy, however, the per-round strategy is not. It is an interesting question we have not tackled whether the average-time strategy always converges to the minmax strategy.
``Could you say under what conditions the constant $C_A$ in Proposition 4 is small? Does this depend on the inequality in Assumption 1? (i.e., if the two best responses lead to very different utilities, this leads to more possibility of exploitation?)''
Yes, this depends on the inequality in Assumption 1. If the assumption is close to being unsatisfied, then the constant is small. If the assumption is satisfied with a large gap, then the constant is larger, which implies a better lower bound on the utility gained by the optimizer.
``In the related work section, it would be nice to better contrast the discussed related work (especially on no regret learners) to the work in this paper. E.g., MWU is also a no regret learner, but best response isn't? How exactly does this paper extend prior work on exploiting these different learners? E.g., by analyzing computational complexity, or by providing tighter bounds, etc.?''
Thanks for the question. Previous work has introduced the problem of strategizing against no-regret learners, and specifically mean based learners. It has also been shown that in zero sum games against these mean based learners, the best rewards the optimizer can achieve are $T \cdot Val(A) + o(T)$, while in general sum games, it has been shown that the optimizer can get significantly more utility than the one shot game value (which for general sum games is called the Stackelberg value), however no efficient algorithm on how to achieve this value has been found. In our work, we show for zero-sum games what is the exact optimal rewards for the optimizer for Replicator dynamics - one of the most widely used mean base learners. We also show a first computational lower bound for computing the optimal strategy against a learner that is best responding, i.e. MWU with infinite step size. We believe that ultimately the problem of computing an optimal strategy against mean based learners in general sum games is computationally hard, and we give a first result with our lower bound towards that direction. We will add these details in the related work section.
---
Rebuttal 2:
Title: Additional responses to questions of the reviewer
Comment: ``While I found that the authors generally explained their approach well, I thought it might have been interesting to give some more explanation, potentially via a concrete example (in the zero-sum case). If this takes up too much space, one could include an example in the appendix.''
We have a few examples (Example 1 in the appendix and the matching pennies game in Proposition 9), and we are happy to include more examples of zero sum games that show what the rewards of the optimizer are.
``Line 106: Should it say $\eta \to \infty$ as $T \to \infty$''
No this is correct as is. Usually the step size is set to be small i.e. $\eta = \frac{1}{\sqrt{T}}$ so increasing $T$ makes $\eta$ go to $0$.
``Typo on line 350: "The details the force the learner"''
Thanks for point that out. It should be "The details of how to force the learner ..."
---
Rebuttal Comment 2.1:
Comment: Thanks a lot for your detailed clarifications.
(Regarding the $\eta\rightarrow\infty$ point: thank you, I was confused about this. I think I understand now that $\eta T$ should go to infinity, but $\eta$ should go to zero, to enable learning but to prevent an oscillating/exploitable policy.) | Summary: The authors present a model where an optimizer plays with a learning agent and aims to extract better rewards from the sequential decision-making game by anticipating what the learner will do selecting a strategy that outperforms the value of a game. They study two settings: a zero-sum game, where they show a polynomial time algorithm that can extract an advantage for the optimizer, and a general-sum game, where they show the problem is NP-hard by reduction from the Hamiltonian cycle problem. They show that for a learner using Replicator Dynamics, the optimizer can get an advantage related to the time horizon by using a constant strategy.
Strengths: The paper tackles the important problem of using information about other agents to maximize utility in two-player games. The authors present various theorems and proofs to show how strategies for the optimizer can be devised to gain an advantage. Further, they also show a novel hardness proof for the general-sum game by reduction from the Hamiltonian cycle problem, when playing against a mean-based learner.
Weaknesses: My primary concern with this paper is in the organization and clarity of presentation. The authors do not contextualize and clearly state their results, or talk about the importance of their results. Further, a large portion of the theoretical information is included in the introduction, with various theorems being re-stated later on. Then, there is an awkward interlude with the related work section before continuing with the remaining theoretical analysis. This makes the paper very hard to follow, without any appropriate structure to guide the reader. I would expect a much shorter introduction, which introduces the problem setting and introduction without going into theoretical detail and summarizes the paper's contributions, following which related work or background for the problem is provided. In general, main Theorems and Propositions should not appear in the Introduction.
Some important details which could be highlighted are missing from the body. For example, the authors don't clearly state the benefit that can be achieved in a zero-sum game by the optimizer, except for the informal statement of proposition 1. This seems to me one of the main contributions of the paper, other than the hardness proof.
Theorems and propositions are referred to by multiple names and repeated. If possible, authors should use the same numbers for the same theorem or proposition.
The authors also focus only on the MWU and Replicator Dynamics models for the learner. However, there is no justification for why this model is chosen as opposed to any other kind of learner. The setting described is general, however all proofs rely on the learner using this model. Some discussion on why MWU and replicator dynamics are chosen will add to the concreteness of the paper.
Some minor/specific comments below:
It would be helpful if n and m are described explicitly as the cardinality of the action spaces of the two players. As it is right now, it is easy to miss when they are first introduced, and later equations (e.g. Eq. 1) rely on the readers knowing what they stand for.
What is \Delta, as used in the definition of the value for zero-sum games? It seems to be used as a function, but it isn't clear from the context what it is supposed to be. This same notation is used in line 163, which suggests \Delta(A) might be a set?
Line 164: i1, i2 are not described in Proposition 4.
The replicator dynamics is defined as a continuous time analogue of the MWU algorithm, but the MWU algorithm itself is not described until later in the paper. This can be confusing for readers, if MWU is not being used, then it should not be introduced earlier. If Replicator Dynamics needs to be described as a generalization or adaptation of MWU, MWU needs to be described first.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. Can you justify why MWU and Replicator Dynamics are chosen as the model for the learner?
2. Can you clearly state what the novel contribution of the paper is? My current understanding is that the hardness proof is novel, and the polynomial time algorithm for Zero-sum games is novel, and that is all.
3. Can you clarify the undefined notation, e.g. \Delta(A)?
4. Why is so much exposition included in the Introduction? Is there a reason for this?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: The authors don't specifically have a limitations section addressing the limits of the work, though they do propose future work building on the current results. The authors should address why only one model of the learner is considered, as that can be seen as a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We answer below the main concerns, and we answer additional questions in an official comment, due to the space limitations.
``My primary concern with this paper is in the organization and clarity of presentation...'':
We thank the author for their comment about the presentation. We note that in theory papers it is common to have an informal version of the theorems in the introduction, followed by a longer version in the technical sections. Yet, we will change the format in accordance to the reviewer's comments, and we agree with the reviewer that such a change can benefit the paper. Specifically, after the meta question, we will summarize in bullets the main contributions of the paper. Related work will follow. Then, we will write the preliminaries, and afterwards the results, in two sections: (1) optimizing against MWU and (2) a lower bound for optimizing against Best-Response. These will include results that currently appear both in the introduction and in sections 2 and 3.
``Some important details which could be highlighted are missing from the body. For example, the authors don't clearly state the benefit that can be achieved in a zero-sum game by the optimizer, except for the informal statement of proposition 1. This seems to me one of the main contributions of the paper, other than the hardness proof.''
We state the benefit through theorem 1 and propositions 1 to 4 in the introduction. First of all, theorem 1 states that the optimal rewards for the optimizer can be achieved with a strategy that is constant over time (i.e. $x(t) = x^*, 0\leq t \leq T$). In proposition 1 we state exactly the range of possible rewards the optimizer can achieve and what happens when $\eta T \to \infty$. In propositions 2-4 we explain what happens in the games with discrete time dynamics. First of all, the discrete time analogue of a continuous time game will always give more rewards for the optimizer (Proposition 2). This 'gap' in the rewards cannot be more than $\eta T/2$ and there exists a game that approximately achieves this gap, namely matching pennies (Proposition 3). Lastly, we show that up to constant factors, most games also exhibit this reward gap between the continuous and discrete game analogues (proposition 4).
``Can you justify why MWU and Replicator Dynamics are chosen as the model for the learner?''
We chose the MWU algorithm for the learner for several reasons. First of all, it is one of the most well-studied no-regret algorithms due to its optimal regret but also as its continuous time analogue, the replicator dynamics, is extremely well studied in the literature. MWU is also an algorithm that is widely used in online learning settings, therefore knowing how robust or manipulable it can be against strategic players in online settings is an interesting question. We also believe that our view of the problem as a continuous time problem, provides a new perspective on how to view the problem of strategizing against mean based learners (as introduced by [1]).
``Can you clearly state what the novel contribution of the paper is? My current understanding is that the hardness proof is novel, and the polynomial time algorithm for Zero-sum games is novel, and that is all.''
All of the results in our paper are novel, and are the following:
(1) The first set of results concerns zero-sum games where the learner is employing Replicator dynamics (for the coninuous case) and MWU (for the discrete case), as summarized in (1a) and (1b) below:
(1a) We show what is the exact best strategy for the optimizer against a replicator dynamics learner in the continuous time setting by presenting a closed form solution (Theorem 1). We show that this strategy can be efficiently computatble using optimization methods (Proposition 5) and we give a range of possible values that this optimal reward can take (Proposition 6). We also examine what happens in the limit when $\eta T \to \infty$ (proposition 4 and 7).
(1b) For discrete-time games, we show that the optimizer can always get more than the analogous continuous-time game (Proposition 2 / Proposition 8). This gap in the reward cannot be larger than $\eta T/2$ (Proposition 10) and we show a game that achieves that gap almost exactly (Proposition 9). Finally we show that under a condition, there is a large class of games that exhibits this gap between the continuous and discrete time games (Proposition 3 / Proposition 11).
(2) The final result we show is a first computational lower bound for computing the optimal strategy against learners and specifically, against Fictitious play.
---
Rebuttal 2:
Title: Answering additional questions by the reviewer
Comment: ``Why is so much exposition included in the Introduction? Is there a reason for this?''
In theoretical CS papers it is customary to include an informal version of the results in the introduction. We are happy to change the format of the paper for the sake of clarity. For more details, see our response to the primary concern (of the same author) in the official rebuttal.
``The replicator dynamics is defined as a continuous time analogue of the MWU algorithm, but the MWU algorithm itself is not described until later in the paper. This can be confusing for readers, if MWU is not being used, then it should not be introduced earlier. If Replicator Dynamics needs to be described as a generalization or adaptation of MWU, MWU needs to be described first.''
We defined the replicator dynamics first because it makes sense for the exposition to talk about the continuous time dynamics first, and afterwards talk about the discrete time dynamics. The replicator dynamics can be viewed as the continuous time analogue of MWU and MWU can be viewed as the discrete time analogue of the Replicator Dynamics interchangeably. We mention the MWU above, just because it is a well-known algorithm within the theory community, in order to help readers that are familiar with MWU and not with the Replicator Dynamics to make the connection. We will make it clear that the definition of MWU is not necessary in order to understand the replicator dyanmics.
``It would be helpful if n and m are described explicitly as the cardinality of the action spaces of the two players. As it is right now, it is easy to miss when they are first introduced, and later equations (e.g. Eq. 1) rely on the readers knowing what they stand for.''
We will make sure to add clear references for this.
``Line 164: i1, i2 are not described in Proposition 4.''
This is a typo, they are defined in assumption 1, thanks for pointing that out.
``Can you clarify the undefined notation, e.g. $\Delta(A)$?''
$\Delta(\mathcal{A})$ denotes the set of all probability distributions over the set $\mathcal{A}$, i.e. $\Delta(\mathcal{A}) = \\{(x_1, x_2, \dots, x_n) | x_1 + x_2 + \dots + x_n = 1,~ x_1\ge 0, x_2 \ge 0,\dots,x_n\ge 0 \\}$. More specifically in our setting, that is the set of all the mixed strategies for the optimizer. Similarly $\Delta(\mathcal{B})$ is the set of all mixed strategies for the learner, i.e. $\Delta(\mathcal{B}) = \\{(x_1, x_2, \dots, x_m) | x_1 + x_2 + \dots + x_m = 1 \\}$. We will make sure to add these definitions in our paper.
---
Rebuttal Comment 2.1:
Comment: Thanks for the detailed response to my concerns and questions, and for clarifying the significant contributions of the paper. I think a rewrite can significantly improve the paper, so I will retain my score for now. I will also go through the other reviews and rebuttals to further improve my understanding of the paper.
---
Reply to Comment 2.1.1:
Comment: Thank you for your prompt response.
We ask if you can still consider increasing the score, if you believe in the paper's merit. This is because the other reviewers rated the readability of this paper as 3, and since we will take your comments on the writeup and incorporate them in the final revision, if this paper gets accepted. | Summary: The paper studies a control problem where an optimizer plays with a mean-based learner in a zero-sum or general-sum game. The problem follows a previous line of work that aims to understand how to play optimally with no-regret learners in repeated games. The paper shows several new results. For zero-sum games, an optimal algirithm is provided for a continuous-time setting, where the learner uses a MWU-like learning dynamic. The authors show that this algorithm, when extended to the discrete-time setting, results in an algorithm that guarantees the optimizer the value of the corresponding one-shot game. For general-sum games, the paper provides a negative result, showing that no FPTAS would exist for computing the optimizer's optimal strategy.
Strengths: - The problem studied follows a line of very well-motivated problems. Understanding how to play optimally with mean-based agents is very interesting.
- The paper is overall technically sound and presents some solid results.
- The zero-sum and continuous-time settings are interesting and look like reasonable choices for the problem. The paper presents non-trivial results for these settings.
Weaknesses: - The informal statements of results in Section 1 could have been made clearer. The current presentation does not allow an easy comparison of results in different settings.
- The results only apply to an MWU learner, not general mean-beased learners, and it relies on the knowledge of $\eta$.
- In the discrete-time setting, the algorithm only guarantees a lower bound. There is no matching negative result.
- The negative result for the general-sum case only implies it is hard to get reward T-1, so it doesn't preclude the optimizer from obtaining sublinear regret overall. Moreover, fictitious play is a bit simplisitc as it is not no-regret. (However, I think the reduction itself is quite interesting.)
---
Minor:
- It may be clearer to state Assumption 1 as a condition or a property. Stating it as an assumption may make it sound like your result requires additional assumption, while I think the purpose of Proposition 4 is more about showing the existence of games with a utility gap.
- Line 362: closed form
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the reduction rely on a specific tie-breaking rule, or can the tie-breaking assumption be relaxed?
- Is there any intuition why the result in Proposition 1 relies on the cardinality of BR(x)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors addressed the limitations properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer and respond to the points raised by the review below:
``The authors show that this algorithm, when extended to the discrete-time setting, results in an algorithm that guarantees the optimizer the value of the corresponding one-shot game.''
This is not exactly true. We show that for both the discrete and continuous time settings the optimizer can achieve on average more than the one-shot game. For the continuous time regime we show a closed-form solution (Theorem 3) for the rewards of the optimizer that is always more than $T \cdot Val(A)$ (Proposition 1 / Proposition 6). Moreover, in the discrete game the optimizer can always achieve more rewards than in the analogous continuous game (Proposition 2 / Proposition 8) and we show that for a large class of games the 'gap' is $\Omega(\eta T)$ (Proposition 4 / Proposition 11).
``The informal statements of results in Section 1 could have been made clearer. The current presentation does not allow an easy comparison of results in different settings.''
We will rewrite the statements more clearly and compare them to previous related work in the new revision.
``The results only apply to an MWU learner, not general mean-beased learners, and it relies on the knowledge of $\eta$''
We believe that our result could be extended to the case where $\eta$ is not known in advance -- by an algorithm that learns $\eta$ on-the-go. This was not the focus of our paper and it is left for future work. For optimizing against arbitrary mean-based learners, we do not know if there is a clean solution, and devising an optimal strategy against any mean-based learner might be a very hard research problem (as also discussed by Brown et al., ``Is Learning in Games Good for the Learners?'')
``In the discrete-time setting, the algorithm only guarantees a lower bound. There is no matching negative result.''
We do provide upper and lower bounds, which are tight up to constant factors.
Proposition 4 proves that in discrete games, the optimizer cannot obtain more than $\eta T/2$ reward compared to the continuous time analogue (Proposition 3 / Proposition 10). For the positive result, we show that there exists a game where this gain is optimal up to a multiplicative $1+o(1)$ (Proposition 3 / Proposition 9) factor. Furthermore, for a class of games satisfying a mild condition, we show that the optimizer can gain $\Omega(\eta T)$ more reward than the Value of the game in the discrete-time setting (Proposition 4).
``The negative result for the general-sum case only implies it is hard to get reward T-1, so it doesn't preclude the optimizer from obtaining sublinear regret overall. Moreover, fictitious play is a bit simplisitc as it is not no-regret. (However, I think the reduction itself is quite interesting.)''
Both of your observations are correct; our reduction does not prove hardness of approximation of the optimal rewards, and also it is based on the assumption that the learner employs fictitious play (where $\eta \to \infty$). However, we see this lower bound as a first step to possibly unlocking the mystery of the strategization against MWU, that was first introduced by [1] 5 years ago. Many other works (inclduing [2], [3]) have this left as an open problem, and we believe that this problem is computationally hard. Our hardness result is the first in this line of work and we are hopeful that in the future we will be able to prove hardness of approximation and relax the $\eta \to \infty$ assumption.
``It may be clearer to state Assumption 1 as a condition or a property. Stating it as an assumption may make it sound like your result requires additional assumption, while I think the purpose of Proposition 4 is more about showing the existence of games with a utility gap.''
You are right, the point is to show that a certain category of games have a utiity gap between their discrete and continuous time analogues. We will phrase the assumption as a condition instead, for the utility gap to exist.
``Line 362: closed form''
Thanks for pointing the typo out.
``Does the reduction rely on a specific tie-breaking rule, or can the tie-breaking assumption be relaxed?''
Yes, the reduction does rely on the specific tie breaking rule, that is only used however in the first round of the game where the historical rewards of the learner for each action is 0. We did not see a simple way to relax this tie breaking rule, and we leave relaxing this assumption for future work.
``Is there any intuition why the result in Proposition 1 relies on the cardinality of BR(x)?''
The less best responses for the learner there are the longer it will take for the learner to converge from the uniform distribution to that best response. Imagine a game where every action of the learner is a best response; in that scenario the rewards for the optimizer will be exactly $T \cdot Val(A)$. In comparison, in the case where there is only a single best response, the optimizer will gain extra utility because of the time it takes for the learner to learn that best response. The less best responses, the more time it takes to concentrate all the probability for the learner in the best responses and therefore the more utility for the optimizer.
---
Rebuttal Comment 1.1:
Comment: Thank you. I appreciate the detailed responses and answers, which are very helpful for understanding the results. I'm now more positive about the paper. | Summary: This paper outlines conditions under which, in repeated two-player zero-sum and general-sum games between a learner with an online learning strategy and an optimizer that knows the learner's strategy and utility function, the optimizer can learn a policy to achieve a higher average utility than the value of the one-shot game. For the two-player zero-sum game setting, assuming the learner selects actions following Replicator Dynamics in continuous time, this paper proposes an algorithm in which the optimizer can take a constant (over time) optimal strategy that provably maximizes its utility. In discrete time with the learner following a Multiplicative Weights Update (MWU) strategy, the continuous time optimal utility of the optimizer is shown to lower bound the discrete time optimal performance following the same strategy. For general-sum games, this paper proves a computational hardness result showing it is NP-hard to approximate the optimal utility for the optimizer against a Best Response learner. This is proven via a reduction from the Hamiltonian cycle problem.
Strengths: 1. The paper does a good job at highlighting the motivating questions, and the main ideas of the proposed approach are in general clearly explained. The proof for the reduction from the Hamiltonian cycle was quite intuitively explained too, and I really appreciate it.
2. The theoretical contributions seem sound and relevant to the optimizer-learner setting in two-player zero-sum and general-sum games. Please note that I have not thoroughly checked the proof details in the appendix.
Weaknesses: 1. This paper primarily focuses on theoretical bounds on optimal utility and computational hardness guarantees for learning an optimal strategy for the optimizer against mean-based learners in the zero-sum and general-sum games. But there are no empirical evaluations of the proposed algorithms. In section 1.2 Related Work, authors point out prior work in contracts and auction design, which could also be broadly categorized as mechanism design. Some recent related papers (eg. [1],[2] ) have proposed experimental frameworks to analyze interactions in similar optimizer-learner frameworks, which could be referred to for similar experiment design with repeated games and bandit agents.
[1] Guo, W., Agrawal, K.K., Grover, A., Muthukumar, V.K. and Pananjady, A., 2022, March. Learning from an Exploring Demonstrator: Optimal Reward Estimation for Bandits. In International Conference on Artificial Intelligence and Statistics (AISTATS).
[2] Banerjee, A., Phade, S.R., Ermon, S. and Zheng, S., MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning. Transactions on Machine Learning Research.
2. The connection to optimal control could perhaps be better motivated and explained.
3. Confusing notation that can be improved:
- Line 255 and 256: Should it be $R_{cont}(x, h(0), T, A, -A)$ and $R^*_{cont}(h(0),T,A,-A)$?
- Line 308: "for each node $v_i$ of the graph, the learner has two associated actions $v_i$ and $v_i'$" - the overloaded use of $v_i$ is confusing.
- I might have missed it, but what is the relation between $h_i(t)$ and $h(t)$? Is it explicitly defined somewhere in the paper?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. line 164: "$i_1, i_2$ are defined in Proposition 4. - Should this instead be "$b_{i_1}, b_{i_2}$ defined in Assumption 1" ?
2. If I understand correctly, the reason why "the learner would have to change actions frequently" (line 153) for the optimizer to get a higher optimal utility, is so that the optimizer can exploit the gain by playing an optimal strategy at every time step? In this paper, for the discrete time dynamics, the learner's strategy is therefore assumed stochastic to ensure that the learner frequently changes actions. What would be the effect of assuming a stochastic learner in the continuous time case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, authors have discussed the limitations and potential future directions for their approach.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer and respond to the points raised by the review below:
``In section 1.2 Related Work, authors point out prior work in contracts and auction design, which could also be broadly categorized as mechanism design. Some recent related papers (eg. [1],[2] ) have proposed experimental frameworks to analyze interactions in similar optimizer-learner frameworks, which could be referred to for similar experiment design with repeated games and bandit agents.''
Thank you for pointing out these papers. We will include them in the related work section in the new revision of our paper.
``The connection to optimal control could perhaps be better motivated and explained.''
The optimal control paragraph is explaining how Theorem 1 can be derived using the HJB equation. The problem of strategizing against MWU can be viewed as a control problem where the system has some dynamics (in this case the dynamics of MWU), a utility function (the rewards of the optimizer), and the control which is the strategy of the optimizer. The HJB equation gives a partial differential equation that when solved gives the closed-form solution of Theorem 1. We included it because we think it might be helpful for thinking about the general sum case. The connection of optimizing against learners and control has already been discussed in prior work [24], yet from a different angle and they did not discuss the HJB equation. We will definitely include a more detailed explanation in the new revision of the paper.
``Line 255 and 256: Should it be $R_{cont}(x, h(0), T, A, -A)$ and $R_{cont}^*(h(0), T, A, -A)$?''
You are right, this is a typo. We will fix it in the new revision.
``Line 308: "for each node $v_i$ of the graph, the learner has two associated actions $v_i$ and $v_i'$" - the overloaded use of $v_i$ is confusing.''
We will use a different notation for the actions, thank you for pointing this out.
``I might have missed it, but what is the relation between $h_i(t)$ and $h(t)$? Is it explicitly defined somewhere in the paper?''
$h(t)$ is an $m$-dimensional vector and thus $h_i(t)$ is the $i$-th coordinate of that vector. We will make sure to include it in the definitions.
``Line 164: $i_1,i_2$ are defined in Proposition 4. - Should this instead be $b_1, b_2$ defined in Assumption 1" ?''
You are right, this is a typo. It should be " ... where $i_1, i_2$ are defined in Assumption 1 ...", not in Proposition 4.
``If I understand correctly, the reason why "the learner would have to change actions frequently" (line 153) for the optimizer to get a higher optimal utility, is so that the optimizer can exploit the gain by playing an optimal strategy at every time step? In this paper, for the discrete time dynamics, the learner's strategy is therefore assumed stochastic to ensure that the learner frequently changes actions. What would be the effect of assuming a stochastic learner in the continuous time case?''
First of all there is a typo in that line (153). It should be "the optimizer would have to change actions frequently". Both in the discrete and the continuous dynamics the learner and the optimizer are allowed to play stochastically. To better understand what we mean by the optimizer having to change actions frequently take a look at Proposition 9, which exemplifies the main difference between the continuous and discrete time dynamics. It studies the game of Matching Pennies, both in discrete and continuous-time. For the continuous-time dynamics, the best value the optimizer can get is $0$. This is a consequence of Theorem 3. However, for the discrete time dynamics, it is possible to get $\Omega(\eta T)$ reward for the optimizer. This is achieved when the optimizer switches actions all the time, and then it is possible to gain positive reward, as presented in the proof. Essentially, the main difference between the continuous time dynamics and the discrete time dynamics is that in the continuous case the learner changes the strategy smoothly and therefore cannot be exploited as we can exploit the discrete time dynamics. Intuitively, the learner is ``slower to respond'' in the discrete setting, hence a frequent change of actions from the optimizer could benefit the latter.
---
Rebuttal Comment 1.1:
Title: Acknowledgement
Comment: Thank you for the response and clarifications. I have also read the other reviews and responses, which helped improve my understanding of this work. I will maintain the current score. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization | Accept (poster) | Summary: The research introduces UA-4DGS, a novel approach designed to reconstruct dynamic scenes from monocular videos. UA-4DGS compensates for information loss due to large motion and self-occlusion by incorporating diffusion priors with pixel-wise uncertainty scores. It proposes an uncertainty-aware diffusion-based regularization, which selectively enhances uncertain regions while maintaining certain ones, preventing inconsistencies with training images. It also identifies and addresses the initialization problem in Gaussian Splatting for dynamic scenes, where static scene-focused Structure from Motion (SfM) techniques fail to initialize dynamic regions properly. They propose dynamic region densification to enhance reconstruction performance and memory efficiency by adding extra Gaussian points to dynamic areas.
Strengths: The paper is presented very clearly, well structured, and it is in general easy to follow and understand.
UA-4DGS's effectiveness is demonstrated using the DyCheck benchmark with complex object motions. Additionally, the research shows that integrating uncertainty considerations into other NeRF regularization techniques can also improve performance.
Weaknesses: 1. Writing Quality: The paper contains many typographical and grammatical errors, as well as notation misuse, e.g. L31, 37, 44, 79, and 124. I recommend a thorough review to correct these mistakes.
2. Some assumptions and claims in the paper are questionable and require further validation. For instance, the basic assumption regarding the definition of certainty ("we assume that Gaussians frequently visible in the training images have low uncertainty, while those seen less often due to motion or occlusion have high uncertainty, as they are reconstructed with lower accuracy") and the claim about under-reconstruction in dynamic regions ("under-reconstruction in dynamic regions negatively impacts the training process, resulting in an excessive number of gaussian points and slowing down inference time") need to be substantiated with additional evidence.
3. Experimental results: The experimental results presented in Figure 3 are disappointing. While the proposed UA-4DGS method seems to outperform the baselines, the results are still unsatisfactory, with significant noise and blurriness. Additionally, the comparison is based on only seven images, which is insufficient. I strongly suggest expanding the testing to include more cases to provide a more comprehensive evaluation of the model's performance.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. In L171, why use both L1 and L2 losses at the same time?
2. Could you introduce more details about L176 - "Thus, we cache 200 images every 2000 iterations for training efficiency."?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations of this paper are primarily found in the method's assumptions and the experimental demonstrations. Please refer to the weaknesses part for more details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive comments on improving the writing and analysis of the proposed method. We answered your two concerns on evidence of our assumptions, and lack of experiments as below. We promise to revise our paper by considering your comments including writing quality.
## W2. Lack of evidence on the assumptions and claims
### **A. Evidence on the certainty definition**
- We quantify uncertainty based on the visibility in training images. To verify our uncertainty modeling strategy, we measure the estimated uncertainty using AUSE (Area Under the Sparsification Error) [1], which is a common measure for uncertainty quantification [2], where a lower value indicates a high correlation between true error and estimated uncertainty.
- For more valid comparisons, we also compare our method with a recent algorithm, FisherRF presented at ECCV24, which suggests a Fisher information-based uncertainty quantification. Note that FisherRF addresses a completely different task, specifically active learning for view selection, whereas our work is the **first to apply uncertainty information for regularization during training**. Despite this, we demonstrate that our definition is more valid than FisherRF in a sparse setting by showing lower AUSE values (both with MSE and MAE).
|Method | AUSE-MSE | AUSE-MAE|
|-----------------|----------------|---------------|
|Random (upper bound) | 0.4871 | 0.4878|
|FisherRF [3] | 0.4042 | 0.4047 |
|Visibility (ours) | 0.3010 | 0.3061|
AUSE-MSE and AUSE-MAE adopt Mean Squared Error (MSE) and Mean Absolute Error (MAE) for the error estimations, respectively. Note that we measure the AUSE in a few-shot setting, which can result in overall higher AUSE values due to the limited data available.
- [1] Estimating and exploiting the aleatoric uncertainty in surface normal estimation. ICCV 2021
- [2] Bayes’ Rays, CVPR 2024
- [3] FisherRF: Active View Selection and Uncertainty Quantification for Radiance Fields using Fisher Information, ECCV2024
### **B. Claims on the importance of dynamic region densification**
- During training, the Gaussian splatting algorithm [4] accumulates gradients of the position values (xy-direction of image space) of each Gaussian. If these gradients exceed a predefined threshold, the Gaussians move, split, or clone to address missing Gaussians in adjacent regions. In dynamic scene reconstruction, no Gaussians initially exist in dynamic regions, causing those in static regions to repeatedly clone and split, leading to a rapid increase in Gaussian numbers.
- To verify our claims about over-cloning and over-splitting, we measure the percentage of Gaussians whose gradients exceed the densification threshold, as shown in the table below. Without our dynamic region densification, we observed high gradients in the xy-directions, leading to over-cloning and over-splitting. For better visualization, please see Figure I in the attached PDF, where we illustrate the ratio of Gaussians exceeding the threshold with varying thresholds.
| Iteration | w/o dynamic dens. | w/ dynamic dens. |
|----------------------|----------------------|--------------------|
| 600 | 12.71% | 3.38% |
| 800 | 10.09% | 3.34% |
| 1200 | 11.65% | 3.55% |
Also, the below table shows the correlation between gaussian numbers and inference speed. It implies that over-densification (over-cloning & over-splitting) reduces inference speed.
|Gaussian numbers | Inference speed (FPS) |
|---------------------|--------------|
|589102 | 117.36 |
|1132420 | 81.89 |
[4] 3D Gaussian Splatting for Real-Time Radiance Field Rendering, SIGGRAPH 2023
## W3. Experimental results
We tackle in-the-wild settings where a monocular video contains complex motion. 4DGS approaches deal with monocular video reconstructions, but their target datasets [5,6] are too limited and unrealistic; they only contain small motion or have unrealistic train/test split strategies, making them similar to video interpolation. (The images for training and testing are sampled from a single video stream with a significantly overlapped time interval.)
Again, our target domain is in-the-wild monocular video for real-world scenarios, and it is difficult to find a dataset that satisfies the required conditions. The Dycheck dataset is an example of a realistic video dataset. Please refer to the attached file for more qualitative results.
- [5] HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields. SIGGRAPH 2021
- [6] D-NeRF: Neural Radiance Fields for Dynamic Scenes. CVPR 2021
## Q1. L1 and L2 losses
L1 and L2 losses are often used together in machine learning and optimization tasks to leverage the benefits of both types of loss functions and achieve better regularization. The joint use of two loss terms is helpful for the robustness to outliers (thanks to L1) and the smoothness (thanks to L2). In our problem, the synthesized images should satisfy the properties so the choice of such a loss function is reasonable.
## Q2 Details of caching methods
When refining rendered images with diffusion, the DDIM process is quite time-consuming if conducted every iteration. To avoid this, we sample 200 images at the beginning of every 2000 iterations, refine them with DDIM, and cache them. During the 2000 iterations, we utilize these cached images for the UA-Diffusion loss.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Dear Authors,
I appreciate your efforts in the rebuttal and for providing additional demonstrations. I have carefully reviewed your reply, particularly the examples in the attached PDF. While some cases look very good, most of them appear blurry and noisy in the details. I believe the method still has room for improvement.
Thank you.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the valuable feedback. We have carefully considered the concerns raised in W2 regarding the evidence supporting our assumptions, and we hope that our explanations have provided the necessary clarity.
Regarding the concerns about qualitative results, as also mentioned by Reviewer 6fVK, we respectfully suggest reviewing our detailed response to Reviewer 6fVK. While we fully acknowledge this concern, we would like to emphasize that our in-the-wild monocular setting dataset is particularly challenging compared to common 4DGS settings. Despite this difficulty, our method still outperforms the existing 4DGS baselines in terms of both qualitative and quantitative results.
If you have any further questions or would like additional clarification, please do not hesitate to contact us. We would be more than happy to provide additional information or discuss any aspect of our work in greater detail. Your feedback is deeply appreciated, and we remain fully committed to addressing any concerns you may have.
---
Rebuttal 2:
Comment: We sincerely appreciate your valuable feedback. We fully understand and acknowledge your comments, yet we kindly ask that you consider our final, careful clarification regarding our contribution. Your consideration of this would be greatly appreciated.
We have addressed an **emerging and challenging problem of 4D-Gaussian splatting (4DGS)**, especially in in-the-wild settings. Despite the inherent difficulty of the task, by proposing novel training schemes, we have improved both the qualitative and quantitative performance of existing baselines in these challenging settings, as highlighted by Reviewer 6fVK. Additionally, in terms of addressing the emerging and challenging problem of 4DGS, we believe **our paper offers valuable insights and directions that will make a meaningful contribution to the field**.
Regarding the qualitative results, we suggested referring to the response to Reviewer 6fVK in our previous response. As we have not responded directly, we would now like to summarize and emphasize the key points here.
- Our target **in-the-wild dataset is particularly challenging**, as evidenced by existing 4DGS baselines achieving PSNRs below 15 on this dataset, compared to over 25 on other datasets. This difficulty helps explain why the overall qualitative results in this in-the-wild dataset appear lower in quality compared to other commonly used datasets.
- In the extremely challenging dataset, although some residual blurriness remains, **UA-4DGS consistently outperforms 4DGS baselines and significantly reduces blurriness**, as illustrated in both the main paper and the supplementary pdf file.
As detailed in our response to Reviewer LdSG, our method also enhances baseline performance on the easier NeRF-DS dataset [1], where overall PSNR is higher and noisy artifacts are almost nonexistent compared to the challenging settings for both the baseline and our method. This suggests that **the blurriness primarily results from the difficulties inherent to the in-the-wild datasets, and our method effectively enhances performance on easier datasets without such issues**.
- Our training scheme is **general and can be integrated with any 4DGS framework**, potentially enhancing performance through improved Gaussian deformation strategies. We demonstrated this compatibility in our response to Reviewer LdSG.
We appreciate your time and thoughtful feedback, and promise to revise our paper to include the discussions and responses provided during the rebuttal period. If you have any further concerns or questions, please feel free to reply to this message.
[1] NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects. CVPR 2023 | Summary: This paper proposes an uncertainty-aware regularization technique that uses diffusion priors to improve the reconstruction quality of underfitted areas. and a dynamic region densification technique to address the missing initialization problem on dynamic regions.
Experiments verify the proposed techniques.
Strengths: 1. The proposed regularization method, which applies uncertainty-aware diffusion priors on unseen views, can reduce inconsistencies with training images.
2. The proposed dynamic region densification can deal with the issue of missing initialization in dynamic regions.
3. The paper is well-written and easy to follow.
Weaknesses: 1. The experimental setup is not reasonable. The goal of this paper is to reconstruct dynamic scenes from monocular videos. However, as shown in Sec. 7.2, only one part is relevant to evaluating the proposed method, while the other two involve the generalization of uncertainty-aware regularization. I think the authors should conduct more experiments on the dynamic scene datasets and move the two parts about generalization into the `Appendix`.
2. As shown in L206, the authors claim the proposed method will compare with SC-GS. However, I cannot find such a comparison in Tables and Figures. Moreover, the proposed method should also be compared with Deformable 3D-GS[1].
3. The `Appendix` should be placed after the `References`.
[1] Yang, Z., Gao, X., Zhou, W., Jiao, S., Zhang, Y., Jin, X.: Deformable 3D Gaussians for high-fidelity monocular dynamic scene reconstruction. In CVPR. (2024)
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses.
More,
- According to Table 2, for three components of the proposed method, *Dynamic Densification* plays the most important role (+1.3 mPSNR), $L_{\mathrm{data}}$ is the second important (+0.49 mPSNR), and $L_{\mathrm{UA-diff}}$ is the least important. (+0.26 mPSNR). I would like to know the performance of each component when used individually.
- The proposed method seems to be able to change 4D-GS to any other GS-based method. I would like to know the performance of the proposed method when applying it on Deformable 3D-GS or SC-GS.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This paper does **not** discuss the limitations and broader impacts.
Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for complimenting our novel training schemes, uncertainty-aware regularization, and densification technique for monocular video reconstruction, especially having complex object motions. The proposed uncertainty-aware regularization is a generic method, which is effective on both dynamic scene construction and few-shot static scene reconstruction, which is related to weakness 1. We will revise our paper by reflecting on the comments.
## W1. More experiments on dynamic scene reconstruction
We presented more qualitative results in the attached PDF file in comparison to other methods. The results clearly show that the proposed approach outperforms existing techniques.
In addition, to verify the generality of uncertainty-aware regularization on dynamic scene reconstruction, we evaluate UA-TV loss in a dynamic setting. The table below shows that UA-TV enhances the performance of UA-4DGS for dynamic scene reconstruction.
This table is an extension of Table 2 in the main paper.
### Depth regularization on dynamic scene
|UA-Diff | UA-TV | M-PSNR | M-SSIM | M-LPIPS |
|---------------|----------------|------------------|--------------------|------------------|
|x |x | 17.04 |0.463 |0.375 |
|o |x |17.30 |0.474 |0.375 |
|o |o |17.42 |0.478 |0.374 |
## W2. Missing baselines
### Additional baselines on Dycheck datasets
We are sorry about our mistake in L206. We tried to train SC-GS [1] on the Dycheck dataset but failed to optimize the model in most scenes. This is probably because SC-GS targets multi-view synchronized videos in their main paper, which are far from the setting in our mind. Note that, unlike the multi-view setting, the control points in their algorithm are difficult to initialize with only sparse information from in-the-wild monocular video.
In the main paper and the attached file for the rebuttal, we compared our algorithm with three existing techniques [2, 3, 4]. For Deformable 3D-GS [2], we tested it on the in-the-wild Dycheck dataset and visualized rendering results in the attached file. Deformable 3D-GS produces highly blurry images due to the smoothness property of MLP, showing a low mean m-PSNR value of 12.8. Additionally, it sometimes struggles to deform accurately while maintaining a canonical status.
- [1] SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes, CVPR2024, code release: 2024.3
- [2] Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction, CVPR2024, code release: 2023.9
- [3] 4DGS: 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering, CVPR2024, code release: 2023.12
- [4] Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis, CVPR2024, code release: 2023.12
### Limitations of existing baselines and position of our work
- 4DGS approaches often tackle monocular video reconstructions, but their target datasets are typically unrealistic; they contain limited motion and/or are trivially split into training and testing datasets, making them similar to video interpolation.
- We first point out that existing 4D-Gaussian splatting paradigms perform poorly on more realistic in-the-wild videos, such as the Dycheck dataset. Our paper partially addressed the existing challenges.
## Q1. Impact of the proposed components
We agree that the initialization strategy (dynamic densification) has more contribution than UA-Diff, but we emphasize that dynamic densification is also our contribution. Because UA-Diff always has to be applied after dynamic densification to show its effectiveness as shown in the table below, the direct comparison between the two components is fair. Both the modules are the important parts of our algorithm.
|Dynamic dens | UA-Diff | M-PSNR | M-SSIM | M-LPIPS |
|----------------------|-----------------------|---------------|--------------------|------------|
|x |x |15.74 |0.444 |0.373 |
|x |o |15.71 |0.441 |0.412 |
|o |x |17.04 |0.463 |0.375 |
|o |o |17.30 |0.474 |0.375 |
## Q2. Plugging into other baselines
Plugging into other baselines, such as SC-GS or Deformable-3DGS, is a good suggestion, but unfortunately, we need more time for implementation Currently, we observe that our dynamic densification works well on those baselines. We will discuss more extended experiments during the author-discussion period.
## L1. Limitations
Compared to datasets that use multi-view cameras, the in-the-wild setting inherently presents more challenges. Thus, the results is still blurry than other easier datasets. However, our algorithm outperforms the baselines in terms of both qualitative and quantitative results.
---
Rebuttal 2:
Title: Thank you for the rebuttal
Comment: Thank the authors for their detailed rebuttal! It helped with most of my concerns.
However, I still think it is not enough to evaluate the proposed method on only one dataset of dynamic scenes.
There are some datasets recommended:
1. NeRF-DS dataset used by Deformable 3D-GS
2. NVIDIA Dynamic dataset used by DynamicNeRF[1]
3. Unbiased4D Dataset [2]
- [1] Chen Gao, Ayush Saraf, Johannes Kopf, and Jia-Bin Huang. Dynamic view synthesis from dynamic monocular video. In ICCV, 2021
- [2] Johnson, Erik, et al. "Unbiased 4d: Monocular 4d reconstruction with a neural deformation model." In CVPR, 2023.
---
Rebuttal Comment 2.1:
Comment: We strive to provide comprehensive answers and hope our responses have addressed your queries. Nevertheless, we acknowledge that there might be moments of ambiguity or potential misunderstandings. Please don't hesitate to seek further clarification on any aspect of our work.
We're also grateful for the points you raised about compatibility with other 4DGS baselines in Question 2. To demonstrate this, we integrated our proposed methods into the Deformable-3DGS [1]. The table below shows quantitative results, highlighting how our method enhances the performance of another 4DGS baseline.
|Method |M-PSNR| M-SSIM| M-LPIPS|
|---|-----|-----|----|
|Deformable-3DGS [1] |13.75 | 0.398 | 0.495 |
|Deformable-3DGS [1] + Ours | 15.31 | 0.434 | 0.418 |
Thank you for suggesting additional monocular video datasets. We tested our algorithm on the plate scene in the NeRF-DS dataset [2], as shown in the table below, further demonstrating the generality of our method. We believe our approach can also be applied to other datasets, including [3,4]. We will incorporate these insights into our revised paper.
|Method |PSNR | SSIM | LPIPS|
|---|-----|-----|----|
|Deformable-3DGS [1] | 20.48 | 0.812 | 0.222 |
|Deformable-3DGS [1] + Ours |20.74 |0.814 | 0.213 |
[1] Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction. CVPR 2024
[2] NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects. CVPR 2023
[3] Dynamic view synthesis from dynamic monocular video. ICCV 2021
[4] Unbiased 4d: Monocular 4d reconstruction with a neural deformation model. CVPR 2023 | Summary: This paper tackles the problem of modeling dynamic 3D scenes from monocular videos. To tackle the more challenging dynamic regions in 4D Gaussians, the authors propose to measure the uncertainty and guide those regions with diffusion priors, while keeping certain regions unchanged. In addition, the authors propose to re-initialize the dynamic regions for better performance.
Uncertainty of Gaussian points is measured by their visibility in the training set. Re-initialization of dynamic Gaussian is done by exploiting pre-trained depth and optical flow models.
Strengths: 1. Complete ablation studies.
2. Improved quantitative results with respect to the baseline (4DGS).
3. Large quantitative improvements for the sparse view case.
Weaknesses: Major
1. Qualitative results are still of low quality (similar to the previous works).
2. Model training depends on several pre-trained models on different tasks, which increases the complexity of the overall method.
Minor
1. Main performance improvement comes from the initialization strategy
2. Missing definition of co-visibility mask
3. Check the writing continuity of lines 190 and 191
Technical Quality: 3
Clarity: 3
Questions for Authors: Where is the "camera information" in line 188, is it from COLMAP?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors checked the limitation box in the checklist. However, I cannot see the limitations section in the paper nor the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for acknowledging the strengths of our proposed method, particularly by highlighting the significantly improved performance in both the in-the-wild 4DGS task and the static few-shot task.
## W1. Low visualization quality
One of our main contributions addresses the overlooked issue of existing 4D-Gaussian splatting in in-the-wild scenarios. Although there exist 4DGS approaches that deal with monocular video reconstructions, they are limited to tackling videos with limited motion and/or unrealistic train/test split strategies, making the task similar to video interpolation. On the contrary, we target the Dycheck dataset, which is **more realistic and challenging than datasets used in previous research**. Consequently, our qualitative results are of low quality compared to the ones shown in other papers. To demonstrate the superiority of the proposed method, we present more qualitative results together with the outputs from other algorithms, Deformable-3DGS [1], 4DGS, and Spacetime, in the attached PDF file. Note that the additional results can only be evaluated qualitatively because their ground-truths are not available.
Regardless of the quantitative and qualitative improvements, we want to emphasize the value of our work also in the application of uncertainty quantification. Until now, uncertainty quantification in novel view synthesis tasks has been limited to active learning for view selection after training. However, we are the first to leverage uncertainty in the training process, demonstrating its effectiveness in monocular video and few-shot settings.
- [1] Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction, CVPR2024
## W2. Complexity of the overall method
Our method relies on several off-the-shelf models such as depth and flow estimators and diffusion models. However, depth and flow estimation models are used only once per example for training. Additionally, since the learnable parameters do not increase, training complexity only increases marginally. Since all the extra models are not required for inference (rendering), our inference cost is identical to the baseline algorithm.
## W3. Impact of proposed components
We agree that the initialization strategy (dynamic densification) has more contribution than UA-Diff, but we emphasize that dynamic densification is also our contribution. Because UA-Diff always has to be applied after dynamic densification to show its effectiveness as shown in the table below, the direct comparison between the two components is fair. Both the modules are the important parts of our algorithm.
|Dynamic dens | UA-Diff | M-PSNR | M-SSIM | M-LPIPS |
|----------------------|-----------------------|---------------|--------------------|------------|
|x |x |15.74 |0.444 |0.373 |
|x |o |15.71 |0.441 |0.412 |
|o |x |17.04 |0.463 |0.375 |
|o |o |17.30 |0.474 |0.375 |
## W4 & Q1 Clarification
The co-visibility mask, provided by the DyCheck dataset is used for evaluation, assigning a value of 1 to pixels to invisible regions during training. For m-PSNR, m-SSIM, and m-LPIPS, areas where the co-visibility mask is 0 are excluded from the testing process.
The camera information in lines 190 indicates the camera pose of the training images, which we used as provided by the DyCheck dataset.
## L1. Limitations
We apologize for the mistake regarding the checklist and will correct this by adding the limitations. Compared to datasets that use multiview cameras, the in-the-wild monocular video setting inherently presents more challenges. Thus, we agree that the results is still blurry than other easier datasets.
---
Rebuttal Comment 1.1:
Title: Thanks for your reply
Comment: Thanks for your answers. I appreciate the additional results, but, as other reviewers pointed out, the qualitative performance is below what NeurIPS expects, making me believe this work still needs more refinement.
---
Rebuttal 2:
Comment: Thank you for your feedback on our results; the quantitative results have significantly improved, but there is still room for enhancement in the qualitative results.
Firstly, we acknowledge this, but it’s important to consider that **our target in-the-wild dataset is particularly challenging**, as evidenced by the fact that **existing 4DGS baselines struggle to achieve a PSNR above 15 in our target dataset, whereas they often exceed 25 on other datasets**. This suggests that it is quite natural for the qualitative results to appear lower in quality compared to those from common 4DGS settings.
Secondly, although UA-4DGS contains some blurriness in the extremely challenging dataset [1] (tackled in our main paper), it **significantly outperforms the 4DGS baselines**, as shown in the attached file. Additionally, in case relatively less challenging NeRF-DS [2] dataset, we observed that our algorithm enhances the baseline with higher fidelity, without blurriness, as demonstrated in our response to Reviewer LdSG. Thus, the blurriness is primarily due to the difficulty of the target in-the-wild datasets, and our method is also capable of enhancing performance in easier datasets.
Finally, our method focuses on the training scheme, not the 4DGS network architecture. Since **the proposed training scheme is indeed general**, it can be seamlessly integrated into existing 4DGS baselines, as demonstrated in our response to Reviewer LdSG. Therefore, we believe our algorithm could achieve even greater performance when combined with future baselines that refine their architecture for better Gaussian deformation strategies. Of course, regardless of the baseline selection, we are committed to continuing our exploration of methods to enhance 4DGS in such a challenging setting.
If you have any further questions or would like additional clarification, please do not hesitate to contact us. We would be glad to provide more information or discuss any aspect of our work in greater detail. Your feedback is deeply appreciated, and we are fully committed to addressing any concerns you may have.
[1] Monocular Dynamic View Synthesis: A Reality Check. NeurIPS 2022
[2] NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects. CVPR 2023
---
Rebuttal Comment 2.1:
Title: Thanks for the clarification
Comment: Thanks for your clarifications. I recognize that the proposed method yields better results than the other methods, but it is still far from perceptually pleasant. For this reason, I can only raise my score to weak accept.
---
Reply to Comment 2.1.1:
Comment: We sincerely appreciate your insightful feedback and the further positive improvement in the rating of our work. We are also grateful for your acknowledgment of the strengths of our method, particularly its advanced qualitative and quantitative performance compared to existing works. Our paper addresses the emerging and challenging problem of 4D Gaussian splatting, especially in in-the-wild settings, and we believe it provides valuable insights and directions that will make a meaningful contribution to the field.
Thank you for your time and thoughtful feedback. If you have any further concerns or questions, feel free to reply to this message. | null | null | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable comments on improving our work. Before answering your questions and concerns, we would like to highlight our contributions.
We present a novel view synthesis algorithm for dynamic scenes captured by a monocular camera. Our target task is fairly new, and we introduce a few interesting ideas such as dynamic densification and visibility-based uncertainty modeling to improve the quality of outputs. Reviewers generally acknowledged our contributions, but expressed concerns about our experiment and presentation. This rebuttal answers the critical questions and concerns raised by the reviewers. We have attached a *one-page PDF* with the characteristics of our model and qualitative results. Please take a look at it and let us know if there are additional concerns or questions.
Pdf: /pdf/d1916ce78df4a9b274cacba2b56234f21cef2760.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
3D Focusing-and-Matching Network for Multi-Instance Point Cloud Registration | Accept (poster) | Summary: This paper proposes FMNet, an end-to-end deep learning approach for multi-instance point cloud registration. The key novelty is an attention-weighted feature matching module that can adaptively focus on reliable point correspondences during matching. Unlike traditional two-step methods, FMNet jointly learns feature extraction and matching within a unified network using an attention mechanism. The attention module computes matching probabilities between point features through scaled dot-product, allowing the network to dynamically highlight meaningful matches. Integrated with geometric and cycle consistency losses, FMNet achieves state-of-the-art registration performance on multiple datasets, demonstrating robust handling of noise, occlusions and point density variations. This work pioneers applying attention for solving the point cloud registration problem in a unified deep learning framework.
Strengths: 1. The proposed an attention-weighted feature matching module that can adaptively focus on important point pairs, enhancing robust matching. This innovative design surpasses many previous hand-crafted or geometry-constrained feature matching strategies.
2. The proposed FMNet end-to-end framework is well-designed, with clear mathematical explanations for the attention module. Extensive evaluations on public datasets and robustness analysis against noise/missing data provide strong empirical evidence.
Weaknesses: 1. The author's contribution is not summarized in points, which seems rather unclear.
2.Even though your network is clearly expressed in the article, I think it is overly structured, which is why I find it less than innovative.
3. In contrast to previous work PointCLM utilized density information for clustering relationships, it seems to me that this article does not effectively extract and utilize density information or other bases used for clustering. I think this information is more important for multi-instance alignment.
4. Compared to previous work PointCLM and ECC, the article seems to be less theoretical, with a large number of formulas used to describe the structure of the network.
5. I think this method of finding the center point first and then doing multi-instance point cloud registration is time consuming.
Technical Quality: 3
Clarity: 3
Questions for Authors: What is the approximate percentage of time spent finding the center point and the registration, respectively?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the author fully addresses the limitations of his proposed methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We appreciate your diligent review and valuable feedback to help improve our paper.
**Q1:** The contribution and innovation of the method.
**A1:** Our contributions lie in three aspects:
(1) Our primary contribution does not lie in the network architecture but rather in proposing a new pipeline to address the multi-instance point cloud registration problem. Existing methods (such as PointCLM and MIRETR) mainly learn correspondence between the one CAD model and multiple objects (**one-to-many paradigm**), while our method decompose the one-to-many paradigm into multiple pair-wise point cloud registration (**multiple one-to-one paradigm**) by first detecting the object centers and then learning the matching between the CAD model and each object proposal.
(2) Our new pipeline is simple yet powerful, achieving the new state-of-the-art on both Scan2CAD and ROBI datasets. Especially on the challenging ROBI dataset, our method significantly outperforms the previous SOTA MIRETR by about 7% in terms of MR, MP, and MF.
(3) The progressive decomposition approach of transforming multi-instance point cloud registration into multiple pair-wise registrations, as proposed in our paper, also holds significant insights for other tasks, such as multi-target tracking and map construction.
**Q2:** Reasons for not using density information (such as PointCLM) for clustering.
**A2:** PointCLM uses the feature density of contrastive learning to filter out matching pairs that do not belong to the instance before clustering. However, our method decomposes the one-to-many paradigm into multiple one-to-one paradigm by first detecting the object centers and then learning the matching between the CAD model and each object proposal. For the first detection stage, we can filter lots of background points by detecting the object centers and generating object proposals. For the second matching stage, the instance mask learning and overlap mask learning further filter the noisy points. To this end, we count the inlier ratio of points falling on objects and the total correspondence on the ROBI dataset. The following table show the results:
| Methods | Inlier Ratio (%) |
|---|---|
| PointCLM | 36.27% |
| MIRETR | 56.59% |
| 3DFMNet (ours) | **59.84%** |
It can be observed that our method achieves the highest inlier ratio, which further demonstrate that our multiple one-to-one paradigm can effectively improve inlier ratio.
**Q3:** The article seems to be less theoretical, with a large number of formulas used to describe the structure of the network.
**A3:** As mentioned in A1, our contribution primarily lies in proposing a new paradigm for solving the multi-instance point cloud registration problem. Our method is simple yet powerful, achieving new state-of-the-art performance on both Scan2CAD and ROBI dataset.
**Q4:** I think this method of finding the center point first and then doing multi-instance point cloud registration is time consuming.
**A4:** Generally, two-stage methods are time-consuming than one-stage methods. In Table 2 of the main paper, we compare our method (3DFMNet) with existing methods (such as PointCLM, MIRETR):
| Methods | | Total Time (s) |
|---|---|---|
| PointCLM | Two-Stage |0.63s|
| MIRETR | One-Stage | **0.40s** |
| 3DFMNet (ours) | Two-Stage | 0.54s |
Our inference speed of our two-stage 3DFMNet (0.54s) is lower than one-stage MIRETR (0.40s), while our inference speed is higher than two-stage PointCLM (0.63s). Although each method has different inference speeds, the inference speeds of these methods are all at the same level. In order to reduce the overall time consumption, one possible strategy for reducing the inference time of our two-stage method is to use parallel optimization to simultaneously match multiple pair-wise registrations. In the future, we will further consider reducing the inference time of our method.
**Q5:** The time spent of the center point detection and the registration.
**A5:** We supplement the detailed inference time of each stage in the following table:
|Focusing Model Time (first stage) | Matching Model Time (second stage) |Total Time|
|---|---|---|
| 0.145s | 0.405s | 0.540s |
It can be observed that the first stage (Focusing Model) tasks up a few time while the second stage (Matching Model) takes up a larger proportion of the time. Compared with the inference speed 0.400s of MIRETR, our total inference speed 0.540s is slightly lower than MIRETR.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal.
Thank you for your help with the NeurIPS!
Best, Your AC
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: **The discussion period will end in approximately 5 hours.** We sincerely hope that you could review our rebuttal and respond accordingly. Our rebuttal offers comprehensive explanations to the questions you raised, and we believe that our work shows a new yet powerful pipeline and high performance in multi-instance point cloud registration. We have also provided the discussion of density information and inference time analysis. We are confident that our responses will alter your perception of this paper. We look forward to your reply to our rebuttal. Once again, thank you for your dedication and efforts in the review process. | Summary: This paper introduces a novel focusing-and-matching technique for addressing the multi-instance point cloud registration challenge. Instead of fitting multiple models from a set of incorrect correspondences, this method initially detects potential instance regions and subsequently performs standard pairwise point cloud registration. The approach was tested on two benchmarks, where it outperformed existing state-of-the-art methods in terms of recall and precision metrics.
Strengths: 1. The primary contribution of this paper is the establishment of a new approach to solving the multi-instance registration task. Unlike previous methods that rely on multi-model fitting (using RANSAC-like methods) from a set of spurious correspondences, which are computationally expensive and unreliable due to the large matching space, this method uses learned scene priors to narrow down the matching space within individual regions of interest before performing standard pairwise point cloud registration.
2. The writing is clear and easy to follow. In particular, the methodology section is well-structured. The concept of each variable is connected and explained fluently.
Weaknesses: 1. Certain details in Section 3.2 need clarification. Specifically, when the first module predicts K object centers but there are only (K-2) ground truth instances, how does the pair-wise registration model manage these two falsely detected objects? From my understanding, it appears that the model only predicts the transformation parameters without providing additional confidence scores.
2. The metrics require explanation or references. For MR, MP, and MF, how is a “registered instance” defined? What criteria are used to consider a predicted pose successful, such as RMSE, chamfer distance, relative translation error, or relative rotation error? Additionally, what are the thresholds?
3. More experimental results related to the 3D focusing module are needed, as this module determines the number of pairwise registrations performed, which sets the upper bound for the number of “successfully registered instances.” Results such as the number of detected objects, correctly detected objects, and wrongly detected objects are necessary, as these significantly impact the metrics, including MR, MP, and MF.
Technical Quality: 3
Clarity: 4
Questions for Authors: I don't have any questions.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: I can see the limitation is the performance of the 3D focusing module, espeecially its generalizability to unseen scene. Once it wrongly detects object proposals, the following pairwise registration module will fail to register the instance as well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the diligent comments to improve the paper.
**Q1:** How does the pair-wise registration model manage these falsely detected objects?
**A1:** Our method is a two-stage approach, so we analysis the falsely detected objects in both two stage.
(1) For the 3D multi-object focusing model (first stage), we compute the mean recall and mean precision of detect object centers. Specifically, we usex the distance between the predicted center and the ground truth center ($distance \leq 0.1 * r_{instance}$, $r_{instance}$ is the radius of instance) as a metric to determine the success of the 3D multi-object focusing module. The results are as following table:
| Datasets | Mean Recall (%) | Mean Precision (%) |
|---|---|---|
| Scan2CAD | 98.14% | 98.85% |
| ROBI | 80.30% | 99.99% |
On Scan2CAD and ROBI, it can be observed that our method achieve very high precision (98.85% and 99.99%), which means that our method can successfully detect the object centers.
(2) For the 3D dual-masking instance matching module (second stage), the instance mask learning and overlap mask learning further filter the falsely detected objects. Specifically, we study the influence of the false detected objects on the Scan2CAD dataset. According to the above table, it can be observed that there are 1.05% objects are falsely detected, i.e., there are 25 cases of failed detected objects on the Scan2CAD dataset. After the instance mask learning and overlap mask learning, the 22 cases are filtered. Since instance mask learning and overlap mask learning cannot effectively achieve the matching points between the falsely detected objects and the CAD model, there are few matching points obtained for falsely detected objects, resulting in SVD being unable to solve for the relative pose. Therefore, falsely detected objects slightly affect our final results.
**Q2:** Details of the metrics.
**A2:** We refer to the settings used in MIRETR and previous work to determine whether an instance is recognized as correctly registered based on RTE and RRE [1][2]. Specifically, we consider a match successful when $RTE \leq voxelsize * 4$ and $RRE \leq 15^\circ$. Following existing methods, such as MIRETR, the voxel sizes of Scan2CAD and ROBI dataset are set to 0.025m and 0.0015m, respectively.
[1] Yuan M, Li Z, Jin Q, et al. PointCLM: A Contrastive Learning-based Framework for Multi-instance Point Cloud Registration. In ECCV, 2022.
[2] Yu Z, Qin Z, Zheng L, et al. Learning Instance-Aware Correspondences for Robust Multi-Instance Point Cloud Registration in Cluttered Scenes. In CVPR, 2024.
**Q3:** Experimental results related to the 3D multi-object focusing module are needed.
**A3:** To evaluate 3D multi-object focusing module, we compute the mean recall (MR) and mean precision (MR) and root mean square error (RMSE) of detect object centers. Specifically, we usex the distance between the predicted center and the ground truth center ($distance \leq 0.1 * r_{instance}$, $r_{instance}$ is the radius of instance) as a metric to determine the success of the 3D multi-object focusing module. The results are as following table:
| Datasets | MR (%) | MP (%) | RMSE (m) |
|---|---|---|---|
| Scan2CAD | 98.14% | 98.85% | 0.0814m |
| ROBI | 80.30% | 99.99% | 0.0065m |
It can be observed that our 3D multi-object focusing module has good results in terms of MR, MP, and RMSE, which ensures that our method can successfully detect the object centers in the first stage.
**Q4:** The generalizability of the 3D multi-object focusing module to the unseen scenes.
**A4:** For the unseen scenes, we follow MIRETR and use ShapeNet dataset [1] (total 55 categories) to conduct experiments on invisible semantics. Specifically, we use CAD models of ShapeNet dataset from the first 30 categories for training and the remaining 25 categories for testing to assess generalization to new categories. To prevent class imbalance, we follow MIRETR to randomly sample up to 500 models per category. Each point cloud pair consists of a randomly selected CAD model and a scene model created by applying 4–16 random poses. This results in 8,634 pairs for training, 900 for validation, and 7,683 for testing. The results of testing set are as following table:
| Methods | MR (%) | MP (%) | MF (%) |
|---|---|---|---|
| MIRETR | 94.95% | 93.94% | 94.44% |
| 3DFMNet (ours) | **95.12%** | **94.23%** | **94.67%** |
It can be observed that our method show good generalizability on unseen scenes compared to previous SOTA MIRETR. We will add the results in the revised manuscript.
[1] Chang A X, Funkhouser T, Guibas L, et al. ShapeNet: An Information-Rich 3D Model Repository. arXiv preprint arXiv:1512.03012, 2015.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal.
Thank you for your help with the NeurIPS!
Best, Your AC
---
Rebuttal Comment 1.2:
Title: Change to "Weak Accept"
Comment: The rebuttal from the authors addressed most of my questions. Additional experiment results show good support for the proposed method. The potential improvement for this work is predicting additional pair-wise confidence scores or using pair-wise registration accuracy as the indicator to handle the falsely detected objects. Overall, I would like to update the score to "Weak Accept". | Summary: For multi-instance point cloud registration, the authors proposed a 3D focusing-and-matching network by learning multiple pair-wise point cloud registration. Specifically, a 3D multi-object focusing module is proposed to locate the center of each object and generate object proposals. In addition, a 3D dual-masking instance matching is introduce to estimate the pose between the model point cloud and each object proposal. Extensive experiments on two popular datasets, Scan2CAD and ROBI, show that the proposed method achieves new state-of-the-art performance on the multi-instance point cloud registration task.
Strengths: - From the experimental results, it can be seen that decomposing multi-instance registration into multiple pair-wise registrations is very simple and effective.
- The method achieves new state-of-the-art on Scan2CAD and ROBI. Especially in the challenging ROBI dataset, the proposed method is significantly better than the previous SOTA (+9%).
- It is very convincing that the authors analyzed the upper bound of the method.
Weaknesses: - Since the overall architecture is a two-stage structure for multi-instance point cloud registration, its inference time is lower than one-stage MIRETR (0.54s vs. 0.40s per scene in Table 2). Why the inference time is slightly lower than MIRETR? Considering the two-stage strategy, the author should conduct a detailed analysis.
- The authors did not provide detailed training and testing strategies for its two-stage approach.
- In Table 1, it can be observed that the proposed method did not achieve SOTA in terms of MR on the Scan2CAD dataset. What are the possible reasons behind this phenomenon?
- In Figure 4, it can be observed that both MIRETR and the proposed method 3DFMNet cannot successfully match all parts on the ROBI dataset, especially on dense scenes with lots of parts.
- As mentioned in the limitation, the proposed method is a two-stage method, and its inference time is slightly lower than MIRETR. It is better to discuss a potential plan to solve this issue.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors did not discuss the computational complexity of method
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal:
We thank the reviewer for the detailed comments to improve the paper.
**Q1:** Detailed analysis of why the inference speed is slightly lower than MIRETR.
**A1:** In the main paper, we have measured the total inference time of our method (0.540s) and MIRETR (0.400s) per scene on the ROBI dataset. Since our method is a two-stage method, we supplement the detailed inference time of each stage in the following table:
|Focusing Model Time (first stage) | Matching Model Time (second stage) |Total Time|
|---|---|---|
|0.145s|0.405s|0.540s|
It can be observed that the first stage (Focusing Model) tasks up a few time while the second stage (Matching Model) takes up a larger proportion of the time. Compared with the inference speed 0.400s of MIRETR, our inference speed 0.540s is slightly lower than MIRETR.
**Q2:** Details of training and testing strategies.
**A2:** During training, we use the ground truth center as supervision to train the 3D multi-object focusing module and use the point cloud around the ground truth center as the training data to train the matching network. During testing, we use the center predicted by the 3D multi-object focusing module and its surrounding point cloud as the input to the 3D dual-masking instance matching module to regress the final pose.
**Q3:** Possible reasons behind the phenomenon of MR not SOTA on the Scan2CAD.
**A3:** In Table 1 of the main paper, it can be seen that in the Scan2CAD dataset, our method improves the mean precision (MP) by about 3% while reducing the mean recall (MR) by about 1%. For Scan2CAD dataset, it may be due to some objects in the scene being too close to each other and relatively small in scale compared to the scene, which cause the multiple objects to be detected as a single object, leading to a decrease in MR. In order to better evaluate MP and MR, we also adopt the mean F1 score (MF), which is the harmonic mean of both MP and MR. In Table 1 of the main paper, it can be observed that our method improve MF by about 1.3%, which further demonstrate the effectiveness of our method.
**Q4:** Analyze the reasons why current methods cannot achieve good results on the ROBI dataset.
**A4:** Due to the fact that the ROBI dataset is generated from monocular images, occlusions and other factors can cause the instance point clouds to be quite incomplete, making it more challenging. This is the main reason that the performance of all methods is not ideal. Nonetheless, our method significantly outperforms the previous SOTA MIRETR by about 7% in terms of MR, MP, and MF.
**Q5:** Potential plan to solve the problem of inference time.
**A5:** In the future, we plan to use multiple GPUs for parallel computation to address this issue. We believe that the time taken to process a single matching proposal will be the upper limit of our optimization.
**Q6:** Discuss the computational complexity.
**A6:** We provide the inference time and the number of parameters in the ROBI dataset in the following table:
| Methods | Inference Time | Parameters (MB) | MR (%) | MP (%) | MF (%) |
|---|---|---|---|---|---|
| MIRETR | **0.400s** | **11.31M** | 38.51% | 41.19% | 39.80% |
| 3DFMNet (ours) | 0.540s | 21.15M | **46.81% (+8.3%)** | **50.61% (+9.42%)** | **48.63% (+8.83%)** |
It can be our running time is slightly higher than MIRETR. Although the number of parameters is about 2$\times$ of MIRETR, the performance has been greatly improved by about 8% in terms of MR, MP, and MF on the ROBI dataset.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
This is a gentle reminder to please review the rebuttal provided by the authors. Your feedback is crucial to the decision-making process. Please consider updating your score after reading the rebuttal.
Thank you for your help with the NeurIPS!
Best, Your AC
---
Rebuttal 2:
Comment: The rebuttal from the authors addressed most of my questions. Since the findings in this paper are interesting, the interpretations are rationale, and the experimental results are convincing to me, I would like to keep the score to "Weak Accept". | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Score-Optimal Diffusion Schedules | Accept (poster) | Summary: This paper proposes a novel algorithm for adaptively selecting an optimal discretisation schedule in the training of diffusion model. The proposed method does not require hyper-parameter tuning and adapts to the dynamics and geometry of the diffusion path. This method also achieves competitive FID scores on image datasets.
Strengths: + The findings of the paper are interesting. Current schedule designs are heuristic, and this paper provides a theoretical analysis, which is of interest to the community.
+ The proposed online data-dependent adaptive discretization schedule tuning algorithm is simple and effective.
+ The mathematical analysis presented in this paper is exceptionally detailed. Meanwhile, adequate related work was discussed in detail. The authors also provide corresponding code that makes the work in this paper very solid.
+ The authors provide clear visual comparisons that show the distribution of optimized times in detail.
Weaknesses: + The experiments in this paper are somewhat weak and lack validation for the LDM model. In fact, the LDM model is widely used, so it is important to validate the proposed method for LDM.
+ Some other diffusion frameworks also use schedule discretization in the same way, e.g., VP-SDE [1] and Flow Matching [2], is the method proposed in this paper robust for them as well?
[1] Score-Based Generative Modeling through Stochastic Differential Equations ICLR 2021
[2] Flow Matching for Generative Modeling ICLR 2023
+ The authors should provide additional quantitative and visual comparisons, as the FID metric alone is insufficient for assessing generation quality [1]. Low FID scores do not necessarily correlate with superior generative performance. Therefore, incorporating other metrics such as sFID, Recall, and Precision would offer a more comprehensive evaluation. This enhanced analysis will help further substantiate the validity of the proposed method.
[1] The Role of ImageNet Classes in Fréchet Inception Distance ICLR 2023
[2] Diffusion Models Beat GANs on Image Synthesis NeurIPS 2021
+ The mathematical parts of this paper are somewhat difficult to understand, especially the differential geometry part of it, and I would suggest that the authors add some background and notation notes.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see Weakness.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Please see Weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their kind comments on the strengths of our paper. We believe that our schedule tuning algorithm could serve as a strong default for diffusion models and might be applicable more broadly, though a more extensive exploration is difficult to undertake in an initial paper. We hope to use this work as a starting point for further investigation into general schedule tuning in generative models.
In the context of diffusion models, we welcome the reviewer's suggestions for more comprehensive numerical validation. To this end, we have presented preliminary summary statistics for sFID. We are appreciative of the reviewers suggestions for additional metrics, and agree that their addition to a potential camera ready article would increase the strength of the paper.
We believe that our methodology could be applied to flow-matching based models, but this would also require mild adjustment to the analysis we present. We hope our present work could be foundational to a potential future study.
We agree that validation of LDM with our method is a worthwhile and meaningful venture. To this end, we have implemented our method in the HuggingFace LDM library, but due to time constraints do not currently have FID to report. In a potential camera ready article, we would additionally include FID and sFID comparisons on optimised and standard schedules for Latent Diffusion Models.
_The authors should provide additional quantitative and visual comparisons, as the FID metric alone is insufficient for assessing generation quality [1]. Low FID scores do not necessarily correlate with superior generative performance. Therefore, incorporating other metrics such as sFID, Recall, and Precision would offer a more comprehensive evaluation. This enhanced analysis will help further substantiate the validity of the proposed method._
We thank the reviewer for their suggestion. We would like some assistance from the reviewer, we would like to confirm that sFID refers to the metric computed in:
Nash C, Menick J, Dieleman S, Battaglia PW. Generating images with sparse representations. arXiv preprint arXiv:2103.03841. 2021 Mar 5.
We have implemented this metric and computed preliminary results for sFID on the CIFAR dataset which are included in our rebuttal document. We have found that the results for sFID correlate strongly with the results for FID in this case. In a potential camera-ready version, we would incorporate this metric across all the datasets presented. We appreciate the reviewer’s input, which has helped substantiate the validity of our method and improve the results presented in our paper.
_The mathematical parts of this paper are somewhat difficult to understand, especially the differential geometry part of it, and I would suggest that the authors add some background and notation notes._
We agree with the reviewer that this mathematical content is new to the diffusion literature. In a potential camera-ready version, we would include a brief introduction to metric tensors and charts commonly used in differential geometry, along with their relation to the work we present here. We appreciate the reviewer’s suggestion, which will help us present our work more clearly.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's thorough response to my questions, which are satisfactorily addressed. I find the work to be meaningful and have accordingly increased my score.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for assessing our work and further thank them for increasing their score. Thank-you! | Summary: This paper proposes a method to optimize for the discretization schedule of the diffusion sampling process. The main idea is to minimize a surrogate for the total length of the diffusion sampling path between two consecutive time steps, where the length is defined in terms of the Fisher divergence between two steps. Experiments show that the proposed scheme can find schedules competitive with the ones discovered with manual search.
Strengths: * The problem studied is important for diffusion generative modeling as the manual search for a good sampling schedule can be time-consuming.
* The proposed method is simple, despite the rather convolutated presentation.
* Empirical evaluation shows that the proposed scheme can have on-par performance with hand-tuned schedules.
Weaknesses: * The presentation of the paper could use a lot of improvement. I found many sections unnecessarily convoluted without conveying much insight.
- The whole setup of predictor/corrector seems unnecessary to me. If I understand correctly, Section 2.1 and 2.2 are used to introduce the cost which turns out to be just the Fisher divergence either between $p_{t_{i+1}}$ and $p_{t_{i}}$ (the so-called corrector optimized cost) or $F_\sharp p_{t_i+1}$ and $p_{t_i}$ for $F$ equal to the one-step of ODE integration.
- Section 2.3 and Theorem 2.1 do not seem to be related to the actual algorithm.
* The theoretical insights in Section 3 are unclear to me. Section 3.1 seems to be regurgitating the fact in differential geometry that the energy-minimizing path connecting two points is a constant-speed geodesic. In particular, I'm unable to identify interesting theoretical contributions related to the setting of diffusion generative modeling.
* The computation cost could be quite high, especially using the predictor-optimized schedule, which does not really offer a noticeable advantage over the cheaper corrector-optimized schedule. This seems to suggest one should always choose the simpler corrector-optimized schedule. Yet with the corrector optimized schedule, the picture of predictor/corrector seems irrelevant since the predictor is just the identity.
Technical Quality: 2
Clarity: 1
Questions for Authors: * Line 95, why is the Markov kernel taking in a probability measure as the second argument, as opposed to the usual definition where it takes in a Borel set (so that conditioned on the first argument you get a probability measure)?
* Line 136, what is the point of introducing coupled noises in (9) and (10) if they are going to be canceled anyway?
* Line 218, what is the "distance" referred to here?
* Line 235, can the authors comment on why we know $\Delta t$ is sufficiently small a priori (since we are optimizing the time intervals themselves)?
* Can the authors justify the usage of simple cubic interpolation in step 3 of Algorithm 1 in order to invert $\Lambda$? Why would interpolation work well?
Minor comments:
* In (3), why is there a tilde on $W_t$?
* Line 75, what are "Lebesgue probability measures"?
* In (4), one could split (3) into a prediction and a correction term differently. Why should this specific way of splitting?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewers for their constructive feedback. We will address each point raised individually below:
- _The whole setup of predictor/corrector seems unnecessary to me._
See general rebuttal response.
- _Section 2.3 and Theorem 2.1 do not seem to be related to the actual algorithm._
Section 2.3 and Theorem 2.1 are necessary links that take us from the Fisher Divergence at two time points t, t’ to a metric that can be used to derive the optimal schedule. In Section 3.1 when we come to find the costs associated with diffusion paths, we require an infinitesimal notion of cost that can be integrated along the diffusion path. The Fisher Divergence as stated in Section 2.2 is unsuitable for this purpose because it is evaluated between two times t and t’. In Section 2.3 and Theorem 2.1 we derive this corresponding infinitesimal cost. Our algorithm relies entirely on the approximation of this infinitesimal cost along the diffusion path.
We appreciate the reviewer highlighting this lack of clarity in the significance of Section 2.3 and Theorem 2.1. In a revised version we will add a clarifying sentence before Theorem 2.1 to emphasise its relevance and importance.
- _Section 3.1 seems to be regurgitating the fact in differential geometry that the energy-minimizing path connecting two points is a constant-speed geodesic_
Our algorithm is grounded in the theoretical insights from Section 3. The relevance of Section 3.1 lies in demonstrating how schedule optimization in diffusion models can be reframed as finding an energy-minimising path with an appropriate metric. The key connection between differential geometry and diffusion models is established in Theorem 2.1, where we derive the Reimannian metric compatible with the objectives of diffusion models, and demonstrate how to approximate the energy using the Fisher divergence. The material from differential geometry included in this section is meant to ensure completeness. To improve the clarity of this section, we will add a linking sentence that explicitly states how the local metric tensor from Theorem 2.1 enables us to use differential geometry to study diffusion schedule design.
- _The computation cost could be quite high.._
See general rebuttal response.
Questions:
- _Line 95_
Thank you for pointing out this typo.
- _Line 136_
The introduction of coupled noises in Section 2.2 is a direct consequence of performing a perturbation analysis on the Langevin dynamics trajectories. Specifically, we are interested in understanding how these trajectories evolve over a small time interval when the score function is slightly perturbed.
By coupling the noise terms in equations (9) and (10), we ensure that any observed differences between these trajectories are solely due to the perturbation and not due to random fluctuations. The noise cancels out naturally because we have isolated the perturbation of the score; this cancellation is a result of the computational approach rather than a manufactured effect. Without coupling the noises, the comparison would be less informative, as it would be unclear whether differences were due to the perturbation or simply random noise.
- _Line 218_
Here "distance" is intended to mean "length along the path," as referenced in the preceding lines 202-207. We will update the text to more clearly state "length along the path".
- _Line 235_
We have not claimed that we know $\Delta t$ is sufficiently small, but rather if it is small, then our approximation is valid. One way to assess if a grid is fine enough is by computing the statistical length $\Lambda$, which is independent of the schedule, across a range of grid sizes. When this value stabilises for a particular $\Delta t$, it indicates that the desired integral approximation of Equation 21 has been achieved.
- _Simple cubic interpolation in step 3_
We also remark that be apprimae $\Lambda^{-1}(t)$ directly by interpolating with knots $\{(\Lambda(t_i), t_i)\}$. This avoids needing to approximate $\Lambda(t)$ and numerically inverting. Since we avoid numerical inversion methods, the choice of interpolant is not overly critical. We use cubic interpolation as an example, but any interpolation method respecting monotonicity (e.g. linear) can also be used. We will add this comment after Algorithm 1.
- _In (3)_
In the context of the forward/backward SDEs in a diffusion model, the Brownian increments for the forward and backward SDEs are independent of one another. We use a tilde to emphasise that the Brownian increment for the forward diffusion ($\tilde{W}$) is different from the Brownian increment of the backward diffusion ($W$).
- _Line 75_
Lebesgue probability measures are those that have a density function with respect to the Lebesgue measure. We've included standard references to make this clearer for readers who may be unfamiliar with probability theory and thank the reviewer for improving the accessibility of our work.
- _In (4)_
This specific way of splitting is chosen to identify the unique ODE that transports samples from one density $p_{t}$ along the diffusion path to $p_{t'}$. This approach was originally identified and used in the seminal work of Yang Song [reference]. We will add a reference in the text to where this splitting is derived in Yang Song's paper.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for the detailed response. I remain unconvinced the whole predictor/corrector setup is necessary. For instance, (13) makes sense to me at first glance (as Fisher divergence), and the predictor/corrector setup only adds more confusion. In addition, the authors acknowledged that there is no real benefit from the predictor-optimized schedule as opposed to the simpler corrector-optimized schedule where there is essentially no predictor. The novelty and significance of Section 3.1 remains unclear to me. My other concern about the justification of the ad-hoc way of inverting $\Lambda(t)$ is also not well addressed. As such, I would like to keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We would like to specifically address your concern regarding the procedure for inverting $ t \to \Lambda(t) $.
We are very sorry if our previous explanation was unclear. Let us provide some additional clarifying details:
First, $\Lambda$ is an increasing positive function, as the integrand is positive. Furthermore, according to Theorem 2.1, the integrand of $\Lambda$ is a continuous function, making $\Lambda$ a $C^1$ (continuously differentiable) increasing function. Since $\Lambda$ is monotonic increasing, it is injective, and thus an inverse exists. Moreover, this inverse function is also monotonic increasing and $C^1$.
There are two options to obtain $ \Lambda^{-1} $. The first would be to obtain the set of points $ \\{(t_i, \Lambda(t_i))\\}\_{i=1}^N $ and fit a cubic interpolant to this set of points thus obtaining an approximation of $s=\Lambda(t)$. One would then have to perform some numerical strategy to invert this fitted function. The other option (this is the option that we perform in the paper), is to fit a cubic interpolant to the set of points $\\{ (\Lambda(t_i), t_i) \\}\_{i=1}^N$. Notice that we have swapped the order of $ t_i $ and $ \Lambda(t_i) $. When we fit a cubic interpolant to this set of points we directly obtain an approximation of $t= \Lambda^{-1}(s)$ __without ever having to perform any inversion__.
We hope this additional detail clarifies our procedure and addresses your concern about the method of inverting $\Lambda$.
Thank you again for your feedback, and we hope this explanation provides the clarity needed. | Summary: A popular way to sample a target distribution is to run a "predictor-corrector" SDE, which additively combines two processes whose stationary distribution is the target: a Langevin (or "corrector") process and a reverse-diffusion (or "predictor") process.
How to discretize this SDE without deterioriating the approximation quality of its terminal distribution is the topic of this paper. The authors derive a discretization cost (Eq 13), which is the Fisher divergence between two distributions $\textemdash$ one obtained running the "predictor-corrector" process in continuous-time and the other obtained by running the "predictor" process only in discrete-time $\textemdash$ multiplied by the velocity of the "corrector" process.
This cost can be approximates using the estimated scores (section 3.2) and the authors optimize this cost with respect the schedule. They illustrate numerically the optimized schedules on image datasets and verify that they yield better approximations of the target distribution (measured in FID).
Strengths: The paper is clearly written and derives a principled way to derive an optimal schedule for the popular "predictor-corrector" SDE, of which "annealed Langevin dynamics" and a "reverse diffusion process" are a special case. This is of interest to the sampling and diffusion models communities.
The appendix has nice, additional experiments assessing the optimal schedule and 1D and image datasets.
The authors are also clear about the limitations of their approach, namely that it assumes perfect estimation of the scores.
Weaknesses: Including in the Appendix something like Figure 10 but for ImageNet, CIFAR, or CelebA would be interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: It is unclear to me if in Eq 57, $J_t$ requires computing the Hessian of $\log p_t(\cdot)$: if so, wouldn't that be an expensive quantity to compute in order to obtain the optimal schedule?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We agree that more visual comparisons of the different schedules would be illustrative. To this end, we will include standard diffusion progression plots, comparing the image diffusion path for CIFAR and CelebA on different schedules. In a potential camera-ready version, we would include one such plot in the main body of the text and all comparisons in the appendix. We greatly appreciate the reviewer’s comment, which has helped improve the presentation of our schedule optimisation.
_It is unclear to me if in Eq 57, $J_t$ requires computing the Hessian of $\log p_t $: if so, wouldn't that be an expensive quantity to compute in order to obtain the optimal schedule?_
We thank the reviewer for highlighting this important computational point. In our implementation, we have formulated an estimator (Proposition B.1) that completely avoids computing the full Hessian. This proposition is currently in the appendix, but we state it here for completeness:
**Proposition B.1:** Let $ F_{t,t'} $ be the predictor map given by the forward Euler discretisation (8) of the probability flow ODE. For $ N \in \mathbb{N} $, let $ \hat{J}_{t,N}(x) $ be the Jacobian of the Hutchinson trace estimator (Hutchinson, 1989) for $ \nabla(\nabla \log p_t(x)) $ at $ x \in X $ and $ t \in [0, 1] $:
$$
\hat{J}_{t,N}(x) = \frac{1}{N} \sum_n \nabla(v_n^T J_t(x) v_n),
$$
If $ \Delta t $ is small enough such that:
$$
\Delta t \cdot \text{Tr} \left( f(t)I - 2g(t)\nabla \log p_t(x) \right) < 1,
$$
then, as $ N \to \infty $, the following limit exists almost surely:
$$
\nabla \log \det \nabla F_{t'}(x) = -\Delta t g(t)^2 \lim_{N \to \infty} \hat{J}_{t,N}(x) + O(\Delta t^2).
$$
Here, we only require the trace of the Hessian, so we need to compute only the diagonal entries, ensuring that the memory cost remains linear with respect to the parameters. Additionally, we have utilised a Hutchinson trace estimator for this term, which can be efficiently implemented in standard deep learning libraries.
In our provided code, we have explicitly implemented this in PyTorch and have found that this computation is scalable to image datasets. This approach has enabled us to compute our predictor-optimised schedules, something infeasible if we had needed to compute the full Hessian. Our estimator performs well, producing schedules that achieve good FID scores. This was a subtle computational challenge that was non-trivial to overcome in our work.
In a potential camera-ready version, we will clarify this approximation and emphasise that the computational resources required are significantly less than those needed for computing the full Hessian. We will also make clear in the main text that we have a scalable estimator for this term and do not require computing the Hessian matrix. We thank the reviewer for bringing this point to our attention and helping improve the quality of our paper.
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: Thanks to the authors for their response.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer once more for their assessment of our work. Thank-you! | Summary: The paper proposes an improved discretization schedules based on a novel cost measure that they proposes. The proposed method can update a given schedule to achieve greater sample quality as is demonstrated with solid experiments.
Strengths: Strengths
* The proposed method is novel and can adapt to most given SGM model to update its discretization schedule to achieve better sample quality. The computation of the proposed method is also tractable.
Weaknesses: Weaknesses
* While the paper provides heuristic theoretical justification to the design of the cost function, a better way to justify that the updated schedule provides better sample may be to directly measure the difference between the learned density (using a discretization schedule) and the true density $p_0$ and point out that updated schedule can reduce this difference.
Technical Quality: 3
Clarity: 3
Questions for Authors: * It seems the proposed method do not accelerate the backward sampling process since the updated schedule has the same number of timesteps. I wonder if the author have examined the computational efficiency of their Algorithm 2? Since when it comes to SGM, the running time is one of the major concerns.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their review and suggested improvements. We agree that providing a simple error plot for cases where the true density is known would be informative. In our diffusion model, we do not have the ability to query the density directly; however, we do predict the score. In Figure 4 of our paper, we make this comparison for our bimodal example of the scores at t=0. It can be noted here that the linear schedule devolves into learning a function dissimilar to the expected shape of the score for a bimodal distribution. To further illustrate this point, we have created a plot that evolves over training, showing the error of the score at time t=0, corresponding to our data distribution. We observe that with our schedule optimisation, we achieve a lower error than with a fixed linear schedule over the course of training. Additionally, we have computed the likelihood of the generated samples in this case where the true density is known. Similarly, we observe that during training, our optimised schedule generates datasets with higher likelihood than the fixed schedule case. We thank the reviewer for their suggestion, which has improved the quality of our paper, and we will include these improvements in a potential camera-ready submission.
_Q: It seems the proposed method do not accelerate the backward sampling process since the updated schedule has the same number of timesteps. I wonder if the author have examined the computational efficiency of their Algorithm 2? Since when it comes to SGM, the running time is one of the major concerns._
We thank the reviewer for their question. We do observe in our simple 1D experiment in Figure 1 that an optimised schedule can achieve better results with fewer discretisation points, improving runtime. To better convey this potential use case, we have added follow-up experiments based on the reviewer's previous suggestion to see if we can see a gain in FID with fewer data points for image generation.
Additionally, we have conducted a refinement test to assess the computational benefit of optimising the schedule. For this, we took the rho3 schedule for CIFAR10 and computed the FID score for 18, 20, 30, 50, and 100 discretisation points. In this range, at 18 discretisation points, the rho3 schedule achieves an FID of 5.47, while at 100 discretisation points, it achieves an FID of 2.05. In comparison, the optimised schedule achieves an FID of 1.99 at 18 discretisation points. We further observe the same trend in the sFID, another image performance metric.
In this experiment, we can see that optimising the schedule can lead to improved FID and sFID performance with fewer discretisation points. Furthermore, we compared all schedules with only 10 discretisation points. In this case, the optimised schedule outperforms all others in terms of FID and sFID, demonstrating its superior performance in the few discretisation point regime. We thank the reviewer for their suggestion to perform this interesting computational experiment. In a potential camera-ready submission, we would include this study in the main manuscript.
---
Rebuttal Comment 1.1:
Comment: We would like to thank the reviewer once again for their comments, which have led to improvements in our experiments, including a refined analysis that demonstrates a computational gain with our method in the few-point discretisation regime. We would like to confirm if we have addressed all of the reviewer's comments and questions. Thank you! | Rebuttal 1:
Rebuttal: We thank all the reviewers for taking the time to read our paper and for their kind comments on the strengths of the presentation and methodology. We also thank the reviewers for providing constructive feedback to improve the paper for the potential camera-ready version. We will address each point made by the reviewers individually. Below, we would like to summarize the additional experiments we conducted based on the reviewer feedback comparing (1) performance metrics and (2) schedule refinements. Please see the rebuttal document for the results.
(1) Performance metrics:
We have implemented the sFID metric in addition to FID and KLUB on the CIFAR dataset, which is included in our rebuttal document. We have found that the results for sFID correlate strongly with the results for FID in this case. In a potential camera-ready version, we would incorporate this metric across all the datasets presented. We appreciate the reviewer’s input, which has helped substantiate the validity of our method and improve the results presented in our paper.
(2) Schedule refinements:
Additionally, we have conducted a refinement test to assess the computational benefit of optimising the schedule. For this, we took the rho3 schedule for CIFAR10 and computed the FID score for 18, 20, 30, 50, and 100 discretisation points. In this range, at 18 discretisation points, the rho3 schedule achieves an FID of 5.47, while at 100 discretisation points, it achieves an FID of 2.05. In comparison, the optimised schedule achieves an FID of 1.99 at 18 discretisation points. We observe the same trend in the sFID, another image performance metric. Notably, the optimised schedule maintains stable performance even with 10 discretisation points compared to a suboptimal schedule.
We would further like to briefly comment on importance of the predictor/corrector framework used in our work and the reasoning for presenting our work within it. We further would like to address why we include analysis for both predictor and corrector optimised schedules in our work.
Our predictor/corrector framework is based on the seminal work of Yang Song et al. [1,2] (also see [2,3]). It is crucial to justify the final cost that we use to find optimal schedules. We agree that the final cost coincides with the standard Fisher Divergence; however, without the predictor/corrector formalism, it would be unclear whether this cost is justified and what relation controlling the Fisher Divergence would have in controlling error along the trajectory of a diffusion model. Our work provides this link by showing that the Fisher Divergence cost can be derived by minimising the work done by the diffusion model update steps under the predictor-corrector framework, thereby providing the relationship between the Fisher Divergence and diffusion model sampling. Furthermore, without understanding this link between predictor/corrector and the Fisher Divergence, it would be unclear what weighting should be applied to the Fisher Divergence when computing the cost for different time points along the trajectory. In Section 3.3, we find that the variance of the applied noise is the appropriate scaling, which is derived by noting that Langevin corrector steps should have steps on the order of the scale of the target distribution, which directly implies a meaningful scaling for the Fisher Divergence. Without the predictor/corrector framework, it would be unclear which scaling to use.
Corrector-optimised schedules are indeed more computationally efficient, requiring only one additional function evaluation per training step without gradient tracking compared to using no schedule optimisation. Despite this minimal overhead, they provide performance similar to the predictor-optimised schedule, which requires higher-order gradients. We explicitly mention this in our experimental section (Lines 311-312), where we recommend the use of the more cost-effective corrector-optimised schedule for image datasets. However, we believe it is important to include the analysis of the predictor-optimised schedule for two main reasons: firstly, it is a natural extension of the corrector schedule, and understanding its computational trade-offs could be valuable for the research community; secondly, while the corrector-optimised schedule performs similarly on image datasets, the predictor-optimised schedule may offer advantages in other application domains. We agree that the corrector-optimised schedule's benefits should be more prominently highlighted for its simplicity, lower cost, and competitive performance. We propose making this recommendation earlier in the paper, particularly in Section 2.
- [1] Generative Modeling by Estimating Gradients of the Data Distribution, Yang Song, Stefano Ermon
- [2] Score-Based Generative Modeling through Stochastic Differential Equations, Yang Song et al
- [3] Elucidating the Design Space of Diffusion-Based Generative Models, Term Karras et al
- [4] The probability flow ODE is provably fast, Sitan Chen et al
Pdf: /pdf/d2742bd594295e92edbb5f603d54c9e8e13b430a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Accelerating Matroid Optimization through Fast Imprecise Oracles | Accept (poster) | Summary: The paper studies the problem of finding a maximum-weight basis of a matroid in a learning-augmented setting, where there are two different oracles that the algorithm can query to check whether a set is independent: An exact oracle that always gives correct answers (but may be slow), and a dirty oracle that may give incorrect answers. Formally, the answers of the dirty oracle correspond to some other "dirty" matroid (or more generally, downward closed set system) that ideally is close to the true matroid. In the setting considered, queries to the dirty oracle are free, and the goal is to minimize the number of queries to the exact oracle to find a maximum weight basis.
The quality of the dirty oracle is parametrized by two error measures, corresponding roughly to the numbers of elements that would need to be added/removed to move from a max-weight basis for the dirty matroid to a max-weight basis for the true matroid. The main result is an algorithm whose number of clean queries interpolates (as a function of the error parameters) between n-r+k (where n is the number of items in the ground set and r the size of any basis, and $k\ge 1$ is an algorithm parameter that can be chosen) and $n(1+1/k)$. One should note that without dirty queries, the problem requires n exact queries. The dependence on the error in the interpolation is shown to be asymptotically optimal.
If the algorithm has access to rank queries rather than just independence queries (rank queries return the rank of a set, i.e., the size of the maximal independent subset) the guarantees can be improved to a quantity interpolating beetween 2 and n+1 exact queries depending on the input.
An extension of the main results includes an application to the matrix intersection problem.
Strengths: - Essentially tight bounds.
- Non-trivial algorithms. They are not overly complicated, but getting everything right requires some care.
- Clear presentation. The simpler algorithm for the unweighted setting is helpful before diving into the unweighted setting.
Weaknesses: - In order for the improvement of the algorithm for independence queries to be a super-constant factor requires n-r is sub-linear in n. It is not clear how realistic this is. The authors hint at graphic networks in sparse graphs having high rank -- does this mean $r=\Omega(n)$ or even $r = n - o(n)$? The result in the case of rank queries avoids this issue though.
- Significant work is required for the robustness result, i.e., to ensure the number of clean queries is simultaneously bounded by a function of the error parameters and by a quantity slightly larger than n. However, the trivial approach of running two algorithms in parallel would achieve a similar result, losing only a factor of 2, so the improvement over this trivial approach is relatively small.
- The extension to matroid intersection seems relatively weak (strong assumptions and an error measure that can be exponentially large in n in the first setting). Since this is only an extension rather than a main result, this does not affect the main contribution though.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please elaborate on the size of n-r in typical settings, in relation to the first possible weakness above.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Dependence on n-r
Indeed, our results on independence oracles cannot avoid the dependence on n-r, though this can be mitigated by using a stronger oracle type, such as a rank oracle.
Nevertheless, improvements over the greedy guarantee are possible, e.g. for graphic matroids of sparse graphs.
In the graphic matroid of a connected graph $G=(V,E)$, we have $n = |E| \ge |V|-1$ and $r = |V|-1$. If the graph is sparse and satisfies $|E| = |V|-1 + x$ for an $x \in o(|V|)$, then $n-r$ is asymptotically smaller than the greedy guarantee of $n$. Likely more realistic is a value $x \le c \cdot |V|$ for some constant $c$. While this does not give an asymptotic improvement over the greedy algorithm, saving a constant factor can be significant in practice.
### Our Robustness Approach vs Standard Approach
It is correct that the standard approach in learning-augmented algorithms, which involves running two algorithms alternatingly (in parallel), guarantees a simple robustness by losing a factor of 2 in robustness _and_ consistency.
Our more complex algorithm can achieve a robustness of 2n and at the same time the optimal consistency of n-r+1 (when k=1).
Moreover, we can achieve a better-than-2 robustness while only losing an additive constant in the consistency, which is not possible with the standard approach.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I'm intending to keep my score. | Summary: This paper aims to solve fundamental matroid optimization problems, specifically, computing a maximum-weight basis of a matroid, a complex combinatorial optimization problem, To this end, the author proposes a two-oracle model, which uses fast but dirty oracle to reduce the time to call clean oracles. Then, the author proposes the algorithm to compute the maximum-weight basis and gives the upper bound of the oracles calls needed. Finally, the author also discusses advanced settings like different kinds of oracles and other matroid optimization problem.
Strengths: 1. The theoretical technique of this paper seems strong, the paper gives a theorem and detailed proof of the bound of the cost to call oracles.
2. The two-model design is superior to traditional methods which require n+1 clean calls, this seems to be a huge increase.
3. The extended result in the Section 4 is interesting, it helps to show that the algorithm is generalizable and can be suitable for different settings.
Weaknesses: 1. The organization of this paper is very confusing, for example, the author includes a large part of the preliminary in its introduction rather than an independent section. Moreover, the author includes many extended results and propositions in its final section 4, but doesn't include a conclusion.
2. The paper studies combinatorial optimization problems which can be hard to comprehend for readers with not much background knowledge, while the paper is not self-contained enough. The preliminary in Section 1.3 alone is not enough to grasp an intuitive concept of that.
3. This paper doesn't include any empirical experiment.
Technical Quality: 3
Clarity: 1
Questions for Authors: Perhaps it will be beneficial to include more figures as illustrations? For example, you can give an example of the algorithm walkthrough as a toy example. Also, it can also be helpful for the readers to understand the new proposed concept like k-safe and modification example.
Confidence: 1
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: I think the author has stated the limitations as he/she clearly stated the necessary assumptions in the respective theorem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Organization of the Paper and Figures
We regret that space constraints prevent us from providing a more gentle introduction to the well-established field of matroid optimization. In a full version, we will include more figures and examples for our algorithms to better illustrate our results.
### Empirical Verification
We are happy to add experiments in a full version. Our work provides theoretical results with proofs that hold true without experimental verification. It has become increasingly common in the field of learning-augmented algorithm design to omit experimental verification for purely theoretical results, even when publishing at AI venues. | Summary: The paper mainly studies the problem of finding a maximum weight basis in a matroid $\mathcal{M}=(E,\mathcal{I})$ using two types of independence oracles "clean" and "dirty". The clean oracle determines whether a set $S \subseteq E$ is an independent set in $\mathcal{M}$, and the dirty oracle determines independence according to another matroid $\mathcal{M}_d=(E,\mathcal{I}_d)$. The dirty oracle is free but might be imprecise for $\mathcal{M}$. For measuring the error of $\mathcal{M}_d$ with respect to $\mathcal{M}$, the parameters $\eta_A$ and $\eta_R$ are defined, which intuitively are the number of elements that have to be added to/removed from a maximum-weight basis of $\mathcal{M}_d$ to reach a maximum-weight basis of $\mathcal{M}$.
The main result of the paper is an algorithm that computes a maximum-weight basis of $\mathcal{M}$ using at most $\min (n-r+k+\eta_A \cdot (k+1)+ \eta_R \cdot (k+1) \lceil \log_2 r_d \rceil, (1+1/k)n )$ calls to the clean oracle, where $n=|E|$, $k$ is a positive integer, and $r$ and $r_d$ are the ranks of $\mathcal{M}$ and $\mathcal{M}_d$, respectively. The authors also prove lower bounds that show any deterministic algorithm should have dependencies with respect to $n$, $r$, $\eta_A$, and $\eta_B$ that are similar to those of the proposed algorithm.
Strengths: * The authors provide interesting and nontrivial upper and lower bounds for a fundamental problem.
* The two-oracle model considered, which can be viewed as a learning-augmented model, is theoretically interesting.
* The paper is well-written, and the algorithms are presented in a logical sequence that is easy to follow. The warm-up algorithms introduced in section 2 are especially helpful in facilitating understanding of the techniques and challenges.
Weaknesses: * The parameter $k$ in the main result of the paper is intuitively used to determine how much we want to trust the dirty oracle. When
$k$ is set close to 0 to heavily favor the dirty oracle, if the errors $\eta_A$ and $\eta_R$ turn out to be high, the algorithm might use $\Omega(n \log n)$ calls to the clean oracle. This significantly exceeds the calls required by the optimal worst-case algorithm without predictions, which uses $n$ calls to the clean oracle. In this sense, the algorithm is not robust.
* The results lack empirical verification. Even basic proof-of-concept experiments would be valuable to assess whether this model could be practically applicable.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Minor remarks:
* In lines 71-73 and 86-87, the wording was initially confusing to me. I think the way these results are stated later as the minimum or maximum of two values is easier to read.
* The error measures defined in lines 375-380 can be exponential in $n$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors state their theoretical results formally, describing all assumptions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Question Regarding Worst-case Guarantee (claimed n log n bound)
Our wording in lines 71-73 might have been misleading and we will change it. Our robustness guarantee, and thus the overall guarantee of the algorithm, is at most 2n (when k=1, which is the smallest value of k for which the stated guarantee holds). In a revision, we will use the standard minimum expression for combining the error-dependent guarantee and the robustness guarantee.
### Question/comment about the error measures defined in lines 375-380 to be potentially exponential in n
Yes, this is correct. If this is undesirable, we can adapt it to the number of "wrong" edges (at most rn) in the exchange graph constructed to find the alternating paths, which can be defined as follows: taking the max over all independent sets X, how many pairs (x,y) with X-x+y are dependent but judged as independent by the dirty oracle.
With this new definition, we can obtain the same results with only a minor modification in the analysis.
### Empirical Verification
We are happy to add experiments in a full version. Our work provides theoretical results with proofs that hold true without experimental verification. It has become increasingly common in the field of learning-augmented algorithm design to omit experimental verification for purely theoretical results, even when publishing at AI venues.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their response. Regarding the robustness guarantees, I mistakenly assumed that $k$ can be arbitrarily close to zero. Thanks for the clarification.
After reading all the reviews and responses, I have decided to maintain my original score. | Summary: The paper explores the concept of 2-oracle algorithms for matroid optimisation problems. The underlying idea is to equip algorithms with a second, somewhat "weaker" oracle. The second oracle is also permits the algorithm to query matroid information (similar to the first oracle) but only gets imprecise answer (different from the first oracle). Therefore the second oracle is also called "fast" as the assumption is that its use is cheaper than the use of the first oracle. In this context the problem of computing a maximum-weight basis for a matroid is investigated. The main result obtained is the existence of an algorithm for computing a maximum-weight basis with a prescribed number of oracle calls. Some related tools and aspects like error-depdent and robust algorithms are also considered.
Strengths: - The main results are established in form of formal theorems that come with a proofs; I did not check the proofs in detail but they look solid (the proof techniques look standard and adequate for this purpose); thus the soundness of the material appears good
- While this is not the first paper that studies the power of 2-oracle algorithms its study in the context of matroid optimisation appears to be original; thus the novelty of the paper is good
Weaknesses: - The paper is not easy to read; at the end of the introduction (i.e. Section 1) there should be an outline of the organisation of the paper to give the reader orientation
- The organisation of the paper is poor: the 9 pages text consist of 4 pages introduction (Section 1), 1.5 pages warm-up (computing an "unweighted" basis) (Section 2), 2 pages computing a max-weighted basis (Section 3) and 1.5 pages future work (Section 4); the way how the material is organised is really unfriendly for the reader
- The main results require a long list of lemmas (which are given in the paper or its appendix); so this might be tedious for the reader to follow the line of proof
- The presentation of the material in the paper is at most fair; changing this may require quite some rewriting of the paper
Technical Quality: 3
Clarity: 2
Questions for Authors: - While matroid optimisation is closely related to combinatorial optimisation it is not clear why this paper is submitted to NeurIPS as the connection to neural computing is not that obvious; so this should be justified better
- The paper speaks of "robust" algorithms and of "robustifying" error-dependent algorithms in a sense of dealing with errors from the second oracle; but robust optimisation has been studied widely in the combinatorial optimisation community is usually concerned with uncertainty in the costs or times given as input; so these two meaning of "robust" may easily be mixed up and may be confusing for the reader
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Justification for Conference Fit
Among the major topics of NeurIPS, our paper fits very well under "Optimization" and "Machine Learning".
While we do not develop new ML methods, we instead design potential applications for ML-learned information. We theoretically analyze their potential for classic optimization problems in the context of learning-augmented algorithm design, which has been quite successfully established in AI venues, including NeurIPS (examples: Bai and Coester, 2023; Purohit, Svitkina and Kumar, 2018) and ICML (examples: Choo et al., 2023; Lykouris and Vassilvitskii, 2018), over the last five to eight years.
We refer to reference [26] in our submission for an almost complete list of papers in this area, many of which are published at NeurIPS or AI conferences in general.
In fact, an important use case of this area is to guide the development of ML methods: Which information should ML methods learn in order to improve widely used optimization algorithms?
1. *Bai, X., and Coester, C. "Sorting with predictions", NeurIPS 2023*
2. *Choo, D., et al. "Active causal structure learning with advice", ICML 2023*
3. *Purohit, M., Svitkina, Z., and Kumar, R., "Improving Online Algorithms via ML Predictions" NeurIPS 2019*
4. *Lykouris, T., and Vassilvtiskii, S. "Competitive Caching with Machine Learned Advice", ICML 2018*
### Term "robust" and its use in robust optimization
Indeed, robust optimization is a classical field of optimization concerned with worst-case guarantees in the face of uncertainty in the input, either in the objective function (cost) or constraints.
In the field of learning-augmented algorithms this phrase has been adopted to explicitly refer to worst-case guarantees with respect to the uncertainty about the predictions, which are part of the input.
### Weak presentation
We regret that the reviewer finds the paper difficult to read.
The page limit imposes serious constraints on presenting the technical material, but a full version will allow us to provide more details and illustrations.
As is common in theoretical papers, we aimed to present our technical results in a clear manner by breaking complex proofs into several lemmas, allowing readers to maintain an overview without needing to delve into all the details in the main body of the paper.
We are unclear about the specific concerns regarding the paper’s organization.
We hope that any issues with the organization of the sections do not significantly impact the evaluation of the scientific merits of the paper.
We will add an overview at the end of Section 1 as requested.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Many thanks for your answers that helped in clarifying several points. I am more happy now.
Regarding 1) Justification for Conference Fit. The motivation for submitting the paper is clearer now. The connection to learning-augmented algorithms could be explained a bit more detailed (as done here in your reply). In my opinion, reference [26] is not a solid argument as it is unclear according to which criteria this list of papers in Github is put together and maintained. Perhaps you can phrase this differently.
Regarding 2) Term "robust" and its use in robust optimization. Thanks for the explanation. You could also include such a brief remark into the paper to avoid confusion.
Regarding 3) Weak presentation. I understand that the page limit is a challenge, and you made a good effort to put a lot of content into the available space. My concern is not that you break a complex proof into a series of lemmas. I also like that you first study the unweighted case before you continue with the more general weighted case. My concern is rather that many readers will give up before they reach the end of the introduction (at the bottom of page 4), and that would be a pity. Good that you offered to include an overview of the paper’s organisation at the end of the introduction. The future reader of your paper will definitely benefit from this. (My recommendation would even be to subdivide the 4-page introduction and place this overview earlier; but this is up to you.) | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Confidence Regulation Neurons in Language Models | Accept (poster) | Summary: This paper studies how large language models (LLMs) regulate uncertainty in next-token predictions through specific components: entropy neurons and token frequency neurons. Entropy neurons, identified by their high weight norm and minimal direct impact on logits, influence model confidence by operating within an unembedding null space. Token frequency neurons adjust logits based on token frequency, modulating the output distribution toward or away from the unigram distribution. The study includes a detailed examination of these neurons in various models, demonstrating their role in managing prediction confidence, particularly in repeated sequence scenarios.
Strengths: - The extensive experimental validation, including ablation studies and cross-model analysis, gives empirical support to the theoretical claims, enhancing the overall robustness and reliability of the study.
- The paper is very well written, and it was easy to follow through even though the topic is rather complex.
- I think that the paper provides an good mechanistic explanation of how entropy neurons operate through the unembedding null space and LayerNorm, improving our understanding of their indirect impact on model predictions.
Overall, I find the paper's findings very valuable, the methodology is rouboust, and the authors adressed themselves the few limitations of the paper. I honestly enojoyed reviewing this paper.
Weaknesses: - As the authors say in the "limitations" section, they use entropy and distance from the token frequency distribution as proxies for confidence, which may not fully capture the complexity of confidence in language models.
- The extent to which entropy neurons influence model output via LayerNorm seems to change across models. This variability suggests that model architecture and training parameters could play a important role.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please, read my questions and suggestion as a starting point for a follow-up paper.
- Could you provide a more precise definition of confidence in the context of LLMs?
- While LayerNorm plays a crucial role in the functioning of entropy neurons, what might be the impact of other normalization techniques (for example, BatchNorm or RMSNorm) on these neurons?
- I'd suggest also to perform an analysis of how specific features of the models and training hyperparameters influence the effectiveness of entropy and token frequency neurons.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors properly addressed all the possible limitations of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing that our work provides a “good mechanistic explanation of how entropy neurons operate [...], improving our understanding of their indirect impact on model predictions.” We are encouraged by your recognition of our findings as "very valuable" and appreciate that you enjoyed reviewing our paper. Your positive feedback is greatly appreciated.
> **Weakness 1** & **Q1:** Could you provide a more precise definition of confidence in the context of LLMs?
This is a good and challenging question. Defining and quantifying uncertainty in machine learning models, including LLMs, is a complex challenge. In general, the uncertainty of a machine learning model can be divided into two types: aleatoric, which arises from inherent noise in the data, and epistemic, which stems from the model’s lack of knowledge [1].
In the context of LLMs, which predict a probability distribution over a set of possible next tokens, possible candidates for gauging the model's confidence are, for instance, the largest predicted probability value, the gap between the two highest probabilities, or other, more global properties of the output distribution, such as entropy or divergence from a baseline distribution [2, 3]. We opt for the latter measures as single probability scores neglect broader characteristics of the output distribution.
In particular, the token frequency distribution represents a natural baseline for next-token prediction, which models resort to at early stages of training [4, 5], and serves as an educated guess based on the previously seen tokens.
Although we believe that our chosen proxies are reasonable and practical measures for assessing a model’s confidence in its next-token prediction, we acknowledge the challenges in uncertainty estimation. We will add a note to the paper to highlight this point.
> **Q2:** While LayerNorm plays a crucial role in the functioning of entropy neurons, what might be the impact of other normalization techniques (for example, BatchNorm or RMSNorm) on these neurons?
RMSNorm is essentially identical to LayerNorm, except it does not center the residual stream by subtracting its mean. Since the impact of entropy neurons is mediated through the re-scaling operation of LayerNorm rather than the centering, the function and presence of entropy neurons should not be affected by using RMSNorm instead of LayerNorm. Empirical validation of this comes from LLaMA 2, which uses RMSNorm and still exhibits the presence of entropy neurons.
BatchNorm, on the other hand, normalizes along the batch dimension. This operation involves normalizing across a dimension external to the model's architecture because the model, during a forward pass on input $x$, has no access to information about activations computed on other inputs $x'$ in the same batch. Therefore, we do not expect BatchNorm to impact the presence or function of entropy neurons.
> **Q3:** I'd suggest also to perform an analysis of how specific features of the models and training hyperparameters influence the effectiveness of entropy and token frequency neurons.
This is definitely an interesting question. In Appendix E, we analyze the effect of one training hyperparameter: the application of dropout during training. We study this in two versions of Pythia 160M that differ only in the application of dropout. We observe that dropout leads to lower unembedding singular values and, interestingly, to a more pronounced effective null space, making the presence of entropy neurons more pronounced. We agree that a more systematic analysis of the impact of architectural choices on confidence regulation neurons is an interesting avenue for future research and could provide valuable insights into how these mechanisms emerge.
---
[1] Kendall, A. and Gal, Y., What uncertainties do we need in bayesian deep learning for computer vision?. NeurIPS 2017.
[2] Huang, Y., et al., Look before you leap: An exploratory study of uncertainty measurement for large language models. arXiv 2023.
[3] Yoshikawa, H. and Okazaki, N., Selective-LAMA: Selective Prediction for Confidence-Aware Evaluation of Language Models. In Findings of EACL 2023.
[4] Meister, C., et al., A Natural Bias for Language Generation Models. ACL 2023.
[5] Chang, T.A. and Bergen, B.K., Word acquisition in neural language models. TACL 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed responses, I appreciate the clarifications provided, particularly regarding the definition of confidence in LLMs and the impact of different normalization techniques on entropy neurons. | Summary: The paper investigates specific neurons in LLMs (termed "confidence regulation neurons") that modulate the uncertainty of the next token prediction by modulating the output distribution. First, entropy neurons modulate the overall entropy of the output distribution by writing to an effective null space of the unembedding matrix, thereby influencing the residual stream norm with a minimal direct effect on the logits themselves. Second, token frequency neurons modulate the predictive distribution proportionally to the token frequency in the training data.
Strengths: - The paper clearly written and well-organized, making complex concepts accessible.
- It provides deeper insight into the role of entropy neurons in regulating the confidence of LLMs through the unembedding null space.
- It introduces token frequency neurons, a type of neurons that have not been discussed in prior work.
- It demonstrates the presence of these entropy and token frequency neurons across various models.
Weaknesses: - **Novelty**: The authors claim that they provide a "novel example of how language models can use LayerNorm to indirectly manipulate the logit values" and that prior studies "fail to account for the normalization effects imposed by LayerNorm" (lines 160 to 163).
However, the mechanisms of entropy neurons have already been discovered in previous work [1]. For instance, Gurnee et al. (2023) show that entropy neurons "modulate the model’s uncertainty over the next token by using the layer norm to squeeze the logit distribution, in a manner quite similar to manually increasing the temperature when performing inference.". Although prior studies do not differentiate between the total and direct effect of an entropy neuron on the output distribution, the novelty of the analysis is not clear-cut.
- **Clarity**: Individual neurons are referred to as *layer.position* and simply *position* interchangebly (e.g. 11.2378 and 2378). Since only neurons in the final layer were investigated, referring to the layer is redundant. Also, the authors should consider further simplifying the name of the neurons, as the exact *position* does not add immediate value to the reader.
- **Interpretability**: The interpretation of the results is sometimes unclear. For instance, the authors observe that token frequency neurons suppress common tokens and promote rare ones. They suggest this indicates the model is biased to predict common tokens more frequently than their actual occurrence. However, an alternative interpretation could be considered. The model might also assign high probability to a single rare token while assigning almost no probability to other rare tokens. Promoting all other rare tokens to match their token frequency would bring the output distribution closer to the token frequency distribution, while increasing entropy and lowering confidence. In other words, the model's bias might not be just towards common tokens but also towards certain rare tokens. Such alternative interpretations would also resolve the counterintuitive explanation. The authors should provide more insights to support their interpretations.
---
[1] Wes Gurnee, Theo Horsley, Zifan Carl Guo, Tara Rezaei Kheirkhah, Qinyi Sun, Will Hathaway, Neel Nanda, and Dimitris Bertsimas. Universal neurons in gpt2 language models.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why did the authors analyze 6 entropy neurons but only 5 token frequency neurons?
- In Figure 5(a), are entropy, loss, and neuron activations really all on the same scale (single y-axis)?
- Why does the induction case study not consider the novel token frequency neurons but only the known entropy neurons?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing that our work “provides deeper insight into the role of entropy neurons” and that our paper is “clearly written and well-organized, making complex concepts accessible”.
> **Novelty**: The novelty of the analysis is not clear-cut.
Gurnee et al. identified "entropy neurons" with large norms and low compositionality with the unembedding matrix across GPT-2 models trained with different seeds. They hypothesized that these neurons modulate the model's uncertainty through LayerNorm. Their experiments involved artificially increasing these neurons' activation values, leading to significant changes in LayerNorm scale and entropy. While insightful, these observations do not conclusively establish LayerNorm as the primary mechanism.
Our work extends these findings in several ways:
1. We identify and quantify the presence of an unembedding null space.
1. We differentiate and quantify the total and direct effects of entropy neurons, establishing LayerNorm as the key mechanism mediating their impact.
1. We analyze examples where entropy neurons naturally achieve maximum activation, providing insights into their behavior.
1. We detect entropy neurons across multiple models, reinforcing the generality of our findings.
1. We demonstrate the practical implications of entropy neurons in the induction setting, highlighting their role in modulating model confidence.
In conclusion, our study offers a substantially deeper and more rigorous investigation into the mechanisms of entropy neurons, building on the work of Gurnee et al. while introducing novel analyses and findings. Combined with our discovery of token frequency neurons, we believe our work provides a substantial degree of novelty and contributes meaningfully to the understanding of confidence regulation in language models.
> **Clarity:** Individual neurons are referred to as layer.position and simply position interchangeably
Thank you for pointing this out. We will add a sentence to clarify the notation and ensure consistency across the entire paper.
> **Interpretability:** The model might also assign high probability to a single rare token while assigning almost no probability to other rare tokens.
Thank you for helping us consider alternative hypotheses for our observations. It is important to explore different interpretations, and we appreciate your feedback.
To clarify this point, we dug deeper into the effect of token frequency neurons on the model output. We know that token frequency neurons suppress common tokens and promote rare ones, and the fact that we observe an increase in entropy upon such contribution leads us to conclude that (1) the model's output distribution is typically skewed towards assigning probability mass to common tokens that is higher than the token frequency dictates. The reviewer suggests that the same change in entropy could be observed if (2) the model assigns high probability to a single rare token while assigning almost no probability to other rare tokens.
To test these interpretations, we studied the change in the KL divergence between the model's output distribution (averaged over 15M tokens) and the token frequency distribution, while adding to the model’s output logits the logits representing the unigram distribution (multiplied by a factor $k$ that we vary). Adding this vector promotes common tokens and suppresses rare ones, while subtracting it achieves the opposite effect, simulating the contribution of token frequency neurons.
In the scenario suggested by Interpretation 2, adding the token frequency vector to the output logits would remove probability mass from the rare tokens (and particularly from the rare token that received a high probability score) and increase the probability assigned to common tokens. If this were the case, we would observe a final output distribution more aligned with the token frequency distribution, indicated by a decrease in the KL divergence between the two distributions.
According to Interpretation 1, when adding the token frequency vector to the output logits, the already high probability assigned to common tokens should become even larger, making the output distribution even more skewed. Therefore, the KL divergence between the model's output and the token frequency distribution should grow even larger.
The results of our analysis are illustrated in the attached PDF. We observed that the KL divergence between the model's output and the token frequency distribution increases as we add the token frequency vector to the logits, which is consistent with Interpretation 1. We will include these results in the final version of the paper.
> **Q1:** Why did the authors analyze 6 entropy neurons but only 5 token frequency neurons?
There was no particular reason for analyzing 6 entropy neurons but only 5 token frequency neurons. Our results (Figs. 2e, 2f, and 3b) show that entropy neurons and token frequency neurons exist on a continuous spectrum rather than distinct clusters. For our in-depth analyses, we focused on the most pronounced outliers to represent these mechanisms effectively, rather than investigating every neuron that exhibited these characteristics.
> **Q2:** In Figure 5(a), are entropy, loss, and neuron activations really all on the same scale?
Correct, they are all on the same scale.
> **Q3:** Why does the induction case study not consider the novel token frequency neurons but only the known entropy neurons?
We do provide results for token frequency neurons on induction in Appendix H. Similar to entropy neurons, the activations of token frequency neurons change substantially upon induction.
---
In conclusion, we would like to thank the reviewer for their feedback, particularly regarding the interpretation of our results. We will include the additional analysis in the paper to provide further clarity. We hope that, in light of these clarifications and additional analysis, the reviewer will consider increasing the overall rating.
---
Rebuttal 2:
Comment: Before this phase of the discussion period ends, we wanted to check in with the reviewer on whether we have addressed your concerns with our work?
---
Rebuttal Comment 2.1:
Comment: Thank you for the rebuttal; it addressed the majority of my concerns and questions. My current score reflects my overall assessment of this work pretty well, so I have decided to maintain the score. | Summary: This paper investigates two kinds of neurons by which transformer language models calibrate their predictions. These are (1) “entropy neurons”, which can affect logit values, but do not promote specific tokens, and (2) “token frequency neurons”, which influence a model’s likelihood of outputting bigram word statistics.
Strengths: - Calibration of model confidence is an important area of study for trustworthy deployment of ML systems.
- The experiments attempt to show that both kinds of neurons for model calibration arise across various model sizes and architectures up to LlaMa-2-7B, suggesting this description is general to an extent (varying success across models).
- The results have nice synergy with existing work, providing a clearer understanding of how these previously discovered neurons can influence model behavior.
Weaknesses: - It is unclear what the connection is between “token frequency neurons” and “entropy neurons”. Do these two kinds of neurons interact (if so, how?) or are they separate mechanisms by which models calibrate logits? As this is a major part of the paper, it should be made clear.
- The case studies provided are pretty simple settings. Studying the effect of calibration neurons on more real-world tasks like question answering - where there is a small set of potential answers and seeing if and how the same mechanisms apply would help strengthen the generality of the claims made. (This is noted by the authors in the limitations section, and I agree with them).
Technical Quality: 3
Clarity: 3
Questions for Authors: - The total effect is much larger than the direct effect for both sets of neurons. For the “token frequency neurons”, does this mean they are influencing other components that actually promote bigram statistics? Or what exactly is their effect mediated by? It is unclear to me how the same direction could promote the most common bigrams for every distinct token in the vocabulary.
- Are entropy neurons and token frequency neurons only studied at the last layer of the model? Do they appear elsewhere, and are just strongest at the last layer?
- Does the discovery of these neurons influence how we should think about using direct logic attribution as a way to understand components and their interactions? Lines 70-71 briefly mention this, but could you can briefly expand on what is meant here?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do point out a number of limitations to their study, which are valid and could strengthen the paper when/if addressed. Though many may not be in scope.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing that our work addresses “an important area of study for trustworthy deployment of ML systems”, providing “a clearer understanding of how these previously discovered neurons can influence model behavior.”
> **Weakness 1:** Do these two kinds of neurons interact (if so, how?) or are they separate mechanisms by which models calibrate logits?
The two types of neurons represent separate mechanisms that language models can implement to calibrate their logit values. Their activation might be triggered by different features of the input, and they affect the output distribution differently. While they can be active simultaneously (e.g., during induction, where entropy neurons play the most significant role), they exhibit distinct activation patterns. Token frequency neurons tend to activate densely, with activation values significantly larger than 0 occurring frequently. In contrast, entropy neurons generally activate sparsely, often in response to specific sequences like repetitions, structured strings (e.g., email addresses and links), and common phrases or n-grams that are likely repeated in the training data.
> **Weakness 2:** Studying the effect of calibration neurons on more real-world tasks like question answering [...] would help strengthen the generality of the claims made. (This is noted by the authors in the limitations section, and I agree with them).
We agree with the reviewer that showing the effect of entropy and token frequency neurons on more real-world tasks like question answering would provide strong evidence for the generality of the mechanisms we study. However, we would like to highlight that our research is the first to investigate these specific mechanisms within language models. By identifying and characterizing entropy neurons and token frequency neurons, we provide a foundational understanding that future work can build upon. Our findings offer an important starting point for exploring these mechanisms in more complex and varied settings, and we are optimistic that subsequent research will extend our work to a broader range of tasks, including real-world applications like question answering.
> **Q1:** The total effect is much larger than the direct effect for both sets of neurons. For the “token frequency neurons”, does this mean they are influencing other components that actually promote bigram statistics? Or what exactly is their effect mediated by? It is unclear to me how the same direction could promote the most common bigrams for every distinct token in the vocabulary.
Token frequency neurons work by promoting or suppressing the *unigram* distribution component in the model's output. This means they adjust the model's predictions based on the individual token frequencies rather than sequences of two tokens (bigrams). We demonstrate this mechanism by identifying the residual-stream direction representing the unigram/token frequency distribution. (To provide some intuition: increasing the value of the residual stream along this direction promotes common tokens and suppresses uncommon ones.) We show that this direction mediates a significant portion of these neurons’ effect on the output: when we prevent the residual stream from varying along this direction (isolating the direct path that "bypasses" the token frequency direction), the effect of these neurons decreases significantly. In other words, the total effect is larger than their direct effect (which excludes the contribution along the token frequency direction), indicating that these neurons significantly influence the model's output distribution by modifying the residual stream along the unigram/token frequency direction.
> **Q2:** Are entropy neurons and token frequency neurons only studied at the last layer of the model? Do they appear elsewhere, and are just strongest at the last layer?
We focused our study on final-layer neurons, as we expect entropy neurons to be strongest at the final layer, where their effect on the output cannot be mediated by other intermediate model components. However, these neurons might also be present in previous layers. An exhaustive search and comparison of entropy neurons across layers represent an interesting direction for future analysis.
> **Q3:** Does the discovery of these neurons influence how we should think about using direct logic attribution as a way to understand components and their interactions?
Thank you for bringing up this point. We believe this is an interesting insight from our study. Interpretability analyses based on direct logit attribution typically involve projecting an internal model representation or weight vector onto the vocabulary space to determine whether the set of tokens being promoted share a specific feature or concept [1, 2]. Our work shows that entropy neurons write onto a residual-stream effective null space, which gets mapped onto a neighborhood of the zero vector in the vocabulary space. If we were to interpret these neurons via direct logit attribution, we would mistakenly conclude that their impact on the model prediction is minimal and uninterpretable, missing their interaction with the final LayerNorm.
These observations suggest that, while performing direct logit attribution, we must consider that some subspaces in the model residual stream might not influence the next token prediction directly. Thus, we might be trying to understand the *direct* contribution of a model component to the final prediction when its main effect is *indirect* (e.g., mediated by LayerNorm) and its direct effect is minimal, even when the component studied is at the final layer of the model.
---
[1] Elhage, N.,et al., A mathematical framework for transformer circuits. Transformer Circuits Thread, 2021.
[2] Geva, M., et al., Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space. EMNLP 2022.
---
Rebuttal Comment 1.1:
Title: Thank you for the reply
Comment: Thank you for the detailed reply. After reading the authors' rebuttals and other reviews, all of my questions/concerns have now been addressed and I think the paper would be a solid contribution to the conference. As such, I have raised my score to a 7.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback. Thank you! | Summary: This work studies a small but potentially important subset of neurons in the final layer for a trained transformer model that appear to regulate the confidence of a model (proxied by the variation in entropy of the model's output). Specifically, the paper builds upon entropy neurons found in previous work, extending the analysis further to show that these entropy neurons (which have high norm but low composition with the unembedding matrix) essential modify the null space of the unembedding matrix. This effectively scales the logits without changing the relative ranks of the tokens, thus regulating confidence. The paper continues this exploration by finding "token frequency" neurons, which are neurons that move the model's output distribution towards unigram token frequency distribution. The work claims that this is another form of confidence regulation, essentially "moving" the distribution towards unigram frequencies when the model is not confident about its prediction.
The paper provides the methodology for finding these neurons, as well as performs ablation/intervention experiments on these neurons to measure the difference in "confidence". Experiments are conducted on several pre-trained models, including GPT2, Pythia, LLaMA-2, Gemma and Phi-2. The paper also ends with a specific analysis of "induction"-behavior depicted in recent mechanistic interpretability works, studying how entropy/token-frequency neurons affect the output distribution when sequences are repeated in the output.
Strengths: - The overall paper is well written, describing its motivation well, as well as the methodology. The figures presented do a good job at compressing the useful information in an accessible form.
- The experiments conducted as convincing, and improve upon the findings of the previous works on which this paper is built upon. The experiments are also conducted on several models, which further showcases the universality of the findings.
- The paper focuses on an important problem of understanding transformer models (although the immediate practical applicability of the findings is limited, it still answers important questions)
Weaknesses: - While the results presented in the paper are indeed exciting, they raise a lot of questions (see Questions section) that are easy to answer but have been ignored by the paper. For instance, while the paper has run experiments on many models, there is very little discussion of these results. The paper would be much stronger if some additional discussion revolving around model and vocabulary sizes was included.
- The paper mentions a few existing works on confidence regulation and calibration, but does not tie the work done in this paper within the context of this broader field.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How are the token frequencies computed for the second half of the paper? Are these sub-word frequencies? On what dataset are these computed, and do you expect the training data distribution to have any effect on the results?
- How many entropy neurons usually exist? From the plots, it seems this number may be somewhere in the single digit range? Explicitly stating this will still be useful information for the reader
- Can you hypothesize why Gemma 2B has few to no entropy neurons?
- Relatedly, what is the effect of model size on the number of these entropy/token-frequency neurons? Should one assume that model size has no effect? What about the vocabulary size?
- Token freq is confusing; by changing distribution differently for every token, aren’t we changing the logits necessarily?
- In Figure 4, what is the reciprocal rank exactly? Is a higher value better?
- One thing that is unclear is how token-frequency neurons are regulating confidence specifically? I understand that they are modulating the output distribution towards unigram frequencies, but this affects both the logit ranks as well as the variance, so the effect on "confidence" specifically is unclear. I would appreciate some clarity on this.
### General comments
- It will be useful to state somewhere that your neuron indexing strategy is `<layer>.<neuron>`, which may be unclear to a new reader.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Authors have addressed limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for recognizing that our work “focuses on an important problem”, with experiments that are “convincing, and improve upon the findings of the previous works on which this paper is built upon.”
We appreciate the many insightful questions you pose in your review.
We address the weaknesses and the reviewer’s primary questions (Q3, Q4, Q5, and Q7) in this rebuttal message. However, due to the numerous questions posed by the reviewer, many of which require detailed responses, and the character limit for the rebuttal, we address three clarification questions (Q1, Q2, and Q6) in a separate comment.
> **Weakness 1:** The paper would be much stronger if some additional discussion revolving around model and vocabulary sizes was included.
Thank you for your input, we agree that further discussion on model and vocabulary sizes would enhance the paper. We address specific questions related to this issue (Q3 and Q4) below and will incorporate the corresponding answers and additional considerations into the final version of the paper.
> **Weakness 2:** The paper mentions a few existing works on confidence regulation and calibration, but does not tie the work done in this paper within the context of this broader field.
Recent work has explored various strategies to calibrate model output, including ensembling model predictions [1, 2], fine-tuning [3, 4], and prompting models to explicitly express their confidence [4, 5, 6, 7]. While our findings do not immediately suggest a strategy for improving models' confidence calibration, they provide valuable insights into the internal mechanisms that LLMs might use for this purpose. In connection with Burns et al. [8] and Azaria and Mitchell [9], our results suggest that it could be possible to estimate models' confidence states (e.g., overconfidence in specific settings) by tracking the activation values of a small set of neurons. This could potentially lead to new methods for dynamically adjusting model behavior based on real-time confidence estimates.
> **Q3:** Can you hypothesize why Gemma 2B has few to no entropy neurons? & **Q4**: Relatedly, what is the effect of model size [...]? What about the vocabulary size?
These are good questions. We believe that the small presence of entropy neurons in Gemma might be due to two characteristics that differentiate it from the other models considered: the large MLP dimensionality and the large vocabulary size.
In Gemma, the MLP layers have \~32k dimensions, which is roughly 10x larger than in GPT-2 Small and 3x larger than in LLaMA 2 7B. Given the very large number of neurons, the entropy-regulating function might be implemented in a different way within the final MLP layer.
The vocabulary used for the Gemma models is also significantly larger than for other models (\~256k tokens, 8x larger than LLaMA’s vocabulary). Such a high dimensionality in the unembedding projection might be in contrast with the presence of a null space in the projection matrix.
In general, a larger vocabulary size implies a higher-dimensionality mapping performed by the unembedding matrix. The projection from a low-dimensional space to a higher-dimensional space makes the presence of a null space in the projection matrix more costly. In other words, dedicating the same number of dimensions to a null space results in a greater loss of representational capacity, making the presence of an unembedding null space (and therefore of entropy neurons) less likely.
One non-architectural but training-related factor that we studied, in relation to the emergence of entropy neurons, is the application of dropout. Even though dropout cannot be the sole factor influencing this phenomenon, as we observe entropy neurons in LLaMA, which was trained without dropout, we observed that it has an effect on the size of the unembedding effective null space (the results are reported in Appendix G).
In conclusion, studying the architectural and training factors that determine the emergence of entropy neurons is an interesting direction for future research that warrants further investigation.
> **Q5:** By changing distribution differently for every token, aren’t we changing the logits necessarily? & **Q7:** One thing that is unclear is how token-frequency neurons are regulating confidence specifically?
Token frequency neurons, as opposed to entropy neurons, do affect the output logits directly, and you are correct in noting that they impact logit ranks. The confidence-regulation function of these neurons is achieved by shifting the output distribution closer to or further away from the token frequency distribution. This token frequency distribution is what the model can default to in cases of high uncertainty.
This phenomenon is supported by the observed anti-correlation between the entropy of the model's output distribution and its KL divergence from the token frequency distribution. Intuitively, the token frequency distribution represents an educated guess for the next-token prediction (e.g., when little or no contextual information is available), thus providing a baseline confidence level for the model. Token frequency neurons regulate the model's confidence in its predictions by adjusting how closely the output distribution aligns with this baseline.
---
In conclusion, we would like to thank the reviewer for their input. We will incorporate their comments and the corresponding answers in the final version of the paper. In particular, we will include the details about the token frequency computation (Q1) in Appendix D and reference it in Section 4. We will add the considerations about the effect of model and vocabulary size on the presence of entropy neurons (Q3 & Q4) in Section 3.4, the definition of reciprocal rank (Q6) in Section 5, and we will correct the neuron notation. Thank you again for the thorough feedback.
---
Rebuttal 2:
Comment: We believe the key parts of the review were addressed in the main rebuttal, but we include this comment for completeness.
> **Q1:** How are the token frequencies computed for the second half of the paper?
The empirical token frequency distribution that we compute is the unigram distribution: the distribution of tokens (i.e., entries in the vocabulary) over the whole training corpus. More specifically, the frequency of token $t \in \mathcal{V}$ over corpus $\mathcal{C}$ is computed as \# of occurrences of $t$ in $\mathcal{C}$ / $|\mathcal{C}|$. This distribution depends on the training data, and ideally, it should be computed for a specific model using the exact training corpus that the model was trained on. We could achieve this for the Pythia models as we had access to the data statistics of The Pile. For GPT-2, since the exact training data is not available, we used a randomly sampled 500M-token subset of the OpenWebText corpus as a proxy for the original distribution (as mentioned in Appendix G). Different training data distributions might lead to different token frequency distributions. However, we expect a model to align its output distribution with the token frequency distribution it was exposed to during training.
> **Q2:** How many entropy neurons usually exist?
The extent to which a neuron is considered entropy-regulating depends on the fraction of its effect that is mediated by the LayerNorm. This quantity varies on a continuous scale, and defining a specific number of entropy neurons would require setting a hard threshold on this measure, which is somewhat arbitrary and difficult to determine precisely. However, in our analyses, we observed that the number of neurons for which the LayerNorm-mediated effect is substantial typically accounts for around 0.1-0.3% of the MLP dimensionality.
> **Q6:** In Figure 4, what is the reciprocal rank exactly? Is a higher value better?
The reciprocal rank is computed as $ \frac{1}{r}$, where $r \in \{1, \dots, |\mathcal{V}| \}$ is the position of the correct next token in the list of all vocabulary tokens sorted by their predicted probability mass in descending order. For example, if the correct next token is ranked 1st in the predicted probability distribution $P_{\text{model}}$ (i.e., the next token prediction is correct), the reciprocal rank is 1. Lower values indicate that the correct token is ranked lower in the predicted probability distribution (i.e., that the prediction is worse).
> It will be useful to state somewhere that your neuron indexing strategy is <layer>.<neuron>, which may be unclear to a new reader.
Thank you for pointing this out. We will add a sentence to clarify the notation.
---
[1] Wang, X., et al., Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR 2023.
[2] Hou, B.,et al., Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling. ICML 2024.
[3] Jiang, Z., et al.,. How can we know what language models know?. TACL
[4] Kadavath, S., et al., Language models (mostly) know what they know. arXiv.
[5] Lin, S., et al., Teaching models to express their uncertainty in words. arXiv 2022.
[6] Si, C., et al., Prompting GPT-3 To Be Reliable. ICLR 2023.
[7] Tian, K., et al., Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback. EMNLP 2023.
[8] Burns, C., et al,. Discovering latent knowledge in language models without supervision. ICLR 2023.
[9] Azaria, A. and Mitchell, T., The internal state of an LLM knows when it's lying. arXiv 2023.
---
Rebuttal Comment 2.1:
Comment: Thank you for your response, some of my queries have been answered. I have gone through the other reviewers' comments and will maintain my score. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their thorough feedback. We are glad they found our work well-motivated, clearly presented, and insightful. We address each reviewer’s points in the respective rebuttal sections. Additionally, we attach a PDF file with the additional results referenced in the response to Reviewer Thsu.
Pdf: /pdf/f36e8acccb1dd0525da263c0ef07f9833a07c4b5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ProtGO: Function-Guided Protein Modeling for Unified Representation Learning | Accept (poster) | Summary: The authors distill the knowledge from function annotation. ProtGO acquires performance improvement through knowledge distillation. Compared with other methods only rely on structure and sequence, ProtGO outperforms these baselines.
Strengths: 1. ProtGO introduces a novel method that can utilize function information, indicating improved performance.
Weaknesses: 1. Protst[1] also utilizes function information of proteins, should also be listed as a baseline. Protst also does not need function information and only use protein sequence after alignment.
2. It's confusing to find, in ablation study, ProtGO still outperforms all the baselines without teacher. The authors should explain the difference between backbone GNN model and other baselines. Furthermore, the improvement of teacher-student module is trivial compared with the improvement of the backbone GNN. This needs to be clarified.
3. A question is that if it's necessary to introduce domain adaptation. The ablation study about this is missing.
[1]. Protst: Multi-modality learning of protein sequences and biomedical texts
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer qMp2,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** Protst[1] also utilizes function information of proteins, should also be listed as a baseline.
**A1** Thank you for your valuable feedback! In actuality, ProtGO does not function as a pre-training model, making it unjust to draw comparisons with pre-training methodologies. Consequently, we integrate ESM-2 (650M) [1] with ProtGO, denoted as ProtGO-ESM, where the ESM embeddings serve as a component of graph node features. Our evaluation juxtaposes ProtGO-ESM against pre-training techniques in tasks like protein function prediction and EC number prediction. This includes a spectrum of methods: sequence-based approaches such as ESM-1b [2] and ESM-2; sequence-function based models like ProtST [3]; and sequence-structure based methodologies like GearNet-ESM [4], SaProt [5], and GearNet-ESM-INR-MC [6]. The comparative findings are detailed in Table 1 of the one-page rebuttal pdf. Notably, our proposed model, ProtGO-ESM, emerges as the top performer across sequence-based, sequence-structure based, and sequence-function based pre-training strategies.
**Q2** It's confusing to find, in ablation study, ProtGO still outperforms all the baselines without teacher. The authors should explain the difference between backbone GNN model and other baselines. Furthermore, the improvement of teacher-student module is trivial compared with the improvement of the backbone GNN. This needs to be clarified.
**A2** Thank you for your informative reviews! (1) The student model is meticulously crafted as a Graph Neural Network (GNN) model capable of encoding protein sequences and structures simultaneously, demonstrated in Eq.4 of the manuscript. This robust protein encoder adeptly integrates protein sequences and structures, outperforming alternative sequence-structure methodologies. A sequence pooling layer is utilized to condense sequence length effectively, facilitating the aggregation of crucial patterns. In this GNN architecture, each pair of message-passing layers is succeeded by an average sequence pooling layer, totaling eight message-passing layers in the model. (2) The sequence average pooling functions execute tailored average pooling operations on input tensors based on calculated indices, dividing the sequence length by two and flooring the result. These functions aggregate and synthesize information from input tensors through scatter operations to generate output tensors. Following each average pooling layer, the number of residues is halved, expanding the radius threshold $r_s$ to $2r_s$ post-pooling. This extension enables center nodes' neighbors to encompass progressively distant and infrequent nodes, concurrently reducing computational complexity. These operations enable the model to capture both local and global features effectively. (3)
The ablation study's Table 5 showcases the student model's autonomous performance without teacher guidance, demonstrating its ability to independently model protein sequences and structures. (4) Additionally, Figure 3 in the manuscript illustrates the enhancements resulting from incorporating functional information, affirming the advantages of this augmentation.
**Q3** A question is that if it's necessary to introduce domain adaptation. The ablation study about this is missing.
**A3** Thank you for your reviews! The theory of domain adaptation forms the foundation for deriving Eq.10 and Eq.11, as detailed in Appendix F of the manuscript. In the realm of domain adaptation, employing a supervised loss is preferred when the student model operates with distinct task labels [2].
*Thank you again for all the efforts that helped us improve our manuscript! We have tried our best to address your concerns as it is important for my graduation; we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!*
[1] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. In International Conference on Machine Learning. 2023.
[2] Berthelot, D., et al. Adamatch: A unified approach to semi-supervised learning and domain adaptation. arXiv preprint arXiv:2106.04732.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer qMp2,
We are especially encouraged by your initial review. Thank you for raising the constructive question about the effects of the AlphaFold predictions. Your inquiry provided us with the opportunity to clarify this crucial aspect of our study. We have thoroughly addressed your concerns in the rebuttal and hope to address your concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
The Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer qMp2,
Thanks for your review. We have tried our best to address your questions and we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!
Sincerely,
The Authors | Summary: This paper proposes to learn hybrid embeddings for the protein and the GO terms. By further applying the teacher-student training schedule, during inference, the additional input of GO terms is not necessary. Experimentally, the authors demonstrates that the model achieves better results on severls tasks, e.g., folding classification, reaction classification and GO term classification.
Strengths: [+] ProtGO proposes to improve the protein representation with extra side information, the function descriptions.
[+] With teacher-student approach, it is unnecessary to have additional functions as input for the student network, simplifying the inference process.
[+] Benchmark experiments demonstrate that ProtGO significantly outperforms state-of-the-art baselines.
Weaknesses: [-] The proposed method only demonstrate results on several small benchmarks. More results on larger scale benchmarks can be useful, e.g., residue prediction, mutation effect prediction, etc. Functions are related with the folding / reaction / GO term / EC numbers, I wonder whether it is helpful to demonstrate the performance on other problems, to demonstrate the generalization of the proposed method.
[-] The proposed method should learn better functions, therefore, binding affinity prediction may be a better downstream or zero-shot task to demonstrate the model performance.
[-] The experiment results can introduce more about the error bars.
Technical Quality: 2
Clarity: 3
Questions for Authors: I wonder how the authors avoid data leakage. What's the sequence and structure similarity (which can use AlphaFold DB to measure) between the functional annotation dataset and the other datasets.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer oeK7,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** The proposed method only demonstrate results on several small benchmarks. More results on larger scale benchmarks can be useful, e.g., residue prediction, mutation effect prediction, etc. Functions are related with the folding / reaction / GO term / EC numbers, I wonder whether it is helpful to demonstrate the performance on other problems, to demonstrate the generalization of the proposed method.
**A1** Thank you for your valuable feedback! Protein design involves the computational creation of amino acid sequences that can fold into a specific protein structure. Methods like ESM-IF1 [1], PiFold [2], and VFN-IF [3] are dedicated to protein design, distinct from protein representation learning for function prediction. By focusing on protein inverse folding, we apply our approach to CATH 4.2 dataset, as detailed in Table 2 of the one-page rebuttal pdf. Our method demonstrates exceptional performance in this context, which demonstrates the generalization of the proposed method.
**Q2** The proposed method should learn better functions, therefore, binding affinity prediction may be a better downstream or zero-shot task to demonstrate the model performance.
**A2** Thank you for your informative reviews! We use the binding affinity prediction dataset used in [4] and [5]. DNA Binding Site Prediction Result trained on DNA-573 Train, tested on DNA-129 Test. RNA Binding Site Prediction Result trained on RNA-495 Train, tested on RNA-117 Test. We use the AUC for evaluation. Our proposed achieves best results on the binding affinity prediction, illustrating its generalization ability.
|Method | DNA (AUC) | RNA (AUC) |
| :--- | :---: | :---: |
|SVMNUC [6] | 0.812 | 0.729 |
| Coach-D [7] | 0.761 | 0.663 |
| NucBind [6] | 0.797 | 0.715 |
| GraphBind [8] | 0.927 | 0.854 |
| VABS-Net [5] | 0.912 | 0.834 |
|ProtGO (Ours) | **0.941** | **0.878** |
**Q3** The experiment results can introduce more about the error bars.
**A3** Thank you for your reviews! The mean values are reported, we have calculated the ariane of results on protein function prediction. Our results have low error bars.
| Method | GO-BP | GO-MF | GO-CC | EC |
| :--- | :---: | :---: | :---: | :---: |
| ProtGO(Student) | 0.464 (0.005) | 0.667(0.002) | 0.492(0.006) | 0.857(0.008) |
**Q4** I wonder how the authors avoid data leakage. What's the sequence and structure similarity (which can use AlphaFold DB to measure) between the functional annotation dataset and the other datasets.
**A4** Thank you for your questions! (1) Employing the teacher-student framework, the student model learns from the teacher model's latent embeddings through knowledge distillation loss. To ensure data integrity, the test sets utilized in downstream tasks are excluded, addressing concerns regarding data leakage. (2) For the functional annotation dataset used in the teacher model and downstream datasets used in the student model, the sequence similarity between it and EC number prediction dataset is only 25\%, the structural similarity is 19\%; the sequence similarity between it and GO term prediction dataset is 75\%. (3) Adhering to the GearNet [9] protocol, the test sets for GO term and EC number prediction exclusively comprise PDB chains with less than 95\% sequence identity to the training set, allowing for the generation of varied cutoff splits.
*Thank you again for all the efforts that helped us improve our manuscript! We have tried our best to address your concerns as it is important for my graduation; we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!*
[1] Hsu, C., et al. Learning inverse folding from millions of predicted structures. ICML, 2022.
[2] Gao, Z., et al. PiFold: Toward effective and efficient protein inverse folding. ICLR, 2022a.
[3] Mao, W., et al. De novo protein design using geometric vector field networks. arXiv, 2023.
[4] Zhang, C., et al. Us-align: universal structure alignments of proteins, nucleic acids, and macromolecular complexes. Nature methods, 2022.
[5] Zhuang, W., et al. Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains. ICML. 2024.
[6] Su, H., et al. Improving the prediction of protein–nucleic acids binding residues via multiple sequence profiles and the consensus of complementary methods. Bioinformatics, 2019.
[7] Wu, Q., et al. Coach-d: improved protein–ligand binding sites prediction with re- fined ligand-binding poses through molecular docking. Nucleic acids research, 2018.
[8] Xia, Y., et al. Graphbind: protein structural context embedded rules learned by hierarchical graph neural networks for recognizing nucleic- acid-binding residues. Nucleic acids research, 2021.
[9] Zhang, Z., et al. Protein representation learning by geometric structure pretraining. ICML, 2022b.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer oeK7,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
The Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer oeK7,
Thanks for your review. We have tried our best to address your questions and we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!
Sincerely,
The Authors | Summary: In the paper entitled "ProtGO: Function-Guided Protein Modeling for Unified Representation Learning", the authors proposed a KD-based framework to incorporate GO knowledge to learn an unified, multi-modal embedding for a given protein. The cross-domain knowledge make the embeddings performs good in various downstream tasks.
Strengths: 1. The paper is well-written and easy-to-follow.
2. The proposed unified framework is novel and interest, it may lay as foundations for futural research on protein embedding.
3. The introduction of KD for transferring GO knowledges could potentially overcome the data insufficient problem.
Weaknesses: Lack of sufficient baseline method, as I believe in all modalities (sequence/structure/GO), there exists other powerful methods that the authors may ignore.
Minor:
In line 4, the authors wrote "including sequence, structure, domains, motifs, and...", generally domains and motifs are structural annotations, they should not be listed together with sequence and structure.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In table 1 and table 2, the authors may consider add ESM-1b/ESM-2/ESM-3 as sequence modality baselines.
2. Also table 1 and 2, the authors may consider add ESM-IF1 as structural modality baselines.
3. Also table 1 and 2, the authors may consider add GO/sequence+GO modality as input, add DeepGO-SE as baselines. (https://www.nature.com/articles/s42256-024-00795-w)
4. I am interested in the downstream task performance differences for proteins with GOs (where the teacher can teach the student well) and proteins without GOs (where the teacher model may fail). Moreover, GO terms consist of 3 parts, molecular functions, cellular components, and biological processes, I wonder whether the authors could explore the effectiveness of each part through comprehensive downstream experiments.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer kUCf,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** In table 1 and table 2, the authors may consider add ESM-1b/ESM-2/ESM-3 as sequence modality baselines.
**A1** Thank you for your valuable feedback! In actuality, ProtGO does not function as a pre-training model, making it unjust to draw comparisons with pre-training methodologies. Consequently, we integrate ESM-2 (650M) [1] with ProtGO, denoted as ProtGO-ESM, where the ESM embeddings serve as a component of graph node features. Our evaluation juxtaposes ProtGO-ESM against pre-training techniques in tasks like protein function prediction and EC number prediction. This includes a spectrum of methods: sequence-based approaches such as ESM-1b [2] and ESM-2; sequence-function based models like ProtST [3]; and sequence-structure based methodologies like GearNet-ESM [4], SaProt [5], and GearNet-ESM-INR-MC [6]. Notably, the outcomes of ESM-3 [7] are pending due to resource constraints. The comparative findings are detailed in Table 1 of the one-page rebuttal pdf. Notably, our proposed model, ProtGO-ESM, emerges as the top performer across sequence-based, sequence-structure based, and sequence-function based pre-training strategies.
**Q2** Also table 1 and 2, the authors may consider add ESM-IF1 as structural modality baselines.
**A2** Thank you for your informative reviews! Protein design involves the computational creation of amino acid sequences that can fold into a specific protein structure. Methods like ESM-IF1 [8], PiFold [9], and VFN-IF [10] are dedicated to protein design, distinct from protein representation learning for function prediction. By focusing on protein inverse folding, we apply our approach to CATH 4.2 dataset, as detailed in Table 2 of the one-page rebuttal pdf. Our method demonstrates exceptional performance in this context, nearly outperforming other approaches in this task.
**Q3** Also table 1 and 2, the authors may consider add GO/sequence+GO modality as input, add DeepGO-SE as baselines.
**A3** Thank you for your reviews! We have compared our model, ProtGO-ESM, with pre-training methods with sequence, sequence-structure and sequence function as inputs; the results are shown in Table 1 in the one-page rebuttal pdf.
**Q4** I am interested in the downstream task performance differences for proteins with GOs (where the teacher can teach the student well) and proteins without GOs (where the teacher model may fail). Moreover, GO terms consist of 3 parts, molecular functions, cellular components, and biological processes, I wonder whether the authors could explore the effectiveness of each part through comprehensive downstream experiments.
**A4** Thank you for your reviews! (1) The effectiveness of the teacher model in instructing the student is notably higher for GO terms with a higher frequency. Conversely, our experiments reveal that when the frequency of a GO term falls below 50, the teacher model may struggle. For instance, in the case of the GO term GO:0030027, denoted as lamellipodium, the performance of the teacher model is suboptimal.
(2) Through experimental analysis, we segmented GO terms into three categories: molecular functions (MF), cellular components (CC), and biological processes (BP). By utilizing only one category of GO terms as input for the GO encoder of the teacher model, we aimed to assess the efficacy of each category. Our findings indicate that focusing on a single category, such as MF, primarily enhances predictions related to MF in subsequent tasks. While specialization in a specific category can improve accuracy within that domain, it may not directly translate to improved predictions in the other two categories (BP and CC). This divergence stems from the distinct biological aspects represented by each category, each with its unique characteristics and interrelations.
**Q5** Minor: In line 4, the authors wrote "including sequence, structure, domains, motifs, and...", generally domains and motifs are structural annotations, they should not be listed together with sequence and structure.
**A5** Thank you for your suggestion! We will rectify this in the revised version.
*Thank you again for all the efforts that helped us improve our manuscript! We have tried our best to address your concerns as it is important for my graduation; we respectfully thank you for supporting the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!*
[1] Lin, Z., et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022.
[2] Rives, A., et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 2021.
[3] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. ICML, 2023.
[4] Zhang, Z., et al. Protein representation learning by geometric structure pretraining. ICML, 2022b.
[5] Su et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. ICLR, 2024.
[6] Lee, Y., et al. Pre-training sequence, structure, and surface features for comprehensive protein representation learning. 2023.
[7] Hayes, T., et al. Simulating 500 million years of evolution with a language model. bioRxiv, 2024.
[8] Hsu, C., et al. Learning inverse folding from millions of predicted structures. ICML, 2022.
[9] Gao, Z., et al. PiFold: Toward effective and efficient protein inverse folding. ICLR, 2022a.
[10] Mao, W., et al. De novo protein design using geometric vector field networks. arXiv, 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer kUCf,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
Authors | null | null | Rebuttal 1:
Rebuttal: First and foremost, we would like to express our sincere gratitude for the insightful and constructive feedback provided by the reviewers on our manuscript. We greatly appreciate their positive reception of ProtGO's potential and its timely relevance in the field of protein research.
We are particularly thankful for Reviewers **kUCf** and **qMp2**'s recognition of the novelty of our study, which may lay as foundations for futural research on protein embedding; We also appreciate their acknowledgment of the introduction of knowledge distillation for transferring GO knowledges, which could potentially overcome the data insufficient problem (Reviewer **kUCf**, **oeK7**). The reviewers have acknowledged the promising quality and significance of our work, achieving better results on tasks, e.g., folding classification, reaction classification and GO term classification (Reviewer **kUCf**, **oeK7**). Additionally, Reviewer **kUCf** has expressed the acknowledgment of our wirting, which is well-organized and easy-to-follow.
**Comparisons with pre-training methods:**
We appreciate the feedback received, particularly addressing the absence of comparisons with pre-training methods. It is essential to note that ProtGO does not operate as a pre-training model, making direct comparisons with pre-training methodologies inappropriate. To address this, we integrate ESM-2 (650M) [1] with ProtGO, forming ProtGO-ESM, where ESM embeddings enhance graph node features. Our assessment contrasts ProtGO-ESM with pre-training techniques in protein function and EC number prediction tasks. This evaluation encompasses diverse methods, including ESM-1b [2], ESM-2 for sequence-based approaches, ProtST [3] for sequence-function models, and GearNet-ESM [4], SaProt [5], and GearNet-ESM-INR-MC [6] for sequence-structure methodologies. Detailed comparative results are outlined in Table 1 of the one-page rebuttal PDF. Notably, our proposed model, ProtGO-ESM, emerges as the top performer across various pre-training strategies.
**Generalization Ability:**
In the one-page rebuttal PDF, we present results on protein design, showcased in Table 2, underscoring the generalization capacity of our method. Additionally, we conducted binding affinity prediction experiments on DNA [7] and RNA [8] datasets. As depicted in the table below, our method excels in binding affinity prediction, emphasizing its robust generalization capabilities.
|Method | DNA (AUC) | RNA (AUC) |
| :--- | :---: | :---: |
|SVMNUC [9] | 0.812 | 0.729 |
| Coach-D [10] | 0.761 | 0.663 |
| NucBind [9] | 0.797 | 0.715 |
| GraphBind [11] | 0.927 | 0.854 |
| VABS-Net [8] | 0.912 | 0.834 |
|ProtGO (Ours) | **0.941** | **0.878** |
*Once again, we sincerely appreciate the reviewers' feedback and remain committed to continuously improving our research and manuscript based on their valuable insights. Thank you again!*
[1] Lin, Z., et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022.
[2] Rives, A., et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 2021.
[3] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. ICML, 2023.
[4] Zhang, Z., et al. Protein representation learning by geometric structure pretraining. ICML, 2022b.
[5] Su et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. ICLR, 2024.
[6] Lee, Y., et al. Pre-training sequence, structure, and surface features for comprehensive protein representation learning. 2023.
[7] Zhang, C., et al. Us-align: universal structure alignments of proteins, nucleic acids, and macromolecular complexes. Nature methods, 2022.
[8] Zhuang, W., et al. Pre-Training Protein Bi-level Representation Through Span Mask Strategy On 3D Protein Chains. ICML. 2024.
[9] Su, H., et al. Improving the prediction of protein–nucleic acids binding residues via multiple sequence profiles and the consensus of complementary methods. Bioinformatics, 2019.
[10] Wu, Q., et al. Coach-d: improved protein–ligand binding sites prediction with re- fined ligand-binding poses through molecular docking. Nucleic acids research, 2018.
[11] Xia, Y., et al. Graphbind: protein structural context embedded rules learned by hierarchical graph neural networks for recognizing nucleic- acid-binding residues. Nucleic acids research, 2021.
Pdf: /pdf/879a2179d5ddd65d8941cb8b2883665b9ae614ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DarkSAM: Fooling Segment Anything Model to Segment Nothing | Accept (poster) | Summary: This paper introduces DarkSAM, a prompt-free universal attack framework against the Segment Anything Model (SAM) in a quasi-black-box setting. The framework consists of a semantic decoupling-based spatial attack and a texture distortion-based frequency attack. While SAM uses geometric prompt inputs to guide segmentation of critical objects within images, DarkSAM disrupts these processes by decoupling the object features of images in both spatial and frequency domains using a universal adversarial perturbation (UAP). In the spatial domain, it scrambles SAM’s decisions by destroying the features of the foreground and background of the image separately. In the frequency domain, it decomposes images into high-frequency components (HFC) and low-frequency components (LFC), increasing the dissimilarity in the HFC of adversarial and benign examples while maintaining consistency in their LFC. Experiments are conducted on four public segmentation datasets (ADE20K, MS-COCO, CITYSCAPES, and SA-1B), with 100 images used for UAP generation and 2,000 images for testing for each dataset. Victim models include the pre-trained SAM, HQ-SAM, and PerSAM with the ViT-B backbone.
Strengths: 1. The paper is well-written and easy to follow.
2. The specific design of the shadow target strategy is tailored to SAM for prompt-based segmentation, which is unique compared to adversary attacks against traditional segmentation pipelines.
3. The attacking results are impressive. All three models show very low segmentation performance across multiple datasets.
Weaknesses: SAM is a milestone work, and a series of follow-up studies have been proposed recently. However, this paper does not provide an up-to-date review in Section 2.1 and lacks comparison in Section 4, weakening its significance. For example:
- SAM-based adversary attack:
[1] Practical Region-level Attack against Segment Anything Models, CVPR 2024.
- Other SAM-based models:
[2] From SAM to CAMs: Exploring Segment Anything Model for Weakly Supervised Semantic Segmentation, CVPR 2024.
[3] RobustSAM: Segment Anything Robustly on Degraded Images, CVPR 2024.
[4] Matching Anything by Segmenting Anything, CVPR 2024.
[5] FocSAM: Delving Deeply into Focused Objects in Segmenting Anything, CVPR 2024.
[6] ASAM: Boosting Segment Anything Model with Adversarial Tuning, CVPR 2024.
[7] BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model, CVPR 2024.
[8] Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively, ECCV 2024.
[9] CAT-SAM: Conditional Tuning Network for Few-Shot Adaptation of Segmentation Anything Model, ECCV 2024.
[10] Semantic-SAM: Segment and Recognize Anything at Any Granularity, ECCV 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does DarkSAM perform with variants of the SAM models?
2. Please include a comparison with [1] "Practical Region-level Attack against Segment Anything Models" (CVPR 2024).
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper presents a limitation that DarkSAM is tailored to SAM and cannot operate on traditional segmentation models.
Flag For Ethics Review: ['Ethics review needed: Safety and security']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Responses to Reviewer pM4H #
------
**Q1**: SAM is a milestone work, and a series of follow-up studies have been proposed recently. However, this paper does not provide an up-to-date review in Section 2.1 and lacks comparison in Section 4, weakening its significance.
**A1**: Thanks for your professional suggestions! We acknowledge that the ten papers provided by the reviewer pM4H are all latest papers from CVPR 2024 (conference dates: June 17-21, 2024) or ECCV 2024 (conference dates: September 29-October 4, 2024), and are therefore contemporaneous with our work. Since **these papers were not officially published** before the NeurIPS 2024 manuscript submission deadline (May 23, 2024), we did not include them in our submission. We are happy to include a discussion and comparison with these recent works in the revised version.
------
**Q2**: How does DarkSAM perform with variants of the SAM models?
**A2**: We have already evaluated the attack effectiveness of DarkSAM on four different SAM models across four datasets, including SAM [1], HQ-SAM [2], PerSAM [3], and MobileSAM [4] (see Tab. A1 in the Appendix). This is sufficient to demonstrate the effectiveness of our proposed method. Due to time constraints, we select the representative ASAM [5] provided by reviewer pM4H for further testing on four datasets, with results provided in Table R2. The experimental setup is consistent with that described in Sec. 4.2 of the manuscript. These experimental results further demonstrate the high effectiveness of our method against SAM and its variants.
Table R2: The mIoU (%) of DarkSAM on ASAM
| Prompt | Surrogate | ADE | COCO | CITY | SA-1B |
| ------ | --------- | ----- | ----- | ----- | ----- |
| Point | Clean | 63.57 | 59.73 | 49.07 | 75.17 |
| | ADE | 1.16 | 2.92 | 0.69 | 6.01 |
| | COCO | 1.20 | 2.04 | 1.60 | 4.20 |
| | CITY | 8.97 | 12.07 | 0.17 | 10.90 |
| | SA-1B | 3.17 | 5.43 | 1.21 | 3.04 |
| Box | Clean | 76.28 | 81.26 | 63.91 | 89.13 |
| | ADE | 1.94 | 3.05 | 1.67 | 12.16 |
| | COCO | 1.17 | 2.00 | 0.90 | 3.90 |
| | CITY | 14.41 | 20.36 | 0.39 | 22.24 |
| | SA-1B | 3.02 | 5.33 | 0.76 | 2.79 |
------
**Q3**: Please include a comparison with "Practical Region-level Attack against Segment Anything Models" (CVPR 2024)
**A3**: As mentioned in A1, [6] is a contemporaneous study that also explores adversarial attacks against SAM. However, it was not officially published before we submitted our manuscript to NeurIPS 2024. **The study differs from our work in two significant aspects: the attack objective and the attack method.**
1) **Local vs. Global**: In terms of the attack objective, [6] proposes a region-level attack aimed at concealing objects within a specific attacker-designated region, preventing SAM from segmenting them. In contrast, DarkSAM seeks to completely disable SAM, rendering it unable to segment any object in the input image, regardless of the type of prompt used.
2) **Sample-wise vs. Universal**: In terms of the attack method, [6] requires generating sample-specific noise for each image to deceive SAM. In contrast, our approach only requires a single universal adversarial perturbation (UAP) to fool SAM across a range of images, which is a more challenging task.
Due to these significant differences, a direct comparison between [6] and our proposed method is not feasible. To facilitate a fair comparison, one potential approach would be to adapt the method from [6] into a universal adversarial attack format. Unfortunately, [6] does not provide official code, making it difficult to implement and compare their method during the rebuttal period. Although we reached out to the authors immediately upon receiving the review comments, we have not yet received a response. We are willing to include a discussion and comparison with [6] in the revised version.
Additionally, compared to [6], we assess the effectiveness of DarkSAM across a range of experimental conditions, including box prompts (see Tabs. 1-3, A1-A3; Figs. 5, A3, A9, and A10), the segment-everything mode (see Figs. 5, A2, A4, A9, and A10), and multi-point prompts (see Fig. A8). Our extensive and comprehensive experimental results are sufficient to demonstrate the effectiveness and superiority of our method.
**Reference**
[1] Segment Anything, ICCV 2023.
[2] Segment Anything in High Quality, NeurIPS 2023.
[3] Personalize Segment Anything Model with One Shot, ICLR 2024.
[4] Faster Segment Anything: Towards Lightweight SAM for Mobile Applications, Arxiv 2023.
[5] ASAM: Boosting Segment Anything Model with Adversarial Tuning, CVPR 2024.
[6] Practical Region-level Attack against Segment Anything Models, CVPR 2024. | Summary: This work investigates adversarial attacks against Segment Anything Models (SAMs) and presents DarkSAM, the first universal adversarial attack designed for these models. DarkSAM leverages a single perturbation to effectively undermine SAM’s object segmentation capabilities across a variety of images and prompts. The authors conduct a comprehensive evaluation of DarkSAM across four datasets and three SAM variants (SAM, HQ-SAM, and PerSAM), covering attack performance, transferability, comparative analysis, and ablation studies.
Strengths: 1. The paper introduces a unique perspective on adversarial attacks for prompt-guided segmentation models, which is a relatively unexplored area in the literature. The proposed DarkSAM method is innovative in its approach to decoupling object features for attack optimization.
2. The research question is well-defined, and the authors thoroughly compare DarkSAM with a multitude of established baselines.
3. The paper presents both qualitative and quantitative results, effectively demonstrating the impact of DarkSAM. These results provide a thorough assessment of its performance across various conditions
Weaknesses: 1. It is recommended that the authors further supplement the experimental section with relevant analyses, such as explaining why the spatial domain attack is more critical than the frequency domain attack within the proposed framework.
2. In this paper, the usage of mIoU and ASR appears to be analogous, with both metrics conveying the same information. Could the authors provide insight into the justification for employing both metrics concurrently?
3. In Figure 7, the visualization of segmentation masks is notably dark, impeding discernibility for the reader. The authors should consider increasing the brightness of these images or employing more vivid colors.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I agree with the authors' prompt-free approach to the attack on SAM. I am curious about the types of prompts that may be more advantageous in crafting effective UAPs during the attack generation process. Additionally, could the authors provide an explanation regarding the selection of prompts?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. The paper relies on heuristic research and lacks a corresponding theoretical framework for analysis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Responses to Reviewer dgw7 #
------
**Q1**: It is recommended that the authors further supplement the experimental section with relevant analyses, such as explaining why the spatial domain attack is more critical than the frequency domain attack within the proposed framework.
**A1**: Thanks for the constructive suggestion! As observed in Fig. 6(a) of the manuscript, we can see that the spatial domain attack is more critical than the frequency domain attack within the proposed framework. This may be attributed to SAM’s reliance on pixel information in the spatial domain rather than frequency information in the frequency domain during object segmentation. We will include a detailed description of this analysis in the ablation study section of the revised version.
------
**Q2**: In this paper, the usage of mIoU and ASR appears to be analogous, with both metrics conveying the same information. Could the authors provide insight into the justification for employing both metrics concurrently?
**A2**: We use both mIoU and ASR to evaluate attack performance In this paper. Lower mIoU values and higher ASR indicate stronger attack effectiveness. On one hand, we use mIoU to *quantitatively* and intuitively demonstrate the model's robustness against the proposed method. On the other hand, we employ the *visualization-friendly* ASR to further and comprehensively showcase the superior performance of the proposed approach. These two metrics Together offer a well-rounded assessment of the attack's impact.
------
**Q3**: In Figure 7, the visualization of segmentation masks is notably dark, impeding discernibility for the reader. The authors should consider increasing the brightness of these images or employing more vivid colors.
**A3**: Thanks for your valuable feedback! We will enhance the visualization of the segmentation masks to make them more visually appealing in the revised version.
------
**Q4**: I agree with the authors' prompt-free approach to the attack on SAM. I am curious about the types of prompts that may be more advantageous in crafting effective UAPs during the attack generation process. Additionally, could the authors provide an explanation regarding the selection of prompts?
**A4**: What a valuable question! In the manuscript, we provide an evaluation of the attack performance of UAPs created using point and box prompts on SAM across three segmentation modes (see Tabs 1, 2, and Fig. 5). The results in Tab. 1 and Fig. 5 show that the attack performance using point prompts does not differ significantly from that using box prompts. However, Tab. 2 indicates that UAPs created with box prompts exhibit better transferability, which may be due to the additional information they provide (Line 255 - Line 260).
------
**Q5**: The paper relies on heuristic research and lacks a corresponding theoretical framework for analysis.
**A5**: We acknowledge that the paper primarily relies on heuristic research and does not provide a complete theoretical framework. Our intent is to explore and validate new approaches for assessing the robustness of SAM and its variants through heuristic methods (see Sec. 3.2), which lays the groundwork for future theoretical modeling. We will include a discussion of these limitations in the "Conclusions, Limitations, and Broader Impact" section. In future work, we plan to further develop and refine the theoretical framework to systematically analyze our findings.
---
Rebuttal Comment 1.1:
Title: Response from the reviewer
Comment: After carefully reviewing the comments from the other reviewers and the author's rebuttal, I find that all of my concerns have been adequately addressed. Therefore, I have decided to raise my score to 8.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer dgw7
Comment: Dear Reviewer dgw7,
Thank you for your positive feedback! We would like to express our deep gratitude for your dedicated time and effort in reviewing our manuscript. If you have any further questions, please leave us new comments.
Best regards,
The Authors | Summary: This paper introduces DarkSAM, a universal adversarial attack against the Segment Anything Model and its variants. DarkSAM aims to prevent these models from successfully segmenting objects within images. The experimental results demonstrate the effectiveness and transferability of the proposed method.
I have read the response of the authors and comments of other reviewers, I decide to keep my weak accept score.
Strengths: 1.This paper introduces a new universal adversarial attack framework for prompt-guided image segmentation models.
2.The combination of spatial and frequency domain attacks is a sophisticated approach that demonstrates a good understanding of SAM.
3.This paper is well-written. Following the introduction, I can easily understand the goal of this paper.
Weaknesses: 1.The related work can be improved. This paper could benefit from an expanded discussion on adversarial attacks targeted at traditional segmentation models.
2.Lack of specific explanation. This method is novel and interesting but I’m curious about the reason why it works. What is the exact process for determining random prompts, and does this method ensure coverage of all potential attack targets?
3.The experimental results lack error bars. Repeating the experiments a few times and reporting the results with error bars would make the findings more convincing.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Responses to Reviewer YLRY #
------
**Q1**: The related work can be improved. This paper could benefit from an expanded discussion on adversarial attacks targeted at traditional segmentation models.
**A1**: Thank you for the constructive feedback! We will include a more comprehensive discussion on adversarial attacks targeting traditional segmentation models in the revised version.
------
**Q2**: Lack of specific explanation. This method is novel and interesting but I’m curious about the reason why it works. What is the exact process for determining random prompts, and does this method ensure coverage of all potential attack targets?
**A2**: We have outlined the process of the shadow target strategy in Fig. 2 of the manuscript. For each image, we randomly generate *k* prompts, obtain the outputs from SAM for each, and then merge these results into a "Blueprint" to be used as the attack target (Line 138). The goal of this strategy is not to ensure coverage of all potential attack targets, but rather to maximize the creation of shadow targets for generating UAPs. We are willing to refine the relevant statements in the revised version.
------
**Q3**: The experimental results lack error bars. Repeating the experiments a few times and reporting the results with error bars would make the findings more convincing.
**A3**: Thank you for the valuable suggestions. We test the attack performance of the proposed method on SAM using three different random seeds across four datasets. The experimental setup is consistent with that described in Sec. 4.2 of the manuscript. The results in Tab. R1 demonstrate that our method consistently exhibits robust attack performance.
Table R1: The mIoU (%) of DarkSAM under different settings
| Prompt | Surrogate | ADE | COCO | CITY | SA-1B |
| ------ | ------- | ------------ | ------------ | ------------ | ------------ |
| Point | Clean | 65.07 ± 0.25 | 63.04 ± 0.14 | 50.43 ± 0.24 | 77.18 ± 0.05 |
| | ADE | 0.73 ± 0.33 | 2.91 ± 0.58 | 1.03 ± 0.64 | 4.50 ± 1.37 |
| | COCO | 0.16 ± 0.13 | 0.41 ± 0.38 | 0.27 ± 0.25 | 0.93 ± 0.77 |
| | CITY | 7.64 ± 1.95 | 0.13 ± 0.06 | 16.02 ± 3.57 | 11.68 ± 5.32 |
| | SA-1B | 0.60 ± 0.41 | 0.01 ± 0.01 | 2.18 ± 1.66 | 0.08 ± 0.03 |
| Box | Clean | 74.49 ± 0.24 | 79.10 ± 0.08 | 64.84 ± 0.41 | 89.24 ± 0.09 |
| | ADE | 6.57 ± 0.90 | 13.27 ± 1.35 | 1.98 ± 0.21 | 19.19 ± 2.21 |
| | COCO | 1.68 ± 0.47 | 3.21 ± 0.49 | 2.11 ± 0.82 | 9.44 ± 1.26 |
| | CITY | 20.35 ± 5.76 | 29.15 ± 7.64 | 0.78 ± 0.53 | 23.32 ± 6.56 |
| | SA-1B | 12.20 ± 3.20 | 22.40 ± 4.62 | 4.31 ± 2.45 | 5.56 ± 0.95 |
| | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding the Expressive Power and Mechanisms of Transformer for Sequence Modeling | Accept (poster) | Summary: This paper conducts a systematic study on the expressive power and mechanisms of Transformers for sequence modeling. It explores various properties of Transformers, including the number of layers, attention heads, width, and the use of dot production. The theoretical results are supported by experimental evidence.
Strengths: This paper appears to be technically solid. It provides a detailed formulation of the problem and explores the expressive capabilities of Transformers across various tasks. Notably, it includes experiments in Appendix H to validate its theoretical insights. However, I have not verified the correctness of the proofs.
The paper contains some intriguing findings. I particularly want to highlight the exploration between dropout (DP) and relative position encoding (RPE), which I find especially interesting.
Weaknesses: The key weakness of this paper lies in its unclear presentation of motivation, innovation, and contributions. The detailed reasons are as follows:
1) The motivation of this paper appears weak. The authors state their motivation as "to gain a better understanding of how Transformers work in practice." However, there are already numerous studies exploring the expressive power of Transformers, covering most of the topics in this paper. It is unclear why the authors believe the current understanding of Transformers is insufficient and why the community needs this paper.
2) The technical contributions of this work are not clearly presented. It is difficult to discern which conclusions are derived from known techniques and which are based on new techniques proposed by the authors. Additionally, the nature of these new techniques is not clearly explained.
3) The paper's analysis is based on dividing tasks into three categories: modeling fixed, long but sparse memories; modeling adaptive, long but sparse memories; and modeling essentially sparse memories. However, the necessity of this categorization and the relationships between these tasks are unclear. The authors claim these categories are prevalent, but the supporting evidence is weak.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) This paper includes numerous proofs, making it challenging to verify their correctness. Could the authors provide a proof sketch for Theorem 4.4?
2) The tightness of the bound is also uncertain. Could the authors provide a numerical result demonstrating the bound in a toy case?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work and helpful comments. Below, we provide detailed responses to the reviewer’s questions.
- **W1.** The motivation of this paper appears weak.
**Response:** Thank the reviewer for this inquiry.
While Transformer shows remarkable capabilities in long-sequence modeling, its underlying mechanisms remain poorly understood. Specifically, several key questions inspire our work, as detailed in l.15–l.30:
- How do the key hyper-parameters, for example, the number of layers ($L$), the number of Self-attention (Attn) heads ($H$) and the with of feed-forward network (FFN) layers ($m$), affect the performance of the Transformer network?
- How do Attn and FFN layers contribute differently to the overall performance?
- How does Dot-product (DP) self-attention work, and is the DP structure necessary?
- How efficient is positional encoding (PE) in modeling long-range correlations?
Furthermore, Appendix A contains a detailed investigation of related works on the expressive power of Transformers. To the best of our knowledge, our manuscript is the **first work** that comprehensively addresses **all the above questions** within the scenario of long-sequence modeling.
- **W2.** The technical contributions of this work are not clearly presented.
**Response:** Thanks for this question. At a high level, our proofs belong to approximation theory, which inevitably involves the use of some standard approximation theory results, such as Lemma G.5 and G.6. However, the most critical step in our main theorems is based on constructive proofs. Our construction employs the specific structure of RPE, DP, and the modeling task, in contrast to many previous proofs such as those in [1][2]. In the revised version, we will add a remark to clarify our technical contributions.
- **W3.** The necessity of this categorization and the relationships between these tasks are unclear.
**Response:** Thanks for this question. We would like to clarify the necessity and relationships of our three task categories:
- **Task Complexity and Relevance:** The three tasks exhibit varying complexity, with Task II and III being more complex than Task I. They are relevant to a wide range of application areas: Task I relates to sparse Boolean functions and the traditional n-gram model in NLP; Task II pertains to various NLP tasks, such as dependency parsing, sentiment analysis, part-of-speech tagging, and continuation writing; Task III includes feature representation in CV and wavelet analysis in classical signal processing. Please refer to Section 3-5 in our manuscript for more details.
- **Insights from Separate Analysis:** More importantly, analyzing these distinct tasks separately provides insights into different components of the Transformer architecture. For example, studying Task III reveals the efficiency of positional encoding (PE) in modeling long-range correlations, which can not be covered by studying Task I/II; Analyzing Task II illustrates how the number of layers and the number of Attn heads affect the expressivity of Transformers, which can not be revealed by studying Task I/III; Comparing the analysis of Task I and Task II shows when DP is necessary and how DP and PE work together.
- **Q1.** Proof sketch for Theorem 4.4.
**Response:** Thank the reviewer for this question. Below, we provide a proof sketch for Theorem 4.4. Please refer to our proof route in l.967 to follow this sketch:
- Under the setting of Theorem 4.4, the memories are $K$-Adaptive, long but M-sparse (see l.200). We consider the case where $M>K+1$. For the initial $K$ memories, there exits a nested structure, while this nested structure does not exist for the $(K+1,\cdots, M)$-th memories.
- In our proof, the nested structure within the initial $K$ memories mandates sequential processing in the first $K$ layers, one by one. Then, in the $(K+1)$-th layer, the remaining $M-K$ non-nested memory functions $t_{K+1},\cdots,t_M$ are processed concurrently.
- In each layer, the FFN sublayer is tasked with approximating nonlinear memory functions, such as $t_i=g_i(X)$ and the readout function $f$, while the Attn sublayer is responsible for extracting the tokens from these memory locations, such as $x_{t-t_i}$.
In our revised version, we will include more proof sketches in the main text for clarity.
- **Q2.** The tightness of the bound is also uncertain. Could the authors provide a numerical result demonstrating the bound in a toy case?
**Response:** Thank the reviewer for offering this constructive suggestion. We have conducted a new experiment to verify the tightness of our bound. Due to time constraints, we focus on Theorem 3.1 for the case "type=lin".
- **Objective.** For simplicity, we consider a single memory $T$. We aim to verify the following two bounds are tight: (i)*$H$ v.s. error*. Given a memory location $T$, the error $\epsilon\lesssim\frac{1}{{\rm poly}(H)}$; (ii) *$H$ v.s. $T$*. To achieve the same error $\epsilon$, we need $H\gtrsim\exp(T)$ heads.
- **Setup.** We train single-layer DP-free Transformers with different numbers of heads ($H$) to learn a simple sparse Boolean function, which is within our theoretical framework. Specifically, the input sequence $X=(x_1,\cdots,x_{10})\in\\{\pm1\\}^{10}$, the target output is $x_{11}^*=x_{10-T}$, where $T\in[9]$ reflects the memory location.
- **Results & Conclusion.** The results are shown in **the PDF** in our **Global Response to All Reviewers**, supporting the tightness of our bounds: (i) given a memory location $T$, the error $\epsilon\sim\frac{1}{{\rm poly}(H)}$. On the other hand, to achieve the same error $\epsilon$, we need at least $H\sim\exp(T)$ heads.
[1] Jiang and Li. Approximation Rate of the Transformer Architecture for Sequence Modeling. (2023)
[2] Edelman et al. Inductive Biases and Variable Creation in Self-Attention Mechanisms. (ICML 2022)
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' detailed response and confirm that they have addressed all the concerns raised. I also find the newly added experiments to be both interesting and insightful. Based on this, I have decided to increase my score to 6.
---
Reply to Comment 1.1.1:
Comment: We would like to reiterate our gratitude for your valuable recommendation and positive feedback. Thank you for raising the score! | Summary: The paper investigates the approximation capabilities of transformers with relative positional encodings and derives approximation rates for three types of sequence modeling tasks. Each task corresponds to a particular choice of target function class. Namely, a first class of mappings are fixed, long but sparse memories (equation (5)), and the authors focus on dot-product free transformers. A second class are adaptive, long but sparse memories (equation (8)), and the authors consider deep transformers with specific precision and no SoftMax. The last class of mappings are essentially sparse memories and consist of convolving the input sequence with different kernels where, again, the considered Transformers are dot-product free.
Each approximation rate is commented on the role of the feedforward and self-attention components, the role of dot-product attention (that is, the fact that the attention maps depend on the input sequence through pairwise interactions), and the effect of relative position encoding (whether it is of linear or logarithmic type).
Strengths: - The paper is well written and easy to follow.
- The mathematical results are all supported with detailed and rigorous proofs.
- Each Theorem is structured in the same way which eases the understanding of the whole paper.
- The paper adresses the important question of approximation rates of Transformers in sequence modeling.
- The paper establishes rigorous approximation results for Transformers on sequences modeling tasks. These results are significant and give insights on the role of the different Transformer components on their approximation capabilities.
- The three problem formulations are clearly defined and commented.
Weaknesses: - Unless I missed something, the "true" attention mechanism (line 104) is never used in the paper to derive approximation rates. In sections 3 and 5 the attention is DPF, while in section 4 there is no softmax and quantization is used. I don't think this is a major problem, but it should be emphasised (perhaps by extending remark 2.2) that you never consider the transformer with all the bells and whistles.
- I understand that the dimension of the embeddings grows with $Md$. This is not mentioned in the main text, nor commented on in the supplement. How does this compare with previous work?
- Also, you mention that the number of heads needed grows at least polynomially with the $T_i$'s. I understand that this is due to the particular choice of RPE. Would other types of PE give a better dependency in the $T_i$'s ? How does the current dependency compare with previous works?
- Paragraph l. 274 to l. 277 is unclear. When do you use this alternative in the rest of the paper?
- Typo in l. 281
- l. 282, I don't understand the notation for the lower bound on $m$.
Technical Quality: 4
Clarity: 3
Questions for Authors: - Please see some questions in the Weaknesses part.
- What do you mean by rate $n$ in all the theorems ? Is there an optimal choice for $n$ ?
- What is the main reason why you did not consider the SoftMax in section 4 ?
- How does $C(n)$ behaves with $n$ ?
- Could you approximate the model with specific precision with another Transformer which does not rely on specific precision ?
- Overall, the spirit of the approximation in the paper is always to form augmented tokens $(x_{t}, x_{t-t_1}, \dots x_{t-t_M})$ using attention and to apply the MLP on top of it. Do you think this is the "optimal" way ? And if so, what are the reasons behind it ?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the appreciation of our work and insightful comments. We answer the reviewer's questions in the following.
- **W1.** Never consider transformers with all the bells and whistles.
**Response:** The reviewer is correct. Simplifying models appropriately in different settings helps avoid complex technical proofs and, more importantly, can accurately reveal the distinct roles and necessity of various components in Transformers. In the revised version, we will clarify this point in Remark 2.2.
- **W2.** Embeddings dim.
**Response:** Thank the reviewer for this insightful comment.
*Comparison with [1]:* For the Transformer $f$ in our paper, we consider embedding dim $D(f)\sim M d$. Notably, this is independent of the sequence length $T$ and represents almost the minimum embedding dim required to extract $M$ $d$-dim memories simultaneously. In contrast, the Transformer $g$ [1] considers a larger embedding dim $D(g)\sim T^2 d$ ($D(g)\gg D(f)$ due to $T\gg M$) and prove that in this setting, 1-layer Transformers are sufficient for sequence modeling, whereas our results require multiple layers for our Task II.
- **W3.** The number of heads.
**Response:** Thank the reviewer for this insightful question. The reviewer is correct that our required number of heads is due to the use of a specific RPE.
*Comparison with [2]*, which considers Transformer with absolute position encoding (APE) to study Task I. For a fair comparison, let $T:=\max_{i} T_i$. In our work, the model $f$ requires at least $H(f)\sim\text{poly}(T)$ heads, with the number of position parameters $N_p(f)=H(f)\sim\text{poly}(T)$. In contrast, the model $g$ (using APE) in [2] has $N_p(g)\sim Td$ position parameters, and $1$-head Transformers ($H(g)=1$) are sufficient for perform sequence modeling, whereas our model requires $H(f)\sim\text{poly}(T)$ heads. Therefore, model $g$ in [2] requires fewer heads than our model $f$. However, the relationship between the number of position parameters $N_p(f)$ and $N_p(g)$ remains uncertain.
- **W4.** l.274-l.277, alternative.
**Response:** Thanks for this inquiry. To clarify, this alternative is only used in Prop. 4.3; all other results do not utilize this alternative.
- **W5.** Typo in l. 281.
**Response:** We are grateful to the reviewer for identifying the typo. We will carefully read through the whole paper and correct all typos.
- **W6.** l. 282, lower bound on $m$.
**Response:** We apologize for the confusion. For example, for type = lin, this lower bound should be $m\gtrsim\max\Big\\{\max_{i\in[K]} ||g_i||_B^2,\sum\_{i=K+1}^M ||g_i||_B^2\Big\\}$.
- **Q1.** Optimal choice for $n$.
**Response:** Thanks for the inquiry. Given $T_1,\cdots,T_M$, and $H$, there exits an optimal choice for $n$ that minimizes the error term $\mathcal{E}_\text{Attn}(\text{Type})$, although calculating it may not be straightforward.
- **Q2.** The reason for removing SoftMax in section 4 ?
**Response:** Thank the reviewer for this question. This is a technical operation to simplify the proof. Generally, we use the numerator of SoftMax in a series of attention heads to approximate the memory locations. Hence, we need to cancel the normalization in the SoftMax in each attention head by using $W_V$. However, in Sec.4, the existence of Dot-product results in the normalization in SoftMax depending on the tokens, which cannot be canceled by constant $W_V$. For more details, please refer to our proofs.
- **Q3.** How does $C(n)$ behaves with $n$?
**Response:** Thanks for the inquiry. $C(n)\leq A^n$, where $A$ is an absolute constant. Notably, although $C(n)$ grows exponentially with $n$, our approximation results remain efficient because the denominator term $H^n$ also grows exponentially, resulting in $\frac{C(n)}{H^n}\leq(\frac{A}{H})^n$.
- **Q4.** Could you approximate the model with specific precision with another Transformer which does not rely on specific precision ?
**Response:** Thank the reviewer for their insightful comment. First, we would like to clarify that FFN with specific precision is a technical trick to handle the discrete values of the time.
Let the two-layer FFN with specific precision be $\tilde{f}:=[f]$. Notably, in our setting, we require an *exact representation* of $\tilde{f}$ rather than an approximation. Note that $\tilde{f}$ is a "step function", which is discontinuous at "step points". Thus, any FFN $g$ with continuous activation (such as ReLU and Sigmoid) cannot represent $\tilde{f}$ because $g$ is continuous.
- **Q5.** Do you think this is the "optimal" way ? And if so, what are the reasons behind it ?
**Response:** Thank the reviewer for this enlightening question.
- We believe this approach is general and natural. As the reviewer commented, our main idea is to form $(x_t,x_{t-t_1},\cdots,x_{t-t_M})$ using multi-layer Transformers. In this process, FFN layers and Attn layers work together, each performing their respective strengths: FFN layers approximate complex nonlinear memory functions, and Attn layers are responsible for extracting tokens at the memory locations. Additionally, our Experiment G2 and G3 verify these insights into the roles of FFN and Attn.
- However, it should be noted that we make minimal assumptions about the data. Therefore, we do not rule out the existence of better way for special problems. For example, if the data is distributed on a simple low-dimensional manifold $S$ with dimension $r$ ($r<d$), we only need to form $(P_S x_{t-t_1},\cdots,P_S x_{t-t_M})$ using attention and apply an FFN on top of it, where $P_S$ is the projection to the manifold $S$. In this case, the embedding dim only needs to be $rd$, which is smaller than our $Md$.
[1] Jiang and Li. Approximation Rate of the Transformer Architecture for Sequence Modeling. (2023)
[2] Edelman et al. Inductive Biases and Variable Creation in Self-Attention Mechanisms. (ICML 2022)
---
Rebuttal Comment 1.1:
Comment: Dear authors,
I have read the rebuttal. Thank you for responding to my questions. I have noted that the authors will incorporate the conclusions of the discussions with the reviewers in a revised version, and I have decided to maintain my score.
Best,
Reviewer iDXk
---
Reply to Comment 1.1.1:
Comment: We are delighted that our responses have addressed your questions. Thank you very much for your support! | Summary: The paper provides a thorough analysis of the expressive power of Transformers. It does so by investigating the relative importance of different architectural components, such as the mixing layer (dot product attention), the feed-forwad block and positional embeddings. More specifically, the paper studies three different tasks, characterised by different levels of complexity, sharing some key properties with relevant realistic applications such as n-gram models, sparse Boolean functions (Task I), Adaptive sparse Boolean functions, parsing, part-of-speech tagging (Task II), feature extraction in image and signal processing (Task III). On each of such tasks, the paper studies the approximation rates of transformer models defined in terms of their number of layers $L$ and heads $H$, the width of the feed-forward module $m$ and the type of relative positional embedding utilised (i.e. either Alibi or T5 style). Through its theoretical analysis the paper shows that the necessary model's complexity needs to adapt to the complexity of the task. Moreover by investigating the role of different components, the paper shows that dot-product attention is necessary for more complex tasks (Task II) while its dot-product free variant provably fails. Nevertheless, the authors show that the task can still be solved with a more parameter efficient alternative. In addition, the role of the feed-forward blocks is mainly to approximate readout functions and memory functions, aligning with classical results on approximating Barron functions. The theoretical results are corroborated with empirical results.
Strengths: * The paper meaningfully contributes to the very relevant research line on theoretical analysis of transformers models.
* The paper originally addresses important questions, such as the relative role of different components in the transformer architecture and the necessity of the dot-product attention, through a principled approach and in a controlled framework.
* Generally the paper is very well written (modulo some sparse and minor typos) and pleasant to read.
Weaknesses: * Some concepts could be better introduced: for example, it was not clear to the reviewer what the authors meant by "memories" in the introduction of the paper. While this gets clearer later, it would be helpful to clarify it earlier.
* Layer normalization, an important component of modern transformers, does not seem to be included in the analysis. What effect do you expect it to have on your results?
* The role of the embedding dimension $D$ is not considered in the reported rates. What is the effect of varying $D$ on the approximation rates of the model?
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The limitations of the paper are discussed in section 7
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's recognition of our work and helpful comments. Below, we offer detailed responses to the reviewer’s questions:
- **W1.** Some concepts could be better introduced: for example, it was not clear to the reviewer what the authors meant by "memories" in the introduction of the paper. While this gets clearer later, it would be helpful to clarify it earlier.
**Response:** Thank the reviewer for this helpful suggestion. In our paper, "memories" generally refer to the sparse "tokens" that sequence modeling relies on. For example, for the simplest Task I (sequence modeling with fixed memories), the number of memories is $M$, and they are located at fixed positions $t_1,\cdots,t_M$ (which can be very large). In the revised version, we will introduce this concept in Section 1 to enhance clarity.
- **W2.** Layer normalization, an important component of modern transformers, does not seem to be included in the analysis. What effect do you expect it to have on your results?
**Response:** We thank the reviewer for this insightful question.
- First, we would like to clarify that *the primary reason for not including Layer normalization (LN)* in our analysis is: LN is typically employed to accelerate and stabilize the training of Transformers, whereas our paper focuses on the issue of expressiveness. In fact, previous works about the expressiveness of Transformers have also omitted LN [1][2].
- Now we discuss the *potential impact of LN on our results*. From an approximation theory perspective, LN introduces certain nonlinearity. However, our Transformers include FFN layers, which are already capable of effectively approximating nonlinear functions. Therefore, incorporating LN seems unlikely to significantly improve the network's expressiveness. Technically, LN introduces a coordinate-wise normalization operation, which cannot be analyzed using our current methodology. We leave this analysis for future work.
- **W3.** The role of the embedding dimension $D$ is not considered in the reported rates. What is the effect of varying $D$ on the approximation rates of the model?
**Response:** We thank the reviewer's insightful question. To clarify, let the dimension of a single token be $d$, the input sequence length be $T$ (where $T=\infty$ in our paper) the number of memories be $M$ ($M\ll T$), and the embedding dimension of Transformer be $D$.
- *Conjecture: increasing the embedding dim $D$ can effectively reduce the requirement on the number of layers but may not decrease the total number of parameters needed.*
- *Theoretical evidence (Comparison with [3]).* In our paper, we consider embedding dim $D_1\sim M d$, which is notably independent of the sequence length $T$ and almost the minimum embedding dim required to extract $M$ $d$-dim memories simultaneously. In contrast, in [3], the authors consider a large embedding dim $D_2\sim T^2 d$ ($D_2\gg D_1$ due to $T\gg M$), and prove that, in this setting, 1-layer Transformers are sufficient to perform sequence modeling with sequence length $T$, whereas our results require multiple layers (for Task II).
- *Empirical evidence:* the "scaling law" (see Fig. 6 in [4]) partially supports this conjecture: 1-layer Transformes indeed demonstrate the scaling law. However, with the same total number of parameters, 1-layer Transformers (with larger $D$) slightly underperform compared to 6-layer Transformers (with smaller $D$). Additionally, [5] highlights that for solving iGSM math problems, given the same total number of parameters, deep Transformers (with smaller $D$) perform much better than shallow Transformer (with larger $D$).
We will discuss this open issue in Section 7 in the revised version and leave further analysis for future work.
[1] Yun et al. Are transformers universal approximators of sequence-to-sequence functions? (ICLR 2020)
[2] Bai et al. Transformers as statisticians: Provable in-context learning with in-context algorithm selection. (NeurIPS 2023)
[3] Jiang and Li. Approximation Rate of the Transformer Architecture for Sequence Modeling. (2023)
[4] Kaplan et al. Scaling Laws for Neural Language Models. (2020)
[5] Ye et al. Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process. (2024)
---
Rebuttal Comment 1.1:
Comment: Thanks again for your valuable time and effort in reviewing our work!
We are wondering if our responses address your questions or concerns.
We are happy to try to address any other comments in the time remaining.
---
Rebuttal Comment 1.2:
Comment: Dear authors,
I have read the rebuttal. Thank you for your work and the answers to my concerns. I am satisfied with the authors' reply and will keep my score.
Best,
Reviewer xF1Y
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for the positive feedback of our response. We appreciate your support very much! | null | null | Rebuttal 1:
Rebuttal: ### **Global Response to All Reviewers.**
- First, we sincerely thank all the reviewers for their appreciation of our results, i.e., theoretical analysis of the expressive power of Transformer for sequence modeling. Our analysis provides valuable insights (also supported by experiments) into the underlying mechanisms of Transformer components, including:
- The distinct roles of the number of layers ($L$), the number of self-attention (Attn) heads ($H$), and the width of feed-forward network (FFN) layers ($m$).
- The different roles of Attn layers and FFN layers.
- The functionality and necessity of dot-product (DP).
- The efficiency of relative positional encoding (RPE) in modeling long-range correlations.
- We also express our gratitude to the reviewers for their comments and suggestions for improving our paper. In the revised version, we will correct all typos, include the proof sketches, provide complete experimental settings and results, and incorporate the discussions with the reviewers.
- The attached PDF reports a new experimental result addressing Reviewer Fdzw's Question 2 regarding the tightness of our bounds.
- We have addressed each concern raised by the reviewers through separate responses provided below.
Pdf: /pdf/5566c21ccea97441db76a8ea4a5ee7f36df36e64.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Private and Personalized Frequency Estimation in a Federated Setting | Accept (poster) | Summary: This paper studies the problem of personalized frequency estimation in federated learning. The authors propose both private and non-private algorithms based on clustering and Good-Turing estimators and demonstrate promising empirical results on real-world datasets.
Strengths: 1. The problem of data heterogeneity is very important in federated learning.
2. The proposed algorithms are practical and can be applied to large-scale datasets.
3. Empirical results demonstrate improvement over FedAvg and MAML baselines.
Weaknesses: 1. The cluster assumption requires choosing the cluster number $K$.
2. KL divergence may not work well for sparse distributions, and this is possibly why the authors need to assume $P_c[v]$ is constant for all $v$ in Theorem 5.1. The authors motivate this work by estimating the frequency of words, but they are usually very concentrated over a very small fraction of words.
3. In Theorem 5.1, the minimum KL divergence is at least order $d$ which seems restrictive at first glance, especially considering the number of words could be of order $10^4-10^5$. Also, the $k^2$ in line 266 seems to be $K^2$.
4. The $\log(1/\delta)$ in the privacy guarantee in Theorem 5.2 can be pretty large when $\delta$ is very small. Since normally we require $\delta\sim o(1/n)$ where $n$ is the number of users, this introduces a $\log n$ factor to the privacy guarantee.
5. In the experiments, the authors only show results for $\varepsilon=15$.
6. Relevant papers to cite and discuss:
[1] Ozkara, Kaan, et al. "A statistical framework for personalized federated learning and estimation: Theory, algorithms, and privacy." International Conference on Learning Representations (ICLR), 2023. OpenReview. net, 2023.
[2] Liu Y, Suresh A T, Yu F X X, et al. Learning discrete distributions: user vs item-level privacy[J]. Advances in Neural Information Processing Systems, 2020, 33: 20965-20976.
[3] Liu Y, Suresh A T, Zhu W, et al. Algorithms for bounding contribution for histogram estimation under user-level privacy[C]//International Conference on Machine Learning. PMLR, 2023: 21969-21996.
[4] Huang, Ziyue, Yuting Liang, and Ke Yi. "Instance-optimal mean estimation under differential privacy." Advances in Neural Information Processing Systems 34 (2021): 25993-26004.
=====Update=====
Increased my score to 5 after the rebuttal
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does the choice of $K$ affect the performance? Do the authors have a suggestion on how to choose $K$ in practice?
2. In line 256 ``Without any assumptions on the relationship between the user data distributions the best one can do is to locally estimate each user’s distribution'' I believe this should be true, but could the authors provide a brief justification for this claim?
3. Why did the authors not consider using privacy accounting methods such as RDP accountant which may give much tighter privacy bounds than the composition theorem of approximate DP?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors addressed the limitations and negative impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback. To address concerns, we add new experiments with different values of $\varepsilon$, number of local points $m$, and plots showing the sparsity of the estimated cluster centers in practice. We also point to our ablations on $K$ in our paper, and clarify concerns with Thm. 5.1, 5.2. We thank you for the relevant citations, and we will use the 1 extra page in the final version to incorporate the new experiments, clarifications, and citations. **Please let us know if your concerns are addressed, and if so, we would be grateful if you might raise your score.**
>> **Different values of $\varepsilon$ and local data points $m$**
We show performance evaluations under varying values of $m$, and fixed privacy budget of $\varepsilon=15$ (Fig 3 in PDF), and varying $\varepsilon$ for the values of $m$ in our paper (Fig 2 in PDF). We note that while the performance of each method drops with reducing $\epsilon$ and $m$, our algorithm has better privacy-utility tradeoffs and is more sample efficient. We also note that these privacy parameters are for a relatively small dataset (10k to 50k users). If the population size was, say, 10x larger, the privacy parameters would be 10x smaller for the same kind of results. In practical FL settings, we would often expect the population sizes to be much larger and thus the epsilons to be proportionately smaller.
>> **Choice of number of clusters $K$**
In Fig. 3a (Sec 6) and Fig. 5 (App B) we show how the choice of $K$ (number of clusters) affects the final performance of our algorithm. As expected, the performance improves as $K$ roughly matches the natural heterogeneity/clusters present in the data distribution. In practice, we can use hold out to validate $K$. See our global response for more discussion on this.
>> **Privacy accounting via RDP**
We do not use composition results for approximate DP. All our composition results rely on zero concentrated DP (zCDP) (Bun and Steinke [16]) for privacy accounting of user-level DP. We then convert the zCDP guarantees to approximate DP in Thm. 5.2. Note that zCDP is a stricter notion of privacy than RDP since it requires Renyi divergence of multiple orders to be simultaneously bounded. Further, it enjoys the same benefits as RDP for composition, since zCDP’s sequential composition is also asymptotically tight for repeated applications of the Gaussian mechanism [16].
>> **[New Experiment] Sparsity of $P_c[v]$ in practice**
In practice, the estimated cluster centers are sparse, as we see on Reddit and StackOverflow, where about 10% of the tokens carry 90% of the probability mass (Fig 4 in 1-page PDF). The assumption we make in Thm. 5.1 on P_c[v] being lower bounded by a constant is needed solely for the theoretical convergence guarantees for clustering. This is needed in theory because KL divergence does not obey triangle inequality, but $l_2^2$ distance does, and for KL between estimated to be roughly tracked by $l_2^2$, we need $P_c[v] = \Omega(1)$.
>> **Dependence on $d$ in the KL separation condition in Thm. 5.1**
There is a typo in the condition on the minimum cluster separation $\Delta$. The correct condition is $\Delta = \Omega(k^2 + \frac{k^3 d}{n})$, as derived in the proof of Thm. 5.1 in App. D.1 (below L831). We apologize for this typo and will correct it in the final version. Note that the number of clients $n$ appears in the separation condition on $\Delta$, and the convergence is determined by the condition number. Thus, the condition on KL divergence only scales as $poly(k)$ when $n=\Theta(d)$. For the datasets we use, the number of users is sufficiently large ($d/n < 2$ for all).
>> **$\log(1/\delta)$ term in Thm. 5.2**
Yes, for the privacy guarantee to be meaningful $\delta$ needs to be $o(1/n)$. But, note that the guarantee in Thm. 5.2 is $\sqrt{\log(1/\delta)}$, which for the datasets of our size (around $10^5$), translates to $\approx \sqrt{5}$.
To elaborate more on the term, the result in Thm. 5.2 is obtained by first composing the privacy mechanisms in each clustering iteration with zero-concentrated DP (zCDP) and then converting the zCDP guarantees into approx. DP using Lemma 3.6 in Bun & Steinke [16]. The $\log (1/\delta)$ term is an artifact of this conversion. On the other hand, if we were to instead compute approx. DP guarantees (instead of zCDP) at each clustering iteration and then use the advanced composition theorem to compose them, we would still end with a $\sqrt{\log(1/\delta)}$ term from the composition. So, this term is unavoidable either way.
>> **L256 Without any assumptions on the relationship … the best one can do is locally estimate**
Consider the following worst-case setting where there is no mutual information between the training datapoints sampled by any pair of users. Let each user’s true token distribution $Q_u$ be supported over subsets of tokens that are disjoint for any pair of users. Further, for any user $u$ and token $v$ with $Q_u[v]>0$, the value of $Q_u[v]>0$ is independent of the token distribution of other users. In such a case, no estimator can improve the average estimation error through collaboration/sharing of data. The optimal estimator is purely local. We will add this example to the paper.
>> **Missing citations**
Thanks for pointing out these references, we will cite & discuss them. We believe our contributions go beyond these: for e.g., while [2, 3, 4] look at private histogram estimation, they concern with algorithms that estimate a single global quantity (e.g., mean histogram) whereas ours deals with the nuances of privatizing personalization algorithms that estimate a separate distribution for each user, and learn multiple global centers. Further, [1, 4] conduct their analysis in the $l_2$ metric, whereas we look into the more appropriate KL distance for distribution estimation. KL is harder to analyze and develop algorithms for, given that it does not even obey the triangle inequality.
---
Rebuttal Comment 1.1:
Title: Request for discussion
Comment: Dear reviewer,
Apologies for bothering you! Since we only have one day left in the discussion period, we would be grateful and would sincerely appreciate if you could respond to our rebuttal, leaving us enough time to address any remaining questions.
Thanks,
Authors | Summary: This work introduces a private and personalized histogram estimation approach based on Good-Turing estimates, when the clients are clustered. Theoretically, they provide performance guarantees for different settings when cluster center is known and cluster assignments are known. In practice, they note that neither cluster centers nor assignments are known, hence inspired by theoretical results from the simpler settings they provide practical algorithms. Later they also introduce differential private version of the proposed methods whose performance can increased further with a particular initialization method. For experiments they consider three tasks where the proposed method outperforms the competition.
Strengths: -Private and personalized histogram estimation is an important problem.
-The paper is easy to understand, presentation and justification of the methods are well organized.
-The experiments are conducted on multiple datasets, the method consistently outperforms others.
Weaknesses: - Lack of comparison to local only training and other personalized estimation methods.
- The main results are rather in simpler settings, the analysis mostly builds on top of previous work; hence, theoretical contribution might be limited (compared to methodological).
- From the current experiments, performance dependence on variables such as number of clusters, clients, data samples per clients, heterogeneity etc. is not clear. I would suggest doing more ablations studies possibly using e.g. synthetic datasets to understand these.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and a positive assessment of our work. To address your concerns, we present new ablations of our algorithms under different values of per-client training data points $m$, local only training baselines, highlight existing ablations in our paper on the number of clusters, and address the point of analyzing a simpler setting. We will add the new results and discussion on the above points in the extra page we get for the final version. **Please let us know if your concerns are addressed, and if so, we would be grateful if you might raise your score.**
>> **[New experiment] Lack of comparison to local only training and other personalized estimation methods.**
In Fig 5 (1 page PDF), we find that the local only training performance where the clients never collaborate is worse than our approach (for a privacy budget of $\epsilon=15$), even as we increase the number of local data points $m$. Further, comparing this result to Fig 2 (1 page PDF), we find this trend to hold even under a very low privacy budget of $\varepsilon=2$.
>> **The main results are rather in simpler settings, the analysis mostly builds on top of previous work.**
While our analysis technique adapts results on clustering and Good-Turing estimation to our setting, its main goal is to serve as a guide that formally validates our algorithmic choices, for e.g. the local finetuning algorithm (in Eq. 2), or using the average of local Good-Turing estimates for estimating the cluster center, as opposed to the typical average of empirical estimates that is used in the K-means algorithm (see our comparison in Thm. 4.3). In the FL and privacy literature, it is also common for algorithms to be analyzed under such models of data heterogeneity (e.g., Cummings et al. [22], Cheng et al. [20]) that mimic real world distributions.
>> **[New experiment] From the current experiments, performance dependence on variables such as number of clusters, clients, data samples per clients, heterogeneity etc. is not clear.**
**Number of clusters**: In Fig. 3a (Sec 6) and Fig. 5 (App B) we show how the choice of $K$ (number of clusters) affects the final performance of our algorithm. As expected, the performance improves as $K$ roughly matches the natural heterogeneity/clusters present in the data distribution. In practice, we can cross-validate to find $K$ and use $K=10$ (close to the optimal choice for all datasets), for all our runs.
**Number of clients and privacy budget**:In the attached PDF, we show performance evaluations under varying values of $m$ and fixed privacy budget of $\epsilon=15$ (Fig 3), and varying $\epsilon$ for the values of $m$ in our paper (Fig 2). We note that while the performance of each method drops with reducing $\epsilon$ and $m$, our algorithm has better privacy-utility tradeoffs and is more sample efficient
**Heterogeneity**: Given that the natural datasets we consider already have varying levels of heterogeneity (see discussion in global response on a different choice of $K$ being optimal on each dataset) and allow us to investigate the key phenomenons, we see no compelling reason to empirically study synthetic data. We are also not aware of a sufficiently faithful model of data for our problem (and even for our own data model there are many parameters, for which it is unclear how to choose synthetic values). At the same time we welcome and will certainly consider specific suggestions about synthetic data experiments that would make our work more informative.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal, I acknowledge reading the rebuttal and will keep my score.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer,
Thank you for your acknowledgement and positive assessment of our work. We will definitely add the new ablations on data samples per client, different privacy budgets and related discussion to the main paper.
Since there is still one more day, we are also wondering if there would be some other discussion or evidence that we can provide in this period to help improve your evaluation of our paper further. Please let us know. Thanks a lot!
Thanks,
Authors | Summary: The paper presents both non-private and private per-user histogram estimation approaches that can be applied to problems such as next word predictions. The approach relies on iteratively clustering among different users to find similar user subgroups and then fine-tuning within each user to get user-specific estimations. The privacy guarantee is achieved by privately finding clusters using all users data and is quantified by the notion of joint-differential privacy. The paper theoretically analyzes the sub-optimality of their algorithm and empirically evaluate the performance by comparing to several baselines.
Strengths: - The paper empirically demonstrates that the proposed method have better performance than the baselines.
- The authors also conducted thorough ablation experiments to explain the importance of various sub-parts of the algorithm.
Weaknesses: - The largest concern to me is the presentation clarity of the paper.
- In section 4, the authors seem to present many types of assumptions regarding possible user data distributions, but the sub-titles are somewhat confusing as it tries to explain model, learning, different clustering cases, histogram estimators and also theorems. It would be much clearer if more relevant parts are put together.
- It would be easier to understand why these assumptions are stated if in the experiment section the authors could include whether certain dataset fall into which category of the assumed data structure/distributions, and in the algorithm section how the algorithms are adapt to these cases of data structure/distributions.
- It was also unclear to me regarding the final pipeline (the 'putting it all together' section). For example, to my understanding Algorithm 3 seems to be the 'main' algorithm where the cluster centers are determined, then to get user-specific estimations (the $\hat{Q}_u$s) do users fine-tune in their own ways or follow Algorithm 1 or something else?
- The novelty seems not very strong as the clustering part seems to be the major contribution but the papers apply standard K-means like clustering algorithms except changes to KL as the target is to minimize some KL as in Equation 5.
Technical Quality: 3
Clarity: 1
Questions for Authors: - In section 4 one of the assumption is 'the user histograms are well concentrated along each token’s marginal', what does 'each token’s marginal' means and can it be described in mathematical language in terms of 'well concentrated'?
- Since the authors defined their own DP relaxation definition, it will be helpful if the authors could clarify:
- Is there a conversion between scale of $\epsilon$ to the pure or approximate DP case? The $\epsilon=15$ value reported in the evaluation section seems to be a quite large number for meaningful privacy protection, how should it be interpreted under the $\rho$-zCJDP notion?
- I did not completely understand the $\rho$ parameter in Definition 3.2. Is it another relaxation defined if it's defined as the divergence term $D_{\alpha} < \rho \alpha$?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: see above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback! To address concerns we provide an algorithm’s box for the end-to-end algorithm, clarify that assumptions in Sec 4 are only for theoretical motivation, and results in Sec 6 are on real datasets where they needn't hold. We clarify our DP definition is not relaxed at all, in fact it is more stringent than RDP and the final guarantees in Thm. 5.2 are for the approx DP definition. We will use the 1 extra page in the final to incorporate them. **Please let us know if your concerns are addressed, and if so, we would be grateful if you might raise your score.**
>> **Final alg in Sec 5**
In the 1 page PDF (Fig 1), we show an algorithm box for the end-to-end algorithm that uses Alg 5 for private initialization of cluster centers, and Alg 3 for clustering, where the re-centering step uses Alg 4. Each user finetunes their corresponding cluster center using Eq. 2.
>> **Assumptions in stylized model and concentration of user histograms**
- In Sec 4, to guide algorithm design we introduce a model where the user distributions are distributed as a mixture of Dirichlets. This model has two latent variables: cluster memberships and cluster centers. For simplicity, we analyze estimators under increasing levels of knowledge in this model: cluster center known -> cluster center unknown but cluster membership known -> both cluster center and membership unknown.
- The simpler analysis in a clean model leads us to three key findings (end of Sec 4) using which we propose the clustering algorithm (Alg. 3), with the recentering step using Good-Turing estimates (Eq. 4), and finetuning given by Eq. 2.
- In Sec 6, we evaluate our proposed algorithm in the most general setting, where cluster centers/memberships are unknown. Here, we also do not assume anything about the distribution of $Q_u$’s.
- Regarding the **well-concentrated** assumption: We only need that w.h.p., for all users $u$ in cluster $c$, and any token $v$, $|P_c[v] - Q_u[v]| = O(1)$. This is naturally satisfied by our Dirichlet assumption since the $\alpha$ parameter of Dirichlet allows us to analyze our setting with varying degrees of cluster-level concentration where $|P_c[v] - Q_u[v]| = O(1/\sqrt{\alpha})$.
>> **Novelty**
The novelty of our Algorithm lies beyond the proposed K-means style clustering method. Following is a set of novel contributions we make, where the final four claims are verified empirically in Sec 6 and defined, analyzed formally in Sec 5.:
- Introduce the personalized and private frequency estimation problem in KL divergence.
- Contrary to popular beliefs (Cheng et al. [20], Wu et al. [76]) in the FL community, we show that there exists practical FL problems where finetuning a single global model (FedAvg+FT) is not sufficient for personalization.
- Specifically to improve clustering error in terms of KL divergence (as opposed to squared loss in k-means) we motivate and propose a Good-Turing based center estimator (Sec 4).
- Propose a private version of the clustering algorithm with approximate differential privacy guarantees (Thm. 5.2).
- Propose a novel data-dependent and yet private initialization for clustering (Alg. 5), which is crucial to improve clustering performance (Fig. 3b). Prior works on private-clustering (e.g., Chang et al.: Locally private k-means) do not analyze expectation maximization (EM) algorithms given their dependence on initialization which is harder to privatize (most initializers for EM directly output a set of points from the dataset (Arthur et al. [5])).
- Propose a practical 2-stage algorithm for the size-heterogeneous setting (Section 5). Naive algorithms in this setting typically hurt the privacy-utility trade off since hiding the participation of data-rich users now becomes very hard when other users have less data. We show that our proposed algorithm is able to improve estimation error from including both data-poor and data-rich users in the collaboration, without hurting the privacy utility trade off (Fig 4).
>> **Our final guarantees are in terms of approx DP and Def 3.2 is stricter than RDP**
Our final guarantees in Thm 5.2 are in terms of user-level $(\varepsilon, \delta)$-DP which is equivalent to JDP in our setting. We only use the $\rho$-cJDP (stricter than RDP) in Def 3.2 for easier and tighter composition (than approx. DP), and provide final guarantees in terms of user-level approx. DP.
**In billboard model, JDP=user-level DP**: Note that our setting is akin to the billboard model (Hsu et al. [41]) where any $(\varepsilon, \delta)$ user-level DP (Dwork [25]) algorithm is also $(\varepsilon, \delta)$-JDP. Similarly, if the algorithm satisfies user-level $\rho$-zCDP (Bun et al. [16]) then, it is also ρ-zCJDP (Def 3.2).
**Final guarantees are for user-level DP**: We use zCDP for composition at the user-level, and then use Lemma 3.6 from [16] to convert zCDP to approx. DP in Thm. 5.2.
**Why use zCDP?**: Yes, zDP bounds the $\alpha$ Renyi divergence between the mechanism outputs on neighboring datasets by $\rho\alpha$. Thus, zCDP is a stricter notion than Renyi DP (RDP) since zCDP requires Renyi divergence of multiple orders to be simultaneously bounded. Further, zCDP enjoys the same benefits as RDP for composition, since zCDP’s sequential composition is also asymptotically tight for repeated applications of the Gaussian mechanism [16].
>> **[New Experiment] Evaluating performance at different privacy budgets**
In Fig 2 (1 page PDF) we evaluate our algorithm and other baselines under different privacy budgets. Each algorithm needs to satisfy ($\varepsilon, \delta$)-JDP (which is ($\varepsilon, \delta$) user-level DP). On the x-axis we vary $\varepsilon$ and fix the same $\delta$ reported in the paper. We find that our approach continues to outperform baselines. While the clustering baseline IFCA and MAML suffer from severe performance degradation under low privacy budgets, our approach observes a much better privacy-utility trade off.
---
Rebuttal Comment 1.1:
Title: Reply to Author Rebuttal
Comment: Thanks for the explanations especially on the DP guarantees, it makes sense to me.
- Regarding clarity on Section 4: I think ``cluster center known -> cluster center unknown but cluster membership known -> both cluster center and membership unknown'' makes sense, but the current section 4 lacks such structure which makes it difficult to understand which result/findings is linked to which setting. Since the authors mention Section 4 is for theoretical motivation and Section 6 tests the most general setting only, I would expect the authors to highlight and elaborate on the key findings that lead to the design of the actual algorithm.
- Regarding novelty and experiments: I appreciate the new experiments added, but my main concern still remains which the paper proposes an algorithm that targets to minimize the KL term between users' histogram and the estimated cluster center histogram distributions, which is also the metric to evaluate in experiments, but the baselines are not designed for the same problem. For example, FedAvg, MAML and IFCA are federated/distributed deep learning frameworks that train better models in federated setting on regression/classification tasks, are gradient-based, and are not particularly privacy-preserving designed. I would expect authors to compare to more similar approaches in private frequency estimation such as [1, 2].
For the above reasons I would keep my score.
[1] Private Federated Frequency Estimation: Adapting to the Hardness of the Instance. Wu et al. NIPS 2023. https://arxiv.org/abs/2306.09396.
[2] The communication cost of security and privacy in federated frequency estimation. Chen et al. AISTATS 2023. https://arxiv.org/abs/2211.10041.
---
Reply to Comment 1.1.1:
Title: Choice of Baselines and New experiments with private frequency estimation baselines
Comment: Thank you for your comments and suggestions for private global frequency estimation baselines. We add new experiments comparing our method with suggested baselines [1, 2], and find that for all ranges of the privacy budget our approach outperforms [1, 2], highlighting the need for learning clusters and cluster-level frequency counts (as opposed to global counts) to reduce the estimation error in KL, per-client. We also clarify that the baselines (FedAvg/MAML/IFCA) we used in our paper were originally proposed for more general settings, and we adapt each of these baselines in a fair manner for a meaningful comparison. We will add these experiments and discussion to the final version. **Please let us know if this addresses all outstanding concerns, and if so, we would be grateful if you might raise your score.**
___
**Adapting FedAvg/MAML/IFCA, which are proposed for convex optimization (superset of our setting which is to minimize KL distance):** We will make this more clear, but please note that all three baselines have been proposed and analyzed as general algorithms to solve convex objectives (our metric of KL divergence is also convex) in the federated setting. We apologize if this is not clear, but for a fair comparison, we adapt the baselines specifically to our setting in the following way:
- FedAvg and MAML both optimize the KL distance (as opposed to squared loss) to learn a single distribution that is close to the empirical distributions of each user in KL distance (Eq. 3 in our paper). Note that we do not use gradient-based approaches for FedAvg, and instead use the closed-form optimal (in Eq. 3, proof in Appendix C).
- IFCA (Ghosh et al. [35]) is one of the most popular clustering baselines in FL, that was proposed for convex losses, and analyzed for k-means (squared loss), which we adapt to our setting by: 1) using the KL distance to the current cluster centers for determining the cluster affiliation of each user in a clustering round; and 2) estimating the new cluster centers by optimizing the KL objective, which is same as what we did for FedAvg/MAML but now only for clients in the cluster (Eq. 5 in our paper).
- Estimates from all three baselines are personalized (adapted) for each user using the same method as ours (Eq. 2), which is not gradient-based. Additionally, we also try other popular approaches like RTFA, gradient-based (Fig. 3d) and find our non-gradient based approach to work best for all three baselines. The local adaptation step for MAML also uses the same non-gradient based method.
___
**Comparison with private frequency estimation approaches [1, 2]**: Thank you for suggesting these baselines. Wu et al. [1] is based on the same algorithm (Count Sketch+Secure Aggregation) as Chen et al. [2], and mainly proposes a two-stage approach that adapts to the hardness of the problem instance (heterogeneity of the frequency counts). Both papers [1, 2] differ from our setting in the following ways:
1) They estimate a single global frequency count (FedAvg estimate in our case) in $l_1$ or $l_\infty$ norms, but our problem setting requires estimating personalized distributions for each user (and that too in KL divergence which is not a proper distance metric).
2) They mainly use CountSketch to encode the local frequencies to reduce the per-round communication cost.
_[1, 2] are concerned with communication cost:_ In the best case, when there is no communication constraint, their approach resorts to only using secure aggregation (which uses Gaussian mechanism in [1] and Poisson Binomial in [2]), and is thus same as ours which uses Gaussian Mechanism (Alg. 4). We are not concerned with reducing the communication cost, but if we were, it is straightforward to use the CountSketch algorithm at each round of clustering. Further, note that while CountSketch reduces the number of bits communicated by each client, the error suffered on each item's frequency (due to privacy noise) would not change at all. **So CountSketch can only improve communication complexity over our method and not privacy-utility tradeoff**.
_Similar to [2], our approach also adapts to the range/hardness:_ In algorithm 4, for private center estimation for the global center (in FedAvg) or cluster-level center (in Alg. 3), we use the common adaptive clipping technique which first uses some privacy budget to estimate the range of the true center to a small confidence interval. Then, the user-level estimates are clipped to this small confidence interval before running secure aggregation with the rest of the privacy budget. Thus, **the adaptive clipping we use is similar to the two-stage approach in [2], in the sense that both adapt to the hardness of the underlying instance.** We find this interesting and will add this comparison to the main paper.
___
**PLEASE SEE OUR CONTINUATION BELOW FOR OUR EXPERIMENTAL RESULTS** | Summary: The paper proposes a federated learning approach for frequency histogram estimation in a distributed setting. The proposed approach relies on first clustering users that have similar subpopulation distributions before performing the estimation in a privacy-preserving manner. Finally, the performance of the approach is also validated on three real-world datasets.
Strengths: The paper is well-written and the setting considered in the paper is clearly introduced. The challenges related to frequency estimation in the federated learning setting as well as the related work are also reviewed in a thorough manner.
One of the novelty of the proposed approach is the idea of clustering users first before performing the frequency estimation. The authors that the proposed approach offers strong theoretical guarantees and does not suffer from the same limitations as other methods from the state-of-the-art, such as the sensitivity to small datasets.
Based on the proposed approach, the authors have developed several variants of the personalized frequency estimation histograms, including for the private and no-private settings as well as for the size-heterogenous one. The private version of the algorithm combines several clever privacy-preserving subroutines in a clever manner.
The experiments conducted are quite extensive as they have been done on three datasets and have compared the approach to three different baselines.
Weaknesses: The paper focuses on next word prediction, which is a quite focused application. Instead, it would be more interesting to try to broaden the applicability of the approach, in particular by testing it on other type of datasets. The possibility to extend the framework developed beyond the KL-divergence should also be discussed.
The neighbouring notion considered for differential privacy seems quite strong in terms of the privacy guarantees provided but the authors failed to motivate why this is right notion to consider in the federated learning in comparison to record level differential privacy. The necessity of the Dirichlet assumption as well as the concentration of the users’ histograms should also be justified.
A discussion on the method that should be used to estimate the number of clusters K is currently missing in the paper. This is a crucial point as this parameter is likely to have an important impact on the success of the approach.
In terms of the structure of the paper, it ends quite abruptly lacks a conclusion summarizing the key findings of the paper and also discussing possible future works.
A few typos :
-« Baysian prior » -> « Bayesian prior »
-« privacy preserving analysis » -> « privacy-preserving analysis »
-« estimatation in KL divergence » -> « estimation in KL divergence »
-« For competetive estimator » -> « For competitive estimator »
-« LLoyd’s K-means » -> « Llloyd’s K-means »
-« differentially private » -> « differentially-private »
-« Privacy preserving noisy sum » -« « Privacy-preserving noisy sum »
Technical Quality: 3
Clarity: 2
Questions for Authors: What is the relationship of the privacy parameter rho compared to standard parameters epsilon and delta in differential privacy?
Please see the main points raised in the weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have a specific section discussing the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and a positive assessment of our work. To address concerns, we refer to plots in the paper that ablate $K$, clarify guarantees in Thm. 5.2 are in terms of the typical $(\varepsilon, \delta)$ approx. DP, and discuss the choice of the stricter user-level DP definition. We also explain the motivation to study next-word prediction, and the choice of the stylized model: mixture of Dirichlets. Thank you for pointing us to typos, we will correct them, add discussion on the above points and a conclusion section in the extra page we get for the final. **Please let us know if your concerns are addressed, and if so, we would be grateful if you might raise your score.**
>> **Choosing the number of clusters K**
In Fig 3a, 5 in our paper, we plot the test NLL when varying the number of clusters $K$ parameter for our clustering Alg 3. **Please see the global response for more discussion on this**.
>> **Converting cJDP to user-level DP**
Our final guarantees in Thm 5.2 are in terms of user-level $(\varepsilon, \delta)$-DP which is equivalent to JDP in our setting. We only use the $\rho$-cJDP (stricter than RDP) in Def 3.2 for easier and tighter composition (than approx. DP), and convert them to user-level approx. DP finally.
Our setting is akin to the billboard model (Hsu et al. [41]) where any $(\varepsilon, \delta)$ user-level DP (Dwork [25]) algorithm is also $(\varepsilon, \delta)$-JDP. Similarly, if the algorithm satisfies user-level $\rho$-zCDP (Bun and Steinke [16]) then, it is also ρ-zCJDP (Def 3.2). Thus, we first use zero-concentrated DP (zCDP) for privacy accounting and composition, and then use Lemma 3.6 from [16] to convert the zCDP guarantees to approx. DP in Thm. 5.2.
>> **User vs. record level DP**
We defined our problem in Sec. 3 with the goal of achieving personalized predictions for each user’s frequency counts, while satisfying user-level privacy. As you correctly point out, this is indeed stricter than item-level privacy, but it is also a common choice for privacy guarantees in cross-user federated learning (e.g., Cummings et al. [22], Levy et al. [52], Wang et al. [71]). In cross-user FL, e.g. learning to predict the next word on users’ mobile devices, it is natural to protect the participation of every user; as a user may type many different words that are correlated, hiding a single word may not be sufficient privacy protection. Indeed if we had the weaker item level DP protection with $\varepsilon = 2$ (say), the overall user-level privacy loss for a user that contributed $m=200$ words would theoretically be $\varepsilon \cdot m = 400$. Even if 10 of the words are correlated with sensitive information (e.g. related to their medical diagnosis), the $\varepsilon$ parameter increases by that factor. Our user-level guarantee ensures that even if the user were to change all 400 words they contributed, the models will not be significantly different.
>> **Using the mixture of Dirichlets model for our analysis in Sec 4**
In Sec. 4, we introduce the mixture of Dirichlets model with the goal of guiding algorithm design when the underlying population is heterogeneous, yet is structured. While each user has a different token distribution it is likely that there exists clusters of ``similar’’ users. This model motivates a setup where adapting a single global model for each user would be less effective than identifying clusters, learning a single model for the cluster and then adapting the cluster-level model for the user. We also observe this empirically in our experiments in Sec 6, where FedAvg+FT performs worse than our approach.
Note that the specific choice of Dirichlet for analysis is only so that we can easily derive the optimal Bayes estimator (Thm 4.2), which gives us our finetuning algorithm in Eq 2. For all other theoretical results in Sec. 4, 5 we only need each user’s true distribution to be concentrated along the cluster center, i.e., w.h.p., for all users $u$ in cluster $c$, and any token $v$, $|P_c[v] - Q_u[v]| = O(1)$. The $\alpha$ parameter of Dirichlet further allows us to analyze our setting with varying degrees of cluster-level concentration, since under the Dirichlet assumption $|P_c[v] - Q_u[v]| = O(1/\sqrt{\alpha})$.
>> **Why focus only on the next-word prediction problem in the FL setting? Extending to other datasets and losses.**
Large language models are next-token prediction models that can be used to solve tasks in natural language like math reasoning, coding etc. In FL, the performance of these next-token prediction models can be improved fairly across users when they are personalized (Hard et al. [38], Salemi et al. [66]). At the same time, since each user has few data points for purely local learning, global collaboration is needed to bias the local learning algorithm. Thus, we study frequency estimation (estimating the marginal distribution) as a first step towards private and personalized language models. We highlight how measuring the error in KL divergence and reducing privacy-utility trade offs requires algorithmic innovations (e.g., clustering with Good-Turing, data-dependent private initialization) that are specific to our problem and much required.
We provide an extended analysis on three real-world datasets with varying levels of statistical heterogeneity, each of which are a popular choice for analyzing next-token predictors in FL (Wu et al. [76]). Please let us know if there is a specific dataset that is characteristically different from these, and we would be happy to include experiments on it.
As we mention in Sec 1, we focus on error measured in KL as is common in language modeling (Hoffmann et al. [40]), and is equivalent to minimizing the negative log-likelihood of the sample. Distribution estimation in KL distance is also studied more generally (Drukh et al. [24], Acharya et al. [2]). We believe that extending KL to other losses is an interesting direction of future work, as opposed to a weakness.
---
Rebuttal Comment 1.1:
Title: Request for discussion
Comment: Dear reviewer,
Apologies for bothering you! Since we only have one day left in the discussion period, we would be grateful and would sincerely appreciate if you could respond to our rebuttal, leaving us enough time to address any remaining questions.
Thanks,
Authors | Rebuttal 1:
Rebuttal: We thank all reviewers for their feedback and list some of the new experiments we add to the 1-page PDF which we will incorporate using the extra page in the final version. We also address a common concern on the choice of number of clusters $K$ for our Algorithm 3.
___
## **List of new experiments in 1-page PDF.**
- Figure 1: Outline of the end-to-end algorithm that first privately initializes cluster centers and then runs private clustering. Finally, the learned cluster centers are finetuned by each user locally.
- Figure 2: We evaluate our method and baselines under different privacy budgets (varying $\varepsilon$), where each algorithm needs to satisfy $(\varepsilon,10^{-10})$-JDP. We find that our approach continues to improve over baselines even at low privacy budgets and has better privacy-utility tradeoff.
- Figure 3: We evaluate our method and baselines under different values of the number of local datapoints $m$ and find that our approach is more sample efficient than baselines.
- Figure 4: We plot the CDF plot for the distribution over tokens given by the cluster centers estimated on Reddit and StackOverflow. The plots suggest that 90% probability mass is present in 10% of the tokens indicating that the underlying cluster centers are indeed sparse. But note that the same is not true for the FedAvg estimate indicating that the global center is denser. This means that the discovered underlying clusters are sparse and diverse.
- Figure 5: We evaluate our method at $\varepsilon=15, \delta=10^{-10}$ privacy budget, compared with perfectly private local only training when local dataset size is varied. Even at higher values of $m$ our algorithm is able to learn a more accurate estimate by first collaborating with users to privately learn cluster centers and then adapting them locally for each user using their local datasets only.
___
## **Choosing the number of clusters $K$ in practice.**
In Fig 3a and Fig 5 in our original submission we plot the performance of our end-to-end algorithm as we vary the choice of number of clusters $K$ used in the private initialization (Algorithm 5) and histogram clustering (Algorithm 3). We use a hold out validation set on the Reddit dataset to identify the number of clusters $K=10$ to be the optimal choice (Fig 5). We use the same choice for the other two datasets: Amazon Reviews and StackOverflow and a post hoc reveals that $K=10$ is close to optimal for those as well (Fig 3a). In fact we note that the choice of number of clusters K is not very sensitive close to the optimal. Furthermore, we also note the varying levels of heterogeneity in the different datasets we evaluate as $K=8$ and $K=16$ does better for StackOverflow and Amazon Reviews respectively. In practice, we show that one can use a hold-out validation set to more accurately judge the choice of this hyperparameter (Fig 5).
Pdf: /pdf/1c4c20d76c8400d747a440000f7f01e1fa2542c9.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Tight Bounds for Learning RUMs from Small Slates | Accept (poster) | Summary: This paper studies the learning of Random Utility Models (RUMs). A RUM is a probability distribution $P$ over the set of permutations over $[n]$. Fix a permutation $\pi$ and consider any subset $T \subseteq [n]$, known as a *slate*. The winner of the slate (corresponding to $\pi$) is the highest ranked element in $T$ accordint to $\pi$. We can imagine fixing a slate $T$, and consider the distribution of the winner of the slate $T$ as we draw $\pi$ from $P$. In the problem of learning RUMs, one is given a set of slates, together with the corresponding distribution of the winner of that slate. The task is to reconstruct the distribution of the winner of every possible slate.
Specifically, this paper considers the problem when the example slates given to a learning algorithm are at most a size $k$. The main result of the paper is that having access to the winning distributions of all slates of size at most $O(k)$ is necessary and sufficient for the task of learning RUMs. The same result also holds if one has only sample access to the winning distributions of all slates of this size. The authors present two algorithms that achieve the upper bound: 1) a proper algorithm, that constructs a RUM in time $n^{O(n)}$, and thereafter, given any slate, returns an approximation to the winning distribution of that slate in polynomial time 2) an improper algorithm, that does not construct a RUM, but runs in time $n^{O(\sqrt{n})}$, and thereafter, given any slate, returns an approximation to the winning distribution of that slate in time $2^{O(\sqrt{n})}$. The latter algorithm thus, in time $n^{O(\sqrt{n})}$, approximates the winning distribution of the full slate $[n]$ in $l_\infty$, which is an improvement over the previous best running time of $2^{O(n)}$ for the same task, implicit in prior work. The latter algorithm can also be adapted to yield the following guarantee: given a prespecified slate $T \subseteq [n]$, one can approximate the winning distribution of $T$ in $l_\infty$ in time polynomial in $n$, by querying slates of size at most $O(|T|/\log|T|)$. The authors also show that any algorithm that only accesses slates of size $o(\sqrt{n})$ can not successfully learn the winning distribution of the full slate. This shows that their earlier two algorithms are optimal in the maximum size of the slates they access. Finally, the authors define fractional versions of two classic problems in the intersection of combinatorics and computer science: $k$-deck and trace reconstruction. Using their techniques for learning RUMs with small slates, they are able to obtain new results for the fractional versions of both these problems (similar quantitative results would be major breakthroughs for the original problems).
The upper bounds rely crucially on results by Sherstov in 2008, and on the state-of-the-art approximation of the AND function by low degree polynomials due to Huang and Viola, 2022. The lower bound is a reduction to a lower bound on the approximation of the AND function by low-degree polynomials, due to Bogdanov et al. 2016.
Strengths: This paper considers a natural constrained formulation of the problem of learning RUMs on $n$ elements: algorithms that are only given input access to slates of size at most $k$. For such algorithms, what is the smallest $k$ that is both necessary and sufficient? Satisfyingly, the authors give a compete answer to this question: $k=O(\sqrt{n})$ is necessary and sufficient. Given the pre-existing technical machinery by Sherstov, 2008, Huang and Viola, 2022 and Bogdanov et al. 2016 (albeit for problems not directly related to learning RUMs), the connections to RUMs are not too difficult to read and understand, and I found them innovative and cute. I also liked the applications of the the authors' techniques to obtain novel bounds for relaxed versions of classical hard problems in computer science. While I am not closely familiar with the literature on these problems, I imagine the fractional definitions of these problems given by the authors, together with their bounds, might be of interest to people who study these problems. Overall, I find the authors' work to be a strong and compelling study.
Weaknesses: If I am to nitpick: while the technical narration of the problem of learning RUMs given by the authors is clear (I admit that I did not even know this problem before reading the paper, but could understand almost all of the paper), I would have liked to see a little more story-building and motivation around this problem in the introduction. I felt that the authors dived into their technical contributions rather too soon. I would also have liked to see the authors elaborate more on how their work compares to prior work by Chierichetti et al. 2021, 2023. As of now, these works, while seeming to be most relevant, are only cited in passing in the related work. Also, I feel that a few more references could be provided with respect to variations of the $k$-deck and trace reconstruction problems that have been considered in the literature.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) The authors do show in Observation 13 that having access to slates of size exactly $k=\Theta(\sqrt{n})$ is also not sufficient for learning the RUM. Might it however be possible that such a result holds for some $k > \Theta(\sqrt{n})$? Say I am given all the slates of size exactly $\Theta(n^{0.6})$?
2) Are similar fractional versions of the $k$-deck or trace reconstruction problems known? I suggest doing a literature survey to cite some works that have considered variants of these problems -- this might make these sections more complete and situate your definitions of the problems better within the literature. For example, here are two works I found with a preliminary google search: 1) Approximate Trace Reconstruction from a Single Trace. Xi Chen, Anindya De, Chin Ho Lee, Rocco A. Servedio, Sandip Sinha, 2022 2) Approximate Trace Reconstruction. Sami Davies, Miklos Z. Racz, Cyrus Rashtchian, Benjamin G. Schiffer, 2020.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors have satisfactorily addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the suggestions and feedback. In the revision, we will provide more story-building and motivation around the problem.
Regarding related works on RUMs, in [Chierichetti et al., ICML 2021] they take in input a RUM $R$ and their goal is to output a new RUM $R’$ whose support is only on a linear number of permutations, while maintaining a good approximation of $R$. Therefore, they consider a lossy-compression problem rather than a learning problem, as we do instead.
In [Chierichetti et al., AISTATS, 2023], they take in input the empirical winning distributions of small slates and their goal is to output a RUM consistent with such input slates. Crucially, they do not require their RUM to work well on slates outside of the input set. Instead, we consider a learning problem where the algorithms must provide good approximations on all the slates, even the large ones not given in input. We will clarify these better in the revision.
We now address the questions:
1. Unfortunately, the answer is no. Having access to all and only the slates of size exactly $O(n^c)$ for any constant $0<c<1$ is not sufficient to reconstruct the winning distribution on the full slate. In particular, Observation 13 entails that access to slates of size $O(\epsilon \cdot n)$ leads to an $\ell_1$-error of at least $1 - \epsilon$ on the full slate. Therefore, selecting $\epsilon = n^{c-1}$ yields the claim. We will mention this in the revision.
2. Thanks for your suggestion and the references; we will mention some variations of the problems to better situate our work in the literature.
Both papers you mentioned, [Chen et al., 2022] and [Davies et al., 2020], fall into the literature of approximate trace reconstruction. In such variation, one seeks to approximately reconstruct the single hidden string $x$, up to some error in edit distance. In our version we have instead a hidden distribution over strings, rather than a fixed string.
Some results of the first paper you mentioned, [Chen et al., 2022], consider the average-case trace reconstruction. The works in this literature are maybe the most related to our version. However, our problem differs from such variations in two aspects. 1) in average-case trace reconstruction the hidden string is sampled from a uniform distribution, instead, we consider arbitrary distributions, and 2) in average-case trace reconstruction, one first samples a string $x$ and then each trace is obtained by running the same string $x$ through the deletion channel. Instead, in our fractional trace reconstruction, to collect a new trace we first sample a new string $x$ and then run it through the deletion channel. Note that the average-case trace reconstruction is not a generalization of trace-reconstruction, while our fractional trace-reconstruction is a true generalization.
We will add these comments in the revision.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: Thank you for the response. Please do include the discussion from your response, along with the references, in the updated version. I maintain my score of 8, and believe this work should be accepted. Great work! | Summary: This paper studies the problem of learning a Random Utility Model with limited information. A RUM is a distribution on the symmetric group on $n$ letters and a slate is a nonempty subset of the universe of letters. The paper studies the problem of learning the RUM given access to the probabilities that a given letter $s$ is most preferred by a sample from the RUM out of all of the elements in a given slate $S$. The authors apply classical techniques to derive an information theoretic upper bound for the necessary size of a slate in order to learn the RUM to additive error $\epsilon$, which is complemented by a matching lower bound at the end of the paper. They then provide two algorithms bounding the sample complexity of learning the RUM with such slates: one relying on solving a linear program requiring $n^{O(n)}$ samples and another algorithm relying on polynomial approximations for the Boolean AND. The latter algorithm is then applied to demonstrate that large slates can be used to learn an RUM in polynomial time to constant accuracy. The paper concludes with applications of their techniques to two other problems in the intersection of combinatorics and CS theory.
Strengths: The paper lays out a clear and interesting problem and provides a fairly complete solution using classical results in Boolean analysis in novel ways. The solution is thorough and the presentation is clear and compelling.
Weaknesses: The main weakness in the paper is the exponentially large number of slates required by the algorithm in order to learn the RUM. The simulation lemma is able to reduce this to polynomial at the cost of increasing the size of the slates, but the time is still exponential in the error. It would be nice to have a clearer idea of the landscape with respect to if polynomially many slates are sufficient even in a polynomially smaller slate setting. The authors also heavily use prior technical contributions and it might be useful to better delineate the core technical contributions of the present paper.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Is there some middle ground in the size of the slates, say $\Theta(n^c)$ for $1/2 < c < 1$ that allows for poly algorithms in approximating the RUM? As it stands there is quite a gap between the exponential algorithms in the case of $\Theta(\sqrt{n})$ and the poly algorithms in the case of $\Theta(n / \log n)$.
2. Are there weaker notions of learning the RUM that allow for either smaller slates or more sample-efficient algorithms? For example, I could imagine that some of the sample complexity is coming from learning the precise order of very low-preference items in high probability according to the RUM. If one were to only care about properties of the RUM relevant to some notion of the most preferred items, can the bounds be improved?
3. It would be nice to rigorously demonstrate that the algorithm in Theorem 5 actually can be made to run with $2^{O(n)}$ slates as opposed to $n^{O(n)}$. It is poor form to simply state this as a footnote, as future authors may wish to cite this result without being forced to rewrite your entire proof.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations have been adequately discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback and suggestions.
Exponential time: please see the general comment.
On a high level, the main technical contributions are: (i) relating the approximation degree of the AND function to the slate size to learn a RUM (Theorem 9 and Theorem 12), (ii) relating the $\ell_1$-norm of the coefficients to approximate the AND function to the query time of an algorithm for RUM learning (Theorem 9 and Corollary 10), and (iii) deriving, from the lower bound for RUM learning, lower bounds to fractional versions of other well studied problems (Theorem 14 and Theorem 15). The proofs are non-trivial and the techniques we use come from seemingly unrelated areas such as cryptography and Boolean function approximation. We believe that connecting such distant areas to RUM learning is a contribution by itself. In the revision, we will ensure that our contributions are more clearly highlighted.
We now answer the questions in order:
1. This is an interesting open problem. We currently do not know the optimal slate size that allows for polynomial time simulation algorithms. (We mention this problem in the Conclusions at line 358.)
2. This is an excellent point. Our problem currently asks for a good approximation on *all* possible slates. It is possible that a PAC-learning variant of the problem (where, say, the testing and/or the learning phases have to use slates coming from a distribution) might be easier, both in terms of running time and slate size. We will mention this as a possible future direction.
Regarding the point of focusing only on the most preferred items. If we consider the task of just estimating the winning probability of the most preferred item, then our lower bound (Theorem 12) answers this in the negative: there is one item for which we cannot say if its winning probability is $\ge 1-\epsilon$ or $\le \epsilon$ in the full-slate. Thus, the slate size cannot be improved in this context.
3. Thank you for the suggestion and apologies for the lack of details; we will add a rigorous proof in the revision. The high level idea is that, instead of solving the primal LP (1) (that takes $n^{O(n)}$ time), we can solve the dual LP. The separation oracle of the dual can be solved in time $2^{O(n)}$ using an algorithm in [Chierichetti et al., 2023], therefore, employing the ellipsoid method provides the desired running time.
---
Rebuttal Comment 1.1:
Title: Thank you for responding
Comment: The authors have done a good job of responding to my questions and I maintain my recommendation to accept. The theoretical contribution and the new techniques used are of interest. | Summary: A random utility model is defined by a distribution over permutations of $1,2,\dots,n$. Given a non-empty subset $S \subseteq \{1,2,\dots,n\}$, an oracle stochastically returns which one in $S$ is the highest according to this distribution. This paper considers the problem of estimating a random utility model from winning probabilities for small $S$.
- This paper first proposes an algorithm that queries all $S$ with $|S| = \tilde{O}(\sqrt{n})$ and then estimates a RUM by solving an LP. The authors shows that this estimated model has small error even for large $S$. This algorithm takes $n^{O(n)}$ time.
- Next the authors consider an approach of expressing RUMs by polynomials. The proposed algorithm again queries all $S$ with $|S| = \tilde{O}(\sqrt{n})$. However, as the expression of an estimated RUM does not require $n!$ dimensions, both the estimation and prediction takes $n^{\tilde{O}(\sqrt{n})}$ time.
- The authors also provide a lower bound. That is, if two RUMs coincide on all $S \subseteq [n]$ with $|S| = O(\sqrt{n})$, the error for $S = [n]$ can be very large.
- The last contributions are on the fractional versions of the $k$-deck and trace reconstruction problems. The authors give lower bounds on the sample complexity for these problems based on the RUM lower bound.
Strengths: RUMs are an important decision making model in economics and statistics, and the problem of estimating it from data is significant. For this problem, this paper determines the size of queries (i.e., $|S|$) that is necessary for estimating a RUM by providing both upper and lower bounds. For the upper bound, the main technical contribution is to translate the result on the AND function by Sherstov (2008) to RUMs. For the lower bound, the authors construct a difficult instance by reducing it to the difficult instance on the AND function proposed by Bogdanovet al. (2016). These proofs are decently built on existing results on boolean functions. The lower bounds for fractional $k$-deck and trace reconstruction are also interesting.
Weaknesses: In my opinion, the largest weakness of this paper is the lack of practical insight. The main result claims that estimating a RUM requires $n^{\tilde{O}(\sqrt{n})}$ queries, which is unrealistically large for suggested applications such as web search results. Theoretically, this bound is meaningful; the upper and lower bounds coincide, so this order is tight. However, in practice, I do not know a realistic scenario in which we can query all size $\tilde{O}(\sqrt{n})$ subsets, and the proposed algorithm seems to be designed for the theoretical purpose. Actually, the authors do not provide any experimental result. I think there might be a more appropriate venue other than NeurIPS for this paper. The technical contribution of this paper is mainly to relate RUMs and the AND function, which seems to attract more audience in TCS conferences.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is there any possible future direction to develop the results of this paper for more practical applications?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The contribution of this paper is only theoretical and does not seem to have a large practical impact for now.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments. Regarding the computational complexity of the algorithms, please see the general comment. Also note that, while querying all the slates of size $O(\sqrt{n})$ is required to learn the complete RUM, if we are interested only in some target slates, our algorithm becomes more efficient as characterized in Corollary 10.
RUM learning is an important ML problem that is still not well understood theoretically. We believe our work is an attempt to fill this gap and would be of interest to the ML community, as evidenced by previous work [Farias et al., NeurIPS 2009, Soufiani et al., NeurIPS 2012, Oh and Shah, NeurIPS 2014, Chierichetti et al., ICML 2018, ICML 2021, Almanza et al., ICML 2022].
---
Rebuttal Comment 1.1:
Comment: Thank you for the feedback. Although I understand that the improvement made by this paper accelerates the running time so much, but I am still not convinced that the running time $n^{\tilde{O}(\sqrt{n})}$ is practical. Those papers about RUMs published in ICML and NeurIPS mainly considered a more restricted model expressing RUMs and proposed practical algorithms, which is a more promising direction for developing practical algorithms in my opinion. I keep the current borderline accept score for evaluating this paper's theoretical contribution. | Summary: The paper considers the Random Utility Model (RUM) problem and gives upper and lower bound on plate size.
RUM is a classic economic model that is used to understand user behavior by modeling choices from subsets of available items. In the RUM problem, there is a set of element $[n]$ and there is a probability distribution $P$ over the permutation of $[n]$. The algorithm could query a set of $k$ elements, and either observe the one with the highest rank (sampled from $P$) or the winning distribution on these $k$ elements. The paper seeks to understand the optimal value of $k$, such that the algorithm could figure out the permutation distribution (up to small error) given all plate of size $k$. The answer turns out to be $k = \Theta(\sqrt{n})$.
In particular,
1. The upper bound shows that knowledge of choices from slates of size$ O(\sqrt{n})$ allows approximating the choice distribution for any slate with high accuracy. Moreover, it gives an algorithm that learns RUMs efficiently: A proper learning algorithm with $n^{n}$ time and an improper learning algorithm with $n^{\sqrt{n}}$ time.
2. Lower bounds are derived, indicating that learning from slates smaller than $\sqrt{n}$ results in high prediction errors. These results also contribute to understanding the k-deck and trace reconstruction problems.
In terms of technique, the proofs rely on connections between RUM learning and the approximation of Boolean functions by polynomials. The paper leverages results from the approximation of the AND function to develop their bounds and algorithms.
Overall, the paper solves a meaningful problem and characterizes the optimal plater size. The drawback is that that query complexity and the runtime is exponential.
Strengths: The paper solves a meaningful problem and characterize the optimal plater size.
Weaknesses: 1. Query complexity and the runtime is exponential;
2. I think it needs some explanation on the query model. In particular, why the model can only see plate of size $k$.
3. The writing could be improved: for example, Line 58, our results are proved *by* observing; Line 95, the citation appears at a wrong place.
Technical Quality: 2
Clarity: 2
Questions for Authors: .
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments.
1. Please see the general comment.
2. We were motivated to consider this model by the observation that modern user interfaces work hard to present small and easily scanned slates for the user to consider; typical examples of such interactions are the 10 blue links of Web search, the 3 choices in Google local search, or the handful of movies in a Netflix carousel. Therefore, we allow access only to slates of size $2,\dots, k$ and our goal is to make $k$ as small as possible while providing good estimates to every slate.
3. Thank you for pointing out the typos; we will improve the writing in the revision.
---
Rebuttal Comment 1.1:
Comment: Thanks for your reply. I acknowledge that i have read the rebuttal. I will read other reviewers' comments and have discussion with them. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insights, and the time and efforts spent reviewing our work. We consider each comment carefully below, but let us begin by addressing a common concern regarding the computational complexity. While the asymptotic complexity of the algorithms is high, there are some interesting special cases for which the algorithms are practical.
1. Our learning algorithm is able to infer the winning distributions of slates up to size $\sim k^2$ by looking at the $\sim n^k$ slates of size at most $k$. For moderate values of $k$, e.g., constant $k$, this algorithm is practical and is a significant improvement over the trivial $\sim n^{k^2}$ algorithm.
2. For modest values of $n$, e.g., learn the winning distributions over the $n$ restaurants in a town, our algorithm only requires $\sim 2^{\sqrt{n} \lg n}$ queries to approximate the full-slate, while the previous best algorithm needs $\sim 2^{n}$ queries.
In general, we view our results as the necessary first steps towards understanding the learnability of RUMs, which has been a long-standing research question.
Since the CFP welcomes “Theory” contributions in “learning theory”, we hope that the theoretical nature of our contribution will not be seen as a limitation. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
BitsFusion: 1.99 bits Weight Quantization of Diffusion Model | Accept (poster) | Summary: This paper proposes a weight quantization method to quantize the UNet of SDv1.5 to 1.99 bits while maintaining model performance comparable to the floating-point model. The approach includes a series of techniques such as bit-width allocation for mixed-precision quantization, low-bit diffusion model initialization, and a two-stage training pipeline. Extensive experiments demonstrate the effectiveness of the proposed method. Results show that the quantized diffusion model can even outperform the floating-point SDv1.5.
Strengths: 1. The paper is well-written and easy to follow.
2. The per-layer sensitivity analysis is comprehensive, considering both quality and contextual information, which is helpful for future research.
3. The effectiveness of the proposed approach is validated across multiple datasets and evaluation metrics.
4. The generated results look impressive.
Weaknesses: 1. It is well-known that QAT-based approaches can achieve superior performance at the expense of additional training effort. While this work involves training the weights via QAT, it does not include comparisons with other QAT-based methods such as Q-DM and TDQ.
2. It is unclear how the sensitivity threshold in Section 3 is determined and whether this hyperparameter generalizes to other models.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The setting of activation is not mentioned in the paper. Are the activations kept in floating point?
2. What is the dataset used for stage-II training? Suppose the training set of the corresponding dataset is used. In that case, claiming the model outperformed the floating-point SDv1.5 model may be inappropriate since the SDv1.5 model is evaluated under a zero-shot setting.
3. The average bits uses $\log(2^n + 1)$ bits for calculation. Is it necessary to incorporate an optimal encoding method such as Huffman encoding to reach this average bit-width?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Comparison with QAT-based approaches such as Q-DM and TDQ.**
A1. Thanks for the suggestion. We have compared the QAT-based approach LSQ and EfficientDM in Table 3 of the main paper. Here, we provide more results for Q-DM and TDQ on PartiPrompts with a CFG scale of 7.5. Compared to Q-DM with 2-bit weight quantization, our BitsFusion achieves a CLIP score of 0.3212, which is 0.228 higher than Q-DM. For a fair comparison with TDQ, which optimizes activation quantization through dynamic quantization, we also apply 8-bit activation quantization to our 1.99-bit BitsFusion. As shown in the table below, our BitsFusion achieves a CLIP score of 0.3193, surpassing TDQ with a score of 0.2907.
>| Methods | Weight Bit-width | Activation Bit-width | CLIP score |
| :---- | :---- | :---- | :---- |
| Stable Diffusion v1.5 | 32 | 32 | 0.3175 |
| Q-DM | 2 | 32 | 0.2984 |
| BitsFusion | 1.99 | 32 | 0.3212 |
| TDQ | 2 | 8 | 0.2907 |
| BitsFusion | 1.99 | 8 | 0.3193 |
---
**Q2. How to determine the sensitivity threshold in Section 3.**
A2. Thanks for the comment. We empirically set the sensitivity threshold to achieve an average of 1.99 bits. The sensitivity threshold only affects the average bits. It can be adjusted as needed for other models with different bit requirements, thus it is generalized to other models.
We would like to kindly note here the sensitive threshold is not a hyperparameter that influences the model training. Thanks for mentioning this. We will modify our manuscript accordingly to emphasize this distinction.
---
**Q3. Activation quantization.**
A3. Thanks for the valuable question. Since this paper is focusing on reducing the model size, we explore the weight quantization on the diffusion model and keep activation as floating point value. However, 8-bit activation quantization can be directly applied to our approach. Based on 1.99 bits weight quantization, we further apply 8-bit activation quantization to all layers except for the first and last convolutional layer and train the corresponding scaling factors. The experimental settings are the same as those described in Section 5 of the main paper. With a CFG scale of 7.5, our stage-I model and stage-II model can achieve CLIP scores of 0.3172 and 0.3193 on the PartiPrompts, respectively. For the TIFA score, our stage-I model and stage-II model can achieve 0.786 and 0.805 with a CFG scale of 7.5. With two-stage training, our BitsFusion model with a 1.99-bit weight and 8-bit activation quantization can still outperform the full-precision SDv1.5.
>| Methods | Weight Bit-width | Activation Bit-width | CLIP score | TIFA |
| :---- | :---- | :---- | :---- | :---- |
| Stable Diffusion v1.5 | 32 | 32 | 0.3175 | 0.788 |
| Stage-I | 1.99 | 8 | 0.3172 | 0.786 |
| Stage-II | 1.99 | 8 | 0.3193 | 0.805 |
---
**Q4. Datasets used in the stage-II training.**
A4. Thanks for the question. Our training data consists of synthetic data and real images, and there is no overlap between the training data and evaluation data. To clarify, our model performs *zero-shot evaluation*, which is the same as SDv1.5. Specifically, we perform different evaluations to verify the effectiveness of BitsFusion, such as CLIP scores on the MS-COCO dataset and PartiPrompts, GenEval score, and TIFA score. PartiPrompts provides prompts from different categories. GenEval provides prompts to evaluate compositional image properties such as object co-occurrence, position, count, and color. TIFA provides various prompts and uses the VQA model to measure the text-to-image model if it can accurately answer the questions. Evaluating CLIP scores, TIFA score, and GenEval score do not include any reference images, and we do not have the same prompts from these datasets in the training.
---
**Q5. Is it necessary to incorporate an optimal encoding method such as Huffman encoding to reach the averaged bit-width?**
A5. Thanks for the comment. We calculate the average bits according to Section I in the Appendix with $\frac{\sum_{i} log \(2^{b_{i}^*}+1\)* N_i + 16*N_{tf}}{N_w}$, where $b_{i}^{\ast}$ is the calculated bit-width in the $i_{th}$ layer, $N_i$ is the number of weights of the $i_{th}$ layer, $N_{tf}$ is the number of parameters for pre-cached time features, and $N_w$ is the total number of weights in linear and convolutional layers. We do not need the Huffman encoding to reach this average bit-width. However, as suggested by the reviewer, Huffman encoding is a valuable tool to further reduce our storage size.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer Fq1C,
Thank you very much for your valuable feedback and the positive evaluation of our work. We have included detailed explanations in response to your questions. As the deadline for the discussion period approaches, we would appreciate your review of these explanations to confirm that they fully meet your expectations and resolve any remaining concerns. Thank you once again for your insightful contributions.
Best regards,
The Authors | Summary: The paper presents BitsFusion, a novel method for weight quantization of diffusion models, specifically applied to the UNet architecture in Stable Diffusion v1.5. The approach quantizes weights to 1.99 bits, achieving a model size reduction of 7.9 times while enhancing or maintaining image generation quality. The experiment evaluates the quantized model across various benchmarks, including MS-COCO, TIFA, GenEval, and human evaluations, and demonstrates superior performance of the quantized model compared to full-precision Stable Diffusion v1.5.
Strengths: 1.The paper is well-organized and presents its content in an accessible manner. The organization of the paper is clear.
2.This paper mainly focuses on quantizing large-scale models like Stable Diffusion v1.5 with fewer bits than 4, which has significant implications for the industry and has achieved satisfactory results.
3.The paper methodologically addresses the challenges associated with low-bits quantization in SD and provides a robust solution through the BitsFusion framework. The experiments demonstrate the effectiveness of BitsFusion in achieving superior performance compared to existing methods on various large-scale datasets.
Weaknesses: 1.The motivation of the paper seems to be heuristic and lacks necessary theoretical analysis, but the impact of this method in the industry is worth looking forward to.
Technical Quality: 3
Clarity: 3
Questions for Authors: What's the total resource consumption for this entire process? For instance, how many GPUs with what memory capacity were utilized, and for how many hours was the training conducted? How does this compare to the resources required for a baseline that trains the original SD v1.5?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. About the motivation of this paper.**
A1. The overall motivation for performing quantization on the diffusion model is to reduce significant burdens for transferring and storing the model due to its large size. To this end, we propose several methods with corresponding motivations for model quantization:
Mixed-precision Strategy:
- Mixed-precision Strategy: Various layers exhibit different sensitivities during quantization. To achieve a lower quantization error, it is crucial to employ a mixed-precision approach where more sensitive layers are quantized at higher bit-widths to retain essential information, while less sensitive layers can be quantized at lower bit-widths. In Section 3.2, we conduct the quantization error of layers for different bit-widths via appropriate metrics and obtain valuable observations to guide our mixed-precision approach.
Initialization Schemes:
- Time Embedding Pre-computing and Caching: During the inference, the embedding for a fixed time step is always the same (Line 177\~179). Therefore, we can pre-compute the embedding offline and load cached values during inference, instead of computing the embedding every time.
- Adding Balance Integer: We observe that weight distributions in the diffusion model are symmetric around zero. However, the existing quantization works disrupt this symmetric property (Line 182\~189). Therefore, we introduce an additional value to balance the original values to maintain this property.
- Scaling Factor Initialization via Alternating Optimization: Min-Max initialization leads to a large quantization error and the increased difficulty to converge in extremely low-bit quantization settings like 1-bit (Line 198\~201). Therefore, we propose using an alternating optimization method to improve scaling factor initialization.
Training Schemes:
- CFG-aware Quantization Distillation: Distillation is an effective approach to recover the performance degradation caused by quantization. Additionally, considering that CFG enhances the performance of the diffusion model, we leverage CFG-aware distillation to effectively recover the quantization error (Line 217\~218).
- Feature Distillation: To further improve the generation quality of the quantized model, we distill the full-precision model at a more fine-grained level (Line 220\~221). Therefore, we adopt the feature distillation.
- Quantization Error-aware Time Step Sampling: We analyze the quantization error during various time steps during training. As shown in Fig. 4 in the main paper, the quantization error does not distribute equally across all time steps in the quantized model (Line 230\~240). Therefore, we propose a quantization error-aware time step sampling based on Beta distribution to reduce the overall quantization error.
We believe that our comprehensive analysis and the proposed methods will contribute significantly to the field of quantization on diffusion models.
---
**Q2. About training computation.**
A2. Thank you for the suggestions. Here we provide more details for our two-stage training pipeline. For the stage-I training, we use 8 NVIDIA A100 GPUs with 40GB memory, and a total batch size of 256 to train the quantized model for 20K iterations. The total training time is within 40 hours. For the stage-II training, we use 32 NVIDIA A100 GPUs with 40GB memory, and a total batch size of 1024 to train the quantized model for 50K iterations. The total training time is within 100 hours.
Note that the stage-II training aims to further improve the performance of the quantized model such that it is better than the full-precision Stable Diffusion v1.5. If reducing the training cost would be the goal, we can remove stage-II training and only use stage-I training. Stage-I training is able to achieve similar performance as the full-precision Stable Diffusion v1.5, as shown in Fig. 5 of the main paper.
To better understand the training cost of our model, we show the training cost comparison between our model and Stable Diffusion v1.5 (the training cost is obtained from the public resource https://huggingface.co/runwayml/stable-diffusion-v1-5). We use the total trained samples to compare the training costs as the training hours depending on the actual hardware used. As shown in the following table, our stage-I training only requires 0.16% training cost compared with training the Stable Diffusion v1.5 from scratch, and stage-II training requires 1.6% training cost compared with training the Stable Diffusion v1.5 from scratch. For the training time, based on the public information (Huggingface), SD-v1.5 is trained for 30-60 days on 256 GPUs, while our model takes 5~6 days on at most 32 GPUs.
>| Stage | A100 | Batch size | Gradient accumulation | Total batch size | 256x256 iters (K) | 512x512 iterations (K) | Total training samples (M) | Percentage of total training samples compared to SD-v1.5 |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Stable Diffusion v1.5 | 256 | 4 | 2 | 2048 | 237 | 1304 | 3155.97 | 100% |
| Stage-I | 8 | 8 | 4 | 256 | \- | 20 | 5.12 | 0.16% |
| Stage-II | 32 | 8 | 4 | 1024 | \- | 50 | 51.2 | 1.6% |
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns regarding the training computation. I will keep my scores as wa.
---
Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: Thank you for your quick response and positive rating! We're pleased to hear that your concerns have been addressed. We will incorporate the relevant discussion in the paper. | Summary: This paper demonstrates a quantized diffusion model called BitFusion, which successfully quantize the Stable Diffusion (SD) v1.5 to 1.99 bits with 7.9x smaller size. They first analyse the SD model in a layer perspective and assign the optimal bit based on the analysis. Then they propose a training pipeline to perform quantization-aware training (QAT) based on the analysis result. The experiment result shows that the quantized diffusion model BitFusion outperforms the full-precision SD v1.5 model.
Strengths: 1. The result is significant. BitFusion successfully compressed the SD model while maintaining its performance.
2. The authors successfully combine novel quantization techniques and existing quantization techniques, showing an effective way to quantize SD model to extreme low bit.
3. The paper provides valuable observations about different layers' behaviour under quantization.
Weaknesses: 1. The analysis and experiment according to scaling factor initialization is not comprehensive. The authors propose alternating optimization for scaling factor initialization which is based on the MSE between quantized weight and full-precision weight. However, quantized weight initialization is well-studied and there are existing weight initialization technique that optimize the same objective. Nevertheless, there are some advanced initialization technique like adaround [1]. Those techniques are adapted by other Diffusion Model quantization research [2]. An experiment to compare the alternating Opt. and those initialization techniques is necessary.
2. Stable Diffusion 1.5 is no longer the SOTA in Diffusion text-to-image model. Diffusion transformer (DiT) is the new trend in diffusion model research. The conclusion on UNet-based diffusion model like SD 1.5 might not be precise for the DiT diffusion model.
3. FID is widely adapted by most of the diffusion model quantization research. Although the authors provide human evaluation and justify that FID is not accurate compared with human evaluation, the FID comparison between BitFusion and other baseline models should be included in the main body of the paper.
4. In this paper [3], the authors propose a 1.58-bit LLM quantization approach comparable to Bit Fusion at model size reduction. However, they can perform activation quantization to 8-bit while Bit Fusion doesn't.
[1] Nagel, Markus, et al. "Up or down? adaptive rounding for post-training quantization." International Conference on Machine Learning. PMLR, 2020.
[2] Li, Xiuyu, et al. "Q-diffusion: Quantizing diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[3] Ma, Shuming, et al. "The era of 1-bit llms: All large language models are in 1.58 bits." arXiv preprint arXiv:2402.17764 (2024).
Technical Quality: 2
Clarity: 3
Questions for Authors: NA
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors honestly point out their limitation: not quantizing VAE and text encoder and not quantizing activation. For the first limitation, it is quite common that diffusion model quantization research doesn't quantize VAE and text encoder according to the literature. But for the second limitation, the authors leave it for future work and doesn't address it. Activation quantization is recommended, as the fact that activation usually has higher quantization difficulty.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. Comparison with adaround \[1\], which is used in Diffusion Model quantization research \[2\]**.
A1. Thank the reviewer for suggesting the adaround, which is a relevant method \[1\]. However, we would like to kindly emphasize that adaround \[1\] focuses on *post-training quantization* by proposing a better weight-rounding operation, while our scaling factor initialization is used for *training-aware quantization*. These two technologies are applied during different quantization pipelines. As highlighted by Reviewer WzcG, our scaling factor initialization is novel.
Indeed, as mentioned by the reviewer, the Q-Diffusion \[2\] uses the adaround, implemented by MinMax and Brute-force MSE searching. As analyzed in the paper (Line 198\~201), MinMax is able to maintain the outlier weights but overlook the overall quantization error in the extreme low-bit setting, thus sacrificing the performance of the quantized model. To further prove this, we have conducted an ablation analysis in Table 4 of the main paper. With the alternating optimization (indicated by the row labeled “+Alternating Opt.”), the CLIP score is improved from 0.3054 (indicated by the row labeled “+Balance”, which uses MinMax as the default initialization method) to 0.3100.
As for the brute-force MSE, it takes lots of time to find the ideal scaling factor than our initialization approach. Following the same setting, we measure the CLIP score where the brute-force MSE in Q-Diffusion \[2\] can achieve a score of 0.3098, while our alternating optimization achieves 0.3100. However, the brute-force MSE requires 13 minutes to initialize the scaling factor by traversing 80 different scaling factors in each layer. In contrast, our alternating optimization significantly reduces the initialization time, which only needs 5 minutes with 10 optimization iterations.
In fact, we have compared our approach with Q-Diffusion \[2\]. As shown in Table 3 of the main paper, our 1.99 bits model can achieve a 0.3175 CLIP score, even outperforming the 4 bits Q-Diffusion model which has a CLIP score of 0.3137 evaluated on the PartiPrompts.
Thanks again for recommending the comparison with adaround. We will revise the manuscript to include the above discussion.
---
**Q2. Quantization of the diffusion transformer (DiT).**
A2. We agree with the reviewer that besides UNet, DiT is another promising architecture used as the backbone for the diffusion model. This work targets the quantization of UNet for several reasons.
- First, UNet is a widely adopted architecture for diffusion models. Besides the text-to-image generation, there are numerous applications like ControlNet and IP-Adapter that are built upon the UNet-based diffusion models. However, there is no previous effort to study the extremely low-bit quantization for such an architecture. Therefore, it is important to understand how to quantize the UNet to low-bits.
- Second, UNet is more efficient than DiT during inference, especially on resource-constrained devices. Using quantization to compress the efficient architecture has important practical values, as agreed by Reviewer WzcG and W2xs. Even if we apply the quantization to DiT models, how to deploy such models on resource-constrained devices is unclear, *e.g.*, we are not able to run DiT-based text-to-image models on the iPhone.
- Third, although there is quite some work studying the improvement of DiT-based diffusion models, the UNet-based model still shows competitive performance for text-to-image generation. For instance, the open-sourced Kolors ([https://github.com/Kwai-Kolors/Kolors/tree/master?tab=readme-ov-file\#-evaluation](https://github.com/Kwai-Kolors/Kolors/tree/master?tab=readme-ov-file\#-evaluation)), which is built upon SDXL, achieves the SOTA results for text-to-image generation.
With all that being said, we believe that our approach can be applied to quantize DiT models, especially considering the quantization error analysis and training pipelines proposed in this paper are generic methods.
Thanks again for suggesting the quantization of DiT, which is an interesting topic that we would like to explore by using our proposed methods.
---
**Q3. Adding the FID comparison to the main paper.**
A3. Thanks for the suggestion. We will move the FID comparison from the Appendix to the main paper.
---
**Q4. About the activation quantization for the 1.58-bit LLM quantization approach.**
A4. This work mainly targets the size reduction of the diffusion model. Therefore, we did not show the activation quantization. Nevertheless, 8-bit activation quantization can be directly applied to our approach. Based on 1.99 bits weight quantization, we further apply 8-bit activation quantization to all layers except for the first and last convolutional layer and train the corresponding scaling factors. The experimental settings are the same as those described in Section 5 of the main paper. With a CFG scale of 7.5, our stage-I model and stage-II model can achieve CLIP scores of 0.3172 and 0.3193 on the PartiPrompts, respectively. For the TIFA score, our stage-I model and stage-II model can achieve 0.786 and 0.805 with a CFG scale of 7.5. With two-stage training, our BitsFusion model with a 1.99-bit weight and 8-bit activation quantization can still outperform the full-precision SD-v1.5.
>| Methods | Weight Bit-width | Activation Bit-width | CLIP score | TIFA score |
| :---- | :---- | :---- | :---- | :---- |
| Stable Diffusion v1.5 | 32 | 32 | 0.3175 | 0.788 |
| Stage-I | 1.99 | 8 | 0.3172 | 0.786 |
| Stage-II | 1.99 | 8 | 0.3193 | 0.805 |
---
\[1\] Nagel, Markus, et al. "Up or down? adaptive rounding for post-training quantization." International Conference on Machine Learning. PMLR, 2020\.
\[2\] Li, Xiuyu, et al. "Q-diffusion: Quantizing diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023\.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer tjXw,
Thank you very much for your insightful comments. In response to your comments, we have provided detailed explanations to clarify the points discussed and address the concerns you highlighted. As the deadline for the discussion period is quickly approaching, we are keen to know if our responses meet your expectations and have addressed all your concerns effectively. Thank you again for your valuable time and expertise.
Best regards,
The Authors
---
Rebuttal 2:
Title: Additional Comment ny authors
Comment: Dear authors,
Thanks for your valuable information. It is worth noting that the findings in prior diffusion model quantization research are mostly based on the U-Net structure, like shortcut splitting quantization in Q-Diffusion [1] and temporal information aware reconstruction in TFMQ-DM [2]. However, the findings in BitFusion, like how to decide the optimal precision, are not limited to the U-Net diffusion model. So I still suggest BitFusion's effectiveness be further verified with DiT as most of the SOTA diffusion models are DiT-based.
Best,
Reviewer tjXw
[1] Li, Xiuyu, et al. "Q-diffusion: Quantizing diffusion models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
[2] Huang, Yushi, et al. "Tfmq-dm: Temporal feature maintenance quantization for diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
---
Rebuttal Comment 2.1:
Title: Response to additional comments
Comment: Dear Reviewer tjXw,
Thank you for your additional response. We appreciate the time and effort you have invested in this paper. Please allow us to summarize our agreements and disagreements regarding the DiT quantization.
**Agreements**:
- The proposed quantization methods in BitsFusion are very generic and can be applied to other architectures, like DiT.
**Disagreements**:
- The reviewer believes DiT is the SOTA diffusion model.
- The reviewer mentioned that, since DiT is the SOTA diffusion model, lacking experiments of DiT is a limitation or concern for this work.
Here, we would like to kindly present some **facts** that support us to humbly disagree with the reviewer.
**First**, as we mentioned in the previous response, **we did not see any paper clearly showing that DiT is better than UNet**. On the contrary, the recent work [A] shows that under the same training dataset, schedule, hardware, and training time, the **UNet is better than DiT** (as seen in Fig. 6 in [A]).
**Second**, we understand some recent DiT-based works like SD3 [B] show promising text-to-image generation. However, the architecture in SD3 is more complicated than the original DiT. Additionally, the training recipe and noise scheduling are different from those used to train the UNet-based models. Even if we take the training dataset, recipe, and noise scheduling aside and only focus on the comparison of the final generated images, the open-sourced Kolors (as mentioned in the previous response), which is built upon SDXL, achieves the SOTA results for text-to-image generation. The Kolors demonstrates that the **UNet is better than SD3 (DiT-based model)**.
**Third**, taking a step back, we are still interested in applying our approach to DiT. However, it is non-trivial to achieve comparable results since DiT has a different architecture than the UNet-based model. Moreover, there might be some other new observations and problems in DiT-based models that we need to solve when doing quantization for them. Therefore, exploring the DiT-based model belongs to another research direction and we believe such experiments are beyond the scope of this paper.
We humbly hope the above arguments can make sense to the reviewer.
Best,
Authors
---
References:
[A] Li, Hao, et al. “On the Scalability of Diffusion-based Text-to-Image Generation”. CVPR, 2024.
[B] Esser, Patrick, et al. “Scaling Rectified Flow Transformers for High-Resolution Image Synthesis”. ICML, 2024. | Summary: This paper proposes a novel weight quantization method called "BitsFusion" for compressing the Stable Diffusion v1.5 model. The primary goal is to address the issue of large model sizes, which hinder the deployment of diffusion models on resource-constrained devices. The BitsFusion framework quantizes the UNet component of Stable Diffusion v1.5 to 1.99 bits, achieving a 7.9× reduction in model size while improving image generation quality. The approach involves mixed-precision quantization, novel initialization techniques, and an improved two-stage training pipeline. Extensive experiments on various benchmark datasets demonstrate that the quantized model outperforms the original full-precision model in terms of generation quality and text-image alignment. My detailed comments are as follows.
Strengths: 1. The BitsFusion framework is an innovative approach to compressing the UNet component of Stable Diffusion v1.5 to 1.99 bits, achieving a significant reduction in model size while enhancing image quality by using mixed-precision quantization, novel initialization techniques, and an improved two-stage training pipeline.
2. The paper provides a thorough analysis of quantization errors and develops a mixed-precision strategy based on this analysis, which contributes to the theoretical understanding of low-bit quantization for large-scale diffusion models.
3. The results demonstrate the effectiveness of the proposed framework. The quantized model consistently outperforms the original full-precision model across various metrics, including CLIP score, TIFA score, and GenEval score. The model's ability to achieve a 7.9× reduction in size while maintaining or improving performance highlights its practical potential for deployment on resource-constrained devices.
4. The manuscript is well-written and easy to understand, with clear explanations of the methodology and experimental procedures. Sufficient experimental details are provided, ensuring that the results can be reproduced by other researchers, which enhances the paper's credibility and utility.
Weaknesses: 1. The two-stage training pipeline, while effective, may introduce additional computational complexity and training time, which could be a drawback for some applications. However, the paper does not explicitly discuss the additional computational complexity and training time it introduces. To provide a more comprehensive evaluation of this method, it is recommended that the authors discuss the potential additional computational overhead introduced by the two-stage training pipeline and its impact on training time.
2. The study demonstrates that there is a high correlation between certain metrics, such as MSE and PSNR, and uses these correlations to validate the effectiveness of their quantization methods. However, correlation does not imply causation. The study does not delve into why these metrics are correlated or explain the intrinsic mechanisms that cause certain layers to be more sensitive to quantization (e.g., why quantization affects some layers more than others, and what intrinsic properties of these layers make them more prone to quantization-induced errors). Understanding these mechanisms is crucial for improving the quantization process, as knowing why certain layers are sensitive can lead to more targeted and effective quantization strategies.
3. The claim that the 1.99-bit quantized model outperforms the full-precision model in all evaluated metrics is a significant overgeneralization. This assertion, based on specific experimental settings and datasets, may not fully represent the diversity encountered in real-world applications. Primarily, the evaluation on the MS-COCO 2014 validation set, though extensive, does not cover a wide variety of prompts, limiting the assessment of the model's ability to handle different types of inputs (e.g., complex scenes, abstract concepts, specific artistic styles). Additionally, the tests do not include various real-world conditions such as different lighting, backgrounds, or object complexities. Therefore, the claim that the 1.99-bit quantized model outperforms the full-precision model in all evaluated metrics is premature. A more robust evaluation including varied datasets, prompts, and real-world conditions, such as [A-E], is necessary to substantiate such a broad claim.
4. The discussion regarding model compression techniques is insufficient. It would be better for the authors to present more model quantization methods [F-J].
[A] Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. International Journal of Computer Vision. IJCV. 2017.
[B] Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. ICCV. 2015.
[C] The Open Images Dataset V4: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale. IJCV. 2020.
[D] Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. CVPR. 2021.
[E] Scene Parsing through ADE20K Dataset. CVPR. 2017.
[F] Binary quantized network training with sharpness-aware minimization. Scientific Computing 2023.
[G] Network Quantization with Element-Wise Gradient Scaling. CVPR 2021.
[H] Single-path bit sharing for automatic loss-aware model compression. TPAMI 2023.
[I] Generative Data Free Model Quantization with Knowledge Matching for Classification. TCSVT 2023.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please check the weakness section.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1. About training computation.**
A1. Thanks for the suggestions. For the stage-I training, we use 8 NVIDIA A100 GPUs with a total batch size of 256 to train the quantized model for 20K iterations. The training time is within 40 hours.
For the stage-II training, we use 32 NVIDIA A100 GPUs with a total batch size of 1024 to train the quantized model for 50K iterations. The total time is within 100 hours.
Note that the stage-II training aims to further improve the performance of the quantized model so that it is better than the full-precision SDv1.5. If reducing the training cost would be the goal, we can remove stage-II training. Stage-I training is able to achieve similar performance as the full-precision SDv1.5 (Fig. 5 of the main paper).
Additionally, we compare training costs between our model and SDv1.5 from scratch (https://huggingface.co/runwayml/stable-diffusion-v1-5). We use the total trained samples to measure the training costs as the training hours depend on the actual hardware used. As in the following table, our stage-I training only requires 0.16% training cost compared with training the SDv1.5, and stage-II training requires 1.6% training cost compared with SDv1.5.
>| Stage | A100 | Batch size | Gradient accumulation | Total batch size | 256x256 iters (K) | 512x512 iterations (K) | Total training samples (M) | Percentage of total training samples compared to SD-v1.5 |
| :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- | :---- |
| Stable Diffusion v1.5 | 256 | 4 | 2 | 2048 | 237 | 1304 | 3155.97 | 100% |
| Stage-I | 8 | 8 | 4 | 256 | \- | 20 | 5.12 | 0.16% |
| Stage-II | 32 | 8 | 4 | 1024 | \- | 50 | 51.2 | 1.6% |
---
**Q2.1. Why certain metrics are correlated.**
A2.1. Thanks for the insightful comments! MSE, PSNR, and LPIPS are highly correlated since they measure the changes of image quality. Namely, they calculate the perceptual similarity between two images that align with human perception. More specifically, MSE quantifies the pixel-wise difference between images. Similarly, PSNR, derived from MSE, evaluates the peak error in images via $PSNR = 10 * log_{10}(Max^2 / MSE)$. LPIPS has a different calculation as MSE and PSNR. Yet, it still focuses on significant perceptual differences beyond pixel-level comparisons. Given that perceptual changes often accompany measurable pixel-wise changes, LPIPS correlates with PSNR and MSE.
On the other hand, the CLIP score measures the alignment between the image and prompt, leading to a low correlation with MSE, PSNR, and LPIPS metrics.
---
**Q2.2. Why quantization affects some layers more than others.**
A2.2. We notice that cross-attention layers and convolutional shortcut layers are more sensitive to low-bit quantization than other layers. Cross-attention layers combine textual and image information. Therefore, any quantization-induced errors for cross-attention layers might greatly change the semantics and content of the generative images, leading to substantial semantic shifts and the loss of important content. For example, in Line 147~149 and Fig. 2 of the main paper, quantizing a cross-attention key layer changes the image content from "a teddy bear" to "a person". Convolutional shortcut layers link different layers and scales of features across the UNet. They facilitate the information flow between layers, enabling the network to learn complex patterns. Quantization of these shortcut connections disrupts the well-established connectivity of features, leading to a breakdown in the coherence of the learned representations (as in Fig. 2).
---
**Q3. Some evaluation datasets, like MS-COCO 2014, may not fully represent the diversity in real-world.**
A3. We agree with the reviewer that most of the datasets used for evaluating text-to-image models have their limitations. In this paper, we evaluate our model on most of the *commonly used benchmark datasets* to get relatively accurate and fair comparisons. Specifically, we use Parti Prompts, TIFA, GenEval scores, and Human evaluation. These benchmark datasets cover a wide variety of prompts and are commonly adopted in evaluating text-to-image models in the literature, like SDXL. Additionally, we would like to kindly clarify that our human preference evaluation includes complex scenes, abstract concepts, and specific artistic styles (as in Fig. 13 and Fig. 14).
Regarding our claim that our quantized model outperforms the full-precision SDv1.5 across all the evaluations, we intend to mention that our BitsFusion can outperform full-precision SDv1.5 across commonly used metrics evaluated in the paper. Based on the feedback from the reviewer, we will modify our writing on Line 61 as our model outperforms the full-precision model *across various evaluation metrics used in the paper*. We hope it can make the paper more rigorous.
We thank the reviewer for suggesting new metrics. We will discuss them in the revised paper. These metrics have not been widely adopted for evaluating text-to-image generation. For instance, Visual Genome is for visual question answering [A], Flickr30k Entities is for evaluating image description [B], Open Images Dataset V4 is for classification, detection, and visual relationship detection [C], and ADE20K is for understanding scene parsing [E]. Adapting these metrics to evaluate text-to-image models requires a series of experiments to understand how to use them and correlate them with human perception. We take this as a future work.
During the rebuttal, we run the suggested evaluation metric from Conceptual 12M with the CLIP score [D]. By randomly selecting 1K prompts (captions) and using the same CFG scale of 7.5, our 1.99 bits model outperforms full-precision SD-v1.5 (CLIP scores of 0.3084 *vs.* 0.3067).
---
**Q4. More related methods.**
A4. Thanks for suggesting these papers, which are indeed insightful and relevant! We will discuss these papers in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer WzcG,
Thank you very much for your insightful feedback and the positive rating. We have included explanations to further clarify the questions. As the deadline for the discussion period is fast approaching, we are eager to see if our explanations meet your expectations and address all your concerns thoroughly. Thank you again for your time and insight.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: We thank all reviewers for their valuable suggestions and feedback. We appreciate the reviewers acknowledge the strengths of this paper, including:
- **addressing the challenges** for low-bits quantization of Stable Diffusion (Reviewer W2xs);
- **thorough, comprehensive, and valuable** analysis and observations of the quantization errors for each layer, contributing to the **theoretical understanding** of low-bit quantization for large-scale diffusion models and helpful for **future research** (Reviewer WzcG, Fq1C, tjXw);
- **innovative approaches** (e.g., innovative initialization techniques and improved two-stage pipeline) to compress the UNet to 1.99 bits (Reviewer WzcG, tjXw, W2xs);
- **significant results** for model size reduction and **superior performance** that the quantized model outperforms Stable Diffusion v1.5 across **various metrics and datasets** (Reviewer WzcG, tjXw, W2xs, Fq1C);
the quantized model has **practical potential** on resource-constrained devices and **significant implications** for the industry (Reviewer WzcG, W2xs);
- **a well-organized, well-written, and clear paper**, with experimental details provided, enhancing the paper's **credibility and utility** and ensuring the results can be **reproduced** (Reviewer W2xs, Fq1C, WzcG).
In the following, we provide detailed responses to all concerns from reviewers. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Accelerating Non-Maximum Suppression: A Graph Theory Perspective | Accept (poster) | Summary: The article "Accelerating Non-Maximum Suppression: A Graph Theory Perspective" explores a novel way to optimize Non-Maximum Suppression (NMS) in object detection using graph theory. The authors present two new methods, QSI-NMS and BOE-NMS, which greatly enhance the efficiency of NMS while maintaining mean Average Precision (mAP). They also introduce NMS-Bench, a benchmarking framework designed to evaluate different NMS methods effectively.
Strengths: - This article brings an interesting and fresh perspective to NMS by applying graph theory.
- The proposed QSI-NMS and BOE-NMS algorithms show impressive improvements in computational efficiency, with QSI-NMS achieving up to 10.7× speedup and BOE-NMS achieving 5.1× speedup, all without sacrificing mean Average Precision (mAP).
- One of the standout contributions is NMS-Bench, a benchmarking framework that standardizes the evaluation of NMS algorithms, potentially driving further advancements in the field.
- Additionally, the paper provides a detailed analysis of the intrinsic structure of NMS through the lens of graph theory, offering valuable insights into its computational bottlenecks and optimization strategies.
Weaknesses: 1. In section 5.2 Results, we can find QSI-NMS operates with some performance decrease, and the paper doesn't thoroughly discuss what aspects of the algorithm might be causing this decline (maybe missing some data but exactly what is missed).
2. The connection between the proposed methods and graph theory appears weak. QSI-NMS is essentially a quicksort-based algorithm, and the size of the weakly connected components (WCC) or their independence doesn't seem to significantly impact its performance. BOE-NMS, on the other hand, introduces a heuristic for the search space, which doesn't strongly tie into graph theory principles either.
3. Regarding eQSI-NMS, there are questions about the clarity and effectiveness of the methods. The pseudo-code in Appendix E.2 is difficult to understand, and there might be an issue with the line "for s ∈ S do s ← −s;" in eQSI-NMS. It’s unclear how this part of the code functions and whether it introduces any problems.
4. The evaluation is primarily limited to the YOLOv8-N model on the MS COCO 2017 dataset. To gain a more comprehensive understanding of the algorithms' generalizability, broader evaluations across various models and datasets would be beneficial.
Technical Quality: 3
Clarity: 2
Questions for Authors: see weakness part for details
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: delineate the constraints of their proposed method in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer HR1f's thoughtful review of our work and the insightful questions you have raised. Your feedback is invaluable in helping us improve and refine our research. We will address each of your concerns in detail.
---
> **Q1**: The connection between the proposed methods and graph theory appears weak.
**A1**: In fact, our methods are closely linked to graph theory for the following two reasons:
The first reason is that the goal of the algorithms is to construct the graph $\mathcal{G}$ or its approximate graph $\tilde{\mathcal{G}}$ more quickly and efficiently (see Section 4). However, due to the highly skillful design of our algorithm, there is no need to explicitly construct the graph in implementation.
We explained in Section 4.1 (Lines 174-177) that QSI-NMS does not require explicit construction of the graph $\tilde{\mathcal{G}}$. If we use different pivot selection methods or partitioning criteria, graph construction becomes unavoidable. The situation is similar in BOE-NMS. If we do not sort by confidence scores, we can still obtain $\mathcal{G}$, and the result after dynamic programming will be the same. However, this approach would incur additional space and time overhead. We conducted an experiment to support our analysis, where we did not sort by confidence but performed dynamic programming at the end, keeping everything else unchanged. The results tested on YOLOv8-N are as follows:
| method | latency ($\mu s$) | AP 0. 5:0. 95 (%) |
| --------------------------------- | ------------------------- | --------------- |
| Original NMS | 936. 9 | 37. 2 |
| BOE-NMS | 201. 8 | 37. 2 |
| BOE-NMS (with graph construction) | 558. 0 | 37. 2 |
The second reason is that the algorithms' design is based on the analysis of WCCs in the graph $\mathcal{G}$, ensuring the algorithm's effectiveness. In the analysis in Section 3(Lines126-135 and Figure 2), we identified two crucial characteristics of $\mathcal{G}$: the number of WCCs is large, and their sizes are generally small. The first characteristic inspired the design of QSI-NMS, and the second characteristic inspired BOE-NMS(Lines 131-135).
Specifically, QSI-NMS is based on the divide-and-conquer strategy, where subproblems do not interfere with each other. If there are few WCCs, the divide-and-conquer process will repeatedly split the same WCC, leading to accuracy issues. BOE-NMS must traverse each WCC, and the smaller the WCCs, the faster BOE-NMS will be. Assuming there are $w$ WCCs and the size of the $i$-th WCC is $n_i$, we can perform a formal analysis using the following inequality:
$$
\frac{\left(\sum_{i=1}^{w}n_i\right)^2}{w}\leq \sum_{i=1}^{w}n_i^2\leq \left(\sum_{i=1}^{w}n_i\right)^2
$$
The right half of the inequality shows that BOE-NMS is always better than Original NMS, while the left half shows the lower bound of BOE-NMS optimization. Therefore, the more WCCs there are, the greater the optimization space for BOE-NMS, and when each $n_i$ is as small as possible, it will be closer to this lower bound.
In summary, both the number and size of WCCs affect our two methods. The main reason is that the more WCCs there are, the better the accuracy performance of QSI-NMS; the smaller the WCCs, the higher the efficiency of BOE-NMS.
---
> **Q2**: In section 5. 2 Results, we can find QSI-NMS operates with some performance decrease, and the paper doesn't thoroughly discuss what aspects of the algorithm might be causing this decline (maybe missing some data but exactly what is missed).
**A2**: Your question is very insightful, and other reviewers are also interested in this issue. Therefore, we have included the response to this question in the global rebuttal. Please refer to **Q1** of the global rebuttal for the details.
---
> **Q3**: Regarding eQSI-NMS, there are questions about the clarity and effectiveness of the methods. The pseudo-code in Appendix E. 2 is difficult to understand, and there might be an issue with the line "for $s\in\mathcal{S}$ do $s\leftarrow -s$;" in eQSI-NMS. It’s unclear how this part of the code functions and whether it introduces any problems.
**A3**: There was indeed a mistake here, and we appreciate your feedback. What we intended to express is "for $c \in \mathcal{C}$ do $c \leftarrow -c$" This is because we should first sort $\mathcal{C}$ in ascending order. Next, by maintaining a stack, we can find the position of the last element greater than the current confidence score before it in $\mathcal{O}(n)$ time; let’s call this algorithm $A$. Additionally, we need to find the first element greater than the current confidence score after it, which can be done by sorting $-\mathcal{C}$ in ascending order and then performing algorithm $A$. This explanation is indeed prone to misunderstanding, and we will rewrite this part of the pseudo-code in our revised version. We will first sort $\mathcal{C}$ in ascending order, then process it from front to back and back to front once each. However, our implementation is correct, and we suggest you refer to eQSINMS.hpp in the supplementary material. Our code is easy to understand.
---
> **Q4**: The evaluation is primarily limited to the YOLOv8-N model on the MS COCO 2017 dataset. To gain a more comprehensive understanding of the algorithms' generalizability, broader evaluations across various models and datasets would be beneficial.
**A4**: We evaluated our methods on various datasets and models. On COCO 2017, we tested YOLOv8 models (one-stage, anchor-free model), YOLOv5 models (one-stage, anchor-based model), and Faster R-CNN models (two-stage, anchor-based model), as detailed in Table 1 of Section 5.2. We also tested the YOLOv8 models on the more mature and robust Open Images V7 dataset, as shown in Table 2.
--- | Summary: The paper, "Accelerating Non-Maximum Suppression: A Graph Theory Perspective," presents a novel approach to enhancing the efficiency of the Non-Maximum Suppression (NMS) algorithm used in object detection. By analyzing NMS through graph theory, it introduces two new optimization methods: Quicksort Induced NMS (QSI-NMS) and Boxes Outside Excluded NMS (BOE-NMS). These methods leverage the structure of weakly connected components in a graph to reduce computational complexity and speed up the process, with minimal impact on the mean Average Precision (mAP). The paper also introduces NMS-Bench, a benchmarking framework to evaluate NMS methods rapidly.
Strengths: 1. Innovative Approach: The paper applies graph theory to optimize NMS, a critical post-processing step in object detection, demonstrating significant improvements in computational efficiency.
2. Comprehensive Evaluation: It includes a robust evaluation using the newly developed NMS-Bench, providing detailed comparisons of performance improvements over traditional methods.
3. Practical Impact: The proposed methods, particularly eQSI-NMS, offer substantial speed increases with minimal loss in accuracy, which is highly beneficial for real-time object detection applications.
Weaknesses: 1. Complexity of Graph-Theoretical Analysis: The paper's reliance on graph theory might limit its accessibility to those without a background in this area. The proofs and theoretical explanations are dense and could be challenging to follow for non-specialists.
2. Limited Discussion on Scalability: While the paper shows efficiency improvements, it does not extensively discuss the scalability of the proposed methods across different hardware or larger datasets beyond those tested.
3. Dependency on Specific Conditions: The effectiveness of the proposed methods may depend heavily on the characteristics of the data and the specific architectures of the detection systems used, which may not generalize well to all types of object detection tasks.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Details on Graph Construction (Line 99-104): The paper mentions the construction of graph G using bounding boxes and suppression relationships. Could you elaborate on the computational overhead of this graph construction process and its impact on the overall efficiency of NMS?
2. Experimental Setup (Line 260-265): The results presented are impressive; however, could more information be provided on how the different configurations of bounding boxes were handled during the experiments, especially concerning their distribution and density?
3. Proof of Theorem 1 (Line 510-520): Could you clarify how the dynamic programming approach adapts to variations in graph structure, particularly for non-standard configurations of bounding boxes?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Complexity for Non-Specialists: The application of graph theory to optimize the Non-Maximum Suppression (NMS) process introduces complex mathematical concepts and proofs that might not be easily accessible or understandable for practitioners or researchers without a background in graph theory. This complexity could limit the broader application and adaptation of the proposed methods in the field.
Dependence on Data Characteristics: The effectiveness of the proposed QSI-NMS and BOE-NMS methods is highly dependent on the specific characteristics of the data, such as the distribution and density of bounding boxes. This dependency might restrict the generalizability of the methods across different object detection tasks and datasets where these characteristics vary significantly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable feedback. We appreciate your recognition of our innovative approach, comprehensive evaluation, and practical impact. Below, we will address each of your concerns one by one.
---
> **Q1**: Details on Graph Construction (Line 99-104): The paper mentions the construction of graph G using bounding boxes and suppression relationships. Could you elaborate on the computational overhead of this graph construction process and its impact on the overall efficiency of NMS?
**A1**: The computational overhead of constructing the graph in our algorithms is zero. Constructing the graph and topological sorting incurs an additional $\mathcal{O}(\vert \mathcal{V}\vert + \vert \mathcal{E}\vert)$ computational overhead. Although this does not affect the complexity of our methods, it can slow down their practical performance. To address this issue, we design our algorithm such that the order of the nodes we iterate over follows a valid topological sort. Therefore, dynamic programming can be completed without explicitly constructing the graph. As a result, our algorithm does not incur the extra overhead of graph construction.
For QSI-NMS, lines174-177 explain why explicit graph construction is unnecessary. The same applies to BOE-NMS, where sorting the boxes by confidence scores from high to low already provides a valid topological sort. We can also quickly locate the mutually suppressing boxes withoud sorting, thereby obtaining $\mathcal{G}$, and performing dynamic programming yields the same results.
However, In BOE-NMS, not sorting in advance would introduce additional computational overhead of constructing the graph and topological sorting. We conducted an experiment to support our analysis. We did not sort by confidence but performed dynamic programming at the end while keeping everything else unchanged. The results on YOLOv8-N are shown in the table below:
| method | average latency ($\mu s$) | AP 0.5:0.95 (%) |
| --------------------------------- | ------------------------- | --------------- |
| Original NMS | 936.9 | 37.2 |
| BOE-NMS | 201.8 | 37.2 |
| BOE-NMS (with graph construction) | 558.0 | 37.2 |
---
> **Q2**: Experimental Setup (Line 260-265): The results presented are impressive; however, could more information be provided on how the different configurations of bounding boxes were handled during the experiments, especially concerning their distribution and density?
**A2**: Our experimental data is derived from the actual outputs of commonly used models on public datasets, as described in Section 5.1. Each bounding box is represented by $x, y, w, h$ to denote its position and size, along with a confidence score $score$, as defined in Section 2. For detailed statistics related to the bounding boxes, please refer to Appendix D.
---
> **Q3**: Proof of Theorem 1 (Line 510-520): Could you clarify how the dynamic programming approach adapts to variations in graph structure, particularly for non-standard configurations of bounding boxes?
**A3**: As proven in Theorem 1, variations in the graph structure do not affect the correctness of the results obtained through dynamic programming. However, the structure of the graph can influence the efficiency of dynamic programming since its complexity is $\mathcal{O}(\vert\mathcal{V}\vert +\vert\mathcal{E}\vert)$. By pre-sorting, the complexity can be reduced to $\mathcal{O}(\vert\mathcal{V}\vert\log\vert\mathcal{V}\vert+\vert\mathcal{V}\vert)$. Therefore, when the graph has many nodes or edges, the efficiency of dynamic programming decreases, especially in cases where all bounding boxes are very dense (which are rare in real-world data).
---
> **Q4**: Complexity of Graph-Theoretical Analysis: The paper's reliance on graph theory might limit its accessibility to those without a background in this area. The proofs and theoretical explanations are dense and could be challenging to follow for non-specialists.
**A4**: We understand that the graph-theoretical aspects may be complex. To address this, we've included detailed explanations and supplementary materials to clarify the key points(see Appendix A). Additionally, we emphasize the practical applications to highlight the relevance of our work.
---
> **Q5**: Limited Discussion on Scalability: While the paper shows efficiency improvements, it does not extensively discuss the scalability of the proposed methods across different hardware or larger datasets beyond those tested.
**A5**: We did not have enough time to test on different hardware environments, but our methods indeed reduce computational overhead (see Appendix D.3, Figure 6). Moreover, we implemented our methods in the torchvision library, and their performance surpasses that of highly parallel CUDA NMS (see Appendix D.3), demonstrating that the efficiency of our methods does not rely on specific hardware and can be applied to low-power edge devices.
Additionally, we tested on a larger and more robust dataset, Open Images V7. Please refer to Section 5.2, Table 2 for details.
---
> **Q6**: Dependency on Specific Conditions: The effectiveness of the proposed methods may depend heavily on the characteristics of the data and the specific architectures of the detection systems used, which may not generalize well to all types of object detection tasks.
**A6**: We believe that our analysis of the characteristics of $\mathcal{G}$ is widely applicable (Lines 126-135).
--- | Summary: This paper presents a method from a new perspective to enhance the efficiency of the Non-Maximum Suppression (NMS) algorithm with affordable accuracy decrease.
The authors introduce a novel perspective by analyzing NMS through the lens of graph theory, revealing its intrinsic structure as a directed acyclic graph.
This insight leads to the development of two NMS optimization methods: QSI-NMS and BOE-NMS.
Furthermore, the authors also introduce NMS-Bench, an end-to-end benchmark that facilitates rapid and comprehensive validation of various NMS algorithms.
Strengths: (1)The paper innovatively applies graph theory to NMS, offering a detailed theoretical basis for the proposed QSI-NMS and BOE-NMS algorithms, which excel in enhancing speed while preserving accuracy.
(2)The experimental results does support the effectiveness of the efficiency of the proposed algorithm.
(3)The construction of the benchmark is a solid contribution to the community.
(4)The paper offers a new perspective and tool for understanding and optimizing the NMS step in object detection.
Weaknesses: (1)A case study that illustrate the overly suppressed samples is welcomed.
(2) How was the average latency calculated, more information if preferred. This is not clear.
(3) How is the performance of the proposed algorithm on the Yolo V10 and mask RCNN.
(4) Although the proposed method is tested on the proposed NMS-Bench, experimental results in detection bench-mark algorithms are still prefered.
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) How was the average latency calculated, more information if preferred. Please report the time of graph construction and the time of NMS, respectively.
(2) How is the performance of the proposed algorithm on the Yolo V10 and mask RCNN.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The effectiveness of the proposed should be evaluated on more real world cases.
More detailed analysis of which kinds of samples are miss detected by the algorithm is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable feedback and for acknowledging the value of our work. We will address your questions and concerns in the following responses.
---
> **Q1**: A case study that illustrates the overly suppressed samples is welcomed.
**A1**: This question is fundamental and others are also interested in this issue. The proof of Theorem 4 demonstrates that BOE-NMS does not overly suppress boxes compared to Original NMS, leading to the same results. In QSI-NMS, however, in some special cases, nodes within the same WCC might be incorrectly assigned to different subproblems, resulting in retaining some boxes that should have been suppressed. To provide a comprehensive and clear explanation, we have included our response in the global rebuttal. We kindly ask you to review our response in **A1** of the global rebuttal.
---
> **Q2**: How was the average latency calculated, more information if preferred. This is not clear. Please report the time of graph construction and the time of NMS, respectively.
**A2**: The latency calculation begins from the input of bounding boxes and ends when the retained bounding boxes are output. For a dataset containing $N$ images, latency is measured by using the bounding boxes generated per image as input, and the total latency for the $N$ images is averaged. To mitigate random errors, this measurement is repeated 5 times, and the average of these measurements is used as the final average latency. Thus, the average latency is the average time taken to execute NMS.
We do not need any additional time to construct the graph. Constructing the graph and topological sorting incurs an additional $\mathcal{O}(\vert \mathcal{V}\vert + \vert \mathcal{E}\vert)$ computational overhead. Although this does not affect the complexity of our methods, it can slow down their practical performance. To address this issue, we design our algorithm such that the order of the nodes we iterate over follows a valid topological sort. Therefore, dynamic programming can be completed without explicitly constructing the graph. As a result, our algorithm does not incur the extra overhead of graph construction.
For QSI-NMS, lines 174-177 explain why explicit graph construction is unnecessary. The same applies to BOE-NMS, where sorting the boxes by confidence scores from high to low already provides a valid topological sort. We can also quickly locate the mutually suppressing boxes without sorting, thereby obtaining $\mathcal{G}$, and performing dynamic programming yields the same results.
However, In BOE-NMS, not sorting in advance would introduce additional computational overhead of constructing the graph and topological sorting. Please check the table below:
| method | average latency ($\mu s$) | AP 0.5:0.95 (%) |
| --------------------------------- | ------------------------- | --------------- |
| Original NMS | 936.9 | 37.2 |
| BOE-NMS | 201.8 | 37.2 |
| BOE-NMS (with graph construction) | 558.0 | 37.2 |
---
> **Q3**: How is the performance of the proposed algorithm on the Yolo V10 and mask RCNN.
**A3**: At the time of conducting this research, YOLOv10 had not yet been open-sourced, and we currently do not have sufficient time to include experiments with it. However, we tested our methods on the Instance segmentation task using Mask R-CNN and YOLOv8, where our methods also demonstrated significant superiority over others. Please refer to the tables below.
Table 1: Latency ($\mu s$) of various methods
|| Original NMS | Fast NMS | Cluster-NMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| -------------------- | ------------ | -------- | ----------- | ------- | ------- | -------- |
| Mask R-CNN(R50-FPN) | 48.8 | 105.7 | 222.8 | 31.9 | 30.3 | **21.6** |
| Mask R-CNN(R101-FPN) | 45.3 | 113.1 | 205.0 | 32.3 | 29.8 | **21.5** |
| Mask R-CNN(X101-FPN) | 40.3 | 105.6 | 189.2 | 26.7 | 26.6 | **19.3** |
| YOLOv8n-seg | 1265.3 | 366.4 | 859.4 | 219.5 | 153.4 | **85.4** |
| YOLOv8s-seg | 740.0 | 269.2 | 736.2 | 158.6 | 115.8 | **61.9** |
Table 2: Box mAP(%) of various methods
| | Original NMS | Fast NMS | Cluster-NMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| -------------------- | ------------ | -------- | ----------- | ------- | ------- | -------- |
| Mask R-CNN(R50-FPN) | 41.0 | 40.4 | 40.9 | 41.0 | 40.8 | 40.4 |
| Mask R-CNN(R101-FPN) | 42.9 | 42.2 | 42.9 | 42.9 | 42.7 | 42.3 |
| Mask R-CNN(X101-FPN) | 44.3 | 43.6 | 44.2 | 44.3 | 44.1 | 43.7 |
| YOLOv8n-seg | 36.7 | 36.5 | 36.7 | 36.7 | 36.6 | 36.4 |
| YOLOv8s-seg | 44.7 | 44.5 | 44.7 | 44.7 | 44.6 | 44.4 |
Table 3: Mask mAP(%) of various methods
| | Original NMS | Fast NMS | Cluster-NMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| -------------------- | ------------ | -------- | ----------- | ------- | ------- | -------- |
| Mask R-CNN(R50-FPN) | 37.2| 36.9 | 37.1 | 37.2 | 36.9 | 36.7|
| Mask R-CNN(R101-FPN) | 38.6| 38.4 | 38.6 | 38.6 | 38.4 | 38.2|
| Mask R-CNN(X101-FPN) | 39.5 | 39.3| 39.5| 39.5| 39.3| 39.1|
| YOLOv8n-seg| 30.4| 30.4| 30.4| 30.4| 30.3| 30.2|
| YOLOv8s-seg| 36.7 | 36.6| 36.7| 36.7| 36.5| 36.5 |
---
> **Q4**: Although the proposed method is tested on the proposed NMS-Bench, experimental results in detection bench-mark algorithms are still preferred.
**A4**: To demonstrate the practical value of our methods, we not only tested them on NMS-Bench but also implemented them in torchvision. The results show that our methods are more efficient than the highly parallel CUDA NMS, which is the currently used program for NMS. For detailed information and experimental results, please refer to Appendix D.3.
---
---
Rebuttal Comment 1.1:
Title: Comments
Comment: Thanks for the detailed responses and additional experiments, which solve my concerns and questions. | Summary: This work focuses on improving the latency of Non-Maximum Suppression (NMS), a crucial step for nearly all object detectors. The work analyzes NMS, as a directed acyclic graph (DAG) treating bounding boxes as nodes, and suppression relationships as arcs allowing NMS solutions based on dynamic programming. Based on this graph interpreation, the work proposes two new approximate versions of NMS, named QSI-NMS, and BOE-NMS with different precision v/s latency tradeoffs. Finally, the proposed NMS approaches are evaluated on a new benchmark NMS-bench, showing improved latency with little to no mAP loss.
Strengths: 1. The paper is written adequately, and offers a new graph theory perspective for non-maximum suppression, further exploring avenues for new research in the area.
2. The proposed approaches show improved latency compared to other NMS approaches including original (greedy NMS), Fast NMS and Cluster NMS, while maintaining mAP. This is achieved without fine-tuning the underlying model.
Weaknesses: 1. The paper does not compare against (or even cite) other approximations to NMS proposed in the literature such as MaxPoolNMS[1], PSRR-MaxpoolNMS [2] or ASAP-NMS [3]. It's unclear how the contribution, and performance of proposed approximations differs from the literature. For example, the idea behind BOE-NMS, using locality b/w suppression relationships is already explored in the above works.
2. Can this approach be utilized for two-stage object detectors as well for both stages of NMS? The proposed benchmark only applies the methods this for single-stage object detectors, it's unclear how the method performs on single stage detectors.
[1] Cai, Lile, et al. "Maxpoolnms: getting rid of nms bottlenecks in two-stage object detectors." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2019.
[2] Zhang, Tianyi, et al. "Psrr-maxpoolnms: Pyramid shifted maxpoolnms with relationship recovery." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2021.
[3] Tripathi, Rohun, et al. "Asap-nms: Accelerating non-maximum suppression using spatially aware priors." _arXiv preprint arXiv:2007.09785_ (2020).
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The worst case complexity of the proposed approaches is still O(n log n), while other approaches, stated above claim O(n) complexity. Does this imply the other approaches scale better?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The work does an adequate job of discussing the limitations of the proposed work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your feedback and for recognizing the value of our new graph theory perspective on non-maximum suppression. We will address your concerns in the following responses.
---
> **Q1**: The worst case complexity of the proposed approaches is still $\mathcal{O}(n \log n)$, while other approaches, stated above claim $\mathcal{O}(n)$ complexity. Does this imply the other approaches scale better?
**A1**: Among the papers you mentioned, only PSRR-MaxpoolNMS claims to achieve $\mathcal{O}(n)$ complexity. However, we do not believe that PSRR-MaxpoolNMS has better scalability compared to our methods for two reasons:
First, we implemented PSRR-MaxpoolNMS and conducted efficiency comparisons. To better illustrate the impact of bounding box count on runtime, we performed experiments on YOLOv5-N, which has the highest number of bounding boxes, as shown in Figure 3 (a) of the global rebuttal PDF. Original NMS has the highest time cost due to its quadratic growth. For a clearer comparison between our methods and PSRR-MaxpoolNMS, we excluded Original NMS, as shown in Figure 3 (b). As the number of boxes increases, PSRR-MaxpoolNMS is faster than BOE-NMS and QSI-NMS but consistently slower than eQSI-NMS.
Second, strictly speaking, the complexity of PSRR-MaxpoolNMS is not $\mathcal{O}(n)$. PSRR-MaxpoolNMS requires prior knowledge of the input image size and generates confidence score maps related to the size of the image. This aspect is not considered in the complexity analysis (whereas our methods are designed and analyzed independently of the image size). During the Channel Recovery stage of PSRR-MaxpoolNMS, the complexity of computing the nearest distances for channel mapping is $\mathcal{O}(n_sn_rn)$, where $n_s$ and $n_r$ represent the number of anchor box scales and ratios, respectively. These quantities vary with different datasets and detectors and increase as image size and object count increase. The remaining stages: Spatial Recovery, Pyramid MaxpoolNMS, and Shifted MaxpoolNMS can all be completed in $\mathcal{O}(n)$ time. Thus, the overall complexity of PSRR-MaxpoolNMS is $\mathcal{O}(n_s n_r\lfloor \frac{W}{\beta} \rceil \lfloor \frac{H}{\beta} \rceil + n_s n_r n)$.
---
> **Q2**: The paper does not compare against (or even cite) other approximations to NMS proposed in the literature such as MaxPoolNMS, PSRR-MaxpoolNMS or ASAP-NMS. It's unclear how the contribution, and performance of proposed approximations differs from the literature.
**A2**: Our research focuses on NMS methods applicable to general cases, so we inadvertently missed these important papers. Thank you very much for pointing this out. We will cite these papers in our revised version. Next, we will illustrate how our contributions differ from these papers through the following comparisons:
Maxpool NMS and ASAP-NMS can only be used in the first stage of two-stage detectors, while PSRR-MaxpoolNMS, although applicable to various stages, is limited to anchor-based models. Our methods, however, can be used in any stage of any detector because we address general cases (see Section 2).
The complexity of Maxpool NMS is $\mathcal{O}(n_sn_r\lfloor\frac{W}{\beta}\rceil\lfloor\frac{H}{\beta}\rceil+n\log n)$, ASAP-NMS is $\mathcal{O}(n^2)$, and the complexity of PSRR-MaxpoolNMS has been analyzed in **A1**. None of these methods are more efficient than eQSI-NMS.
These methods involve many manually defined hyperparameters and are complex to implement, which limits their generalization across different models and datasets. In contrast, our methods require no additional parameters beyond those in Original NMS and are easy to implement.
---
> **Q3**: The idea behind BOE-NMS, using locality suppression relationships is already explored in the above works. What is the difference between BOE-NMS and these methods?
**A3**: Although these methods also consider locality, BOE-NMS employs a fundamentally different approach and analysis. Not suppressing non-overlapping boxes is a very intuitive idea, but the key is how to locate these boxes. We discuss this issue in Appendix C.3 and prove that directly determining non-overlapping boxes is a more difficult problem than checking whether the centroid of one box falls within another box (the basic idea of BOE-NMS), as shown in Appendix B.6. Therefore, BOE-NMS can be efficiently implemented. In contrast, methods in the literature use complex techniques to find approximate solutions to this problem. Additionally, BOE-NMS is rigorously proven not to cause any loss of accuracy (Theorem 4), whereas the methods in the literature are not as rigorous and exhibit some degree of accuracy degradation.
---
> **Q4**: Can this approach be utilized for two-stage object detectors as well for both stages of NMS? The proposed benchmark only applies the methods this for single-stage object detectors, it's unclear how the method performs on two-stage detectors.
**A4**: Yes, our methods are applicable to various stages of various detectors. We tested not only one-stage detectors but also two-stage detectors (Faster R-CNN). Please refer to Table 1 in Section 5.2. We have also added experiments for the first stage in Faster R-CNN. The results demonstrate that our methods still exhibit strong performance advantages, with QSI-NMS and eQSI-NMS even surpassing Original NMS in terms of accuracy. Please see the tables below.
Table 1:R50-FPN
||Original NMS|Fast NMS|Cluster-NMS|BOE-NMS|QSI-NMS|eQSI-NMS|
|---|---|---|---|---|---|---|
|mAP(%)|40.2|39.6|40.2|40.2|40.3|40.3|
|latency($\mu s$)|14768|2469|4501|2341|950|457|
Table 2:R101-FPN
||Original NMS|Fast NMS|Cluster-NMS|BOE-NMS|QSI-NMS|eQSI-NMS|
|---|---|---|---|---|---|---|
|mAP(%)|42.0|41.3|42.0|42.0|42.1|42.0|
|latency($\mu s$)|13089|2411|4477|2325|995|464|
Table 3:X101-FPN
||Original NMS|Fast NMS|Cluster-NMS|BOE-NMS|QSI-NMS|eQSI-NMS|
|---|---|---|---|---|---|---|
|mAP(%)|43.0|42.4|42.9|43.0|43.1|43.1|
|latency($\mu s$)|12583|2383|4463|2265|984|457|
---
---
Rebuttal Comment 1.1:
Title: Official Response
Comment: I thank the authors for their detailed response. My main concern regarding comparison with prior work, especially PSRR-MaxPoolNMS has been partially addressed in the rebuttal.
The proposed work provides a new perspective for NMS using graph theory, and the best method offered eQSI-NMS offers improved latency, while it's still unclear how much it trades-off in terms of performance compared to PSRR-MaxPoolNMS. I've improved my rating, as my perception of the paper has improved.
I would still like to see detailed mAP comparison on NMS-bench against PSRR-MaxPoolNMS. If the authors have implemented the algorithm, I don't see why mAP performance has not been reported in the rebuttal for PSSR-MaxPoolNMS?
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the time and effort you have invested in reviewing our rebuttal, and we are pleased that our response has addressed your main concern.
We are also more than happy to provide a comparison of the mAP performance comparison between our methods and PSRR-MaxpoolNMS. We apologize for being unable to fully present this in the rebuttal, as we had reached the character limit. We tested anchor-based models (Faster R-CNN, YOLOv5) where PSRR-MaxpoolNMS is applicable, using the MS COCO dataset. Please see the results in the tables below.
Table 1: Faster R-CNN R50-FPN (average #bounding boxes: 251)
||Original NMS|PSRR-MaxpoolNMS|BOE-NMS|QSI-NMS|eQSI-NMS|
| ---------------- | ------------ | ---- | ------- | ------- | -------- |
| mAP(%) |39.8| 37.5 | 39.8 | 39.5 | 39.3 |
| latency($\mu s$) |53.0| 89.0 | 43.1 | 34.3 | 24.6 |
Table 2: Faster R-CNN R101-FPN (average #bounding boxes: 236)
||Original NMS|PSRR-MaxpoolNMS|BOE-NMS|QSI-NMS|eQSI-NMS|
| ---------------- | ------------ | ---- | ------- | ------- | -------- |
| mAP(%)|41.8|39.5|41.8|41.5|41.4|
| latency($\mu s$)|45.5|86.4|38.6|31.9|23.0|
Table 3: Faster R-CNN X101-FPN (average #bounding boxes: 214)
|| Original NMS | PSRR-MaxpoolNMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| ---------------- | ------------ | --------------- | ------- | ------- | -------- |
| mAP(%)| 43.0| 40.5| 43.0| 42.7| 42.5|
| latency($\mu s$) | 37.0| 86.9| 33.4| 28.6| 20.6|
Table 4: YOLOv5-N (average #bounding boxes: 2898)
|| Original NMS | PSRR-MaxpoolNMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| ---------------- | ------------ | --------------- | ------- | ------- | -------- |
| mAP(%)| 27.8| 26.5| 27.8| 27.5|27.4|
| latency($\mu s$)|8568.5| 599.6| 906.2|628.0|325.6|
Table 5: YOLOv5-S (average #bounding boxes: 1974)
| | Original NMS | PSRR-MaxpoolNMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| ---------------- | ------------ | --------------- | ------- | ------- | -------- |
| mAP(%) | 37.2| 35.6 | 37.2 | 36.9 | 36.6 |
| latency($\mu s$) | 3858.2 | 409.4 | 547.7 | 408.2 | 217.5 |
Table 6: YOLOv5-M (average #bounding boxes: 1810)
| | Original NMS | PSRR-MaxpoolNMS | BOE-NMS | QSI-NMS | eQSI-NMS |
| ---------------- | ------------ | --------------- | ------- | ------- | -------- |
| mAP(%)| 45.1 | 43.1 | 45.1 | 44.9 | 44.5 |
| latency($\mu s$) | 2918.1| 380.5 | 424.7 | 371.3 | 197.4|
As you can see, eQSI-NMS achieves the lowest latency while maintaining a favorable trade-off with mAP. However, PSRR-MaxpoolNMS experiences a $1–2$% mAP accuracy loss in both the Faster R-CNN and YOLOv5 models. This is likely due to the following reasons:
1. Some hyperparameters need adjustment when using the MS COCO dataset, as MS COCO is more complex compared to PASCAL VOC, with a richer variety of categories and broader scenes. Therefore, the number of scales and ratios may need to be increased to suit the MS COCO dataset. In contrast, our proposed methods do not require additional parameters beyond those used in Original NMS, making them more applicable to general cases.
2. The PSRR-MaxpoolNMS paper does not mention how the input image size and the settings of $W$ and $H$ in PSRR-MaxpoolNMS are determined. In our implementation, we set this to $640 \times 640$ to accommodate all images in the MS COCO dataset. This might affect accuracy, though we believe the impact is minimal. This also indicates that our method offers better generalizability.
In the case of Faster R-CNN, the latency performance of PSRR-MaxpoolNMS is not competitive. This is because PSRR-MaxpoolNMS requires 8 maxpooling operations, which, although not affecting the algorithm's complexity, introduces a large constant factor that hampers efficiency when the number of bounding boxes is small (e.g., the average number of bounding boxes in the three Faster R-CNN models is less than 300). However, it performs well when the number of bounding boxes is large (e.g., YOLOv5-S has an average of 2898 bounding boxes). This demonstrates that the speedup of PSRR-MaxpoolNMS is highly dependent on the degree of parallelism, whereas our method directly reduces computational overhead (see Figure 6 in Appendix D.3), making it hardware-agnostic and suitable for resource-constrained edge devices.
In summary, the limitations of PSRR-MaxpoolNMS in improving efficiency while balancing accuracy are significant. In contrast, our method, which uses the same inputs and parameters as Original NMS, is a plug-and-play algorithm that can directly replace Original NMS. To demonstrate this, we implemented our method in the torchvision library and compared it with the highly parallel CUDA NMS [1], where our method exhibited significant superiority (see Table 7 in Appendix D.3).
Thanks again for the time you invested in writing your comments. We hope this response can thoroughly address your concern.
---
[1] TorchVision maintainers and contributors. TorchVision: PyTorch’s Computer Vision library, November 2016. | Rebuttal 1:
Rebuttal: We appreciate the reviewers' thoughtful feedback on our work. We are grateful for the reviewers' recognition of the following strengths:
1. **Innovative Application of Graph Theory**:
- Reviewer pEVt highlighted that our paper introduces a new graph theory perspective for non-maximum suppression (NMS), which explores new research avenues in this area.
- Reviewer snSe praised our innovative application of graph theory in the QSI-NMS and BOE-NMS algorithms, noting that it provides a detailed theoretical basis and offers a new perspective for understanding and optimizing NMS.
2. **Significant Performance Improvements**:
- Reviewer HR1f noted the impressive speedups achieved by our algorithms, with QSI-NMS providing up to a $10.7\times $ speedup and BOE-NMS achieving $5.1\times$, all while maintaining mean Average Precision (mAP).
- Reviewer kDQX recognized the substantial speed increases of eQSI-NMS with minimal loss in accuracy, which is highly beneficial for real-time object detection applications.
- Reviewer pEVt also acknowledged the improved latency of our proposed approaches compared to traditional NMS methods, including greedy NMS, Fast NMS, and Cluster NMS, without the need for fine-tuning the underlying model.
3. **Comprehensive and Robust Evaluation**:
- Reviewer kDQX appreciated our comprehensive evaluation using the newly developed NMS-Bench, which offers detailed comparisons of performance improvements over traditional methods and enhances the robustness of our results.
- Reviewer snSe commended the construction of the benchmark, highlighting its solid contribution to the community by standardizing the evaluation of NMS algorithms and supporting further research.
4. **Valuable Theoretical Insights and Practical Contributions**:
- Reviewer HR1f emphasized the detailed analysis of the intrinsic structure of NMS through graph theory, which offers valuable insights into its computational bottlenecks and optimization strategies.
- Reviewer snSe acknowledged that our work provides a new perspective and tool for understanding and optimizing the NMS step in object detection, which contributes both theoretically and practically to the field.
- Reviewer kDQX highlighted the practical impact of our proposed methods, particularly eQSI-NMS, in offering substantial speed increases with minimal loss in accuracy, which is crucial for real-time applications.
We would also like to express our gratitude to the reviewers for their thoughtful questions and valuable suggestions. These have been instrumental in guiding the refinement of our work. We will address the reviewers' concerns by clarifying concepts, providing additional experiments, and offering further explanations within the scope of the feedback provided.
---
Both Reviewer snSe and Reviewer HR1f have expressed concerns about the slight decrease in accuracy with QSI-NMS, so we address this common concern here.
> **Q1**: Why does QSI-NMS have a negligible mAP loss? What conditions can lead to accuracy loss?
**A1**: In QSI-NMS, we use a divide-and-conquer strategy, which means that bounding boxes in different subproblems do not affect each other. In some special cases, QSI-NMS may assign nodes from the same Weakly Connected Component (WCC) to different subproblems, potentially causing some nodes that should have been suppressed to be retained.
We provide a case study in the PDF with results from the YOLOv8-M model on the COCO dataset. In Figure 1 (a), the blue boxes represent the outputs of Original NMS/BOE-NMS, while Figure 1 (b) shows the outputs of QSI-NMS, with red boxes indicating additional boxes retained by QSI-NMS. It can be seen that QSI-NMS retains four additional boxes.
For example, consider the box/node numbered 188. The WCC containing this node is shown in Figure 2 (a). All other nodes in the WCC would suppress node 188, but when we use $\preceq_{M}$ to define the partitioning criterion, node 188 ends up in a different subproblem than other nodes, as shown in Figure 2 (b). The figure shows a partial structure of the QSI-tree: solid lines indicate parent-child relationships, and dashed lines indicate ancestor-descendant relationships. The red nodes are nodes from the WCC, while the black node 148 is the Lowest Common Ancestor (LCA) of nodes 188 and 201; node 156 is the LCA of nodes 194 and 193. Since each node in this WCC can only be suppressed by its red ancestor nodes (Lines 201-208), node 188 is not suppressed. However, node 194 is still suppressed because node 201 is its ancestor.
This example highlights the core of QSI-NMS design: the pivot selection and the partitioning criterion. If we choose these two appropriately, the accuracy loss of QSI-NMS can be negligible. In our algorithm design: pivot selection chooses the most representative nodes (with the highest confidence scores), so node 194 is correctly suppressed by node 201 even after being placed in a different subproblem from node 193. The partitioning criterion aims to assign nodes from the same WCC to the same subproblem as much as possible, which helps reduce cases like node 188 being incorrectly retained. We also discuss other partitioning criteria in Appendix C.2.
---
Pdf: /pdf/48d5d8f4438278974453d7f932ba2cfab8911f79.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ALPS: Improved Optimization for Highly Sparse One-Shot Pruning for Large Language Models | Accept (poster) | Summary: The paper introduces ALPS, an optimization-based framework for one-shot pruning of large language models (LLMs). ALPS leverages an ADMM-based algorithm with operator splitting and preconditioned conjugate gradient methods to achieve improvements in sparsity and perplexity over state-of-the-art methods, particularly in high-sparsity regimes.
Strengths: **S1.**
ALPS achieves substantial reductions in test perplexity and improved zero-shot benchmark performance for highly sparse models.
**S2.**
Provides theoretical convergence guarantees for $\ell_0$-constrained optimization problems with ADMM solver.
**S3.**
Implements efficient post-processing techniques via conjugate projected gradient, enhancing computational performance.
Weaknesses: **W1.**
Important Reference Missing: The paper does not cite "Fast and optimal weight update for pruned large language models" by Boža, which addresses a similar problem using an ADMM-based optimization algorithm. This omission is significant as both papers share highly similar problem definitions and solutions.
**W2.**
Limited Novelty: ALPS closely resembles methods discussed in both "Fast and optimal pruning" and "Progressive weight pruning of deep neural networks using ADMM." The primary difference is ALPS's specific application to LLMs. However, this differentiation might not be substantial enough to establish ALPS as a novel contribution. Btw, the reference of "Progressive weight pruning of deep neural networks using ADMM." seems also missing in the paper.
**W3.**
Performance at High Sparsity: At very high sparsity levels, ALPS’s perplexity remains significantly higher than the dense model. This indicates that the pruned LLMs by ALPS may still perform poorly and being practically useless, although they are better than those pruned by comparison methods.
**W4.**
Unfair Comparison: The comparison with methods like Wanda and DSnoT, which do not involve retraining after pruning, is solely based on perplexity. This is unfair because it overlooks the overall running time per iteration/epoch. ALPS’s performance should be compared with these methods by considering both perplexity and computational efficiency to provide a more balanced evaluation. In addition, I believe if Wanda or DSnoT is combined with some re-training techniques, they can achieve a much lower perplexity as well.
Reference:
Boža, Vladimír. "Fast and optimal weight update for pruned large language models." arXiv preprint arXiv:2401.02938 (2024).
Ye, Shaokai, et al. "Progressive weight pruning of deep neural networks using ADMM." arXiv preprint arXiv:1810.07378 (2018).
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to my points listed in the weakness section. Besides those,
1. Could you provide a clearer explanation of the practical implications of your theoretical contributions?
2. I encourage the authors to report the overall running time per epoch for ALPS in comparison with Wanda and DSnoT.
3. Could you provide the perplexity of the dense models (without any pruning) in all the tables in the paper?
4. How does ALPS's speed of convergence compare with the referenced methods, and what practical benefits does the novel penalty parameter update scheme offer?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have not thoroughly discussed the limitations and potential negative societal impacts. They should address the limited novelty compared to existing ADMM-based methods and discuss the risks and mitigation strategies for the misuse of more accessible powerful models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Reply to W1 and W2:** Thanks for these references—we will include them in our revised paper. ALPS differs significantly from the mentioned works as follows:
+ Comparison with "Progressive weight pruning of deep neural networks using ADMM":
This paper applies ADMM to the original loss function, requiring expensive full network training via SGD. In contrast, ALPS performs post-training pruning, applying ADMM to layerwise reconstruction problems (least squares objective) with cardinality constraints. This approach enables us to scale ALPS to 30B parameters on a single V100 GPU.
+ Comparison with "Fast and optimal pruning" (Boža): In this interesting work, Boža performs pruning post-training using a layerwise reconstruction loss. They apply ADMM to solve a least squares problem for a *given* sparsity mask. Boža selects the sparsity mask via iterative magnitude pruning. Using ADMM speeds up the back-solve procedure.
+ ALPS, in contrast, is an end-to-end approach that directly targets the $\ell_0$ constrained least squares problem with theoretical guarantees. **ALPS simultaneously optimizes over the weights and the sparsity pattern, unlike the work by Boža where ADMM is used to perform the backsolve operation.** Therefore, ALPS achieves lower layerwise reconstruction loss than Boža, as shown in Table 1 in the general rebuttal.
+ Since Boža employs ADMM for backsolving [finding the weights of the least squares problem given a mask], we also compare it with our PCG procedure, which serves as a backsolve method. We tested both approaches for computing weights on a given support. Results show that PCG outperforms Boža’s ADMM procedure in both computational time and objective value (see Table 2 in general rebuttal). The time advantage of PCG stems from its ability to backsolve without explicitly computing matrix inverses.
+ Novelty of ALPS:
Please note that ALPS has several differences/innovations compared to standard ADMM:
- A novel penalty parameter updating scheme that enables finding high-quality support and accelerates convergence. As we show in our response to Q1 and Q4, in practice, our proposed ADMM has better convergence properties compared to the usual version of ADMM for $\ell_0$-constrained least squares problems.
- As far as we know, our theory is the **first** theoretical convergence result for ADMM applied to $\ell_0$-constrained problems.
- A PCG-based post-processing technique optimized with vectorization and GPU parallelism, achieving up to 200x speedup compared to direct backsolve for updating weights on a given support.
These contributions, taken together, make our work significantly different from existing ADMM methods especially in the area of LLM pruning.
**Reply to W3:** Please note that our study demonstrates ALPS's effectiveness as a pruning method across all sparsity levels, including 60% sparsity and N:M sparsity pattern. In these cases, the pruned models maintain competitive performance.
You are right that model utility can deteriorate with one-shot pruning at very high sparsity levels — in such cases, we can recover model performance via retraining. Since ALPS-pruned models outperform models pruned by competing methods, they would require fewer retraining epochs to recover lost performance [1]. Consequently, ALPS remains valuable at high sparsity levels by reducing computational costs during the expensive retraining phase.
[1] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, H., Ponomareva, N., & Mazumder, R. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization.
**Reply to W4:** Thanks for your thoughtful comment. In our response to reviewer ZgXD, we report the running time of ALPS and other one-shot pruning methods. We emphasize that with retraining, the total runtimes of Wanda and DSnoT become much higher than ALPS. As reported in Wanda, fine-tuning LLaMa-7B with LoRA takes about 24 hours on a V100 GPU, and full parameter fine-tuning takes 24 days. In contrast, ALPS prunes the model in less than 1 hour. As a layer-wise pruning method, ALPS loads weights and activations onto the GPU layer by layer, enabling it to prune models as large as OPT-30B on a single V100 GPU, while retraining often requires more careful memory management. Given these considerations, we think that retraining methods would be much more computationally expensive than ALPS. Please recall that our focus here is on demonstrating the effectiveness of ALPS as a one-shot pruning method.
As a side note, as we explained in our reply to ZgXD, the runtime of ALPS can be further decreased by loosening the convergence criteria of ALPS.
**Reply to Q1 and Q4:** Our proposed $\rho$ update scheme, theoretically supported by Theorem 1, ensures that ALPS converges rapidly while finding high-quality solutions. In contrast, ADMM with a fixed $\rho$ may fail to converge when applied to $\ell_0$ constrained least squares problems. To demonstrate this, we compared ALPS with ADMM using fixed $\rho$ values. We examined two key metrics: reconstruction loss (objective) and the rate of change of the support (of weights) between consecutive iterations (this measures the convergence speed of the algorithm). The results are presented in the general rebuttal (Tabl 3 and 4). Results reveal that ADMM with a large $\rho(=3)$ converges quickly but yields poor solutions, while a small $\rho(=0.3)$ fails to converge. ALPS, utilizing our $\rho$ update scheme, achieves both rapid convergence and high-quality solutions.
**Reply to Q2:** Thanks for your comment — please refer to our response to reviewer ZgXD.
**Reply to Q3:** Please refer to Table 5 in general rebuttal. In the table, we omit LLaMA results on LAMBADA due to its poor performance without modifications.
---
In light of the above discussions and results, we kindly ask the reviewer to consider increasing their evaluation.
---
Rebuttal Comment 1.1:
Comment: I sincerely appreciate the authors' response, which addressed some of my previous concerns. However, given the current dispersion of ratings, I prefer to proceed cautiously and will consider the rebuttal more thoroughly in the next phase of discussion. I am inclined to maintain my current score for now but will remain open to adjusting it during the next phase.
---
Reply to Comment 1.1.1:
Comment: Thanks for your feedback and thoughtful comments. We appreciate your willingness and consideration to be open to revising your score. Please let us know if you have any additional questions or need clarification. | Summary: This work presents an LLM pruning framework that formulates the problem as finding a sparse weight matrix to reconstruct the layer-wise activations. This work incorporates the operator splitting technique and preconditioned conjugate gradient methods to solve the pruning problem. Experiments demonstrate that the proposed method achieves better single-layer reconstruction error and improved performance on downstream tasks.
Strengths: This work is motivated by a clear theoretical rationale.
Weaknesses: Experiments could be more solid since many experiments are run on OPT, which is somewhat out-of-date. It would also be better to consider more challenging benchmarks, such as GSM8K or other questions that require generating a long answer.
Technical Quality: 3
Clarity: 3
Questions for Authors: It would be better to compare with other representative pruning methods beyond the one-shot unstructured pruning, and to explore how this method could be integrated with others.
This work discusses engineering optimizations, such as incorporating PCG with GPU. Will you open-source the code or provide more efficient results from these engineering efforts?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: There is no section for limitation, but the author discusses some in the conclusion. One limitation is the setting of layer-wise activation reconstruction. Recent works, especially the activation-aware pruning methods, reported that activations are not equally important.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thoughtful feedback. Below, we provide some answers/clarifications.
**Experiments could be more solid since many experiments are run on OPT, which is somewhat out-of-date. It would also be better to consider more challenging benchmarks, such as GSM8K or other questions that require generating a long answer.**
**Reply:** We appreciate the reviewer's valuable suggestion. Based on your comment and that of reviewer ZgXD, we will consider comparing ALPS with other methods on the recent model LLaMA-3 and broaden our assessment by incorporating knowledge-intensive and reasoning-based tasks, specifically MMLU and GSM8K.
**It would be better to compare with other representative pruning methods beyond the one-shot unstructured pruning, and to explore how this method could be integrated with others.**
**Reply:** Thank you for this insightful suggestion! Beyond unstructured pruning, numerous representative pruning methods focus on structured sparsity patterns, such as block sparsity [1] and row sparsity [2]. Though direct numerical comparisons with these methods are potentially challenging due to their different settings, the idea of integrating ALPS with structured pruning methods is intriguing. Having a pruned model with *both* unstructured and structured sparsity could optimize both storage costs and inference time. It would be certainly interesting to explore such an integration as future work.
[1] Gray, S., Radford, A., & Kingma, D. P. GPU kernels for block-sparse weights.
[2] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, H., Ponomareva, N., & Mazumder, R. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization.
**This work discusses engineering optimizations, such as incorporating PCG with GPU. Will you open-source the code or provide more efficient results from these engineering efforts?**
**Reply:** If the paper is accepted, we will release the codes. Additionally, we will provide explanations of such implementations and a tutorial on their usage. | Summary: This paper introduces ALPS, a novel optimization-based framework for one-shot unstructured pruning of LLMs. The key contributions are:
- Formulating LLM pruning as an l0-constrained optimization problem solved using operator splitting (ADMM).
- A penalty parameter update scheme to accelerate convergence.
- A post-processing step using preconditioned conjugate gradient (PCG) to refine weights.
- Theoretical convergence guarantees for the proposed algorithm.
The authors demonstrate that ALPS outperforms state-of-the-art pruning methods, especially at high sparsity levels, on various LLMs including OPT and LLaMA models.
Strengths: 1. The proposed method directly addresses the pruning problem without relying on heuristics.
2. The ADMM algorithm introduced in the paper comes with theoretical convergence guarantees, adding reliability to the approach.
3. The paper includes comprehensive experiments on large-scale models (up to 30B parameters), demonstrating consistent improvements over existing methods.
Weaknesses: 1. It would be beneficial to include evaluations on more recent models such as LLaMA-3, instruction-tuned models, and extremely large models (70B+ parameters) to demonstrate the method's applicability to the latest advancements in the field.
2. The current evaluation tasks are relatively limited. Expanding the evaluation to include knowledge-intensive tasks (e.g., MMLU) and reasoning-based tasks (e.g., GSM8K and HumanEval) would provide a more comprehensive assessment of the method's effectiveness.
3. The current focus is primarily on N:M sparsity patterns. It would be great to explore structured pruning, which could lead to more significant improvements in inference speed, making the approach more versatile and practical for real-world applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the runtime of ALPS compare to other methods, especially for very large models?
2. Can (how can) ALPS be extended to handle other types (structured sparsity) of structured sparsity patterns beyond N:M?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed some limitations of their work e.g. extending ALPS to incorporate structured pruning constraints and quantization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their thoughtful feedback. Below, we provide some answers/clarifications.
**It would be beneficial to include evaluations on more recent and extremely large models to demonstrate the method's applicability. The current evaluation tasks are relatively limited. Expanding the evaluation to include knowledge-intensive tasks and reasoning-based tasks would provide a more comprehensive assessment.**
**Reply:** We appreciate the reviewer's valuable suggestion. We will consider comparing ALPS with other methods on the recent model LLaMA-3 and broaden our assessment by incorporating knowledge-intensive and reasoning-based tasks, specifically MMLU and GSM8K. As a side remark, given our academic resource constraints (we conduct our experiments on a V100 GPU with 32GB memory), the largest model we’ve tried in the paper is OPT-30B.
**The current focus is primarily on N:M sparsity patterns. It would be great to explore structured pruning. Can ALPS be extended to handle other types (structured sparsity) of structured sparsity patterns beyond N:M?**
**Reply:** While we focus on unstructured sparsity and N:M sparsity, our approach can be generalized to handle other structured sparsity patterns, as discussed below.
Let $S$ denote the set of weight matrices with the desired structured sparsity pattern (in the case of unstructured sparsity, $S$ contains all matrices with no more than $k$ non-zero elements). Our method can be extended to find the optimal matrix in $S$ by modifying the $\mathbf{D}$-update step in Eq.(4) in our paper. We can set $\mathbf{D}^{(t+1)}$ to be the projection of $\mathbf{W}^{(t+1)}+\mathbf{V}^{(t+1)}/\rho$ onto $S$. Our approach remains efficient and can find high-quality solutions for sparsity patterns with low-cost projection operations, including:
+ Block sparsity [1]
+ Hierarchical sparsity [2]
+ Row sparsity [3]
Importantly, our convergence results (Theorem 1) and proposed PCG step directly apply to these structured sparsity patterns as well.
We thank the reviewer for bringing up this point, and we will be happy to discuss these extensions in our revision.
[1] Gray, S., Radford, A., & Kingma, D. P. GPU kernels for block-sparse weights.
[2] Wu, Y. N., Tsai, P. A., Muralidharan, S., Parashar, A., Sze, V., & Emer, J. Highlight: Efficient and flexible DNN acceleration with hierarchical structured sparsity.
[3] Meng, X., Ibrahim, S., Behdin, K., Hazimeh, H., Ponomareva, N., & Mazumder, R. OSSCAR: One-Shot Structured Pruning in Vision and Language Models with Combinatorial Optimization.
**How does the runtime of ALPS compare to other methods, especially for very large models?**
**Reply:** We compare the runtime of ALPS with other methods in the following table. Here, the runtime includes the time for generating input activations $\mathbf{X}$.
|Method| OPT-1.3B | OPT-2.7B | OPT-6.7B | OPT-13B | OPT-30B | LLaMA-7B|LLaMA-13B|
|-|-|-|-|-|-|-|-|
| MP |4.7|8.9|23|47|120|25|46|
| Wanda |99|161|280|502|1027|214|407|
| DSnoT |125|213|417|758|1528|347|651|
| SparseGPT|363|728|1621|2980|6662|1263|2319|
| ALPS |963|2360|6069|14323|48366|3043|7145|
| ALPS-simple|297|599|1375|2470|5021|1013|2595|
|Method |MP | Wanda | DSnoT |SparseGPT | ALPS | ALPS-simple|
|-|-|-|-|-|-|-|
|**Loss**|1.98e-1|2.26e-1|2.06e-1|8.54e-2|5.32e-2 |7.52e-2 |
ALPS employs an advanced optimization method to solve the layerwise reconstruction problem, which results in longer running times compared to other algorithms. However, it's important to note that ALPS's runtime is still negligible when compared to fine-tuning methods for LLMs. For context, Wanda [1] reports that fine-tuning LLaMa-7B with LoRA takes about 24 hours on a V100 GPU, while full parameter fine-tuning takes 24 days. In contrast, ALPS prunes the model in less than 1 hour.
Furthermore, in our paper, ALPS is run with a tight convergence criterion. We can improve its runtime further by using a variant, ALPS-simple, which uses a looser convergence criterion. As shown in the above tables, ALPS-simple achieves shorter running times compared to SparseGPT while still maintaining better utility, as measured by single-layer reconstruction loss. We are happy to include the performance results of ALPS-simple in the revised paper if it is of interest to the reviewers.
[1] Sun, M., Liu, Z., Bair, A., & Kolter, J. Z. A simple and effective pruning approach for large language models.
---
Rebuttal Comment 1.1:
Title: Response to rebuttals
Comment: Thank you for your detailed responses. The paper is in good shape, but still have some concerns remain unsolved:
1. I understand the resource constraints, but some results on models like LLaMA-3 or a discussion on scaling to 70B+ parameters would be valuable. OPT-30B is relatively old, and to my knowledge, it's sometimes easier to prune them than more recent 'overtrained' models, e.g. Llama-3 and Qwen 1.5 / 2.
2. Including knowledge-intensive (e.g., MMLU) and reasoning-based tasks (e.g., GSM8K) would strengthen the paper.
3. Theoretical analysis of extension is promising, and empirical results or case studies on these structured sparsity patterns would provide practical insights.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback. Based on your valuable comment, we have conducted further experiments to compare ALPS with SparseGPT, currently our strongest competitor, on LLaMA3-7B at 50%, 70%, and 2:4 sparsity levels. The results, as shown in the table below, demonstrate that ALPS outperforms SparseGPT on recent models LLaMA3.
In response to your suggestions, we are currently running extended experiments to compare ALPS on pruning LLaMA3 with other competitors [in the current paper] across a range of sparsity levels. We will include our current results and new results in a revision.
| 2:4 |Lambada|Piqa|Arc-easy|Arc-challenge|
|---|---|---|---|---|
|SparseGPT|36.93|70.40|59.01|28.84|
|ALPS|42.79|70.89|61.83|28.92|
| 50% |Lambada|Piqa|Arc-easy|Arc-challenge|
|---|---|---|---|---|
|SparseGPT|61.87|75.57 |72.39|39.25|
|ALPS|64.04|76.93|73.40|40.61|
| 70% |Lambada|Piqa|Arc-easy|Arc-challenge|
|---|---|---|---|---|
|SparseGPT|8.13|61.21|41.54|20.05|
|ALPS|16.42|64.58|46.89|21.84|
Additionally, we plan to include evaluation results for MMLU and GSM8K to further enhance the paper. Thanks again for your thoughtful feedback, which has strengthened our work. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful comments. In response to their comments, we have conducted additional numerical comparisons and included additional discussions in relation to the existing work. We present a summary below:
+ Comparison with existing methods & Novelty [see Ref 1PwE]: We've clarified key differences between ALPS and prior works on ADMM. In particular, we explain how our approach differs from a direct application of ADMM, and results in better performance. We’ve provided supporting numerical evidence.
+ New experimental results. Based on referee feedback, we perform additional numerical experiments in the rebuttal:
- [see Ref 1PwE] As a backsolve method, our proposed PCG outperforms Boža’s ADMM method in both computational time and objective value. ALPS also achieves lower layerwise loss compared to Boža.
- [see Ref 1PwE] Our proposed parameter updating scheme enables ALPS to converge rapidly while finding high-quality solutions. In contrast, standard ADMM may fail to converge.
- [see Ref ZgXD, 1PwE] While ALPS is already quite efficient, we can further accelerate ALPS by using a loose convergence criterion. This results in shorter running times compared to SparseGPT while still maintaining better utility.
+ Potential extensions [see Ref ZgXD]: We discuss how ALPS can be generalized to handle other structured sparsity patterns, including block, hierarchical, and row sparsity.
---
We provide some tables here for space reasons. We refer to these tables in our response to the reviewer 1PwE.
**Table 1:** The layerwise reconstruction loss on a single layer.
| Sparsity | 0.4| 0.5 | 0.6 | 0.7 | 0.8 | 0.9|
|-|-|-|-|-|-|-|
|ALPS|3.55e-3|7.56e-3|1.47e-2|2.77e-2|5.32e-2|1.13e-1|
|Boža|4.47e-3|9.53e-3|1.83e-2|3.36e-2|6.19e-2|1.25e-1|
**Table 2:** The layerwise reconstruction loss and runtime for applying PCG and Boža to find the optimal weights on a given support.
| Sparsity | 0.4| 0.5 | 0.6 | 0.7 | 0.8 | 0.9|
|-|-|-|-|-|-|-|
|ALPS-PCG(Loss)|5.0e-3|1.12e-2|2.27e-2|4.39e-2|8.51e-2|1.78e-1|
|Boža(Loss)|5.1e-3|1.12e-2|2.32e-2|4.49e-2|8.83e-2|2.00.e-1|
|ALPS-PCG(Time)|0.01s|0.01s|0.01s|0.01s|0.01s|0.01s|
|Boža(Time)|0.11s|0.11s|0.11s|0.11s|0.11s|0.11s|
**Table 3:** The reconstruction loss (objective) over iterations, comparing ALPS with ADMM using a fixed penalty parameter $\rho$.
| Loss/Iter |5 | 10 | 20 | 30| 50| 100|
|---|---|---|---|---|---|---|
| ALPS |1.63e-1 | 1.28e-1 | 5.95e-2 | 5.32e-2 |5.31e-2 |5.31e-2 |
| ADMM($\rho=0.3$) | 7.83e-2 | 7.55e-2 | 7.50e-2 | 7.47e-2 | 7.47e-2 | 7.45e-2 |
| ADMM($\rho=3$) | 9.32e-2 | 8.18e-2 |7.64e-2 | 7.53e-2 | 7.45e-2 |7.42e-2 |
**Table 4:** The rate of change of the support (of weights) between consecutive iterations, comparing ALPS with ADMM using a fixed penalty parameter $\rho$.
| Supp change / Iter |5 | 10 | 20 | 30| 50 | 100|
|---|---|---|---|---|---|---|
| ALPS |20.2% | 17.0% | 2.8% | 0.0% | 0.0% | 0.0% |
| ADMM($\rho=0.3$) | 6.4% | 7.0% | 7.0% |7.0% | 6.9%| 6.9%|
| ADMM($\rho=3$) | 0.2% | <0.1% | <0.1% |<0.1% | <0.1% | <0.1% |
**Table 5:** Perplexity and zero-shot evaluation results for dense models.
| Model | WikiText2↓| PTB ↓ | C4 ↓ | LAMBADA↑ |PIQA↑ |ARC-Easy↑| ARC-Challenge↑|
|---|---|---|---|---|---|---|---|
| OPT-1.3B |14.63|20.29|16.07|58.80|72.36|50.93|29.44|
| OPT-2.7B |12.47|17.97|14.34|64.82|74.81|54.34|31.31|
| OPT-6.7B |10.86|15.77|12.71|68.72|76.39|60.14|34.56|
| OPT-13B |10.13|14.52|12.06|70.23|76.88|61.83|35.75|
| OPT-30B |9.56 |14.04|11.44|72.39|78.18|65.40|38.14|
| LLaMA-7B |5.47|22.51|6.97|-|78.29|69.23|39.93|
| LLaMA-13B |4.88|28.87|6.47|-|78.78|73.27|45.56| | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Near-Optimal Distributionally Robust Reinforcement Learning with General $L_p$ Norms | Accept (poster) | Summary: The paper investigates distributionally robust Markov decision processes (RMDPs) in the discounted infinite-horizon setting. The authors consider the case where the transition kernel can be arbitrarily chosen from a prescribed (given) uncertainty set centred around a reference kernel, where the uncertainty set is specified using a smooth norm--the authors consider L_p norms. Assuming access to a generative model (simulator) that can sample from the reference transition kernel, the authors analyse a model-based approach that first constructs an empirical approximation of the reference transition kernel and then applies distributional robust value iteration to it. Considering the robust value function--which measures the worst case performance over all the possible transition kernels in the uncertainty set--the authors show that their method convergences to the optimal robust policy under two commonly adopted conditions (sa-rectangularity and s-rectangularity) describing the decoupled nature of the uncertainty sets. The authors establish upper bounds and sample complexity results for the procedure they consider (or any procedure that achieves a certain optimization error), as well as algorithmic-independent lower bounds.
Strengths: The paper makes several contributions. Firstly, the authors improve upon previous upper and lower bounds, demonstrating near minimax optimality for any choice of L_p norm. They show that the sample complexity for solving robust RL is at least the same as, and sometimes (when the uncertainty level is relatively large) smaller than, that of standard RL. This generalizes the findings of Shi et al., 2023, where minimax optimality and comparisons between robust RL and standard RL were only established for the total variation distance (p=1). Secondly, the results indicate that solving s-rectangular RMDPs is not more difficult than solving sa-rectangular RMDPs in terms of sample complexity.
On the technical side, the analysis seems non-trivial, utilizing a finite-sample approach established by the authors that leverages a novel dual formulation for RMDPs optimization, which could be of independent interest. The literature review is comprehensive, and the positioning and contributions of this work are clearly articulated.
Weaknesses: Overall, I found the main paper to be well-written, though it does contain some minor typos, formatting inconsistencies, and punctuation errors within the main text.
Establishing the lower bounds in the s-rectangularity setting for any L_p norm, not just the L_{\infty} norm, would strengthen the paper, enabling it to discuss minimax optimality also in that setting.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1- At the technical level, beyond considering a new dual formulation, what is the main difference compared to the proof scheme in Shi et al., 2023?
2- What challenges are involved in establishing lower bounds that hold for any L_p norm in the s-rectangularity case?
3- Is there a fundamental limitation to extending the results to the *entire* range of the uncertainty level?
4- On the motivation side, the authors write (page 2) that "in practice, numerous applications are based on optimizations or learning approaches that involve general norms beyond those that have already been studied". Could the authors provide relevant citations?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper includes numerous remarks that qualify the level of contribution and limitations of the results. The conclusion identifies the important task of establishing f-divergences as future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answer to reviewer QVRk**
We appreciate the reviewer for careful review and insightful feedback. It is rewarding to know that the reviewer recognizes the significance of our contributions. In what follows, we provide our response to the reviewer's comments.
> Overall, I found the main paper to be well-written, though it does contain some minor typos, formatting inconsistencies, and punctuation errors within the main text.
We will try to correct as much as possible all minor typos in the next version of the manuscript.
>* 1- At the technical level, beyond considering a new dual formulation, what is the main difference compared to the proof scheme in Shi et al., 2023?
The challenge in comparing to the work of Shi et al., 2023 is the following. Is it possible to generalize the work of Shi et al., 2023 to general $L_p$ norms and to $s$-rectangular case ?
One of the key of their proof was to bound the range of the value function by $1/\sigma$ for large radius $\sigma\geq 1-\gamma$. Is it possible to adapt it to $L_p$ RMDPs and to $s$-rectangular assumptions. Moreover is there a chance to concentrate more complex dual for $L_p$ RMDPs compare to TV case in Shi et al., 2023 ?
From upper bound perspective:
* **Adaptation and generalisation of key lemma from Shi et al., 2023 to reduce sample complexity in the $L_p$ context.**
Two new key lemmas 5 and 6 which are different from $TV$ case are derived, introducing $C_g$ coefficient which is not present in Shi et al., 2023.
* **Concentration of more complex dual for $L_p$ RMDPs compare to TV case**
Concentration lemma (lemma 8) is very different from Shi et al. 2023 with additional term to control. Indeed as the dual form for $TV$ and $L_P$ , $p>1$ involve respectively a scalar optimization and a vector in the case of $L_p$, uniform concentration in lemma 8 is more challenging in the $L_p$ case with factor depending on the geometry of the norm called $C_s$.
* **Adaptation and generalisation of the proof to $s$-rectangular case**
Contrary to Shi et al., 2023 , our proof tackles the problem of $s$-rectangular. We introduce quantities related to the stochasticity of the policy in the upper bound which is not present in Shi et al., 2023 to derive bound in the $s$-rectangular case.
* Finally **from a lower bound perspective** : new lower bound using counter example with stocastic optimal policy for $s$-rectangular case are derived and result from $sa$ rectangular is extended to $L_p$ norms.
> 2- What challenges are involved in establishing lower bounds that hold for any $L_p$ norm in the s-rectangularity case? and Establishing the lower bounds in the s-rectangularity setting for any L_p norm, not just the L_\{linfty\} norm, would strengthen the paper, enabling it to discuss minimax optimality also in that setting.
The lower bounds are for the case of $s$-rectangularity, which poses entirely new challenges compared to the case of $sa$-rectangularity: the
optimal policies may be stochastic and difficult to characterise as closed forms, compared to the deterministic ones in $sa$ cases. When using different norms, the corresponding optimal policies might even not have closed forms, which is also the bottleneck to extend the $L_\infty$ case (Theorem 4) to more general $L_p$ arbitrary norms (as in Theorem 2).
> Q3 Is there a fundamental limitation to extending the results to the entire range of the uncertainty level?
Indeed, we consider the full range of the uncertainty level in our theorems.
In Theorem 1 and 3, the entire possible range of uncertainty level $\sigma$ is considered (see Theorem 1 and 3 where we defined $\sigma_{max}$ and $\tilde{\sigma}_{max}$ in the $sa$- and $s$-rectangular case.
> Q4- On the motivation side, the authors write (page 2) that "in practice, numerous applications are based on optimizations or learning approaches that involve general norms beyond those that have already been studied". Could the authors provide relevant citations?
In the context of RMDPs the work of [1] proposed to use weighted norm to define a RDMPs. Moreover, in other application without the scope of RDMPS, general norm are used in robust optmization such as in [2,3].
give examples from other areas, such as supervised learning, adversarial learning.
>[1] Reazul Hasan Russel, Bahram Behzadian, and Marek Petrik. "Optimizing norm-bounded weighted ambiguity
sets for robust mdps" arXiv preprint arXiv:1912.02696, 2019
> [2] Dimitris Bertsimas, Dessislava Pachamanova, Melvyn Sim "Robust linear optimization under general norms
> [3] J Rony, L Gustavo, R Sabourin, E Granger : Decoupling direction and norm for efficient gradient-based l2 adversarial attacks
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. I continue to hold a positive outlook on this work, and I maintain my score.
---
Reply to Comment 1.1.1:
Comment: We are grateful for your feedback and will incorporate your suggestions into the final version of the manuscript. | Summary: This paper dives into the theoretical understanding of learning robust MDPs with a generative model. The robust set is modeled as a distribution ball induced by the general $L_p$-norm centered around the nominal model. The sample complexity is provided for both $\mathcal{S}\times\mathcal{A}$-rectangular and $\mathcal{S}$-rectangular robust sets and for the former case the minimax optimality is established.
Strengths: **Originality and Significance:**
1. The paper considers a generalized smooth version of $L_p$-norm with $p\geq 2$ (whose algorithm and analysis also easily extend to the TV case) for the robust set and provides the corresponding sample complexity for the first time.
2. The paper studies both $\mathcal{S}\times\mathcal{A}$-rectangular and $\mathcal{S}$-rectangular robust sets. For both cases, the corresponding sample complexity improves over the prior art for a special case (standard $L_p$ norm provided by [1]) when the robust set size is relatively large.
3. For the $\mathcal{S}\times\mathcal{A}$-rectangular case, the paper proves a matching lower bound showing its minimax optimality. For the $\mathcal{S}$-rectangular case, the paper provides a lower bound for the special case of $L_{\infty}$-norm robust set. The results also shows the learning $\mathcal{S}$-rectangular robust MDPs with general $L_p$-norm is no harder than learning $\mathcal{S}\times\mathcal{A}$-rectangular robust MDPs.
**Clarity and Quality:**
The paper is well written. All the theoretical results are sound and are well proved. Some typos exist but are minor. See my questions below.
**References:**
[1] Clavier, P., Pennec, E.L. and Geist, M., 2023. Towards minimax optimality of model-based robust reinforcement learning. *arXiv preprint arXiv:2302.05372*.
Weaknesses: 1. One of the key message is that learning general $L_p$-norm robust MDPs is easier than learning standard MDPs. However, this result is not superising given the prior arts on certain special cases such as the TV norm robust MDP [2]. So I think it is less discussed how the exact choice of the generalized $L_p$-norm would further affect the diffuculty of learning this type of robust MDPs.
2. Although theoretically sound, it is not well discussed when and why people need to consider modeling the inherent uncertainty of the MDP parameters using the generalized $L_p$-norm.
**References:**
[2] Shi L, Li G, Wei Y, Chen Y, Geist M, Chi Y., 2023. The curious price of distributional robustness in reinforcement learning with a generative model. Advances in Neural Information Processing Systems.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Regarding Section 8.3:
- (Line 687-689) It is claimed that the dual form is derived for *arbitrary* norms, which seems like an overclaim since the proofs still depend on the assumptions in Definition 1 (e.g., Line 718). I think this is a misleading claim and needs revision.
- It seems that the proof of duality in Section 8.3 resembles that in [1] without mentioning that work. Could the authors highlight a bit more the difference between the proofs here the and the proofs for duality in the previous work [1]?
2. Regarding Theorems 3 and 4 for the $\mathcal{S}$-rectangular case. The upper bound in Theorem 3 (Equation (21)) involves a minimization over two terms (this is different from the $\mathcal{S}\times\mathcal{A}$-case where only the first term appears, which I think comes from a different upper bound on the span of the value function), but the lower bound in Theorem (Equation (23)) for a special case only involves the first term in the minimization in the upper bound, which seems like a contradiction. Could the authors explain more on that?
3. Regarding Theorem 4. For the $\mathcal{S}$-rectangular robust set case, the lower bound is only for the standard $L_{\infty}$-norm case. Could the authors elaborate more on the difficulty in proving a lower bound for the general $L_p$-norm case?
4. Some typos and grammatical mistake that I found:
- Line 137 to 138.
- Line 144: simple -> simplex.
- Line 155: typo in defining the domain of the norm.
- Inconsistent usage of label (sometimes Theorem x but sometimes Theorem (x)), e.g., Lines 62 and 71, Lines 218 and 222.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the weakness section and the question section above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Aswer to revierwer Tr2g**
We appreciate the reviewer's comprehensive feedback and recognition of the significance of our contributions.
> Q1. One of the key message is that learning general $L_p$-norm robust MDPs is easier than learning standard MDPs. So I think it is less discussed how the exact choice of the generalized $L_p$-norm would further affect the difficulty of learning this type of robust MDPs.
Thank you for the clarification question. How different $L_p$ norm metric compared to TV affects the sample complexity or difficulty of learning RMDPs is present in two coefficients $C_g$ and $C_s$. For instance, in $sa$-rectangular cases, we recall that our sample complexity results is in the order of $O(\frac{ S A}{(1-\gamma)^2 \max \{ 1-\gamma, C_g \sigma\} \varepsilon^2} + \frac{ SA C_S \left\|1_S\right\| }{(1-\gamma)^2 \epsilon})$.
- **The coefficient $C_g>0$ is related to the geometry of the norm.** For classical non-weighted $L_p$ norms, this coefficient is bigger than $1$, leading to sample complexity smaller than in classical TV case. However, for arbitrary weighed norms, this coefficient can be smaller than 1, which imply a bigger sample complexity compared to TV cases.
- **The coefficient $C_s>0$ represent how smooth is the gradient of the norm.** This coefficient does not affect the sample complexity for small error $\epsilon \leq \left(\max \{1-\gamma, C_g \sigma\}\right) /\left(C_S\left\|1_S\right\|\right)$, with $\sigma$ the radius of the uncertainty ball, and $\left\| . \right\|$ the considered norm, as it is a second order term in terms of $\epsilon$ or burn in term. The smoother the gradient of the norm is, the smaller is $C_s$ which lead to smaller burn-in term.
> Q2. Although theoretically sound, it is not well discussed when and why people need to consider modeling the inherent uncertainty of the MDP parameters using the generalized $L_p$-norm.
Thank you for the insightful suggestion. We consider general $L_p$ norm since its soundness in both theory and practice. [4] use TV for RL learning of Online 3D Bin or [6] for Offline policy optimization and [5] use $L_\infty$ for learning S-rectangular Robust MDPs. Moreover, the general $L_p$ problem formulation has unique interesting optimization properties, namely, relaxing slightly the problems will lead to a closed-form dual problems ([1,2] developed respectively Value Iteration and policy-based methods to solve RMDPs with $L_p$ norms).
>[1] Navdeep Kumar, Esther Derman, Matthieu Geist, Kfir Levy, Shie Mannor "Policy Gradient for Rectangular Robust Markov Decision Processes"
>[2] Navdeep Kumar, Kfir Levy, Kaixin Wang, Shie Mannor "Efficient Policy Iteration for Robust Markov Decision Processes via Regularization"
>[4] Adjustable Robust Reinforcement Learning for Online 3D Bin Packing
Yuxin Pan, Yize Chen, Fangzhen Lin
>[5] B Behzadian, M Petrik, CP Ho Fast Algorithms for $ L_\infty $-constrained S-rectangular Robust MDPs, Advances in Neural Information Processing Systems, 2021
>[6] Lee, J., Jeon, W., Lee, B., Pineau, J., and Kim, K.-E. (2021). Optidice: Offline policy optimization via
Stationary distribution correction estimation.
> Q3 (Line 687-689) It is claimed that the dual form is derived for arbitrary norms, which seems like an overclaim since the proofs still depend on the assumptions in Definition 1 (e.g., Line 718). I think this is a misleading claim and needs revision.
Yes it is true, we will replace it, this is a typo.
> Q4 It seems that the proof of duality in Section 8.3 resembles that in [1] without mentioning that work. Could the authors highlight a bit more the difference between the proofs here the and the proofs for duality in the previous work [1]?
There are many differences between our work and Clavier et al. [2023], such as our key lemma 5 and 6 to improve the sample complexity, the Variance decomposition (equation 64 of our work) which is central in our paper and differ from Clavier et al. [2023]. Please refer to the General Response for all technical contributions and new challenge addressed in our paper upon Clavier et al. [2023].
> Q4. Regarding Theorems 3 and 4 for the $\mathcal{S}$-rectangular case. The upper bound in Theorem 3 (Equation (21)) involves a minimization over two terms (this is different from the $\mathcal{S} \times \mathcal{A}$-case where only the first term appears, which I think comes from a different upper bound on the span of the value function), but the lower bound in Theorem (Equation (23)) for a special case only involves the first term in the minimization in the upper bound, which seems like a contradiction. Could the authors explain more on that?
Thank you for the insightful question. In equation (20) of Theorem 3, using the result for the $L_\infty$ gives that the upper bound is exatly proportional to $\frac{SA}{\epsilon^2(1-\gamma)^2\max\{1-\gamma,\sigma \} }$. The other cases of the upper bound in equation (20) involve the quantity
$\min_{s \in \mathcal{S}} ( \lVert \pi_s^*\ \rVert_* \lVert 1_A \rVert , \lVert \hat{\pi_s} \rVert_* \lVert 1_A \rVert )=1$
in the case of $L_\infty$ RMDPs because.
$\min_{s \in \mathcal{S}} ( \lVert \pi_s^*\ \rVert_* \lVert 1_A \rVert , \lVert \hat{\pi_s} \rVert_* \lVert 1_A \rVert )=\min_{s \in \mathcal{S}} ( \lVert \pi_s^*\ \rVert_1 \lVert 1_A \rVert_\infty , \lVert \hat{\pi_s} \rVert_1 \lVert 1_A \rVert_\infty )=1$
So there is no contradiction with our lower bound which exactly match the upper bound.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response! I agree with your clarifications on several points that I am concerned with. But I think the proof of the duality result in the paper still ought to mention the existence of the results in Clavier et al. [2023]. I currently have no further questions. Given the answer and the contributions of the work, I am willing to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response, we will revise the final version of the manuscript, following your recommendations.
---
Rebuttal 2:
Title: Folllowing of the Aswer to revierwer Tr2g
Comment: > Q5 Regarding Theorem 4. For the $\mathcal{S}$-rectangular robust set case, the lower bound is only for the standard $L_{\infty}$-norm case. Could the authors elaborate more on the difficulty in proving a lower bound for the general $L_p$-norm case?
The lower bounds are for the case of $s$-rectangularity, which poses entirely new challenges compared to the case of $sa$-rectangularity: the
optimal policies can be stochastic and difficult to characterize as closed forms, compared to the deterministic ones in $sa$ cases. When using different norms, the corresponding optimal policies might even not have closed forms, which is also the bottleneck to extend the $L_\infty$ case (Theorem 4) to more general $L_p$/arbitrary norms (as in Theorem 2).
> Q6. Some typos and grammatical mistake that I found:
Many thanks for pointing out the typos. We have revised them and check throughout the entire paper again. | Summary: The paper presents an analysis of the sample complexity of solving distributionally robust Markov decision processes (RMDPs) with general Lp norms as the distance metric for the uncertainty set. The authors consider both the sa-rectangular and s-rectangular settings and provide near-optimal upper and lower bounds on the sample complexity.
Strengths: For sa-rectangular RMDPs, the authors derive a near-minimax optimal sample complexity upper bound that matches the lower bound, showing their result is tight for almost the full range of the uncertainty level. This improves upon prior work which only achieved minimax optimality for the specific case of TV distance.
For s-rectangular RMDPs, the authors provide the first sample complexity upper and lower bounds, showing that solving s-rectangular RMDPs is not harder than solving sa-rectangular RMDPs in terms of sample requirement. This is an interesting and non-trivial result, as s-rectangular RMDPs have a more complicated optimization formulation.
The authors show that solving robust MDPs can be at least as sample-efficient as, and sometimes more sample-efficient than, solving standard MDPs. This provides important motivation for the study and use of distributionally robust RL.
This work develops new technical tools such as new concentration lemmas to obtain tighter sample complexity bounds.
Weaknesses: The primary issue is that this paper closely aligns with Clavier et al. [2023]. To clarify the unique contributions, the authors should include more in-depth discussions comparing their work to Clavier et al. [2023]. For instance, the proof in Section 8.3 appears to follow Clavier et al. [2023] without proper citation.
A potential limitation of the paper is its focus on the tabular setting. It would be valuable to explore whether these insights can be applied to the function approximation setting as well. Nevertheless, this does not diminish the importance of the contributions presented in this work.
Technical Quality: 3
Clarity: 2
Questions for Authors: na
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have discussed limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answer to reviewer 5t5a**
We appreciate the reviewer's careful review and insightful feedback. It is rewarding to know that the reviewer recognizes the significance of our contributions. In what follows, we provide our response to the reviewer's comments.
> Q1) The primary issue is that this paper closely aligns with Clavier et al. [2023]. To clarify the unique contributions, the authors should include more in-depth discussions comparing their work to Clavier et al. [2023]. For instance, the proof in Section 8.3 appears to follow Clavier et al. [2023] without proper citation.
There are many differences between our work and Clavier et al. [2023], e.g., our key lemma 5 and 6 to improve the sample complexity and the Variance decomposition (equation 64 of our work) which is central in our paper and differ from Clavier et al. [2023]. Please refer to the General Response for all technical contributions and new challenge addressed in our paper upon Clavier et al. [2023].
Section 8.3 of this work about optimization duality is similar to Clavier et al. [2023], while our proof is slightly more general as it works for weighted $L_p$ norms. We will refer to it in our last version of the proof of this manuscript. Moreover, the difference in optimization part is that we are not relaxing the dual of RMDPs problem like in Lemma C2 of Clavier et al. [2023].
> Q2) A potential limitation of the paper is its focus on the tabular setting. It would be valuable to explore whether these insights can be applied to the function approximation setting as well. Nevertheless, this does not diminish the importance of the contributions presented in this work.
Thanks for proposing this interesting direction, investigating robust RL with linear function approximation or other settings such as online settings, or model-free algorithms are definitely interesting directions in the future work. Robust RL with linear function has inspired some recent works such as [1,2,3]. We believe the findings of current results in tabular cases about norms lays a solid foundation to carry out to cases with linear function approximation, e.g., the finding of using any $L_p$ norm may lead to less sample size requirement. While the entire pipeline in this work will need to adapt or change a lot since this setting involves additional challenges from problem definition (how to define a reasonable uncertainty set is still relatively open) towards algorithm design.
> [1] Blanchet, J., Lu, M., Zhang, T., and Zhong, H. (2024). Double pessimism is provably efficient for distributionally robust offline reinforcement learning: Generic algorithm and robust partial coverage. Advances in
Neural Information Processing Systems, 36.
> [2] H Wang, L Shi, Y Chi Sample complexity of offline distributionally robust linear markov decision processes arXiv preprint arXiv:2403.12946, 2024
> [3] Liu, Z. and Xu, P. (2024a). Distributionally robust off-dynamics reinforcement learning: Provable efficiency
with linear function approximation. arXiv preprint arXiv:2402.15399.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The clarification provided by the authors addresses my main concerns, and I would like to raise the score. | Summary: This paper proposes tighter than prior art sample complexity bounds for Robust Markov Decision Processes. In sa-rectangularity and s-rectangularity conditions with non-zero uncertainty measured using $L_p$ around a nominal transition kernel, the upper bound is $\frac{SA}{(1-\gamma)^3\\epsilon^2}$. The setup assumes access to a simulator to collect $N \times S \times A$ samples to first learn a transition kernel and determines optimal policy using distributional robust value iteration. The goal is utilize few samples there after to identify the optimal policy
Strengths: The paper is generally well-written, and results improve existing bounds. However, I didn't thoroughly read the proofs.
Weaknesses: It would be better has NSA been not used in Eq. 17, 18, 19, 20, 21 . Although the intention of the authors is clear, but that would improved readability.
In Table 1, the bounds are used in order sense, it would improve readability to have written in $\mathcal{O}(.)$, especially since the lower and upper bounds are of same order
Technical Quality: 3
Clarity: 3
Questions for Authors: It would great improve the draft with some empirical evaluation to back the theoretical bounds, even a toy example.
Especially important since the bounds are not significant improvement from the prior art
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please add experimental work to back the theoretical bounds
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Answer to Reviewer 6HyR**
We appreciate the reviewer for recognizing our contributions, and for providing constructive suggestions. Here some comments on the different questions raised :
> Q1: NSA been not use in Eq. 17, 18,19,20,21. Although the intention of the authors is clear, but that would improved readability.
Thanks for the suggestions. We add a new notation $N_{\mathsf{all}} = NSA$ to represent the total number of samples and use $N_{\mathsf{all}}$ in Eq. 17, 18,19,20,21 or other equations to make it clearer for the readers.
> Q2: In Table 1, the bounds are used in order sense, it would improve readability to have written in $\mathcal{O}(.)$, especially since the lower and upper bounds are of same order.
Thank you for the clarification question. Throughout the paper (including Table 1), we use the same metric to measure the sample complexity --- the order that defined as:
- "Let $\mathcal{X} := \big( S, A, \frac{1}{1-\gamma}, \sigma, \frac{1}{\varepsilon}, \frac{1}{\delta} \big)$. The notation $f(\mathcal{X}) = O(g(\mathcal{X}))$ or $f(\mathcal{X}) \lesssim g(\mathcal{X})$ indicates that there exists a universal constant $C_1>0$ such that $f\leq C_1 g$, the notation $f(\mathcal{X}) \gtrsim g(\mathcal{X})$ indicates that $g(\mathcal{X})=O(f(\mathcal{X}))$, and the notation $f(\mathcal{X})\asymp g(\mathcal{X})$ indicates that $f(\mathcal{X}) \lesssim g(\mathcal{X})$ and $f(\mathcal{X}) \gtrsim g(\mathcal{X})$ hold simultaneously. Additionally, the notation $\widetilde{O}(\cdot)$ is defined in the same way as ${O}(\cdot)$ except that it hides logarithmic factors."
We added the footnote inside Table 1 to introduce the definition of order as above and clarify that we omit $\widetilde{O}(\cdot)$ in the caption, as the reviewer suggested.
> Q3: It would great improve the draft with some empirical evaluation to back the theoretical bounds, even a toy example.
Thank you for the constructive suggestion. We will conduct an experiment with simple RMDPs defined with $L_p$ balls on a toy example to show the dependency in the radius of the ball $\sigma$ on the sample complexity.
> Q4: Does our sample complexity improvement significant improve prior art?
For RL problems, especially in theory, the sample complexity dependency on the salient parameter $\frac{1}{1-\gamma}$ is one of the significant terms that dominate the sample efficiency. A line of work has committed to the endeavor of gradually improve this dependency towards optimal in various RL problems including but not limited to standard RL [1,2], and robust RL [3,4,5] that this work focus on.
In $sa$-rectangular cases. We focus on improving the sample complexity when $\sigma >1-\gamma$, where the prior art is still far from optimal. In $sa$-rectangular cases, we improve upon the prior art [5] sample complexity $\widetilde{O}(\frac{S A}{(1-\gamma)^4 \varepsilon^2}$) to $\widetilde{O}(\frac{S A}{(1-\gamma)^2 \sigma \varepsilon^2})$ by at least a factor of $\frac{1}{1-\gamma}$ and goes to $\frac{1}{(1-\gamma)^2}$ when $\sigma \geq O(1)$. Notably, this results is minimax-optimal in all salients parameters that match our lower bound for general $L_p$ norms, as long as the accuracy level $\epsilon$ is in a reasonable range. This is a significant improvement and near-optimal result. The same improvement is done for the $s$-rectangular case, by at least a factor of $1/(1-\gamma)$ compared to [5].
>[1] Azar, Mohammad and Munos, Rémi and Kappen Hilbert , Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
>[2] Gheshlaghi Azar, R Munos, HJ Kappen - Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
>[3] Wenhao Yang, Liangyu Zhang, and Zhihua Zhang. Toward theoretical understandings of robust
Markov decision processes: Sample complexity and asymptotics
>[4] Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Matthieu Geist, Yuejie Chi The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model
>[5] Pierre Clavier, Erwan Le Pennec, Matthieu Geist Towards minimax optimality of model-based robust reinforcement learning
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your response.
Updated score | Rebuttal 1:
Rebuttal: # General response:
First, We would like thank the reviewers for their careful reading of the paper and their insightful and valuable feedback.
### Highlight of our new challenges and technical contributions
We would like to highlight the technical challenge to answer reviewers 6HyR and 5t5a and explain the comparison to prior work Clavier et al. [2023]. Our main contributions focus on statistical perspective.
Compared to Clavier et al. [2023], the motivation was to improve the sample complexity when the radius of the uncertainty set $\sigma>1-\gamma$. The bottleneck of Clavier et al. [2023] is that when $\sigma>1-\gamma$, the sample complexity is proportional to $1/(1-\gamma)^4$, which is far from optimal. Towards optimal and more broader results, our technical contributions is summarized as:
1) **Use a tighter error decomposition.** Clavier et al. [2023] use a decomposition of Q functions related to the work of [1] (Lemma C1 in their paper). However, this decomposition is not fine enough for large radius $\sigma$ to obtain tighter bounds. The Variance decomposition in our paper allow tighter control of errors and is completely different (equation 64 of our work) compared to their work. Denoting $V^{\star, \sigma}$ the robust value function and the different kernel , and $P^\pi$, the projection $\pi$ projection of kernel $P$ and $\underline{P}^V$ the worst kernel such as $\underline{P}^V =\arg \min_{P} PV$,
we use :
$ \sqrt{Var_{P^{\pi^{\star}}}\left(V^{\star, \sigma}\right)}=\left(\sqrt{Var_{P^{\pi^{\star}}}\left(V^{\star, \sigma}\right)}-\sqrt{Var_{\widehat{P}^{\pi^{\star}}}\left(V^{\star, \sigma}\right)}\right)+\sqrt{Var_{\widehat{P}^{\pi^{\star}}}\left(V^{\star, \sigma}\right)} $
$
\leq\left(\sqrt{Var_{P^{\pi^{\star}}}\left(V^{\star, \sigma}\right)}-\sqrt{Var_{\widehat{P^{\pi^{\star}}}}\left(V^{\star, \sigma}\right)}\right)
+\sqrt{\left|Var_{\widehat{P}^{\pi^{\star}}}\left(V^{\star, \sigma}\right)-Var_{\widehat{\underline{P}}^{\pi^{\star}, V}}\left(V^{\star, \sigma}\right)\right|}+\sqrt{Var_{\widehat{\underline{P}}^{\pi^{\star}, V}}\left(V^{\star, \sigma}\right)} .$
>[1] Azar, Mohammad and Munos, Rémi and Kappen Hilbert , Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model
2) **Use exact dual of RMDPs problem and not a relaxion.** Clavier et al. [2023] have an extra term in their upper bound proportional to $\frac{2 \gamma \beta|S|^{1 / q}}{1-\gamma}$ contrary to our work. To control error in their second theorem (Th.2), they use a relaxation of the dual form or RMDPs in their Lemma C2, which lead to non-minimax bound for large radius $\sigma$ contrary to our proof. On contrary, we use the exact form of the dual in our proof in Theorem 1 and 3.
3) **Use a key lemma about the range of the value function.** Clavier et al. [2023] cannot achieve sample complexity lower than $1/(1-\gamma)^3$ as they did not use fundamental idea than the range of value function in RMPDS is constrained for large radius. The key idea that allows us to get smaller sample complexity, namely lemma 5 and 6 for $sa$- and $s$-rectangular for radius $\sigma>1-\gamma$ for $L_p$ norms, is that the range of the value function is bounded for $L_p$ RMDPs.
**From a lower bound perspective**, we derive the first minimax lower bound for general $L_p$ in the $sa$ rectangular cases. In addition, we develop the first lower bound $s$-rectangular cases using divergence function for $L_\infty$.
The main technical contributions focus on the lower bounds are for $s$-rectangularity, which poses entirely new challenges compared to the case of $sa$-rectangularity: the optimal policies can be stochastic and difficult to characterize as closed forms, compared to the deterministic ones in $sa$ cases. When using different $L_p$ norms, the corresponding optimal policies might even not have closed forms, which is also the bottleneck to extend the $L_\infty$ case (Theorem 4) to more general $L_p$ arbitrary norms (as in Theorem 2). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neural collapse vs. low-rank bias: Is deep neural collapse really optimal? | Accept (poster) | Summary: The paper studies the optimality of neural collapse in intermediate layers of a deep neural network on classification tasks with more than 2 classes.
1. The paper shows that the NC2 property (formation of a simplex ETF/orthogonal frame) in the intermediate layer features is not optimal, and constructs a strongly regular graph (SRG) based lower-rank solution leading to lower loss than previously proposed deep neural collapse (DNC) optimal solutions.
2. The NC1 property (variability collapse of features) is still shown to be optimal in these settings.
Strengths: The key contribution of this paper is the construction of a strongly regular graph (SRG) based lower-rank solution for a generic $L > 2$ layer neural network trained on classification tasks with $K > 2$ classes. The authors show that this construction leads to lower training loss than the deep neural collapse-based solution.
Weaknesses: 1. There seems to be an error in the Sherman-Morrison formula-based inverse in the final result of Lemma 12, i.e, $(I + \frac{1}{r-1}11^\top)^{-1} = I - \frac{1}{2(r-1)}11^\top$. (The authors can correct me on this). Since this result is used in Lemma 13 and affects the results in Theorem 5, verifying this result is important.
2. Can Definition 4 be split into multiple parts for better readability? For instance, split it into parts w.r.t:
- collapsed class means are unaffected by non-linearity.
- The class means are scaled versions of $T_r$ where each row is scaled individually.
- Choice of $A_L$ for the final layer.
This breakdown highlights a major assumption that the non-linearity is not in effect till the last layer. Additionally, the complete set of trainable and non-trainable parameters should be made clear. The full definition in Appendix A.1 mentions extra trainable terms corresponding to $q$ terms.
3. In section 6.1 (DUFM training) Figure 2 (first row, last column), the plots for the singular values of class means seem to have a rank of around 7 or 8 for intermediate layers as well. The ideal rank $K=10$ might not exactly hold for intermediate layers but these observations should be discussed since it is one of the main claims in the paper.
4. The experimental results in section 6.3 for MNIST datasets claim that the network trained using gradient descent is the SRG solution. A justification for this observation is missing. The heatmap for MNIST in Figure 4 corresponds to $M_5$ which as per notation indicates the class-means of the penultimate layer post-activations. However, shouldn't the SRG solution correspond to similar heat maps across intermediate layers? Also, a statement such as low-rank implying an SRG solution is not justified.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Do numerical experiments with the 5-layer MLP head include a ReLU activation for L-DUFM and end-end training?
2. The authors mention in section 6.3 that "We also observe that the rank deficiency is the strongest in the mid-layer of the MLP head". Can further justification be provided for this? Especially which aspect of the spectrum of Layer 3 or 4 in Figure 5 justifies this observation?
3. Since the SRG solution relies on feature class means being non-negative, how valid is this assumption (based on your practical experiments)?
nit: In the notation for $H_L = \sigma(W_{L-1}X_{L-1})$ in section 3, shouldn't $X_{L-1}$ be replaced with $H_{L-1}$ ? The same applies to $\widetilde{X}_l$.
nit: Please fix the formula for NC1 in the first line of section 6. The pseudo-inverse is not required when dividing the traces.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for a detailed review. The typos pointed out by the reviewer will be corrected in the revision, and we thank the reviewer for spotting those. We address all the other concerns below (including the technical issue about the Sherman-Morrison formula), and we would like to kindly ask the reviewer to raise the score (together with the soundness score) in case no additional issue remains.
**An issue with Sherman-Morrison formula.**
We carefully checked our computation and believe there is no error. The matrix we have to invert is indeed $I+\frac{1}{r-2}\mathbb{1}\mathbb{1}^T$ (not $I+\frac{1}{r-1}\mathbb{1}\mathbb{1}^T$, as in the review), as this is the main factor of the previous computation of $T_r T_r^T$, where we exchange the multiplier of $\mathbb{1}\mathbb{1}^T$ from $\frac{1}{r-1}$ to $\frac{1}{r-2}$ by pulling $\frac{r-2}{r-1}$ in front of the parenthesis. Then, we use the Sherman-Morrison formula $(A+uv^T)^{-1}=A^{-1}-\frac{A^{-1}uv^TA^{-1}}{1+u^TA^{-1}v}$, where $A$ is the identity matrix and both $u$ and $v$ are $\frac{1}{\sqrt{r-2}}\mathbb{1}$. Thus, we get $I$ as the left summand, $\frac{1}{r-2}\mathbb{1}\mathbb{1}^T$ as the numerator in the right summand, and $1+\frac{r}{r-2}=\frac{2(r-1)}{r-2}$ as the denominator. The term $r-2$ cancels in numerator and denominator, and we are left with the right hand side we write in our computation.
**The definition 4 should be split and clarified.**
Thank you for pointing this out, we agree that this definition is rather complicated. In the revision, we will split the definition into different bullet points where we distinguish the construction of the intermediate layers and the final layer, highlight the non-negativity of intermediate activations and more clearly define the trainable parameters which are the scales of the involved matrices.
**In Figure 2 the rank is 7-8, which is not K=10. This should be discussed more.**
The rank 5-8 we found in our experiments is indeed smaller than the rank of DNC solution, which in this case is 10. This is exactly in agreement with Theorem 5, which shows that NC2 solutions are not globally optimal. We will make this explicit clarification in the revision, adding to the discussion in lines 259-263.
**The sentence “solution found by gradient descent is the SRG solution” from line 313 should be more justified.**
Thank you for pointing this out, we will clarify the sentence in the revision. What we meant is that the particular solution (i.e. a single run) whose gram matrix is displayed in lower row right of Figure 4 is an SRG solution, not that all solutions that SGD finds are SRG. Regarding the justification of this claim, as the reviewer dZSf correctly pointed out, there is a typo in the label, the plotted matrix is in fact $\tilde{M}_5^T\tilde{M}_5.$ However, we note that the gram matrix looks exactly the same (up to scalar scaling) for all other layers (please see our global response where we upload a PDF showing all 5 layers). This can be also seen theoretically since all the involved matrices have by the construction of the SRG solution the same gram matrix. The reason why the gram matrices look exactly like this can be seen from the fact that the $H_l$ matrices are defined to be incidence matrices of the complete graph and their gram matrix is therefore naturally the adjacency matrix of the triangular graph. For a more detailed explanation, we refer you to Lemma 15 and Lemma 16, which are basically entirely devoted to compute the gram matrices and their entries and singular values explicitly. These calculations exactly agree with the plot in Figure 4. We will explain this in more detail in the revision.
**Do numerical experiments include the ReLU activation?**
Yes, we include the ReLU activation in all our experiments in all layers.
**What do we mean by “the rank deficiency is strongest in the middle layers of the MLP head” and how can one see this?**
Thank you for this question, we will clarify the sentence. We do not mean that the rank in these middle layers is smaller than in the other layers, but rather that the tail singular values are closer to zero than the singular values of the other layers (i.e., 0.001 instead of 0.01). In fact, by looking closely at the green and red curves corresponding to layers 3 and 4, one can notice that their right tails are slightly below those of the other curves, meaning that the corresponding singular values are closer to 0 than those in the other layers.
**Is the assumption on feature class-means being non-negative valid in experiments?**
This is a great question. Indeed, in our experiments we observe that the intermediate features of the MLP head are non-negative before the application of the ReLU, regardless of whether we obtain the SRG solution or not. We will clarify this point in the revision.
---
Rebuttal 2:
Comment: Thank you for addressing the concerns. I have increased my score based on the responses.
One more suggestion to consider [optional]
- I think it would be better to also quantitatively measure the rank. For example, a simple measure such as effective-rank can be used. If the notion of a low-rank used here is purely qualitative, and quantitative measures have limitations [if any], then such a discussion can further improve the paper quality.
---
Rebuttal Comment 2.1:
Comment: Thank you for reconsidering your evaluation.
Thank you for suggesting using effective ranks, it is an interesting idea. We will shortly report results with effective rank measures in the revision. However, to best convey our message the best rank measure is hard rank or its thresholded approximation.
---
Reply to Comment 2.1.1:
Title: Effective ranks for Figure 4
Comment: For completness of the discussion, we computed mean effective ranks (computed as exponential of the entropy of singular values after normalizing to sum one) for the two series of experiments in the Figure 4. Here are the results:
CIFAR10: {'layer_1_er': 8.96, 'layer_2_er': 7.46, 'layer_3_er': 6.88, 'layer_4_er': 7.04, 'layer_5_er': 7.73, 'layer_5_er_post_relu': 9.52}
MNIST: {'layer_1_er': 6.82, 'layer_2_er': 5.43, 'layer_3_er': 5.37, 'layer_4_er': 5.44, 'layer_5_er': 6.12, 'layer_5_er_post_relu': 9.47}
Although the main message of our paper is orthogonal to to these quantitative values, they still reinforce our side claim that the low-rank bias is most pronounced somewhere in the middle of the MLP head, something we discussed previously. Thanks for suggesting this, there is indeed some value to these numbers, we will add them to the manuscript. | Summary: Deep Neural Collapse (DNC) refers the the neural collapse phenomenon that has been observed on intermediate layers of a deep network (nb. whereas neural collapse (NC) focuses only on the penultimate layer). Similar to Unconstrained Feature Model (UFM) for analyzing NC, Deep UFM (DUFM) is the comparable framework to study DNC. However, as opposed to UFM for NC, it's been observed that once you go beyond two layers or two classes, DNC is no longer optimal for DUFM.
In this paper, the authors suggest that the reason for this is a low-rank bias of commonly used multi-layer regularization schemes, meaning the low-rank bias leads to optimal solutions that are lower rank than the neural collapse.
This paper extends the current research by looking at DUFM when there are more than two classes and the model is non-linear. Previous work has focused only on DUFM for binary classification with non-linear models or deep linear models or shallow models with multiple classes.
Most notably, they show that in this more complicated setting, DNC is not optimal wrt DUFM. Namely, the class means fail one of the original conditions of NC; i.e., they fail to form an orthogonal frame. The authors suggest that the reason for this is the low-rank bias under L2 regularization. It is worth noting that the condition DFC1 (which states within class variability goes to zero) is still optimal and it's only the DNC2 property (which states that class means are orthogonal) that conflicts with the low-rank bias.
Their findings are supported with strong theoretical proofs and empirically on benchmark datasets such as MNIST and CIFAR10.
One of the main takeaways from this paper is that…if a DNC solution is found, it is not because of DUFM or global optimality. Instead, it is because of an implicit bias of the optimization procedure.
Strengths: The literature review is excellent and they do a great job motivating the problem they aim to address and how it compares with the existing literature.
The mathematical notation is clear and precise. The theory itself is very well developed with careful and detailed proofs.
The experiments they cover are solid and in agreement with their theory.
The authors pose some interesting open questions for future work related to the DNC1 condition.
Weaknesses: Based on my understanding of the current work, I don't see any major weaknesses. I have some minor points of clarification or questions below but no major flaws.
Technical Quality: 4
Clarity: 4
Questions for Authors: line 209 (clarification): Does the statement regarding K=2 or L=2 indicate that the argument of your theorem also be used to show DNC is optimal in this setting as shown in [51, 48] or are you just saying that as comparison when K=2 or L=2 by [51, 48] we know DNC is in fact optimal?
line 343: why the MSE loss? I didn't catch that part.
(typos/nits)
line 206 (typo): according to Theorem 5, should L \geq 3 be replaced with L = 3?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and the positive evaluation of our paper. We address all questions below.
**line 209 (clarification) about K=2, L=2**
Our statement was only meant to say that the cases $K=2$ and $L=2$ were already treated in prior work [51, 48]. We will clarify this in the manuscript.
**Why the MSE loss?**
We focus on the MSE loss primarily for theoretical reasons, because the MSE loss allows for a much more explicit analysis of the loss function. However, we expect that this is without much loss of generality, as both the MSE loss and the CE loss have similar behavior for well-fitting solutions. In addition, the low-rank bias does not come from the fit part of the loss, but rather from the regularization.
**Typo:**
Thank you for pointing this out, we will correct it in the revision.
---
Rebuttal Comment 1.1:
Comment: Ok. Thank you for the clarification. I have no further concerns. I'll keep my score as it is. Nice work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation. | Summary: This paper theoretically explores the deep neural collapse (DNC) phenomenon within non-linear deep models for multi-class classification. Neural collapse (NC) is a phenomenon in deep overparameterized networks where, the last layer's feature vectors align with class means and their corresponding classifier vectors, while also maximizing separation between classes. Previous works have studied the propagation of this symmetric geometry to earlier layers and proved its optimality which is termed as DNC. These works focus on DNC's optimality in multi-class linear networks or binary non-linear networks. This paper, however, extends the analysis to multi-class non-linear deep networks with ReLU activation, using the deep unconstrained features model (DUFM), where input features are treated as unconstrained parameters in the training objective.
While in the binary or linear case, DNC was shown to be optimal across all layers of DUFM, the authors show that in the multi-class case and in the presence of non-linearity, DNC is not always optimal. They prove this by explicitly constructing a solution, called SRG, that achieves a lower objective value than DNC. They show SRG is rank-deficit and the rank/loss gap between SRG and DNC increases as the network becomes deeper or the number of classes increases. They further numerically show on DUFM that, although DNC is not optimal, wider networks have an implicit bias toward finding the DNC solution. To further support their findings, they conduct additional experiments on benchmark datasets MNIST and CIFAR10 and ResNet architecture.
Strengths: This paper closes a gap in the theoretical analysis of the NC literature. The result is interesting as it challenges the optimality of DNC established in previous works and the accuracy of the abstract UFM for modeling the dynamics of deep overparameterized models.
Weaknesses: The proof of non-optimality of DNC in the paper relies on constructing a solution (SRG) that has a lower objective value. However, SRG might not be optimal. So, showing that SRG has a lower rank does not necessarily mean that the optimal solution also follows this low-rank structure. Still, the results and observations presented in this work remain interesting.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As discussed in the paper, the last-layer feature vectors before the activation ($\tilde{M}_L$), have a lower rank, and after activation ($M_L$) recover the full rank structure of NC. This is also clear from Fig. 2 (right). However, what I find missing is whether after activation $M_L$ follows the NC geometry or not, i.e., $M_L \propto I_K$? Or does it follow a different rank $K$ structure?
2. Line 263 mentions that only for a few runs the training solutions match SRG. In the rest of the runs, do the solutions achieve a lower objective value than SRG (thus proving that SRG is a local optimal at best)?
3. What about $W_\ell$'s? Do they also exhibit a similar low-rank structure in the SRG solution as well as the numerical experiments?
I also don't see why DNC3 is not well-defined in this setup (line 250). Still, at optimality, we might have the mean feature vectors $M_\ell$ and classifiers $W_\ell$ align in a low-rank structure. This alignment property doesn't hold in either the SRG or SGD solutions in the experiments?
4. Is there a small typo in Fig. 4 caption? The bottom right heatmap cannot be $M_5$ since the features after ReLU are non-negative and the heatmap cannot have negative values.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments and the positive evaluation of our paper. We address your concerns and questions below:
**While the SRG solution has low-rank, it is not clear whether a globally optimal solution also has one.**
This is a great point. We agree that we do not rigorously show this in our paper. However, we provide an intuitive explanation in lines 140-150. Our view is also aligned with a line of work on low rank bias [1, 2, 3]. In fact, [2] even proves the global optimality of low-rank solutions for sufficiently deep networks. Moreover, we highlight that the reason why SRG achieves a better loss than DNC is primarily its low-rank structure, as can be seen from our Lemma 13 which shows that the cost of intermediate layers linearly depends on their rank. Nevertheless, proving that global optima have low rank in our setting remains an interesting open problem.
**Do the last layer’s post activation features follow the NC geometry? Or a different structure?**
In our experiments, we do not recover the exact NC2 geometry, see Figure 2 (right) or Figure 4 (middle). In fact, the singular values are not all the same, and the tail is around 2-3x smaller than the largest singular value, which implies that the matrix cannot be orthogonal. This effect could be due to the large weight decay considered in our experimental setup. We also do not observe any specific type of different structure in the last layer.
**Do the solutions that don’t achieve SRG structure outperform the SRG solution?**
We did not find a solution that outperforms the SRG construction when the number of classes $K$ and number of layers $L$ are moderate. However, when $K$ and $L$ are very large, there are indeed solutions that outperform the SRG construction. This indeed shows that the SRG solution is not optimal in general, and we suspect that even lower rank structures are optimal in those settings.
**Do the weight matrices also exhibit low-rank structure? What about the DNC3?**
Yes, the weight matrices exhibit the same low-rank structure as features. This can be seen from the formula $W_l=H_{l+1}H_l^\dagger$ for the min-norm interpolator. This shows that the rank of $W_l$ is at most the minimum between the rank of its input and the rank of the output layer features. If those two are the same, $W_l$ must match the rank of the features exactly. Regarding DNC3, in our numerical experiments we do not observe exact alignment of rows of weight matrices with columns of feature matrices, and we do not expect this to happen theoretically. In fact, the rows of the weight matrices are not aligned with the columns of the feature matrices in our SRG solution.
**Typo in Figure 4?**
You are right, thank you for spotting this. The layer in question is in fact $\tilde{M}_5$ instead of $M_5.$
----
[1] G. Ongie and R. Willett. "The role of linear layers in nonlinear interpolating networks." arXiv preprint arXiv:2202.00856 (2022).
[2] A. Jacot. "Implicit bias of large depth networks: a notion of rank for nonlinear functions." ICLR, 2023.
[3] A. Jacot. "Bottleneck structure in learned features: Low-dimension vs regularity tradeoff." NeurIPS, 2023.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. I find the results interesting, and I believe clarifying these additional points can help with the message. I'll maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation. | Summary: Papyan et. al that showed that at the terminal phase of training there are four phenomena called neural collapse on the last layer of a deep neural network architecture. The authors extended Papyan's work by considering the earlier layers of nonlinear deep neural networks. They proved the first neural collapse condition (convergence to class means) is optimal for DNNs (Theorem 6 in manuscript). They also proved the second neural collapse condition (within-class variability) does not hold if number of layers and classes are high enough by showing there exists a scenario with better performance (Theorem 5). The claims are also supported by the experiments.
Strengths: Authors extended our understanding of how deep neural networks work by analyzing the neural collapse for deep neural networks.
Weaknesses: The results only hold under certain assumptions such as using multi-layer regularization.
Technical Quality: 4
Clarity: 4
Questions for Authors: -
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your positive review. If any questions come up during the discussion phase, we’ll be happy to address them.
---
Rebuttal Comment 1.1:
Title: Area Chair to Authors
Comment: Authors: Unfortunately this review will not be considered as part of the decision, unless the reviewer updates their review to include a justification of the score. The reviewer has already been reminded twice, but the review is unusable in its current form. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their reviews. Here, as part of an answer to reviewer hQr6's question, we upload a PDF representing the gram matrices of class-means in all 5 layers of the SRG solution corresponding to the lower row (right) of Figure 4. As can be seen, all gram matrices (up to small imperfections) match the same SRG structure of the triangular graph.
Pdf: /pdf/8aadd6722a3ebb43e83c5d4dd0518d753b015e56.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models | Accept (poster) | Summary: This submission presents a framework that leverages a large language model (LLM) as the backbone architecture for human-mobility-related tasks, including user identification, next location prediction, and arrival time prediction. The authors introduce a POI (Point of Interest) Point-wise Embedding Layer and a Visiting Intention Memory Network as an encoder to process check-in data. Additionally, they establish a behavior prompt pool to incorporate behavior preference information with the embeddings from the Visiting Intention Memory Network. The framework is evaluated using four real-world datasets, and the proposed model outperforms baseline methods in terms of most metrics.
Strengths: 1. Overall, this paper is well-written and well organized. The authors illustrate the motivation of this study sufficiently and review the related work thoroughly.
2. The proposed framework finetunes the LLM using semantic data, which is novel in human mobility related tasks.
3. The experimental studies are thorough and the hyperparameter analysis is well presented.
Weaknesses: 1. While the framework offers a novel approach to addressing human mobility-related tasks, the robustness of the modules is questionable, particularly in the formulation of the human travel preference prompt. Additionally, the overall technical contribution is limited.
2. The paper provides no discussion of choice of scoring function.
3. There is a lack of novelty in PPEL and HTPP.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The reviewer suggests that the HTPP is the most significant idea in this study. However, However, several concerns need to be addressed: (i) The use of cos-similarity based scoring function to determine the significance score needs more illustration. Why is cos-similarity sufficient to evaluate the semantic similarity in urban mobility tasks? What is the impact of choice of the scoring function? (ii) What is the motivation of the selected domains as well as the prompt words? Are there any theoretical support?
2. This paper essentially treats each domain separately during top-k prompts selection. The reviewer suggests that it is the combination of prompts of each domain that presents meaningful and promising semantic information, i.e., a doctor (Domain 2) goes to exercise (Domain 1) to keep healthy (Domain 3).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have discussed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive comments.
### **[W1&W2&Q1]**
We design the VIMN module to reprogram the check-in sequences. After passing through the VIMN module, the check-in sequences are aligned to the natural language semantic space, producing the resulting $\mathbf{h}_i$*.* This ensures that the two representation vectors sent to the HTPP module are in the same semantic space. We believe that in the same semantic space, vectors that are close in distance have similar semantics. Therefore, if the cosine similarity calculated by the score-function is high, it indicates that $\mathbf{h}_i$ itself has semantic relevance in the semantic representation space. In summary, the key to evaluating the semantic similarity in urban mobility tasks lies in the design of the VIMN reprogramming module, with cos-similarity only serving to compare the similarity of vectors in the same semantic space.
### **[W2]**
While the PPEL and HTPP modules leverage existing technologies from a technical perspective, our design principles are innovative and introduced for the first time in tasks related to human activities. These modules are integral, as they employ LLM's word token embeddings for both check-in sequence embeddings and prompt tokens, achieving semantic alignment and integration with other modules. Ablation studies have shown that these modules significantly enhance performance, thereby validating the efficacy and rationality of our design.
### **[Q2]**
We considered the scheme you originally mentioned. However, listing all combinations creates a very large space, reducing the model's efficiency. Manually screening reasonable combinations will cost a huge amount of human resources and may fail to cover all possibilities due to the diversity of human activity data and the many situations that can arise. Additionally, case studies show that the existing scheme is superior in efficiency and accuracy compared to the one you mentioned.
---
Rebuttal 2:
Comment: I am disappointed with the author's rebuttal. The author did not sufficiently address my questions; the responses lacked clarity and evaded the main issues. For instance, I don't understand the purpose of the combined response to W1&W2Q1. Your answer is completely off-topic, and I feel that W2's question should correspond to W3. Specifically, my questions are:
1. I have concerns about the robustness of the human travel preference prompt (HTPP) module. For example, your HTPP module includes several critical parameters, such as D. Why is the domain set to 3? Is this a limitation of the dataset, an arbitrary definition, or derived from experiments? I am unclear. What is the reason for setting m to 16? Even if the author did not mention this in the paper, it should be adequately addressed in the rebuttal. Unfortunately, I did not find any relevant answers.
2. My question is why you chose cosine similarity instead of other methods like Euclidean distance or Manhattan distance. These could all be discussed. Of course, I know that cosine similarity can measure the similarity of vectors in a unified semantic space, but what are its specific advantages? Did you determine through experiments that cosine similarity is better? Can't these topics be further discussed?
3. Your response claims that the existing solution is superior to the idea I proposed in Q2. The original text states, "We provide three case studies to visualize the improvements brought by HTPP and VIMN." I do not understand the relevance of the cases you provided to my question. I need a clearer explanation, such as comparing my question's definition with your case studies. Your response only added to my confusion.
Other reviewers have also provided insights from different perspectives, for example:
1. Reviewer NycQ believes that the performance improvement is not significant enough, which is indeed a problem. Your rebuttal mentioned that the t-value between Mobility-LLM and the best baseline is very high. However, I think this is because both models are very stable, and changing the random seed does not overly affect the model's performance, leading to a low standard deviation and a high t-value. This is meaningless. The core issue is that your improvement over the best baseline is only 0.4%. While such an improvement might be acceptable for other deep learning papers, this paper is based on LLM, requiring better GPU, more computing time, and potential API costs, making the slight performance improvement questionable.
2. For example, the classic baseline Graph-Flashback [1] for next location prediction shows more than double the performance improvement on Gowalla compared to DeepMove, and 4% higher on Foursquare. Why didn't you compare with some classic baselines, or if your performance improvement could also reach this level, it would be more reasonable. DeepMove is only a baseline published in 2018, but according to Table 1, your model only shows about a 1% improvement over DeepMove on Gowalla and Foursquare.
3. The authors claim that their convergence speed is faster than other baselines, but I did not see any relevant evidence. If you are referring to fewer epochs, I think this is unreasonable. The time each model consumes per epoch is completely inconsistent. The speed of convergence should consider the overall training time. What I observed is that Mobility-LLM consumes more memory, requires more training time, and inference time. The increased computational cost only brings minimal performance gains.
4. Reviewer Mti9 considers your Figure 1 to be rather cluttered. Why didn't the authors use the global response to upload a PDF showing the improved figure? After your adjustments, we are not clear about what the final version will look like.
5. Additionally, CACSR was not discussed. Why didn't you present a relevant discussion in the rebuttal?
Overall, although the authors initially received an overall positive comment, I do not see the authors taking the reviewers' questions seriously in the rebuttal, and the evasive answers have deepened my doubts. It is undeniable that the completion of this paper, including the production of charts and experimental results, is very high. However, the vague explanations of many details, the additional computational costs, and the insufficiently superior performance lead me to lean towards rejecting this paper, despite the positive attitude of other reviewers.
[1] Rao, Xuan, et al. "Graph-flashback network for next location recommendation." Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining. 2022.
---
Rebuttal Comment 2.1:
Comment: We sincerely apologize for not accurately addressing your question. We realize that we did not fully understand your comments before. In this discussion, we will first address the questions raised in your current comment in detail and then respond to the issues raised by other reviewers in the subsequent comments.
### 【Yours-Q1】
The selection of domains in the HTTP module is based on our research summary of various materials and reports describing human activity characteristics. Initially, besides these three domains, we also considered other fields such as age, travel time, and activity range.
After testing on a small dataset, we discovered that the selected three domains had the most significant impact on the model's performance. For each set of terms within one domain, we asked an LLM to provide a comprehensive set of commonly used terms for that domain.
We found that selecting 16 terms sufficiently met our experimental needs. Choosing more could lead to redundancy, while fewer terms would fail to cover the domain knowledge comprehensively. Ultimately, this exploration led to the current experimental setup.
### 【Yours-Q2】
Cosine similarity measures the angle between two vectors rather than the distance, making it suitable for measuring directional similarity rather than magnitude. In high-dimensional spaces, text or user behavior data is represented as high-dimensional sparse vectors, making it more effective in capturing textual feature similarity without being disrupted by data sparsity. The direction of the vectors often reflects the semantic similarity of words or sentences.
In contrast, Euclidean distance and Manhattan distance reflect absolute positional differences, which can be problematic in high-dimensional sparse data. The presence of zero dimensions might reduce similarity, leading to indistinguishable distance calculations that fail to capture actual similarity[1]. Therefore, it is more reasonable to use cosine similarity in the HTPP module compared to other measurement methods.
[1] Turney, P. D., & Pantel, P. (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37, 141-188.
### 【Yours-Q3】
We acknowledge that our previous explanation may have been unclear, and we apologize for the confusion.
**The case study mentioned in our previous response actually refers to the discussion in Q2 of the initial Rebuttal** and is not related to the case study in the paper.
We would like to elaborate on the point we intended to express in the rebuttal as follows.
At the outset of our model design, we tried three different approaches, including the one you mentioned in Q2:
1. Listing all possible combinations of all domains,each combination is composed of one term from each domain.
2. Manually selecting reasonable combinations in approach 1.
3. Our current approach.
We compared the performance and efficiency of these three approaches. During the performance comparison, we looked into the results of approaches 1 and 3 through a case discussion. We found that the results from approach 3 were more reasonable, while the combinations from approach 1 were often unsatisfactory. During the training, the semantic of a sequence might match multiple terms in each domain, which means that if approach 1 was adopted, more than one combinations might be matched. Since the combinations contain semantic from multiple domains is coupled together in approach 1, this could cause interference among the matched combinations, resulting in unsatisfactory selections. However, in approach 3, each domain is decoupled and matched separately, so this issue does not exist.
Additionally, approach 1 was time-consuming, so after the case discussion, we discarded it.
If the approach 2 is adopted, where multiple terms are selected from each domain, the number of combinations will be extremely large, making it impractical to manually select reasonable combinations.
Therefore, approach 3 is optimal in terms of efficiency and performance, so we ultimately chose approach 3. This choice of approach and the above case discussion were not mentioned in the paper before. We will also add the above content to Section 5.5 of the paper. We are grateful for your suggestion, which helped us make our experiment and paper clearer and more complete.
---
Rebuttal Comment 2.2:
Comment: ### 【Other-Q1】
We will explain from three aspects: parameter search, performance bottleneck, and multi-task framework.
1. To ensure fairness, we did not use default parameters for the experiments to save time. Instead, we conducted a comprehensive parameter search for all baseline models, selecting the optimal hyperparameters for each on different datasets to ensure they perform at their best. During the search, we found that some baseline models performed significantly better with certain unexpected parameter combinations (e.g., the DeepMove model's performance improved by 37% when the Dropout parameter was set to 0.9 compared to the default parameters). Therefore, the results we reported reflect the best possible performance of the baseline models, surpassing their default settings.
2. To verify the effectiveness of our model on a larger dataset, we increased the sample size by including users with fewer than 10 check-ins (while other papers typically exclude users with fewer than 50 or 100 check-ins). This inclusion of more sparse samples makes further prediction improvements more difficult. The performance gap between these baselines is minimal, reaching a bottleneck where further improvements with conventional deep learning models are challenging. Thus, the current performance improvements are considered valuable.
3. Our model can handle multi-task frameworks. To demonstrate its powerful performance, we compared it with multi-task framework models(such as ReMVC, VaSCL, SML, CACSR) and various SOTA models designed for specific downstream tasks. Our model achieved SOTA results in nearly all metrics, with improvements exceeding 26% in some cases, and outperformed other SOTA multi-task framework models by over 15%.
---
Rebuttal Comment 2.3:
Comment: ### 【Other-Q2】
#### 【Other-Q2-1】
We first explain why we did not include Graph-Flashback as a baseline model from the perspectives of application scenarios, data processing methods, model architecture input, and dataset support.
1. **Application Scenario**: Graph-Flashback is a classic and effective baseline model, but it requires a fixed-length historical sequence as input. Our study focuses on the representation and prediction of variable-length sequences. Therefore, the baselines in our paper are models that can handle variable-length sequences.
2. **Data Processing Methods**: In next location prediction research, it is common to handle variable-length sequences. Processing samples often involves filtering out POIs with less than 10 visits, users with less than 10 check-ins, limiting historical days to no more than 30, and ensuring that the next location visit interval does not exceed one day to qualify as a label. This approach leads to variable-length sample sequences that better reflect real-world scenarios. In contrast, the data processing method of Graph-Flashback, as mentioned in the original paper, is: "We discard inactive users who have less than 100 check-ins and sort each user’s check-ins in ascending order of timestamp. The first 80% check-ins of each user are split into multiple length-equally (e.g., 20) sequences, which are chosen as the training set. Likewise, the remaining 20% are regarded as the testing set."
3. **Model Architecture Input**: The input to this model is a fixed-length sequence. We thoroughly reviewed the code of Graph-Flashback, and the historical sequences it processes are fixed-length sequences of 20. We also attempted to modify it into a variable-length prediction model, but since the entire framework is designed for fixed-length sequences, almost the entire framework would need adjustment, and many components can only handle fixed-length sequences. Further modifications would cause the model to deviate significantly from the original design of Graph-Flashback.
4. **Dataset Support**: Not all datasets provide friendship information. Only the Gowalla and Foursquare datasets provide friendship networks, while Weeplace and BrightKite do not disclose friendship networks due to privacy reasons, or the disclosed friendship networks are unreliable. Therefore, Graph-Flashback cannot run on our other two datasets.
We firmly believe that Graph-Flashback is a classic baseline model, and we have cited it in the Related Work section and will include Graph-flashback in our related work and make fully discussion in our revised version. However, due to the multiple issues mentioned above that cannot be unified or fairly addressed, Graph-Flashback could not be fairly included in our baseline model comparison.
#### 【Other-Q2-2】
Next, we explain why the comparison between our model, Graph-Flashback, and DeepMove is not valid:
1. **Different Scenario Settings**: The models we selected, such as DeepMove, are designed to handle variable-length sequences, and their advantages are more evident in such scenarios. Although Graph-Flashback includes many baseline models that handle variable-length sequences, these baselines, including DeepMove, cannot fully exploit their advantages in the scenario set by Graph-Flashback. In the comparison experiment under Graph-Flashback's setting, the gaps between the baselines are large, while in our setting, the gaps are small. These differences are not on the same scale, so a direct comparison cannot distinguish the superiority of the models.
2. **Different Experimental Settings**: We were surprised to find that the DeepMove model's performance improved by 37% when the Dropout parameter was set to 0.9. Although this parameter may seem unreasonable, we still respect the facts. However, we are unsure whether Graph-Flashback conducted experimental adjustments and a comprehensive parameter search for the Dropout parameter of DeepMove during the experiment.
3. **Unified Framework**: Our model is a unified framework capable of handling multiple tasks. To demonstrate the powerful performance of our framework, we compared it with the most classic and SOTA end-to-end models in various downstream tasks. Therefore, more attention should be paid to the overall performance improvement.
---
Rebuttal Comment 2.4:
Comment: ### 【Other-Q3】
As you mentioned, the number of epochs alone cannot effectively demonstrate the convergence speed. The total training time should be used to prove the speed of convergence. In further experiments, we tried reducing the early stopping patience from 10 to 3 to decrease the model's training time. So, we adjusted the patience to 3 and were surprised to find that some of the model's metrics actually improved with less traing time. After our analysis, we found that the reason is that the patience setting during the training of our model was relatively large compared to the total number of training epochs. This led to some unnecessary time overhead and caused our model to slightly overfit.
Below, we present the GPU memory usage, training time for the LP task on the WeePlace dataset, and the results of the LP task under the new setting on four datasets. We also included variants of Python-70M (as presented in Table 5 and A.6 of the paper).
|Method|Memory|Training Time|Training Epoch|
|-|-|-|-|
|PLSPL-Patience 10|3.4G|4.73h|79|
|PLSPL-Patience 3|3.4G|4.45h|75|
|Mobility-LLM (TinyLlama)-Patience 10|11.3G|5.45h|17|
|Mobility-LLM (TinyLlama)-Patience 3|11.3G|2.96h|10|
|Mobility-LLM (pythia-70M)-Patience 10|2.74G|2.76h|39|
|Mobility-LLM (pythia-70M)-Patience 3|2.74G|1.47h|21|
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric (e-2)|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|
|The Best Baseline|11.47±0.03|24.12±0.17|33.82±0.15|17.44±0.05|19.66±0.04|37.68±0.18|53.44±0.13|28.46±0.06|51.42±0.12|66.25±0.08|71.46±0.11|57.59±0.14|16.92±0.04|36.05±0.04|49.39±0.16|26.02±0.04|
|Mobility-LLM (TinyLlama)-Patience 10|11.87±0.04|25.14±0.04|36.36±0.09|18.29±0.11|20.47±0.13|39.22±0.19|56.69±0.10|29.21±0.15|53.18±0.17|68.31±0.14|74.11±0.18|59.89±0.03|17.29±0.03|37.17±0.02|53.16±0.20|26.47±0.05|
|Mobility-LLM (TinyLlama)-Patience 3|11.90±0.03|25.17±0.05|36.43±0.11|18.30±0.10|20.50±0.07|39.27±0.17|56.71±0.09|29.24±0.13|53.18±0.16|68.34±0.11|74.17±0.18|59.91±0.04|17.31±0.04|37.20±0.03|53.19±0.22|26.44±0.02|
|Mobility-LLM (pythia-70M)-Patience 10|11.03±0.16|24.86±0.11|35.74±0.07|17.91±0.17|19.88±0.18|38.02±0.03|54.19±0.04|28.46±0.13|52.03±0.10|66.34±0.02|72.09±0.05|57.93±0.22|17.01±0.04|36.20±0.09|50.88±0.13|25.84±0.16|
|Mobility-LLM (pythia-70M)-Patience 3|11.29±0.13|24.98±0.18|35.56±0.16|17.98±0.02|19.93±0.11|38.21±0.22|54.69±0.03|28.79±0.07|52.14±0.04|66.87±0.10|73.21±0.17|58.90±0.05|17.12±0.09|36.33±0.04|52.13±0.04|26.03±0.14|
The experimental results show that our model can maintain its previous advantages while using less training time than the conventional deep learning models, with some metrics even improving.
The new experimental setup shows that the TinyLlama base model can achieve better results in less time. We also conducted further experiments with a variant of pythia-70M and found that setting patience to 10 indeed caused slight overfitting. Our pythia-70M base model can outperform the PLSPL baseline model with less time and memory usage. To verify whether PLSPL also suffers from overfitting, we conducted experiments with patience set to 3, and the results are as follows:
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|PLSPL-Patience 10|11.26±0.02|24.12±0.17|33.82±0.15|17.44±0.05|18.77±0.11|37.31±0.02|53.31±0.18|27.86±0.10|51.42±0.12|65.34±0.09|71.46±0.11|57.59±0.14|13.73±0.07|30.65±0.10|43.18±0.09|21.42±0.04|
|PLSPL-Patience 3|11.22±0.07|24.08±0.17|33.81±0.02|17.40±0.02|18.81±0.14|37.26±0.10|53.31±0.09|27.84±0.10|51.46±0.09|65.31±0.04|71.44±0.11|57.57±0.18|13.70±0.12|30.60±0.11|43.17±0.15|21.38±0.05|
We observe that conventional deep learning models do not exhibit overfitting under the patience 10 setting And the total training time was also roughly the same.
In summary, we appreciate the reviewer for raising these issues, which have helped us improve and further explore our model's potential. We were previously unaware of the impact of training epochs on our model. After changing the patience setting to 3, we were pleasantly surprised by better performance in less time than the conventional deep learning models. In Section 5.5 of the paper, we also discussed that the parameter size of the base model does not necessarily correlate with performance improvements. With the future advancement of more "refined" large language base models in huggingface, our model's potential will be further unleashed.
---
Rebuttal Comment 2.5:
Comment: ### 【Other-Q4】
We have optimized the overall Figure 1 by reducing the coupling of elements, enhancing clarity, and improving the layout. This makes Figure 1 clearer and more intuitive. The revised version is available in our anonymous gitHub(our manuscript line 187).
### 【Other-Q5】
We have refined the Related Work section and placed the improved version in section A.1 of the Appendix, along with the full list of references in our anonymous github.
---
Rebuttal Comment 2.6:
Comment: I agree with Reviewer 6rUV that the small performance gain is an important problem. Such marginal perfomance gain cannot justify the much larger model size. In the followup response, the authors justify this with different experiment settings. But these differences are not contextualized in the original submission or rebuttal. It is difficult to evaluate these claims. The authors also argue the proposed method is multi-task. However, as also noted by other reviewers, it was only applied to check-in data, which is a subset of mobility data that contain rich semantic information that probably will favor an LLM solution.
I was really on the fence about this paper, because it also had relatively good presentation and completeness. However, after reading other reviewer's interactions with the authors, I also lean towards a rejection.
---
Rebuttal 3:
Comment: Thank you for your further reply, which basically solved my concerns (Yours part). Please also answer my comments on other questions.
【Other-Q1】
Taking Table 1 as an example, for acc@1, can I understand that the performance of deepmove under the default settings is only 10.51/1.37=7.67?
【Other-Q2】
What about baseline GETNext [1], which supports variable-length input? This is the most popular baseline for next location prediction in the past two years.
[1] Yang, Song, Jiamou Liu, and Kaiqi Zhao. "GETNext: trajectory flow map enhanced transformer for next POI recommendation." Proceedings of the 45th International ACM SIGIR Conference on research and development in information retrieval. 2022.
【Other-Q3】
Mobility-LLM (TinyLlama)-Patience 3 11.3G 2.96h 10
The results in this row seem to indicate that the computational efficiency of the model is slightly weaker than that of ReMVC, and the memory it occupies is much larger than that of ReMVC, but the performance improvement is very significant, right?
【Other-Q4】
In the main figure, LLM (Partially-Frozen+LoRA) is missing the number (d)
In sub-figure (a), the order of the attention module is v, k, q, and in sub-figure (d) the order is q, k, v. It is recommended to unify the order. In addition, why are q and k in sub-figure (d) below the weight matrix, while v is part of the weight matrix?
---
Rebuttal Comment 3.1:
Comment: ### Other-Q1
Yes
### Other-Q2
We conducted experiments on GETNext, and the results of GETNext is comparable to PLSPL:
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric (e-2)|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|
| GETNext | 11.15 | 24.04 | 33.58 | 17.23 | 18.69 | 37.17 | 53.16 | 27.73 | 51.22 | 65.23 | 71.40 | 57.33 | 13.67 | 30.54 | 43.05 | 21.28 |
### Other-Q3
I might not have fully understood your point. When you mentioned ReMVC, were you possibly referring to the PLSPL model? The line "Mobility-LLM (TinyLlama)-Patience 3 11.3G 2.96h 10" is intended to illustrate that compared to PLSPL, our model requires less training time, uses more memory, but offers significant performance improvement. If compared to ReMVC, the performance improvement would be even more significant.
### Other-Q4
We have modified the figure according to your request. The `q` and `k` were positioned at the bottom due to a formatting error when exporting to PDF, which has been corrected. The updated version is available on the anonymous GitHub.
---
Rebuttal 4:
Comment: Thank you for your detailed feedback. We appreciate the concerns raised and would like to provide a comprehensive response that emphasizes the significance and contributions of our work:
1) Innovative Use of Reprogramming Techniques
Our model represents the first application of reprogramming techniques (e.g. VIMN and HTPP Module) to Large Language Models (LLMs) specifically for understanding and predicting human activity sequences from check-in data. This approach is novel and transformative within the field of check-in sequence prediction, allowing LLMs—traditionally used for natural language processing—to be effectively adapted for this unique task. By reprogramming LLMs for the prediction of check-in sequences, we are pioneering a new direction in the field, demonstrating that these models can be utilized to extract and interpret complex human activity patterns, which were previously challenging to model with traditional methods.
2) Semantic Understanding and Accurate Predictions
Our experiments clearly show that our reprogramming design enables the LLM to understand the semantic information embedded within human activity sequences. This understanding is crucial, as check-in data is not just about locations and times; it carries rich context about human intentions, behaviors, and routines. The model's ability to leverage this semantic understanding to make accurate predictions of future check-ins highlights a significant advancement in the field. It demonstrates that LLMs, when reprogrammed appropriately, can effectively capture and utilize the intricate patterns and meanings in human activity data, leading to more precise and contextually aware predictions.
3) Beyond Pure Performance Metrics
We recognize the importance of performance metrics; however, the value of our work extends beyond marginal performance improvements. The primary contribution of our model lies in its innovative application of LLMs to check-in sequence prediction, introducing a new methodology that bridges the gap between language models and human activity prediction. Evaluating our work solely based on performance metrics does not fully capture the broader implications and future potential of this approach. We believe that the introduction of such a novel methodology provides a foundation for future advancements and improvements in the field, which may yield even greater performance gains as the techniques mature.
4) Model Size and Memory Requirements
Regarding concerns about the model size, it's important to note that our model's memory footprint, which is under 12GB, is well within the capabilities of modern GPUs. With the widespread availability of GPUs with large memory capacities, our model is practical and accessible for both academic research and potential industry applications. Additionally, we have demonstrated through our experiments with the pythia-70M base model that our approach can achieve superior performance while requiring less memory and computational time compared to other baseline models. This showcases our model's efficiency and adaptability, ensuring it remains practical even in resource-constrained environments.
5) Robustness Across Multiple Datasets
A key strength of our model is its robustness and stability across multiple datasets. In the context of check-in sequence prediction, where data variability is common, our model consistently performs well across different datasets. This generalization ability is crucial, as it indicates that our model is not overfitted to specific scenarios but is instead capable of adapting to various types of human activity data. The consistent improvements observed across four different datasets validate the effectiveness of our design and its potential for broader application in diverse real-world scenarios, such as personalized services, urban planning, and more.
6) Future Prospects and Model Versatility
As LLMs continue to evolve, particularly with the trend towards smaller and more efficient models, our approach is poised to become even more effective. The ongoing advancements in LLM technology will allow our reprogrammed models to achieve better results with reduced computational resources. Our model is designed to be versatile and adaptable. It can be easily integrated with any large model available on huggingface, ensuring that our approach remains relevant and capable of benefiting from the latest developments in LLMs. This adaptability makes our model a future-proof solution for human activity prediction.
In conclusion, we believe that our work represents a pioneering effort in the field of check-in sequence prediction and human trajectory analysis. The innovative application of reprogramming techniques to LLMs on our tasks not only demonstrates the feasibility and effectiveness of this approach but also provides a robust and adaptable solution that can inspire and guide future research. | Summary: The paper proposes a novel architecture, MobilityLLM, for utilizing pre-trained LLMs with various specific embedding modules for various mobility tasks. Specifically, the authors proposed to use any pretrained LLM with adding four unique components: PPEL (POI location embedding), VIMN (timestamp embedding with GRU), HTPP (user travel preference prompting), and Multi-task training strategy. MobilityLLM finds a unique place in LLM-based human mobility modeling applications by incorporating spatiotemporal aspects of mobility characteristics using learnable components (neural networks). The experiments are tested on three different tasks: location prediction, trajectory user-link, and time-prediction. The proposed model performs consistent outperformance compared to benchmarks.
Strengths: Here is the set of strengths of the paper:
1. It introduced several novel components that can be embedded in any LLM for mobility-specific applications.
2. The three main components, PPEL, VIMN and HTTP greatly enhances the performance of LLMs on downstream mobility tasks.
3. MobilityLLM has a lightweight architecture with LoRA trainable weights and makes it an accessible tool.
4. The experiments supports promising results on different datasets.
5. Lastly, appending location embedding (GeoHash) to the attention layer is a very clever way of dealing with locations.
Weaknesses: * The paper is relatively well written; however, there are multiple typos and abbreviation errors throughout the paper. Here the two typo examples: (reference error in table 4. Instead of Table 3 it references Table 1) and (Figures 3 and 4 have typos: HIBP -> HTTP, IIMN -> VIMN)
* While it is designed more on check-in datasets, I wonder how applicable the model is to trajectory generation tasks.
* Figure 1 is informative but very complicated to read without the methodology section.
* I am aware of the page limitation, but some important details are not mentioned in the main body and are left to appendixes. For instance, few-shot learning(prediction) requires a more detailed explanation; it is not intuitively clear in the main body.
* The Related Works section is fragile in terms of the compared baselines. They are not discussed very well in the main body and appendix. For instance, CACSR is not even discussed in both places.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can you elaborate on the statement at the end of the methodology section, "We use the β and α with a mean pooling projection head to predict the user uˆ who generates this check-in sequence?"
2. While the metrics mainly evaluate the performance in terms of piece-wise similarities, it would be interesting to see distributional similarities as well, such as the JS similarity metric, which can be employed.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: LLMs are powerful machines but the proposed model is right now specific to one dataset. It would be interesting to explore the combination of diverse scenarios and datasets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive comments.
### **[W1]**
Thank you for pointing out these errors. We have made the corrections and checked other parts of the paper for similar issues.
### **[W2]**
Compared to trajectory data (such as vehicle trajectories), check-in data are recorded more sparsely. Each point in a trajectory might record a passing-by, while each point in a check-in sequence corresponds to a purposeful visit to a POI. To achieve optimal performance on trajectory data, some modules of the proposed methods may need to be redesigned to fit the characteristics of trajectories.
### **[W3]**
Thank you for your suggestions. We have adjusted the overall framework diagram to highlight key modules and provided detailed diagrams of each module as separate illustrations.
### **[W4]**
Thank you for your suggestions. We will supplement the implementation settings for the few-shot scenario. In this scenario, we first divide the entire dataset into training, validation, and test sets in a 6:2:2 ratio based on the number of check-in sequences. Then, we only reduce the training set portion to 20%, 5%, and 1% of the original training set, while keeping the validation and test sets intact.
### **[W5]**
Sorry for the oversight. We have added all the baseline models in the main text and appendix and included detailed discussions in the appendix.
### **[Q1]**
The dimension of $\alpha$ is $(B,n,d)$, and the dimension of $\beta$ is $(B, 3, d)$. First, we concatenate along the second dimension, resulting in a dimension of $(B,n+3,d)$. Then, we average pool along the second dimension to obtain $(B,1,d)$. Finally, after a fully connected layer and softmax operation, we obtain the predicted user. Here, $B$ represents batch size, $n$ represents sequence length, and $d$ represents embedding dimension.
### **[Q2]**
We would like to point out that JS divergence is more commonly used for tasks where the output is a sequence. Our model's prediction task only predicts a single future point, so we choose the current evaluation metrics, which are also widely used in baselines and most existing works.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I am not going to change my score but I could not find any insight from your responses regarding my questions. I would suggest you to pay more attention in your next rebuttal.
---
Reply to Comment 1.1.1:
Comment: We apologize that our previous response did not provide the help you needed.
We have optimized the overall Figure 1 by reducing the coupling of elements, enhancing clarity, and improving the layout. This makes Figure 1 clearer and more intuitive. The revised version is available in our anonymous github(our manuscript line 187).
Also, we have refined the Related Work section and placed the improved version in section A.1 of the Appendix, along with the full list of references in our anonymous github. | Summary: This paper combines large language models (LLMs) to better analyze check-in sequences and understand human mobility behaviors, and proposes a visiting intention memory network(VIMN) and a shared pool of human travel preference prompts (HTPP) to capture the semantics of human visiting intentions . The experiments show promising improvement. It is a good work in both theoretical and experimental perspectives.
Strengths: 1. It is interesting to apply large language models and neural networks to capture the latent semantic information of check-in sequences.
2. This paper is well-organized, which clearly explains the research purpose, methods and conclusions.
3. The paper has done a lot of experiments, and the experimental results show that the method proposed in this article outperforms state of-the-art methods on three downstream tasks.
Weaknesses: 1. In the figure1, it’s not clear how the user embedding U_i\ is obtained?It is best to explain it in the article.
2. There are some inconsistencies in the writing of this article. For example, in Section 5.1, the LP task uses the MLE loss, but in Appendix B, the LP task uses the cross-entropy loss.
3. The model training part in the article is not detailed enough. For example, what is the parameter L_F when fine-tuning LLM? Are the three downstream tasks trained separately or uniformly?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the HTPP module, if there is possible that the k prompt words selected in each domain have semantic contradictions?
2. When doing TUL tasks, are user embeddings also passed into LLM as part of the input? User information should not be included in the input.
3. In section 5.5, why not experiment with the GPT-3.5 as a language model variant?As a large language model that performs well in various fields, is it possible to achieve better results by using GPT-3.5 as the backbone network?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: When doing downstream tasks in this article, different parts of the LLM output are used for different tasks. What are the design considerations for this part? I concerned what should I do if I want to apply this model to other downstream tasks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive comments.
### **[W1]**
The user embedding $U_i$ is an index-fetch embedding module implemented using the nn.Embedding module in PyTorch. It finds the corresponding embedding vector for a given user index.
### **[W2]**
Sorry for the confusion. We want to clarify that these two expressions are actually equivalent in our implementation, and we will ensure consistency and standardize the terminology thanks to your suggestions. Below is a detailed explanation of why these two losses are equivalent in our case.
> Cross-entropy loss is a method used to measure the difference between two probability distributions, commonly used in classification problems. It is often used to train neural networks, especially in classification tasks.
>
>
> Assume we have a classification task where the true label distribution is $y$ and the predicted probability distribution is $\hat{y}$. The cross-entropy loss is defined as:
>
> $H(y, \hat{y}) = - \sum_{i=1}^n y_i \log \hat{y}_i$
>
> For a binary classification problem, the cross-entropy loss can be simplified to:
>
> $H(y, \hat{y}) = - (y \log \hat{y} + (1 - y) \log (1 - \hat{y}))$
>
> In classification problems, if we assume that the probability distribution $\hat{y}$ predicted by the model is obtained through the softmax function, then maximizing the log-likelihood function is equivalent to minimizing the cross-entropy loss.
>
> Specifically, assume we have a neural network for a classification task, and the output layer uses the softmax activation function to convert the model output into category probabilities. For each sample $x_i$, its true label is $y_i$, and the probability predicted by the model is $\hat{y}_i$. Maximizing the log-likelihood function:
>
> $\ell(\theta) = \sum_{i=1}^n \log P(y_i|x_i; \theta) = \sum_{i=1}^n \log \hat{y}_i$
>
> is equivalent to minimizing the cross-entropy loss:
>
> $H(y, \hat{y}) = - \sum_{i=1}^n \log \hat{y}_i$
>
> Therefore, in this case, minimizing the cross-entropy loss when training the model is actually performing maximum likelihood estimation.
>
### **[W3]**
As stated in line 203, we mentioned that different parameter freezing strategies are applied to the first $1 - L_F$ layers and the last $L_F - L_{F+U}$ layers. Here, $L_F$ is the number of frozen layers in the middle. The three tasks are trained separately.
### **[Q1]**
We believe that in most cases, the selected $K$ words from the model will not contradict each other. This is because contradictory words are semantically different, meaning their distances in the embedding space are large. Thus, the score-matching function used in the HTPP module will rarely match contradictory words simultaneously.
### **[Q2]**
As mentioned in line 225, the user embedding will not be used as input when performing the TUL task.
### **[Q3]**
Our method requires full access to the implementation code and pre-trained parameters of an LLM to fine-tune some of the parameters. Since GPT-3.5 is not open-sourced and only limited functionalities can be accessed through the internet interface provided by OpenAI, GPT-3.5 cannot be used as the foundation for the proposed method.
### **[Limitations]**
Most downstream tasks can be performed by attaching prediction modules (usually implemented with MLP) to the output latent vectors of LLM. For example, in our implementation, we use classification prediction modules for the LP and TUL tasks, and a regression prediction module for the TP task.
Regarding the choice of attachment position, an intuitive design is to attach it to the position corresponding to the most relevant information for the task. Experimental evaluation is also needed to determine the optimal position and implementation details.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I maintain my origin score. | Summary: This paper aims to leverage large language models (LLMs) to predict human mobility behavior recorded in social media check-ins. The authors argue existing models fail short to model the visiting intentions and travel preferences embedded in check-in sequences, which could be addressed by the semantic understanding capabilities of LLMs. The authors propose to use visiting intention memory network (VIMN) to encode the visit intention in each check-in record, which is feed into the LLMs along with the shared pool of human travel preference prompts (HTPP). The employed LLMs are trained with Low-Rank Adaptation (LoRA) algorithm. The proposed method is evaluated on four benchmark datasets with the tasks of next location prediction, trajectory user link and time prediction.
Strengths: 1. The proposed Mobility-LLM framework is well motivated and clearly illustrated.
2. The proposed method is evaluated on open datasets, and the source code is made publicly available.
3. The authors provide comprehensive experiment results of their model, including the analysis of few-shot learning, language model variants, ablation study and hyperparameter sensitivity.
Weaknesses: 1. The performance gain is not substantially enough. The main results in Table 1 show the performance improvement for next location prediction is often less than 1%. Moreover, it is unclear if the performance gain is statistically significant.
2. The proposed Mobility-LLM framework leverages large language models for mobility prediction, which have substantially larger model size compared to other deep learning baselines. The authors should provide a cost analysis (e.g. complexity of training and inference) to discuss whether the performance gain justifies additional cost.
3. The model design of visiting intention memory network (VIMN) is not adequately explained. The authors propose to use LLM to capture the visiting intention and travel preferences in check-in sequences, but then they claim these semantics information cannot be directly interpreted by LLM, which needed to be reprogramed by VIMN. This design seems a bit self-contradictory and VIMN needs better justification.
4. In the Limitation section, the authors mention their Mobility-LLM is hindered by the differing user and POI counts in check-in data. This is an important observation and it should be discussed with experiment results.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. Please provide the statistical significance of the performance gains.
2. Please provide a cost analysis of the proposed method.
3. Please explain the design choice of visiting intention memory network.
4. Please discuss the limitation of different data sizes in more detail.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful and constructive comments.
### **[W1&Q1]**
In the paper, we ran each set of experiments 5 times and reported their mean values. Therefore, the reported results are not single accidental one. Below, we also report the variances of metrics for the proposed model and the best baseline on the Location Prediction task, along with the corresponding t-value measuring statistical significance.
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric (e-2)|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|
|The Best Baseline|11.47±0.03|24.12±0.17|33.82±0.15|17.44±0.05|19.66±0.04|37.68±0.18|53.44±0.13|28.46±0.06|51.42±0.12|66.25±0.08|71.46±0.11|57.59±0.14|16.92±0.04|36.05±0.04|49.39±0.16|26.02±0.04|
|Mobility-LLM|11.87±0.04|25.14±0.04|36.36±0.09|18.29±0.11|20.47±0.13|39.22±0.19|56.69±0.10|29.21±0.15|53.18±0.17|68.31±0.14|74.11±0.18|59.89±0.03|17.29±0.03|37.17±0.02|53.16±0.20|26.47±0.05|
|t-value|29.85|13.40|37.84|37.95|45.25|19.13|55.95|27.99|32.77|57.54|53.97|36.75|20.67|62.57|52.63|10.07|
The formula for calculating t-value is:
$t = \frac{\mu_{\text{Mobility-LLM}} - \mu_{\text{Best Baseline}}}{\sqrt{\frac{\delta_{\text{Best Baseline}}^2}{n}}}$
The standard deviation is that of the best baseline, with (n = 5). We use a significance level of $\alpha = 0.05$ and a two-tailed t-test with degrees of freedom $d_f = 4$. For this t-distribution, the critical t-value is approximately 2.776. If the calculated t-value is greater than 2.776, the difference is considered significant. As shown in the table, all the calculated t-values are well above the critical value of 2.776.
### **[W2&Q2]**
Below, we provide an empirical performance and efficiency comparison between the proposed model and the optimal baseline for LP and TUL tasks.
**Performance comparison for LP task:**
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric (e-2)|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|
|PLSPL|11.26|24.12|33.82|17.44|18.77|37.31|53.31|27.86|51.42|65.34|71.46|57.59|13.73|30.65|43.18|21.42|
|Mobility-LLM|11.87|25.14|36.36|18.29|20.47|39.22|56.69|29.21|53.18|68.31|74.11|59.89|17.29|37.17|53.16|26.47|
|Improvement (%)|5.42%|4.23%|7.52%|4.87%|9.06%|5.12%|6.34%|4.84%|3.42%|4.54%|3.71%|3.99%|26.00%|21.25%|23.13%|23.60%|
**Efficiency comparison for LP task:**
|Method|Memory|Training Time|Inference Time|Training Epoch|
|-|-|-|-|-|
|PLSPL|3.4GB|4.73h|2.77m|79|
|Mobility-LLM (TinyLlama)|11.3GB|5.45h|6.69m|17|
**Performance comparison for TUL task:**
|Datasets|Gowalla||||WeePlace||||Brightkite||||FourSquare||||
|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|
|Metric (e-2)|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|Acc@1|Acc@5|Acc@20|MRR|
|ReMVC|68.75|74.4|73.19|70.02|65.78|73.09|71.64|66.15|73.85|82.55|87.93|77.93|58.18|66.84|72.67|65.14|
|Mobility-LLM|80.43|86.29|88.56|83.18|79.03|88.04|91.48|83.21|83.06|88.52|90.35|85.73|72.08|79.67|84.32|75.71|
|Improvement (%)|16.93%|15.99%|20.99%|18.81%|20.18%|20.47%|27.69%|25.76%|12.48%|7.24%|2.75%|9.99%|23.87%|19.15%|16.03%|16.19%|
**Efficiency comparison for TUL task:**
|Method|Memory|Training Time|Inference Time|Training Epoch|
|-|-|-|-|-|
|ReMVC|4.6GB|2.77h|1.49m|45|
|Mobility-LLM (TinyLlama)|11.7GB|3.65h|6.98m|13|
We observe that our model increases memory usage due to the inclusion of LLM. However, since our model uses a pre-trained language model, its convergence speed is much faster than other baseline models.
On the high-quality Foursquare dataset, our model shows an improvement of over 21% compared to PLSPL, with the Acc@1 metric reaching up to 26%. Additionally, our model's advantage is more pronounced in the few-shot scenario. Therefore, the total training time is justified compared to traditional models.
In summary, we believe the increased cost is worthwhile for the performance improvement.
### **[W3&Q3]**
LLM is designed to understand semantic information in natural language but not to interpret information in check-in sequences. Therefore, VIMN is proposed to reprogram the check-in sequence. This way, the output of VIMN provides semantic information aligned with natural language, allowing the LLM to process it.
### **[W4&Q4]**
Sorry for causing misunderstandings. What we are trying to discuss in the Limitation section is that the sets of POIs in different datasets (which usually cover different regions) are unique. Therefore, if the proposed model is trained on one dataset, its learned information about the set of POIs is not easily transferable to another dataset. Different sets of POIs have different functionalities and usually have a different number of POIs, making many modules (such as embedding and predictor) technically untransferable in a zero-shot setting.
---
Rebuttal 2:
Comment: I thank the authors for providing additional experiment results. They should be included in the manuscript for completeness. However, I still think the performance gain is not substantial enough, especially considering the significantly larger model size. After reading other reviewers' comments, I lean towards a rejection now.
---
Rebuttal Comment 2.1:
Comment: In the LP task, our model performed exceptionally well on Foursquare, and across all datasets, our model demonstrated outstanding performance in the TUL task. A key strength of our model is its robustness and stability across multiple datasets. In the context of check-in sequence prediction, where data variability is common, our model consistently performs well across different datasets. This generalization ability is crucial, as it indicates that our model is not overfitted to specific scenarios but is instead capable of adapting to various types of human activity data. We also discussed our views on marginal performance improvements in the Comment section under reviewer 6rUV: **"To verify the effectiveness of our model on a larger dataset, we increased the sample size by including users with fewer than 10 check-ins (while other papers typically exclude users with fewer than 50 or 100 check-ins). This inclusion of more sparse samples makes further prediction improvements more difficult. The performance gap between these baselines is minimal, reaching a bottleneck where further improvements with conventional deep learning models are challenging. Thus, the current performance improvements are considered valuable."**
Additionally, the value of our work extends beyond marginal performance improvements. The primary contribution of our model lies in its innovative application of LLMs to check-in sequence prediction, introducing a new methodology that bridges the gap between language models and human activity prediction. Evaluating our work solely based on performance metrics does not fully capture the broader implications and future potential of this approach. We are pioneering a new direction in the field, demonstrating that these models can be utilized to extract and interpret complex human activity patterns. We believe that the introduction of such a novel methodology provides a foundation for future advancements and improvements in the field, which may yield even greater performance gains as the techniques mature.
Regarding concerns about the model size, it's important to note that our model's memory, which is under 12GB, is well within the capabilities of modern GPUs. With the widespread availability of GPUs with large memory capacities, our model is practical and accessible for both academic research and potential industry applications. Additionally, we have demonstrated through our experiments **with the pythia-70M base model that our approach can achieve superior performance while requiring less memory (2.74GB) and computational time compared to other baseline models**. As LLMs continue to evolve, particularly with the trend towards smaller and more efficient models, our approach is poised to become even more effective. The ongoing advancements in LLM technology will allow our reprogrammed models to achieve better results with reduced computational resources. Our model is designed to be versatile and adaptable. It can be easily integrated with any large model available on huggingface, ensuring that our approach remains relevant and capable of benefiting from the latest developments in LLMs. This adaptability makes our model a future-proof solution for human activity prediction. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning Complete Protein Representation by Dynamically Coupling of Sequence and Structure | Accept (poster) | Summary: This paper considers the problem of learning numerical protein representations using both sequential and structural data, and presents CoupleNet, a method that uses two graph types, one based on the amino acid sequence and the other based on the protein's tertiary structure, to extract protein representations via graph convolutions. Compared with several baselines, it is demonstrated that CoupleNet improves the performance of the learned representations across multiple downstream tasks.
Strengths: Combining sequence and structure data in proteomics is an important and timely problem. The performance gains of the proposed approach against baseline methods seem significant to some extent.
Weaknesses: - My main concern with the paper is the lack of comparisons (both in terms of the performance and the novelty of the method) with state-of-the-art protein language models and their structure-aware versions. It would be beneficial to compare the performance of CoupleNet with methods such as ESM-2 [A], ESM-S [B], S-PLM [C], ESM-GearNet [D], or SaProt [E].
[A] Lin, Zeming, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Nikita Smetanin et al. "Evolutionary-scale prediction of atomic-level protein structure with a language model." Science 379, no. 6637 (2023): 1123-1130.
[B] Zhang, Zuobai, Jiarui Lu, Vijil Chenthamarakshan, Aurélie Lozano, Payel Das, and Jian Tang. "Structure-informed protein language model." arXiv preprint arXiv:2402.05856 (2024).
[C] Wang, Duolin, Mahdi Pourmirzaei, Usman L. Abbas, Shuai Zeng, Negin Manshour, Farzaneh Esmaili, Biplab Poudel et al. "S-PLM: Structure-aware Protein Language Model via Contrastive Learning between Sequence and Structure." bioRxiv (2023): 2023-08.
[D] Zhang, Zuobai, Chuanrui Wang, Minghao Xu, Vijil Chenthamarakshan, Aurélie Lozano, Payel Das, and Jian Tang. "A systematic study of joint representation learning on protein sequences and structures." arXiv preprint arXiv:2303.06275 (2023).
[E] Su, Jin, Chenchen Han, Yuyang Zhou, Junjie Shan, Xibin Zhou, and Fajie Yuan. "Saprot: Protein language modeling with structure-aware vocabulary." bioRxiv (2023): 2023-10.
- The writing quality of the paper could be substantially improved, in my opinion. Some examples:
- It seems that the graph $G$ on lines 114-115 is fully connected based on the definition of $\mathcal{E}$, whereas it is not necessarily the case based on the following sections.
- The statement "$T_g$ and $S_g$ are the transformations" on line 127 needs further elaboration.
- What is the domain and range of $\mathcal{F}$ on line 130?
- The description of a message-passing layer in Eq. (4) is unnecessarily restrictive, since not all message-passing mechanisms use batch normalization or a fully-connected layer. It also does not take edge weights/features into account.
- What is the high-level intuition behind Eqs. (5) and (6)? What does the $\times$ notation mean? Is it elementwise multiplication?
- What are the input/output dimensions of the FC layer in Eq. (10)? Are the edge features $e_{ij}$ multiplied elementwise by the neighboring node features?
Technical Quality: 3
Clarity: 2
Questions for Authors: - On line 133, it is mentioned that the positions can be recovered from the geometric representations. I believe that is not entirely correct, because the recovered positions may not be unique, but they can be uniquely recovered modulo the transformation $T_g$. Please verify that this is correct.
- Could you please explain how average pooling is done and why the number of residues is reduced by half after every pooling layer? Does this statement mean that every pair of nodes will be combined into one node? How are the pairs determined? How are the edges determined for the combined nodes (since every node now corresponds to two residues with two different positions, instead of a single residue in the original input graph)?
- On lines 227-228 in Section 3.4, it is mentioned that convolutions are performed on edges as well. However, based on the message-passing operation in Eq. (10), it seems that only node-level convolutions are performed, and there are no edge-level convolutions (even though the edge features are used for aggregating features at each layer).
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The main limitation of this work is the lack of sufficient structural data as compared to sequential data for proteins, which the authors also allude to in Section 5. It would be helpful to comment on possible extensions to the case where, for some proteins, only sequential data is available, and what would happen if, for those proteins, only the sequence graph is considered, whereas for the rest of the proteins, both sequence and structure graphs are used in the CoupleNet pipeline.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JwZm,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** Comparisons with state-of-the-art protein language models and their structure-aware versions.
**A1** Thank you for your valuable feedback! Factually, CoupleNet is not a pre-training model; it is not fair to compare it with pre-training methods. Thus, we combine ESM-2 (650M) [1] with CoupleNet, named ESM-CoupleNet, using generated ESM embeddings as one part of graph node features. We compare ESM-CoupleNet with pre-training methods on protein function prediction and EC number prediction, including sequence-based methods, ESM-1b [2], ESM-2; sequence-function based methods, ProtST [3]; sequence-structure based methods ESM-S [4], GearNet-ESM [5], SaProt [6]. The comparison results are provided in Table 1 of the one-page rebuttal pdf. From this table, we can see that our proposed model, ESM-CoupleNet, achieves the best results among sequence-based, sequence-structure based, and sequence-function based pre-training methods.
**Q2** Writing quality.
**A2** Thank you for your suggestions! (1)In the following sections, we have definitions about the edge $\varepsilon_{ij}$, it is important to ensure the preciseness of the graph definition. (2) As shown in Section 3.1, in the definition of Invariance and Equivariance, the transformations $T_g$ and $S_g$ represent the actions of the symmetry group (like rotations and translations). (3) The range of $\mathcal{F}(\cdot)$ includes the geometric representations of the input 3D graphs, which are as listed in the paper, like $\Phi, \Psi, \Omega, \omega, \theta, \varphi$. (4) We present Eq. 4 as we update the message-passing mechanism in Eq. 10, and we will rectify our descriptions. (5) Eq. 5 is the definition of the local coordinate system as first presented in [7], which is shown in Figure 7(a) in the appendix. The symbol $\times$ represents the cross product operation. (6) As shown in Appendix F, the input/output dimensions of the FC layer in Eq.10 are 256. We do not update the edge features $\boldsymbol{e}_{ij}$ by the neighboring node features, which is different from GearNet [8].
**Q3** On line 133, it is mentioned that the positions can be recovered from the geometric representations.
**A3** Thank you for your reviews! The definitions of complete geometric representations are preliminaries proposed by [9]; this is correct as the geometric representations $\mathcal{F}(G)$ are complete, the structures can be calculated by these features if we use complete geometries.
**Q4** Questions about average pooling.
**A4** Thank you for your valuable reviews! As shown in Section 3.4, Appendix F and Figure 2, we employ complete message passing and sequence pooling layers to obtain the deeply encoded graph-level representations. Every two message-passing layers are followed by an average pooling layer. There are eight message-passing layers in the model. The sequence average pooling functions perform customized average pooling operations on the input tensors based on the calculated indices (dividing the length of the sequence by 2 and flooring the result). It aggregates and summarizes information from the input tensors using scatter operations to produce the output tensors (torch_scatter.scatter_mean). (2) After one average pooling layer, the number of residues reduces by half; for example, only one of the two consecutive graph nodes ($v_i, v_{i+1}$) remains. We reconstruct the graphs by Eq. 9 by expanding the radius threshold $r$ to $2r$ after once pooling, which makes neighbors of center nodes gradually cover more distant and rare nodes, also reducing the computational complexity.
**Q5** On lines 227-228 in Section 3.4...
**A5** Thank you for your comments! In Lines 227-228, we stated that the message passing mechanism only executes on nodes in CoupleNet instead of on nodes and edges alternately used in GearNet. CoupleNet is our proposed model.
**Q5** The lack of sufficient structural data as compared to sequential data for proteins...
**A5** Thank you! For some proteins, only sequential data is available, we can consider using the AlphaFold predicted structures. This model is only developed for modeling protein sequences and structures concurrently. In the era of AlphaFold, it is a trend to modeling protein sequences and structures, using the structural information to enahce the model's performance.
Thank you again! In case our answers have justifiably addressed your concerns, we respectfully thank you that support the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!
[1] Lin, Z., et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022.
[2] Rives, A., et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 2021.
[3] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. ICML. 2023.
[4] Zhang, et al. Structure-informed protein language model. arXiv, 2024.
[5] Zhang, et al. A systematic study of joint representation learning on protein sequences and structures. arXiv, 2023.
[6] Su et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. ICLR, 2024.
[7] John, et al. Generative models for graph-based protein design. NeurIPS, 2019.
[8] Zhang, et al., Protein Representation Learning by Geometric Structure Pretraining. ICLR. 2023.
[9] Liu, et al. Spherical message passing for 3d molecular graphs. ICLR, 2021.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer JwZm,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
The Authors | Summary: This work proposes the CoupleNet, a novel framework designed to interlink protein sequences and structures to derive informative protein representations.
It integrates multiple levels and scales of features, constructing a dynamic graph to capture both local and global structural geometries.
Experimental results demonstrate that CoupleNet outperforms state-of-the-art methods in multiple different tasks, e.g., folding / reaction classification, GO term / EC number prediction.
Strengths: [+] The proposed method is well-introduced and easy to follow.
[+] The improved performance in protein function prediction makes CoupleNet highly valuable for practical applications in biology and medicine.
Weaknesses: 1. The experiment results do not include error bars and related analysis about multiple trials.
2. The proposed method looks quite general. Applying the architecture to more real-world problems, e.g., residue design / engineering may be helpful. Or, including some discussions about these topics may be useful.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. For the structure module, is the model robust to perturbations? Or, the model can aware of small changes in the structures? NMR PDB files which includes multiple models can be a test set.
2. The structure files are not available for real problems. I wonder what's the difference (in the model output space) between Alphafold structures or other folded structures and the real structures? Especially, on multiers or interface data.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer ANTD,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** The experiment results do not include error bars and related analysis about multiple trials.
**A1** Thank you for your valuable feedback! We have done multiple trials. The performance is measured with mean values for five different initializations. We report the mean (variance) for the proposed CoupleNet:
| Method | Fold | SuperFamily | Family | Enzyme Reaction | GO-BP | GO-MF | GO-CC | EC |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| CoupleNet | 60.6(0.36) | 82.1(0.63) | 99.7(0.04) | 89.0(0.17) | 0.467(0.005) | 0.669(0.002) | 0.494(0.005) | 0.866(0.008) |
**Q2** The proposed method looks quite general. Applying the architecture to more real-world problems, e.g., residue design / engineering may be helpful. Or, including some discussions about these topics may be useful.
**A2** Thank you for your informative reviews! Protein design is the computational approach to designing amino acid sequences that fold into a predefined protein structure. ESM-IF [1], PiFold [2], and VFN-IF [3] are methods aiming to do protein design, where the task differs from protein representation learning of function prediction. Removing the sequence information, we do protein inverse folding by our method on CATH 4.2, the results are shown in Table 2 in the one-page rebuttal pdf. Our method also achieves almost the best results on this task.
**Q3** For the structure module, is the model robust to perturbations? Or, the model can aware of small changes in the structures? NMR PDB files which includes multiple models can be a test set.
**A3** Thank you for your reviews! We have conducted the noise analysis experiments, the results are shown in Section 4.2. Proteins in the test set are categorized into four groups based on their similarity to the training set, 30\%, 40\%, 50\%, 70\%, not by the default split rate of 95\%. The results indicate that even when there is a low similarity between the training and test sets, our model also has the highest scores, which demonstrates the robustness of the proposed model. The model can be aware of small changes in the structures, as we have constructed the complete structural representations to ensure global completeness; this has been demonstrated in Appendix H. Such global completeness is theoretically guaranteed to incorporate 3D information completely without information loss, while the local view would miss the long-range effects of subtle conformational changes happening distantly.
**Q4** The structure files are not available for real problems. I wonder what's the difference (in the model output space) between Alphafold structures or other folded structures and the real structures? Especially, on multiers or interface data.
**A4** Thank you for your valuable reviews! The proposed model is a basic framework to model protein sequences and strucutres, if the real structures are not available, we can use Alphafold predicted structures. We have not consider the difference between predicted structures and the real structures, as in experiments, the strucutres we used are all from PDB files. The proposed model can also model protein complex.
Thank you again for all the efforts that helped us improve our manuscript! In case our answers have justifiably addressed your concerns, we respectfully thank you for supporting the acceptance of our work. As you know, your support holds great significance for us. Also, please let us know if you have any further questions. Look forward to further discussions!
[1] Hsu, C., et al. Learning inverse folding from millions of predicted structures. ICML, 2022.
[2] Gao, Z., et al. PiFold: Toward effective and efficient protein inverse folding. ICLR, 2022.
[3] Mao, W., et al. De novo protein design using geometric vector field networks. arXiv, 2023.
[4] Zhang et al. Protein Representation Learning by Geometric Structure Pretraining. ICLR, 2023.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer ANTD,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
The Authors | Summary: The authors tackle the limitation of modeling inter-dependencies between protein sequences and structures. To solve this limitation, this work proposes CoupleNet which dynamically couples protein sequences and structures. Specifically, the authors propose to construct two-type dynamic graphs (sequential graph and radius graph) and execute convolutions on nodes and edges to encode proteins.
Strengths: * The proposed method is memory-efficient as it adopts hierarchical pooling.
* Considering different types of features such as torsion angles, dihedral angles, and planar angles, and conducting the ablation study on them is interesting.
Weaknesses: * The tackling problem has already been solved by recent works [1, 2] which weakens the contribution of the proposed work. ESM-GearNet [1] successfully fuses the sequential and structural representations, and SaProt [2] develops a structure-aware vocabulary allowing to fusing the sequential and structural properties in one model. Also, the pre-training strategy of GearNet [3] could solve the problem as it aims to learn the similarity between the subsequence graph and the structural (radius) graph. I suggest the authors concisely compare those works and elaborate on the advantages of the proposed method.
* For the experimental results, the related works [1, 2] that model the sequences and structures simultaneously should be compared.
* The ablation study is not related to the core techniques of this paper. Even though the core techniques of this paper are dynamic pooling and constructing sequential and radius graphs, the authors do not consider them for the ablation study, hindering the readers from knowing what component largely contributes to the performance.
* As dynamic pooling is one of the core ideas of this paper, in the main body, the authors should explain what kind of pooling strategy is leveraged, what the effect of the proposed dynamic pooling is, and why it is important to model the two graphs.
[1] Zhang et al., "Enhancing Protein Language Models with Structure-based Encoder and Pre-training", ArXiv, 2023.
[2] Su et al., "SaProt: Protein Language Modeling with Structure-aware Vocabulary", ICLR, 2024.
[3] Zhang et al., "Protein Representation Learning by Geometric Structure Pretraining", ICLR, 2023.
Technical Quality: 1
Clarity: 2
Questions for Authors: * Which pooling strategy did you adopt? Please elaborate on the pooling process.
* Could you explain the advantages of this work from the difference with existing methods in Section 3.4?
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 3
Limitations: The authors discuss the limitation in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer fLEX,
We are grateful for your thorough review. Your comments are highly valued, and we would like to express our heartfelt gratitude. We do our utmost to address the questions you have raised:
**Q1** I suggest the authors concisely compare those works and elaborate on the advantages of the proposed method.
**A1** Thank you for your valuable feedback! (1) As shown in Lines 221-229 in the manuscript, there are only two different types of graphs in the proposed CoupleNet: radius graph and sequential graph; we did not use the k-nearest graph, as some nodes have distant neighbors when having $k$ neighbors. (2) The threshold in the radius graph in GearNet is set to be constant, but we change the threshold of radius dynamically to learn different distance relationships. (3) The message passing mechanism only executes on nodes in CoupleNet instead of on nodes and edges alternately used in GearNet. CoupleNet performs convolutions on nodes and edges simultaneously with several pooling layers. (4) ESM-GearNet combines ESM embeddings with GearNet embeddings; SaProt also uses ESM embeddings; it only uses the $C_\alpha$ coordinate at the structure level, but CoupleNet models the coordinates of all backbone atoms completely.
**Q2** Comparisons with the related works.
**A2** Thank you for your informative reviews! Factually, CoupleNet is not a pre-training model; it is not fair to compare it with pre-training methods. Thus, we combine ESM-2 (650M) [1] with CoupleNet, named ESM-CoupleNet, using generated ESM embeddings as one part of graph node features. We compare ESM-CoupleNet with pre-training methods on protein function prediction and EC number prediction, including sequence-based methods, ESM-1b [2], ESM-2; sequence-function based methods, ProtST [3]; sequence-structure based methods ESM-S [4], GearNet-ESM [5], SaProt [6]. The comparison results are provided in Table 1 of the one-page rebuttal pdf. From this table, we can see that our proposed model, ESM-CoupleNet, achieves the best results among sequence-based, sequence-structure based, and sequence-function based pre-training methods.
**Q3** The ablation study is not related to the core techniques of this paper.
**A3** Thank you for your reviews! (1) In this paper, a novel two-graph-based approach for modeling sequential and 3D geometric features is proposed, ensuring global completeness in protein representation, which has been demonstrated in the appendix. We have done the ablations on the sequential and radius graphs by removing different input features, as shown in Table 3 in the paper. (2) CoupleNet performs concurrent convolutions and sequence poolings on nodes and edges. Convolution and pooling operations consist of the network. We remove the pooling operations in the network, and the results are shown in the following table. We can see without pooling operations the model's performance degrades significantly.
| Method | GO-BP | GO-MF | GO-CC | EC |
| :--- | :---: | :---: | :---: | :---: |
| CoupleNet | 0.467 | 0.669| 0.494 | 0.866 |
| w/o pooling | 0.362 | 0.535 | 0.420 | 0.748 |
Besides, in Figure 4 of the manuscript, CoupleNet demonstrates its capability to capture long-range relationships; higher accuracies are observed for relatively large proteins with sequence lengths surpassing the mean length. This also illustrates the effectiveness of the dynamic pooling operations.
**Q4** Questions about pooling strategy.
**A4** Thank you for your valuable reviews! (1) As shown in Section 3.4, Appendix F, and Figure 2, we employ complete message passing and sequence pooling layers to obtain the deeply encoded graph-level representations. Every two message-passing layers are followed by an average pooling layer. There are eight message-passing layers in the model. The sequence average pooling functions perform customized average pooling operations on the input tensors based on the calculated indices (dividing the length of the sequence by two and flooring the result). It aggregates and summarizes information from the input tensors using scatter operations to produce the output tensors (torch_scatter.scatter_mean). (2) After one average pooling layer, the number of residues reduces by half, and we expand the radius threshold $r$ to $2r$ after once pooling, which makes neighbors of center nodes gradually cover more distant and rare nodes, also reducing the computational complexity. These operations make the model capture the local to the global features.
**Q5** Could you explain the advantages of this work from the difference with existing methods in Section 3.4?
**A5** Thank you! We have stated the differences in Section 3.4. We will rectify it to explain extra advantanages in the revised version. For example, our model are more memory-efficient as we adopt dynamic (hierarchical) pooling to reduce the seuqnce length, the dynamically changed graphs can better model the node-edge relationships. We model the sequential and 3D geometric features, ensuring global completeness in protein representations.
Thank you again! In case our answers have justifiably addressed your concerns, we respectfully thank you that support the acceptance of our work. Also, please let us know if you have any further questions. Look forward to further discussions!
[1] Lin, Z., et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022.
[2] Rives, A., et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 2021.
[3] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. ICML. 2023.
[4] Zhang, et al. Structure-informed protein language model. arXiv, 2024.
[5] Zhang, et al. A systematic study of joint representation learning on protein sequences and structures. arXiv, 2023.
[6] Su et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. ICLR, 2024.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer fLEX,
We express our sincere gratitude for your constructive feedback in the initial review. It is our hope that our responses adequately address your concerns. Your expert insights are invaluable to us in our pursuit of elevating the quality of our work. We are fully aware of the demands on your time and deeply appreciate your dedication and expertise throughout this review.
We eagerly anticipate your additional comments and are committed to promptly addressing any further concerns.
Once again, we extend our heartfelt thanks for your time and effort during the author-reviewer discussion period.
Sincerely,
The Authors | null | null | Rebuttal 1:
Rebuttal: First and foremost, we would like to express our sincere gratitude for the insightful and constructive feedback provided by the reviewers on our manuscript.
We are particularly thankful for the Reviewers' recognition of the method of our study; it is important to combine sequences and structures (Reviewer **fLEX**, **JwZm**). We also appreciate their acknowledgment of the experiments we have conducted, which is interesting and better than other methods, making it highly valuable for practical applications in biology and medicine (Reviewer **fLEX, JwZm, ANTD**). Reviewer **fLEX** acknowledged that the proposed method is memory-efficient as it adopts hierarchical pooling, and Reviewer **ANTD** presented that our method is well-introduced and easy to follow.
We are grateful for the feedback received, particularly regarding the lack of comparisons with the pre-training methods; indeed, our method is a basic network that models protein sequences and structures, which is not a pre-training model, in order to compare with pre-training models fairly, we use ESM-2 [2] embeddings as the graph node features and develop ESM-CoupleNet. We compare ESM-CoupleNet with pre-training methods on protein function prediction and EC number prediction, including sequence-based methods, ESM-1b [2], ESM-2; sequence-function based methods, ProtST [3]; sequence-structure based methods ESM-S [4], GearNet-ESM [5], SaProt [6]. The comparison results are provided in Table 1 of the one-page rebuttal pdf. From this table, we can see that our proposed model, ESM-CoupleNet, achieves the best results among sequence-based, sequence-structure based, and sequence-function based pre-training methods.
In the one-page rebuttal pdf, we also provide the results on protein design, which is to design amino acid sequences that fold into a predefined protein structure, different from protein representation learning of function prediction. Removing the sequence information, we do protein inverse folding by our method on CATH 4.2, the results are shown in Table 2 in the one-page rebuttal pdf. These results illstrates the generalization of our method.
Once again, we sincerely appreciate the reviewers' feedback and remain committed to continuously improving our research and manuscript based on their valuable insights. Thank you again!
[1] Lin, Z., et al. Language models of protein sequences at the scale of evolution enable accurate structure prediction. bioRxiv, 2022.
[2] Rives, A., et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. PNAS, 2021.
[3] Xu, M., et al. Protst: Multi-modality learning of protein sequences and biomedical texts. ICML. 2023.
[4] Zhang, et al. Structure-informed protein language model. arXiv, 2024.
[5] Zhang, et al. A systematic study of joint representation learning on protein sequences and structures. arXiv, 2023.
[6] Su et al. SaProt: Protein Language Modeling with Structure-aware Vocabulary. ICLR, 2024.
Pdf: /pdf/e45ade12c0b5ed195c5846006bf4b9e5e2fef059.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DeNetDM: Debiasing by Network Depth Modulation | Accept (poster) | Summary: This paper presents a useful theoretical framework that shows that samples that exhibit spurious correlations lie on a lower rank manifold and that the depth of a network acts as an implicit regularizer for the rank of the attribute subspace. Building upon this, the paper proposes a method *DeNetDM* that creates a biased strong encoder (deep network) and a debiased weak encoder (shallow network) and then leverages this to train a final strong encoder that is debiased.
Strengths: 1. The theoretical characterization provided in this paper is extremely intuitive and useful. Confirming the intuitions that examples with spurious attributes lie on a lower-dimension manifold formally is the most important contribution of this paper in my opinion. Supplementing this with the explanation that the depth of the network acts as implicit regularizer makes the method of this paper theoretically sound.
2. The confirmation of theoretical findings using synthetic experiments helps validate the theoretical claims.
3. Putting the theoretical findings together into a method that does outperform prior work illustrates the effectivness of this approach.
Weaknesses: 1. Empirical Evaluation: The datasets chosen by the authors to evaluate their method are not standard in this literature. Common datasets such as Waterbirds [1] and CelebA [2] are missing. Moreover, including newer more challenging datasets such as UrbanCars [3], SpuCoAnimals [4] and SpuCoMNIST [4] could further improve this paper.
2. Insufficient discussion of similarity to related work: Two related works seem very similar in method to the proposed method. 1) Overcoming simplicity bias in deep networks using a feature sieve and 2) Learning from failure: De-biasing Classifier from biased classifier. The paper would greatly benefit from discussing the similarities and differences between the proposed approach and these 2 closely related approaches. Moreover, Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias is another relevant theoretical / empirical study of spurious correlations which would be useful to compare & contrast with to better understand the contributions of the authors' work.
[1] https://arxiv.org/abs/1911.08731
[2] https://arxiv.org/abs/1411.7766v3
[3] https://arxiv.org/abs/2212.04825
[4] https://arxiv.org/abs/2306.11957
Technical Quality: 4
Clarity: 4
Questions for Authors: See above in weaknesses.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Empirical evaluation of method: discussed in greater depth in weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback.
**W1.** Empirical Evaluation: Including newer more challenging datasets could further improve this paper.
**A.** In response to the reviewers' suggestions, we have evaluated the effectiveness of our approach on the CelebA dataset, where blonde hair is the target attribute and gender is the spurious attribute. Due to time constraints, instead of using the full CelebA dataset, we employed a subsampled version as done and described in [A], maintaining the same data splits for consistency. We employed the same architectures that were applied to the BFFHQ dataset, ensuring that the target architecture remains consistent with [A]. To ensure a fair comparison, we reference results from [A] on the same dataset split for methods such as ERM, JTT, Disentangled Feature Augmentation, and DCWP. Currently, we lack results for LC and LfF on this specific dataset version, as they used the original CelebA dataset in their study. We plan to incorporate these comparisons before finalizing the camera-ready version of our paper.
| Method | Worst Group Accuracy |
|:--------:|:--------------------:|
| ERM | 47.02 |
| DisEnt | 65.26 |
| JTT | 76.80 |
| DCWP | 79.30 |
| DeNetDM | **81.04** |
As the results indicate, DeNetDM achieves relatively high worst-group accuracy compared to other approaches.
We also conducted experiments on two variants of SpucoMNIST with low and medium spurious difficulty magnitude respectively and compared our results to those reported in [B]. The results were averaged over three random seeds. It can be observed that DeNetDM performs on par with its supervised counterparts, such as GroupDRO and Group Balancing on SpucoMNIST dataset.
| Method | Worst Group Accuracy (Medium difficulty) | Worst Group Accuracy (Low difficulty) |
|:----------------:|:----------------------------------------:|:-------------------------------------:|
| ERM | 66.0 (5.0) | 97.0 (0.5) |
| GDRO | 90.4 (1.9) | 96.6 (0.4) |
| JTT | 65.0 (18.4) | 94.8 (1.0) |
| Group Balancing | 92.9 (0.1) | 95.8 (0.6) |
| DeNetDM | **93.91 (1.1)** | 96.85 (0.1) |
**W2.** Insufficient discussion of similarity to related work: Two related works seem very similar in method to the proposed method. 1) Overcoming simplicity bias in deep networks using a feature sieve and 2) Learning from failure: De-biasing Classifier from biased classifier.
**A.** We appreciate the reference to the following papers: [C] "Overcoming Simplicity Bias in Deep Networks Using a Feature Sieve," [D] "Learning from Failure: De-biasing Classifiers from Biased Classifiers," and [E] "Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias." A commonality between our work and these approaches is the emphasis on examining the early training dynamics of neural networks to identify and address biases. Specifically, the authors of [C] propose that simple features are learned quickly, appear in early layers of the neural network, and tend to propagate throughout the subsequent layers. Similarly, [D] posits that biases that are easy to learn are captured during the initial phases of training. In [E], the authors demonstrate that, in the early training iterations, the influence of a spurious feature on the network output increases linearly with the level of spurious correlation. While our approach also focuses on the early training dynamics of neural networks, it uniquely characterizes the differences in these dynamics across models with **varying depths** using a Product of Experts architecture, an aspect not explored by the aforementioned approaches. Instead of concentrating on a single neural network, we investigate how the early training dynamics differ between deep and shallow models, demonstrating how core and bias attributes are learned throughout the training process for each model. As suggested by the reviewer, we will incorporate this discussion into the final version of our paper.
[A] Training Debiased Subnetworks with Contrastive Weight Pruning, CVPR 2023.
[B] Towards Mitigating Spurious Correlations in the Wild: A Benchmark & a more Realistic Dataset, Arxiv, Sep 2023.
[C] Overcoming Simplicity Bias in Deep Networks Using a Feature Sieve, ICML 2023.
[D] Learning from Failure: De-biasing Classifiers from Biased Classifiers, NeurIPS 2020.
[E] Identifying Spurious Biases Early in Training through the Lens of Simplicity Bias, AISTATS 2024.
---
Rebuttal Comment 1.1:
Comment: I find the author's rebuttal to be convincing and continue to recommend this paper for acceptance. | Summary: This paper proposes the unsupervised debiasing strategy via modulating network layers depth. It proves that the network with deep layers exploits bias attributes more than that with shallower layers, and shows that training on such less-biased network with shallow layers exhibit debiased learning. The proposed method outperforms other baselines in several datasets including CMNIST, C-CIFAR10, BAR, and BFFHQ.
Strengths: - It is novel contribution that identifies how the rank of attributes, specifically regarding dataset bias, is related to network depth.
- DeNetDM shows superior debiasing performances against existing baselines across different benchmark datasets and correlation severities.
Weaknesses: 1. Empirical analysis on networks' depth to learning of different ranks (bias-aligned / bias-conflicting) is limited to CMNIST. Additional experiments on other benchmark datasets, e.g., C-CIFAR10, BAR, and BFFHQ, are required for validity.
2. As shown in Tables 8 and 10, different pairs of networks result in significantly different results (about 20% differences in conflicting accuracy in Table 8). How sensitive are the bias-aligned (bias-conflicting) attributes with regard to depth of deep (shallow) networks, and the following performances of DeNetDM in stage 2?
3. Although authors focus on evaluation on biased dataset, it is unclear whether DeNetDM maintains the accuracy when applied to unbiased dataset.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weaknesses.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and suggestions.
**W1.** Empirical analysis on networks' depth to learning of different ranks (bias-aligned / bias-conflicting) is limited to CMNIST. Additional experiments on other benchmark datasets, e.g., C-CIFAR10, BAR, and BFFHQ, are required for validity.
**A.** Please refer to the global response.
**W2.** As shown in Tables 8 and 10, different pairs of networks result in significantly different results (about 20% differences in conflicting accuracy in Table 8). How sensitive are the bias-aligned (bias-conflicting) attributes with regard to depth of deep (shallow) networks, and the following performances of DeNetDM in stage 2?
**A.** As the reviewer correctly pointed out, Table 8 in the main paper provides insights into how variations in network depth affect the performance of DeNetDM. As shown in Table 8, DeNetDM can better distinguish between bias and core attributes when there is a significant difference between the depths of shallow and deep branches. However, this also depends on the complexity of the features. For instance, in the case of C-CIFAR10, ResNet 32 may capture core features more effectively than ResNet 8 due to the architectural differences. Therefore, pairing ResNet 50 with ResNet 32 in our approach enhances the learning of core attributes in ResNet 32, as reflected in the conflicting accuracy shown in Table 10.
Regarding sensitivity, if the shallow architecture can partially learn the core attributes and there is a significant difference in the depth between the deep and shallow networks, DeNetDM efficiently aids in learning the bias and core attributes within the deep and shallow models, respectively. Debiasing the target architecture in Stage 2 is more effective when the deep and shallow models accurately capture the bias and core attributes respectively in the first stage, allowing us to leverage the distilled information from these networks. Thus, it can be concluded that DeNetDM is sensitive to the depth variations between the deep and shallow branches.
**W3.** It is unclear whether DeNetDM maintains the accuracy when applied to unbiased datasets.
**A.** Our approach operates under the assumption that the data is biased. In such scenarios, deep networks tend to prioritize biased attributes, while shallow models focus on core attributes. When the data is unbiased, the model treats all attributes equally, allowing them to be considered core attributes for the prediction task. In a Product of Experts (PoE) setting, each expert learns a portion of these core attributes, which are then used for joint prediction using both the deep and the shallow branches.
As a result, it cannot be guaranteed that either the deep or shallow branches will perform well individually on the prediction task, since both models contribute to the joint prediction through PoE. We validate this intuition by training stage 1 of DeNetDM on an unbiased CMNIST dataset using a 5-layer MLP and a 3-layer MLP. The deep model and shallow model achieve accuracies of 75.04% and 67.95%, respectively. However, the PoE model, which learns to jointly predict from both deep and shallow experts, achieves an accuracy of 98.64%.
Our approach remains effective if we consider the PoE model from stage 1 as a whole (as the core model) in stage 2 of DeNetDM, rather than focusing on individual shallow or deep models as in the current setting. However, this requires a strategy for model selection (PoE or the shallow model) based on the degree of bias in the data, which could be explored in future work.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the rebuttal regarding the concern of architecture sensitivity and unbiased dataset. My further responses are found below:
1. I appreciate the authors' response and the issue has been resolved.
2. I’m still concerned that DeNetDM’s sensitivity requires _significantly large amount of additional hyperparameter tuning_ for network architectures in both shallow and deep networks. Specifically, if there are $N$ and $M$ candidate of architectures for DeNetDM, it requires $N\times M$ validation for choosing the best model, which significantly increases the overall costs for validations when integrated with other hyperparameters. I believe it needs more systematic analysis on how sensitive the DeNetDM is, and therefore how consistently it outperforms other baselines that do not require such tuning, e.g., LfF, DFA.
3. Thanks for the clarification, and I believe such discussion regarding the limitation of DeNetDM when faced with unbiased test set should be included in the paper.
---
Reply to Comment 1.1.1:
Title: Response to Reviewer CcWa
Comment: We thank the reviewer for their valuable feedback. We provide additional clarifications with the hope of addressing the remaining concerns.
1,3 : We will include these results in the final revision of the paper.
2: We acknowledge the reviewer's point that if there are $N$ and $M$ candidate architectures for DeNetDM, selecting the optimal model could theoretically require up to $N \times M$ validations. However, the list of candidate architectures is manageable because we impose specific constraints on $N$ and $M$ based on our observations and assumptions. Specifically, we assume that the deep branch is either the same depth as the target network or one layer deeper, which limits the hyperparameter search space to $M=2$ across all cases. For the shallow network, we choose $N$ such that the depth of the shallow branch is significantly less than that of the deep branch, as our approach performs better when there is a substantial difference in depth between the shallow and deep networks. This further reduces the search space. For instance, in the case of CMNIST, where the target model is a 5-layer MLP, we consider the following pairs for hyperparameter search: (5,3), (5,4), (6,3), (6,4), and (6,5), which requires only five additional validations. Similarly in the case of C-CIFAR10, where the target model is a ResNet-18, we consider the following pairs for hyperparameter search: (ResNet-18, 3-layer CNN), (ResNet-18, 4-layer CNN), (ResNet-18, 5-layer CNN), (ResNet-20, 3-layer CNN), (ResNet-20, 4-layer CNN), (ResNet-20, 5-layer CNN) which requires only 6 additional validations.
A similar strategy is applied for hyperparameter tuning on other datasets. The best-performing pair from this limited search space is used to report results in Tables 1 and 2. More extensive hyperparameter search might yield better solutions.But, even within the restricted search space, we demonstrate that our approach consistently outperforms all the baselines. Moreover, methods like LfF and DFA also introduce their own set of hyperparameters, such as $q$ in LfF (e.g., $q \in (0.7, 0.8)$) and $(\lambda_{dis}, \lambda_{swapb}, \lambda_{swap})$ in DFA, necessitating method-specific hyperparameter tuning to obtain the best model.
The architectures used in Table 10 for C-CIFAR-10 were not originally included in the hyperparameter search space for C-CIFAR-10, as the deep network in question (ResNet-50) is significantly deeper than the target network (ResNet-18). The primary motivation for presenting the experiments in Table 10 is to demonstrate that DeNetDM can scale effectively with more advanced ResNet architectures. As discussed in the initial rebuttal, ResNet32 is more effective in capturing core features than ResNet8, which is a fundamental property of the architecture and not the depth. However, in both the cases, the inductive bias of depth modulation still holds since the deeper branch (ResNet-50) consistently captures bias irrespective of the depth of the shallow network. Hence, the inductive bias relying on depth modulation in DeNetDM is agnostic of such fundamental benefits / weaknesses stemming from architecture choices, and its sensitivity is a reflection of the best attainable accuracy from any given
architecture. | Summary: The submission proposes that deeper networks are more likely to use biased features than shallower ones. Using this idea, they develop a training algorithm to encourage reliance upon non-spurious attributes. This is done by training a deep and shallow network as a product-of-experts, then distilling the shallow network into a target network.
Analytic experiments on synthetic datasets indicate that deeper networks do tend to fixate on simpler features as training progresses. A theoretical development of the key intuitions are provided, along with experimental results suggesting that the proposed algorithm can improve over baselines.
Strengths: Originality: The core idea in the submission about depth modulation is original to my knowledge. Some existing work (e.g. Gradient Starvation: A Learning Proclivity in Neural Networks) has discussed how being able to learn some target-correlating features can impede the learning of others, so it seems plausible that a network that learns a simpler feature quicker is less likely to also learn other features.
Quality: There is a good set of comparisons to related methods, on some of the standard datasets in the literature. The ideas presented and the intuitions appear sound. Limitations such as anticipated difficulties in scaling beyond single-biases are acknowledged.
Clarity: The submission is generally intuitively clear and easy to follow.
Significance: The ML community continues to consider ways to approach bias in datasets, since specific domains that are data-scarce and unique enough that foundation models may not be available continue to motivate explorations along these lines.
Weaknesses: One key weakness in approaches of this flavour is the reliance upon use of a “debiased’ validation set to pick crucial hyper-parameters. This presumes knowledge of the bias, and if one knows it exactly, one can likely adopt other bias-informed approaches in practice. There is some work (e.g. Systematic Generalization with Group-Invariant Predictions) that have used a differently-biased validation set to pick hyperparameters relative to the test set, which can be a slightly better alternative than use of an “oracle” validation set.
In practice, it is unclear if a model is always going to be operating in an OOD setting. In fact, it might be the case that OOD settings arise relatively infrequently in deployment. From this perspective, it might make more sense to consider both aligned and conflicting accuracies of all baselines. A competing model that takes less of a hit in-distribution might be preferable.
The learning dynamics of decodability throughout training are interesting, but based only on the simpler datasets with a more drastic difference in the “complexity” of the core and biasing attributes. It is unclear how these trends play out for more realistic data.
Minor:
Is an equation with the full loss for training the target model missing? I’m assuming the objective used for training the target network is really L_t + \lambda*L_dist.
A related baseline might be relevant for inclusion: Simple and Fast Group Robustness by Automatic Feature Reweighting, Qiu et al., ICML 2023.
For the mathematical development, it might be better to include some of the key results from Huh et al., Wang and Jacot, etc. for sake of completeness, and also make it easier to gauge the additional contributions brought in this submission.
Using \citep to might be better to enclose citations in parentheses/brackets.
Technical Quality: 3
Clarity: 3
Questions for Authors: The submission mentions use of a “small” validation set, could they clarify the exact size, relative to training and testing (I couldn’t find it easily upon a quick look)?
It would be interesting to look at the decodability dynamics on the more complex datasets such as BFFHQ and BAR. Are there technical difficulties in showcasing these trends that I missed?
Would it be possible to compare both aligned and conflicting accuracies for all methods?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The submission acknowledges limitations and broader impact in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer's insightful suggestions and comments. Below, we provide detailed responses to each of the questions and concerns raised.
**W1/Q1.** One key weakness in approaches of this flavour is the reliance upon use of a “debiased’ validation set to pick crucial hyper-parameters.
**A.** The authors of [A] provide empirical evidence that group annotations in the validation set are crucial for hyperparameter tuning in bias mitigation. They show that tuning hyperparameters for worst-group validation accuracy significantly improves worst-group test performance compared to tuning for average validation accuracy. Following this approach, as well as the methodology in [B], we tune hyperparameters using a small validation set with group annotations whenever available. For the CMNIST and C-CIFAR10 datasets, we create an unbiased validation set with 250 samples to optimize hyperparameters. In the case of BFFHQ, where an unbiased validation set is not available apart from the test set, we use 40 out of 1000 samples from the test set for validation. For the BAR dataset, we do not perform hyperparameter tuning due to the lack of group annotations and instead apply the same hyperparameters used for the C-CIFAR10 dataset.
We thank the reviewer for citing a reference that employs a differently biased validation set for hyperparameter tuning. However, we believe this is a completely different scenario that warrants independent investigation and could be explored as a future work.
**W2/Q3.** It might make more sense to consider both aligned and conflicting accuracies of all baselines. Would it be possible to compare both aligned and conflicting accuracies for all methods?
**A.** We computed the aligned and conflicting accuracies for all baselines, as well as for our approach, on the BFFHQ dataset. Due to rebuttal time constraints, we could not compare the accuracies with the baselines for the other datasets. As shown in the table, approaches like LfF and DFA reduce accuracy on aligned points while attempting to improve accuracy on conflicting points for BFFHQ dataset. This is likely because these methods focus on enhancing performance in conflicting distribution by reweighing or augmenting minority groups. In contrast, our approach achieves strong performance on conflicting points without sacrificing in-distribution accuracy.
| Method | Bias Aligned Acc (%) | Bias Conflicting Acc (%) |
|:--------:|:---------------------:|:------------------------:|
| ERM | 93.9 (0.2) | 56.7 (2.7) |
| JTT | 96.56 (2.9) | 65.3 (2.5) |
| LfF | 87.49 (4.5) | 62.2 (1.6) |
| DFA | 79.59 (2.7) | 63.9 (0.3) |
| LC | 97.8 (1.8) | 70.0 (1.4) |
| DeNetDM | **98.55 (0.7)** | **75.7 (2.8)** |
**W3/Q2.** The learning dynamics of decodability throughout training are interesting, but based only on the simpler datasets with a more drastic difference in the “complexity” of the core and biasing attributes. It is unclear how these trends play out for more realistic data.
**A**. Please refer to the global response.
**W4.** Minor corrections.
**A.** We thank the reviewer for these minor corrections. We will address them in the final version of the paper.
[A] Liu et al. (2021). Just Train Twice: Improving Group Robustness without Training Group Information, ICML 2021.
[B] Wang et al. (2022). On Feature Learning in the Presence of Spurious Correlations, NeurIPS 2022.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response! Having read the rebuttal and the other reviews, I am keeping my original rating. | Summary: This work makes several contributions in relation to network depth and dataset bias. It shows that bias-aligned samples lie on a lower rank manifold compared to bias-conflicting samples. This is linked with network depth by showing how deeper networks tend to prefer spurious correlations, which is demonstrated by the decodability of bias and core features from networks of different depths. Based on these insights, the authors propose DeNetDM, a debiasing approach to train deep (biased) and shallow (debiased) branches, with target debiased model corrected via knowledge distillation. Empirical results demonstrate effectiveness on both synthetic and real-world image classification datasets.
Strengths: [S1] The claims in the paper are backed both theoretically and empirically. Theorem 1 predicts lower-rank manifolds for bias-aligned samples and Theorem 2 shows how deeper networks prefer lower-rank features, thereby providing a solid theoretical backing for the argument that deeper networks prefer bias-aligned samples. The decodability vs depth plots support these claims.
[S2] The proposed debiasing method does not rely on bias annotations or data augmentations, which is advantageous. The efficacy is demonstrated on evaluation benchmarks with varying ratios of biased/unbiased samples, which clearly shows benefits over existing debiasing methods.
Weaknesses: [W1] It is unclear why the biased branch needs to be deep. Clearly, both core and spurious features are available at shallower depths. Would it not be more beneficial to build a shallower bias detector instead? Other works including: [1, 2] have already explored leveraging shallower layers for bias correction, which is more efficient.
[W2] DeNetDM is trained in two stages, but it is unclear why this is necessary. Is it not possible to train both biased branch and perform debiasing in parallel as done in [1]?
[W3] The study does not take into account the impacts of loss functions/regularization. For instance, would the deeper networks exhibit similar proclivity to spurious features with spectral decoupling [3]?
[W4] All the experiments are performed on image classification tasks, which raises questions on generalization to other tasks.
[1] Shrestha, Robik, Kushal Kafle, and Christopher Kanan. "Occamnets: Mitigating dataset bias by favoring simpler hypotheses." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[2] Clark, Christopher, Mark Yatskar, and Luke Zettlemoyer. "Learning to model and ignore dataset bias with mixed capacity ensembles." arXiv preprint arXiv:2011.03856 (2020).
[3] Pezeshki, Mohammad, et al. "Gradient starvation: A learning proclivity in neural networks." Advances in Neural Information Processing Systems 34 (2021): 1256-1272.
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions correspond to the points listed in the weaknesses section:
[Q1] Why not build a shallower biased branch? Is there any advantage a deeper biased branch offers compared to a shallower one?
[Q2] Does the approach need to be multi-staged? Could the debiasing occur in parallel to the training of the biased branch?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations section should mention that is not tested on non-vision/non-classification tasks. Would the approach need any modifications for other tasks?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and questions.
**W1/Q1.** Why not build a shallower biased branch?
**A.** TL;DR: Both core and spurious features are indeed available at shallower depths. However, since we do not use explicit bias annotations, we have no a priori way of telling them apart. So, we aim to segregate them automatically. We observe (in Theorem 2) that although both core and spurious features are available at shallower depths, at greater depths, only spurious features are available, while core features are not. So, deep networks provide us with a unique way to identify spurious features (by looking at the features that survive to greater depths), something that is not possible with shallow networks.
Longer answer: In Theorem 2 (via Lemma 2), we show that the propagation probability $\pi_i$ is inversely proportional to the rank of an attribute $r_i$. Specifically, for any two attributes $a_1$ and $a_2$ with ranks $r_1$ and $r_2$ respectively, if $r_2 > r_1$, then at any given depth $d$, $a_1$ is more likely to propagate through depth $d$ than $a_2$. Simply put, attributes with a lower rank would propagate to a greater depth than those with a higher rank.
Since spurious attributes are typically of a much simpler nature (say, color, texture, etc.) relative to core attributes (such as shape), they are more likely to lie on a lower dimensional manifold, effectively having a lower rank. Combining this with Theorem 2, we thus have that for deep networks, spurious attributes survive to greater depths than core attributes. For this reason, deep networks give us a unique way to segregate bias and core attributes that shallow networks cannot offer since both bias and core attributes can survive in the early layers (shallow network) due to their higher propagation probabilities (stemming from higher effective ranks (Huh et al., 2023)), but only spurious / bias attributes can survive up to the later (deeper) layers.
**W2/Q2.** Does the approach need to be multi-staged?
**A.** It is possible to train the deep model, shallow model, and target model in a single stage. In each iteration, the deep and shallow models can be updated using a product of experts (PoE) approach, followed by updating the target model through distillation within the same iteration. We believe that this approach may be less stable than the two-stage method. This instability could arise because the segregation of the deep and shallow models as experts in bias and core attributes, respectively, would not have been established in the early epochs. Consequently, the target model may receive mixed signals during the initial iterations. While parallel training is feasible, it introduces the additional challenge of stabilizing the training dynamics of the target model. Also, training and backpropagating through all three networks simultaneously demands significant GPU memory resources which can be reduced using multiple stages of training. Due to time constraints, we could not experiment with DeNetDM in this setting, so the single-stage approach could perform better or worse than anticipated.
**W3.** Would deeper networks show similar tendencies toward spurious features with spectral decoupling?
**A.** Thanks for the suggestion, which led to valuable insights. To test the hypothesis that “depth captures biases” under debiasing regularizers like spectral decoupling, we performed ERM on the CMNIST dataset with both deep and shallow networks. We found that while accuracies on bias-aligned points remained near 100%, spectral decoupling(**SD**) improved accuracies on bias-conflicting points more significantly for the deep network than the shallow network (see "Reference to Fig. A in the PDF attached to global response" for results and the best bias-conflicting accuracies in the table below).
| Method | Without SD | With SD | Improvement (With SD - Without SD) |
|:--------------------:|:------------:|:---------:|:----------------------------------:|
| Shallow (3-layer) MLP | 43.56 | 47.44 | +3.88 |
| Deep (5-layer) MLP | 36.90 | 48.74 | +11.84 |
This indicates that the deep network, being more prone to capturing bias, benefited more from SD in terms of debiasing and accuracy on bias-conflicting points. The shallow network, already focusing on core attributes, saw only marginal improvement. This supports our claim that deeper networks are more bias-prone than shallower ones.
Additionally, the deep network with SD slightly outperformed the shallow network. We hypothesize that SD alleviates rank bottlenecks in deep networks, improving the propagation of core attributes while leveraging the network's larger capacity. This suggests that a deeper network with SD can be both bias-free and more capable, a phenomenon warranting further theoretical exploration.
**W4.** Generalization to other tasks.
**A.** We evaluated the performance of DeNetDM on the CivilComments dataset [A], which involves natural language debiasing. The task requires classifying online comments as toxic or non-toxic, with labels spuriously correlated with mentions of certain demographic identities. As shown in Table 1, our approach performs comparably to state-of-the-art methods. Due to the constrained rebuttal timeline, we just applied our model out of the box, without any reasonable hyperparameter tuning. This was just to illustrate its applicability to domains beyond vision. We believe that with further hyperparameter tuning, we would be able to even surpass the marginal differences that we have with SOTA on CivilComments.
| Method | Worst Group Acc (%) |
|:--------:|:-------------------:|
| ERM | 58.6 (1.7) |
| JTT | 69.3 (-) |
| LfF | 58.3 (0.5) |
| LC | 70.3 (1.2) |
| DeNetDM | 68.33 (-) |
[A] WILDS: A benchmark of in-the-wild distribution shifts, ICML 2021.
---
Rebuttal 2:
Comment: I thank the authors for the response. One follow-up I have is:
W1/Q1. Is it necessary to use explicit bias annotations to tell apart the core and spurious features? Existing works [1,4] use bias amplification to segregate the spurious features.
Apart from that, I find the responses for Q2, W3, W4 to be satisfactory. I think the additional results will strengthen the paper. I
[1] Shrestha, Robik, Kushal Kafle, and Christopher Kanan. "Occamnets: Mitigating dataset bias by favoring simpler hypotheses." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
[4] Nam, Junhyun, et al. "Learning from failure: De-biasing classifier from biased classifier." Advances in Neural Information Processing Systems 33 (2020): 20673-20684.
---
Rebuttal 3:
Title: Response
Comment: Thank you for your feedback. We are glad that we could address most of your concerns. We will add the additional results to the final revision of the paper.
**W1/Q1**: As the reviewer rightly pointed out, bias amplification is a well-established method for addressing spurious features without needing explicit bias annotations, as shown in [1], and [4]. However, our approach leverages depth modulation as an alternative strategy for debiasing. Specifically, our deeper branch amplifies bias via depth modulation, a concept grounded in our theoretical insights. Our method also eliminates the need for explicit bias annotations to distinguish between core and spurious features, as the deep and shallow branches inherently act as biased and debiased models, respectively. Furthermore, we demonstrate that DeNetDM outperforms existing approaches like LfF, where bias amplification is achieved using GCE loss.
We hope this clarifies your concern.
---
Rebuttal Comment 3.1:
Comment: Thank you for the response. I keep my original score of 7, recommending acceptance. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their valuable feedback and thoughtful comments. We appreciate the opportunity to address the concerns raised and provide clarifications. We offer detailed responses to each reviewer, aiming to address the issues and enhance the clarity of our work. We utilize the global repsonse to address a common concern raised by **Reviewer 9dFt** and **Reviewer CcWa**.
Reviewer 9dFt raised a concern that the decodability experiments are limited to simple dataset, CMNIST. Reviewer CcWa also raised a concern that the empirical analysis on networks' depth to learning of different ranks (bias-aligned / bias-conflicting) is limited to CMNIST. We provide a common response to both the concerns below:
Analogous to Figure 2b in the main paper, Fig. B in the attached PDF illustrates the variation in feature decodability for corruption (bias) and object (core) in the C-CIFAR10 dataset as ERM training advances. We chose ResNet 20 as the deep network and a 3-layer CNN as the shallow network since these are the architectures used for DeNetDM. The training dynamics show a similar trend to those observed in ColoredMNIST concerning bias and core attributes. As training progresses, corruption (bias) becomes highly decodable by both deep and shallow networks, with the deep branch slightly outperforming the shallow branch. However, the object attribute (core) is more decodable by the shallow network as training progresses, during the initial training dynamics. These observations align with the early training dynamics observed in CMNIST.
As explained in Section 3.2 of the main text, an unbiased dataset is crucial for training the feature decodable layer to assess attribute decodability. For synthetic datasets like CMNIST and C-CIFAR10, we can generate unbiased datasets. However, this is not the case with natural datasets. The only option is to use existing unbiased test data, but the number of test data points in BFFHQ and BAR is as low as 1,000 and 656, respectively, which we believe is insufficient to train the decodable layer effectively. Additionally, BAR lacks group annotations even for the test set. As a result, we have concerns about the accuracy of feature decodability assessments on these datasets. However, we believe that the early training dynamics observed in C-CIFAR10 provide valuable insights into how these dynamics unfold for complex features.
Pdf: /pdf/1f10956a2591baa558c41af661fe7a61f670748a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Non-parametric classification via expand-and-sparsify representation | Accept (poster) | Summary: This paper addresses how to use expand-and-sparsify techniques to non-parametric classification. More specifically, it uses expand-and-sparsify on the test data point $x$ and then use the response regions of $x$, which are the sets of training data sharing same activation coordinates as $x$, to serve as the neighbourhood of $x$ and then predict the label $y$. It proposes two types of sparsification algorithms for this problem. The first algorithm chooses the $k$ largest values of the higher dimension that $x$ maps to. The authors proves the convergence rate of this algorithm and shows that it is minimax-optimal. The second algorithm chooses the empirical $k$-thresholding for sparsification. Unlike the first algorithm, the authors assume the matrix $\Theta$ which maps $x$ to higher dimension to be multivaraite Gaussian. The authors then prove that the second algorithm is also minimax-optimal. Finally, the authors empirically show the performance of algorithm 1 and compare it with KNN and random forest under different dimensions ($m$).
Strengths: 1. The problem setting is novel as the authors combined the expand-and-sparsify technique to non-parametric classification.
2. The authors provide two different algorithms for sparsification and show that both of these are minimax-optimal.
Weaknesses: 1. typos: line 137, $\{(x_1, y_1)\}_{i=1}^n \to \{(x_i, y_i)\}_{i=1}^n$. line 348: should be \textit{performance of our proposed classifier}? line 380: Big Oh changes to Big $\mathcal{O}$?
2. The motivation and the use cases of the problem is not addressed, why is it necessary to use expand-and-sparsify in non-parametric classification and if it it used, in which scenarios will this help?
3. The datasets used in the experiments are not explained and there is no experiments for algorithm 2 and different values of $k$ for the sparsification.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is it possible to compare and contrast the two algorithms and specify which algorithm is better under different scenarios?
2. Why the performance of algorithm 1 in the experiments increases with m but the other methods are the same when m increases?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The motivation and applicability of the techniques used in the paper is not specified and the experiments only show the performance of the first algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing constructive feedback and pointing out the typos (weakness 1). We will fix those.
Regarding weakness 2, it is not ``necessary" to use expand-and-sparsify representation in non-parametric classification. Note that there different types of non-paranetric classifiers, such as, histogram classifiers (partitioning estimates), $k$-nearest neighbor classifiers, kernel methods, random forest and so on. Our proposed classifier a new non-parametric classifier that utilizes expand-and-sparsify representation. We establish solid theoretical properties of our proposed classifier -- for appropriate choice of hyper-parameter, excess Bayes risk of our proposed classifier achieves minimax-optimal convergence rate (the best rate possible). Experimental results show a strong performance almost on par with the nearest neighbor and random forest classifiers.
Regarding weakness 3, as mentioned in line 347, details of the datasets are provided in appendix A. As per Theorem 3.12, we use $k= d\log m$ in our experiments.
To answer question 1, note that our Alg 1 achieves minimax-optimal convergence rate, the best possible in the non-parametric setting. This convergence rate decays exponentially slowly with (ambient) dimension $d$. However, we have shown in section 3.5 that even if the data lie on a low dimensional manifold with intrinsic dimension $d_0\ll d$, there exists a smooth regression function $\eta$ such that the quantity $\mathbb{E}_{x,D_n}|\eta(x)-\hat{\eta(x)}|$, where $\hat{\eta}(x)$ is the conditional probability estimate of Alg 1, decreases at a rate no faster than $n^{-\frac{1}{d+1}}$. Therefore, we can not expect excess Bayes risk to decay (a bit faster) at a rate that decays exponentially in $d_0$, instead of $d$, which means Alg 1 can not adapts to the low dimensional manifold structure. Alg 2 is designed for this very purpose. When data lie on a low dimensional manifold with intrinsic dimension $d_0\ll d$, convergence rate of Alg 2 decays exponentially slowly in $d_0$ (instead of $d$) and is again minimax-optimal. Please also see our rebuttal to reviewer Q16i to have more discussion on this subject.
To answer question 2, please note that $m$ is a hyper-parameter of our proposed Alg 1 and this hyper-parameter does not affect the performance of other non-parametric classifiers such as $k$-nearest neighbor ($k$-NN) classifier or random forest (RF) classifier. Our theoretical result (Theorem 3.12) suggests that excess Bayes risk decreases with increasing $m$. We wanted to verify this behavior in our empirical results. As a baseline, we chose two other non-parametric classifiers, namely $k$-NN amd RF. For $k$-NN, we chose two values of $k$, namely $k=1$ and $k=10$, which are pretty standard. For RF, we chose the number of trees via grid search and cross validation. Since $m$ is not a hyper-parameter of $k$-NN or RF, for different values of $m$, $k$-NN and RF test accuracies are constant.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' reply, the answers to questions 1 and 2 are pretty reasonable. I still feel the motivation is not clear for the algorithm and setting proposed by the paper, but I have bumped my score to 5 as the main contribution, as specified by the author, is to provide a novel framework and a theoretical guarantee.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging that our answers are reasonable. We apologize that the motivation of the algorithm and the setting are not clear. Let us try to argue about our motivation.
In recent years, it has been observed that a striking neural architecture appears in the sensory systems of several living organisms, which essentially is a transformation from a low-dimensional dense representation of sensory stimulus to a much higher-dimensional, sparse representation. This has been found, for instance, in the olfactory system of the fly (Wilson [2013]) and mouse (Stettler and Axel [2009]), the visual system of the cat (Olshausen and Field [2004]), and the electrosensory system of the electric fish (Chacron et al. [2011]).
For example, in the olfactory system of Drosophila (Turner et al. [2008], Masse et al. [2009], Wilson [2013], Caron et al. [2013]), the primary sense receptors of the fly are the roughly 2,500 odor receptor neurons (also known as, ORNs) in its antennae and maxillary palps, which can be clustered into 50 types, based on their odor responses, leading to a dense, 50-dimensional sensory input vector. This information is then relayed via projection neurons to a collection of roughly 2000 Kenyon cells (KCs) in the mushroom body, with each KC receiving signal from roughly 5-10 glomeruli. The pattern of connectivity between the glomeruli and Kenyon cells appears random (Chacron et al. [2011]). The output of the KCs is integrated by a single anterior paired lateral (APL) neuron which then provides negative feedback causing all but the 5% highest-firing KCs to be suppressed (Lin et al. [2014]). The result is a *random* sparse high-dimensional representation of the sensory input, that is the basis for subsequent learning.
The primary motivation of this paper is to study the benefit of this type of naturally occurring representation, namely expand-and-sparsify representation (EaS for short) in the supervised classification setting. In our setting, given a dense vector $x\in \mathcal{S}^{d-1}$ on unit sphere, we first obtain high dimensional transformation $y=\Theta x \in \mathbb{R}^m$, where $d\ll m$, by multiplying $x$ with a random projection matrix $\Theta$. Following this, we obtain EaS representation of $x$, which is a $k$ sparse $m$ dimensional binary vector, by setting (activating) the largest $k$ entries of $y$ to 1 and the remaining entries to 0 (suppression). It turns out that all the examples $x\in\mathcal{S}^{d-1}$ that set the $j^{th}$ bit, for any $j=1,\ldots,m$, to 1 in their respective EaS representations can not be located too far. In fact, the diameter of the ball containing these points can be made small by appropriately choosing large $m$. Thus, we can estimate the average conditional probability of these regions using labeled data. Given any $x$, we can estimate the conditional probability $P(y=1|x)$ by taking the average of conditional probability estimates over $k$ regions corresponding to the $k$ non-zero bits in the EaS representation of $x$. Once we have such a conditional probability estimate, comparing it with $1/2$, we predict the class label in the binary classification setting. This is precisely the intuition of our algorithm 1. It turns out, that this proposed classifier admits the form of a locally weighted average classifier and can be analyzed using the ideas from non-parametric statistics as detailed in section 3. Alg 2 is a bit more complicated and is analyzed in section 4.
**If the paper is accepted we will have one extra content page and if you are satisfied with the motivation and setting presented above, we can summarize the above discussion in the camera ready version.**
Wilson [2013] : R. Wilson. Early olfactory processing in Drosophila: Mechanisms and principles. Annual Review of Neuroscience, 36:217–241, 2013.
Stettler and Axel [2009]: D. Stettler and R. Axel. Representations of odor in the piriform cortex. Neuron, 63:854–864, 2009.
Olshausen and Field [2004]: B. Olshausen and D. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14:481–487, 2004.
Chacron et al. [2011]: M. Chacron, A. Longtin, and L. Maler. Efficient computation via sparse coding in electrosensory neural networks. Current Opinion in Neurobiology, 21:752–760, 2011.
Turner et al. [2008]: G. Turner, M. Bazhenov, and G. Laurent. Olfactory representations by Drosophila mushroom body neurons. J. Neurophysiol., 99:734–746, 2008.
Masse et al. [2009]: N. Masse, G. Turner, and G. Jefferis. Olfactory information processing in Drosophila: review. Current Biology, 19:R700–R713, 2009.
Caron et al. [2013]: S. Caron, V. Ruta, L. Abbott, and R. Axel. Random convergence of olfactory inputs in the Drosophila mushroom body. Nature, 497:113–117, 2013.
Lin et al. [2014]: A. Lin, A. Bygrave, A. de Calignon, T. Lee, and G. Miesenbock. Sparse, decorrelated odor coding in the mushroom body enhances learned odor discrimination. Nature Neuroscience, 17(4), 2014. | Summary: In this paper, the authors studied the problem of binary classification when the feature vectors are on the unit sphere in $\mathbb{R}^d$. The paper proposed a non-parametric algorithm based on a method called EaS (Expand and Sparsification). They proved that this method is universally consistent and that its errors converge to the Bayes error with a rate of $\mathcal{O}(n^{-\frac{1}{d}})$. They also proposed an algorithm for when the data lie in a $d_0$-dimensional manifold and proved that its errors converge to the Bayes error with a rate of $\mathcal{O}(n^{-\frac{1}{d_0}})$. The paper builds upon the work of Dasgupta and Tosh [2020], which is cited in the paper.
Strengths: - The proofs are rigorous.
- The writing is very good and easy to follow.
- The messages of the paper are very clear.
Weaknesses: - The contribution with respect to Dasgupta and Tosh [2020] is very marginal, and many of the ideas come from that work.
- In practice, other methods like random forests seem to perform better than the proposed method.
- It is not mentioned in the paper whether this is the best theoretical work with this rate of convergence.
- When data lie in a $d_0$-dimensional manifold, the convergence rate again seems exponentially slow with respect to $d$. Although this limitation is mentioned in the paper, I think it is a weakness.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I think in line $42$, $\mathbb{P}(y=1 \vert x)$ should be $\mathbb{P}(y=1 \vert C_j)$.
- In line $537$, what is $\nu$?
- The proofs in lines $536$ to $542$ are very unclear, and there are some claims in these lines which I think need mathematical proof.
- I think there are some inconsistencies in the notation of the proof of Lemma $3.5$ with the main body of the paper; please correct these.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing constructive feedback.
Regarding weakness 1, while our work is inspired by the work of Dasgupta and Tosh [2020], we have clearly articulated how our work is different and advances the result of Dasgupta and Tosh [2020] in lines 96-111.
Regarding weakness 3, in the non-parametric setting, minimax-optimal convergence rate is the best possible convergence rate one can hope for. For appropriate choice of $m$, our proposed method achieves the minimax-optimal convergence rate (Remark 3.14, 4.4), therefore, this is the best possible rate.
Regarding weakness 4, yes, this is a limitation of Alg 1 as Alg 1 does not adapt to low dimensional manifold. We have explicitly established a lower bound in Theorem 3.15. To address this shortcomings, we have developed Alg 2 whose convergence rate, when data lie on a low dimensional manifold with intrinsic dimension $d_0\ll d$, decays exponentially slowly in $d_0$ (Theorem 4.2, Corollary 4,3, Remark 4.3) which is minimax-optimal. It is well known (Tsybakov [2008], Gyorfi et al. [2002]) that, in the non-parametric setting, minimax-optimal convergence rate decays exponentially slowly with the dimension. When data lie on a low dimensional manifold with intrinsic dimension $d_0$, where $d_0\ll d$ ($d$ is the ambient dimension), convergence rate of our Alg 2 is minimax-optimal in-terms of $d_0$, without teh knowledge of $d_0$ is. In other words, our Alg 2 adapts to intrinsic dimension.
Thank you for pointing out the typo in line 42, we will fix this. In line 537, we overload $\nu$ so that $\nu(r)$ denotes the smallest probability mass of any ball centered at $x$ with radius $r$. We will add a footnote to address this confusion. We do not see any incorrectness in the proof, in particular in lines 536-542, the required claim is given in Lemma D.1 which is a standard result from Chaudhuri and Dasgupta [2010]. However, we will spend some time to wordsmith this part of the proof so that it is more readable. Thank you for pointing out notational inconsistency in the proof of Lemma 3.5, we will fix this.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their answers. They addressed my questions, so I increased my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for increasing the score, we will integrate the clarification from the rebuttal into our revised version. Should there be any further questions or clarifications we can offer, please do not hesitate to let us know. | Summary: The paper studies expand and sparsify the approach to non-parametric classification. The high-level idea is to first lift the example $x \in \mathcal{X} = \mathcal{S}^{d-1}$ to a $\{0,1\}^m$, a $m$-dimensional Boolean cube with exactly $k$ ones. The lifting is done by a linear mapping $x \mapsto \Theta x$ followed by top-$k$ selection using the magnitude of $\Theta x$, where the matrix $\Theta$ is sampled randomly. Let $\Theta x \mapsto z$ be the boolean feature obtained after the top-$k$ selection. The authors define a simple learning rule for the weight vector $w \in [0,1]^m$ and propose a linear classification rule $\mathbf{1}[w^{\top} x \geq k/2 ]$. Under the assumption that the score function $\eta: \mathcal{X} \to [0,1] $ of the Bayes optimal classifier is Holder-continuous, the proposed classifier is shown to be minimax optimal. The paper also proposes another classifier that can adapt if the example space $\mathcal{X}$ has a low-dimensional structure.
Strengths: - The classifier is very natural and can be implemented.
- Experimental results show a strong performance almost on par with the nearest neighbor and random forest classifiers.
- The idea that one can lift the feature to a large enough boolean cube and run a sparse linear classifier is beautiful. Hopefully, there will be some followup works using this idea to solve other problems.
Weaknesses: I did not find any apparent weakness in the paper. The paper delivers on its promise to provide a minimax optimal learner based on the expand and sparsify approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: Does Theorem 3.12 imply the consistency result in Theorem 3.3? If true, I suggest that the authors just state quantitative bound and infer consistency in a remark. The authors can use the saved space to give a more detailed proof sketch of Theorem 3.12. On the other hand, if the conditions for consistency in Theorem 3.3 and the quantitative rates in Theorem 3.12 are different, then some comments on that difference would be helpful.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing encouraging feedback. We are in fact working on using this expand-and-sparsify representation idea to solve other ML problems.
The question you have is an important one. The conditions for consistency in Theorem 3.3 and the quantitative rates in Theorem 3.12 are slightly different. The reason is, to establish consistency we simply used the classical Stone's theorem without caring much about the convergence rate (often for a new classifier, the first question one typically asks is whether the classifier is statistically consistent or not). To derive convergence rate, especially to establish minimax-optimality, we did a bit more careful analysis. However, convergence rate does imply consistency and we will add a comment explaining this difference. If accepted, the camera ready version will provide one extra content page and we will add a proof sketch of Theorem 3.12 in the main paper as well.
---
Rebuttal Comment 1.1:
Comment: I acknowledge the author's response and will retain my score. Thanks!
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your positive feedback. We will incorporate the clarifications into the revised version of our manuscript. Should there be any further questions or clarifications we can offer, please do not hesitate to let us know. | Summary: The paper introduces two algorithms for non-parametric binary classification utilizing the expand-and-sparsify (EaS) representation. The first algorithm employs a winners-take-all approach for sparsification. It demonstrates consistency and achieves a minimax-optimal convergence rate that depends on the data dimension. The second algorithm is designed for scenarios where the data lies in a low-dimensional manifold. By utilizing the empirical k-thresholding operation, this algorithm attains the minimax optimal convergence rate dependent on the dimension of the low-dimensional manifold rather than the data dimension.
Strengths: 1. Relating non-parametric classification with expansion and EaS representation is both interesting and reasonable. The intuition behind it is straightforward: EaS offers a way to identify the 'neighbors' (active regions) of each data point, or in other words, points that are proximate to the data point in a certain representation space. By aggregating the conditional probability distribution of these 'neighbors,' we obtain a consistent classifier.
2. The proposed algorithm is simple and easy to implement. It leverages ideas from unsupervised learning representation and adapts them naturally to the supervised learning setting. It reminds me of several empirical machine learning algorithms, such as prototypical networks, which learn a representation for each class by computing the mean of the support examples' feature representation. During inference, the algorithms classify new examples based on the closest class prototype in the feature space. I believe this paper offers a way to conceptualize a class of empirically successful supervised learning algorithms, which involves learning representations and classifying data using examples that are close to it in the representation space.
3. The paper is well-organized and easy to read. The contributions, theorems, and underlying intuitions are presented clearly.
Weaknesses: 1. Most parts of the algorithms and some sections of their proofs come from [Dasgupta and Tosh, 2020] directly. The authors have pointed out that technically, they relax the Lipschitz continuous to Holder continuous and consider the effects of sample size. However, overall, the technical contribution is not significantly substantial in this regard. I would not view this point as a negative but rather as a neutral one.
2. I actually appreciate the way you have written the related work section, as it effectively illustrates the intuition behind EaS and how this idea has been developed. It also highlights the differences between this paper and previous works. However, there is a lack of related work on non-parametric classification, particularly focusing on their consistency and convergence results.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. As in Weakness 2, I suggest including more related work on non-parametric classification, particularly focusing on their consistency and convergence results.
2. In Algo 2, how do we choose t?
3. I appreciate your lower bound result in Theorem 3.15. However, we can modify Algo 1 by turning down the effects from $\theta_j$ that are far from $x$, specifically we can choose larger m and k, and then set eas[i] to be 0 when $\theta_i$ is not t-close to $x$. Does this modification make Algo 1 applicable to the manifold setting? If not, what is the intuition behind Algo 2 working in this scenario while the modified Algo 1 does not? In other words, how does 'empirical-k-thresholding' provide an advantage over 'k-WTA'?
4. The EaS method was initially proposed for unsupervised learning because it does not require labels. Given labeled data, we actually possess much more information. For instance, points with the same label should be closer in the representation space. However, your algorithm learns the representation and identifies the 'k-neighbors' primarily based on the data itself rather than their labels. From this perspective, I believe there could be modifications to the algorithm that better utilize the labels during the training phase. This could be a worthwhile area for further exploration.
## Minor:
1. In Algo 2, why we use first and second half of training data separately? In my understanding, the first half of the training data is used to determine the threshold, while the second half is used to determine $\eta_j$. So, we can probably use the whole batch of data twice to do the two things instead.
2. Page 3 line 137. $(x_1,y_1)$ should be $(x_i,y_i)$.
3. Page 4 line 153. It should be “An average of y values”.
4. Page 4 line 163. “Non-parametric”.
5. Page 8 line 323. “Alg. 2”: a missing “.”
6. Page 8 line 324. A missing $($.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for providing thoughtful and encouraging feedback. The reviewer's suggestion of including consistency and convergence rates of various non-parametric classification is an excellent one and if accepted we will definitely include a discussion on consistency and convergence rates of various non-parametric classifiers, such as, partitioning estimates (histograms), k-nearest neighbors, kernel methods and random forests.
Regarding question 2, $t$ should be linear in $k$, for example $t=k/2$ or $t=k/4$, to have minimax-optimal convergence rate. However, it is possible to choose even $t=1$.
Regarding question 3, your suggested modification of Alg1 is an interesting one. Note that in $k$-WTA sparsification, for any $x$, the closest $k$ $\theta_i$ are are activated (i.e., the corresponding bits in EaS representation are set to 1) since they have the largest $k$ dot products. Because of this property, your suggestion (choosing large $k$ first and then setting $eas[i]=0$ when $\theta_i$ is not $t$ close to $x$) boils down to choosing $k=t$. Intuitively, when data lie on or near a low dimensional manifold embedded within a high dimensional unit sphere, and $\theta_i$ are chosen uniformly at random from the high dimensional unit sphere, only a tiny fraction of the $\theta_i$ will ever be used (activated for some $x$, i.e, corresponding bits in EaS representation will be 1), that is, they are one of the $k$ closest ones to at least one $x$ lying on the manifold. In other words, to have reasonable conditional probability estimates of the activated regions, which in turn may be used to provide a good estimate of $\eta(x)$, we may have to sample large number (exponential in $d$) of $\theta_i$s, in spite of the fact that most of them will possibly be unused (never activated at all)!
In comparison, the empirical-k-thresholding is done in such a way that EaS representation of each $x$ is $k$-sparse in expectation but every single $\theta_i$ is used, that is, activated for roughly $k/m$ fraction of the data points, (thus, reduces wastage as was the case for $k$-WTA sparsification), however, depending on the manifold structure, every single activated region $C_i$ may not be local anymore, and thus, $\eta(C_i)$ may not be reliable if we want to use it to estimate $\eta(x)$. For Alg 2, we first show in Lemma B.3 that when $\theta_i$ is **good** in a sense that it is close to the manifold (please see Appendix B.3 for precise meaning) then $C_i$ is local in the sense that diameter of $C_i$ is small and depends only on $d_0$ instead of $d$ as was in the case of $k$-WTA sparsification. Then, we show in Lemma B.4 that when $\theta_i$ are chosen from appropriate Gaussian distribution, there are enough ``good" $\theta_i$ for each $x$ on the manifold with high probability. We set $t$ to be the number of **good** $\theta_i$ and for any $x$ on the manifold, the average of the $t$ corresponding conditional probability estimates $\eta(C_i)$ is a reasonable estimate of $\eta(x)$.
Regarding question 4, your suggestion makes a lot of sense and we will definitely pursue this avenue in our future research.
Finally, thank you so much for pointing out the typos. We will fix those. Regarding usage of two sets of data in Alg2, our motivation was to keep the estimated thresholds (computed from one set of data) independent of the other set used to estimate $\eta_j$. With a careful analysis, it may be possible to use only one set of data to compute both the thresholds and the $\eta_j$'s.
---
Rebuttal Comment 1.1:
Title: After Rebuttal
Comment: Thanks for the reply. The authors have addressed my questions. I will keep my score.
---
Reply to Comment 1.1.1:
Comment: We greatly appreciate your positive feedback. We will incorporate the clarifications into the revised version of our manuscript. Should there be any further questions or clarifications we can offer, please do not hesitate to let us know. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation | Accept (poster) | Summary: This paper introduces a new algorithm for source estimation, the task of estimating a model's parameters probability distributions consistent with observations. In opposition to previous algorithms, the proposed method is sampling-based and can hence be used when the model is a simulator, implicitly defining the likelihood even if the observations are high-dimensional. The authors highlight that several probability distributions might be consistent with the observations and advocate favoring the one with the highest entropy. Their method provides a way to target high-entropy solutions and shows that this new objective leads to a unique optimum.
Strengths: ### Originality
* The method is new to me.
### Quality
* I did not identify flaws in the mathematical derivations.
* The method is evaluated on a wide variety of benchmarks, and several runs have been made. The experiments showcase that the entropy regularization indeed leads to higher entropy sources.
### Clarity
* All the necessary details are provided
* The paper is well articulated.
* The method is clearly explained
* It is clearly stated how the method fits in the literature.
* Figures are clean.
### Significance
* Source estimation is a task that has received attention in recent years.
* This paper fills a gap in the literature by providing a method that works with high-dimensional simulators.
Weaknesses: ### Originality
* I have no concerns regarding originality.
### Quality
* I find that the Lotka-Volterra and SIR experiments are not totally convincing. First, it lacks comparison with NEB, which is claimed to perform poorly in such settings. I agree that NEB will probably provide poor results, but this should still be shown empirically, in my opinion. Second, the Wasserstein distance is hard to interpret. It is claimed that a low value is observed, but that is not clear to me why this value can be considered a low value as there are no baselines to compare with.
### Clarity
* I have no concerns regarding clarity.
### Significance
* I have no concerns regarding significance.
Technical Quality: 3
Clarity: 4
Questions for Authors: * Is there something that prevents you from comparing against NEB in Figure 4?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the positive feedback of our work.
**W/Q1: "Is there something that prevents you from comparing against NEB in Figure 4?"** We thank the reviewer for their suggestion, and have now compared Neural Empirical Bayes (NEB) with Sourcerer on the high-dimensional simulators (SIR and Lotka-Volterra model). Results for this experiment can be found in the provided rebuttal supplement (Fig. R2). In summary, we find that Sourcerer performs better on both tasks: on SIR, Sourcerer achieves a C2ST accuracy (lower is better) of 55% between data and predictive simulations of learned source distribution, compared to 76% for NEB; similarly, on Lotka-Volterra, Sourcerer achieves a C2ST of 56%, compared to 62% for NEB. Thus, Sourcerer is demonstrably better than NEB on these high-dimensional tasks.
To perform this experiment, we trained surrogates for each simulator. While this was not necessary for Sourcerer, attempting to perform NEB on the differentiable likelihood of the differential equation simulators was computationally infeasible due to the large minibatches required for NEB. To enable surrogate training, we reduced the observation dimensionality of both models to 25 (instead of 50 and 100 respectively), and added Gaussian noise to the previously deterministic SIR simulator. For a fair comparison, we use the same surrogate model for both Sourcerer and NEB.
We argue that Sourcerer's improved performance over NEB on these tasks is due to the different ways in which the two approaches use the surrogate model - NEB is sensitive to the exact likelihood, and thus may suffer from a lack of robustness to misspecification as in the case of a surrogate model. Our method, on the other hand, requires only samples, and thus the exact likelihoods produced by the surrogate model do not have a detrimental effect.
**W1.1 "The Wasserstein distance is hard to interpret. It is claimed that a low value is observed, but that is not clear to me why this value can be considered a low value as there are no baselines to compare with"**: We agree with the reviewer that the numerical value of the Sliced-Wasserstein Distance (SWD) is hard to interpret in isolation. We had therefore computed the expected SWD between two sets of independently generated observations (simulated from the true source distribution). This minimum achievable distance is indicated by the black dotted lines in Figure 4 of our original submission. We found that for both models, the distance between simulations (from the estimated source) and observations is very close to the expected distance between simulations, and will emphasize this in the revision. Finally, we also provide the more interpretable C2ST accuracies for the SIR and Lotka-Volterra experiments in the Appendix (Fig. 8). In both cases, Sourcerer achieves a C2ST accuracy close to 50%.
---
Rebuttal Comment 1.1:
Comment: Thanks for the update.
I have read the rebuttal and the other reviews and keep my score unchanged.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. | Summary: This paper deals with the problem of identifying the distribution of a source variable $s$ that generates observations $x$.
## The problem
The authors propose to minimize a classical objective consisting of two terms:
1. A reconstruction term, that encourages the recovered source distribution to induce a distribution over observations that is close to the data distribution, in a certain divergence (here, the sliced Wasserstein)
2. A regularization term, that encourages the recovered source distribution to be close to a reference distribution, in a certain divergence (here, the reverse KL).
Many different source distributions can minimize the first term alone, which motivates the second term to contrain the solution space.
## The probability model
The probability model $p(x)$ can be expressed in terms of two other model distributions: the source distribution $p(s)$ and the likelihood $p(x | s)$.
Term (1) uses a (sliced) Wasserstein divergence. Computing this divergence requires data samples and samples from the model in two steps: $s \sim p(s)$ then $x | s \sim p(x | s)$. Differentiating through this divergence requires the probability model, and therefore $p(s)$ and $p(x | s)$, to be samplable and the sampling process should be differentiable.
Term (2) uses a reverse KL divergence. Computing this divergence requires evaluating the densities of the model and reference distributions, as well as samples from the model in two steps: $s \sim p(s)$ then $x | s \sim p(x | s)$. Differentiating through this divergence requires the probability model, and therefore $p(s)$ and $p(x | s)$, to be samplable and the sampling process should be differentiable. Additionally, the model density should be known or else a differentiable approximation of the divergence (the authors use the Kozachenko-Leonenko estimator) should be available.
## Numerics
The authors validate their method using synthetic data where the source is known, and real data where the observations are neurophyisiological recordings and the likelihood is given by a mechanistic model.
Strengths: The presentation is clear and well-explained.
Weaknesses: Overall, I have two concerns.
1. The computational constraints on the model are heavy
The choices of divergences (Sliced-Wasserstein and reverse KL) in the cost function impose a number of computational constraints of the model.
Term 1 in the cost uses a vanilla Sliced-Wasserstein divergence estimator that is sample-based. This means the likelihood model $p(x | s)$ should be samplable and the sampling process should be differentiable. The authors propose to train a differentiable approximator of a black-box simulator. To be fair, the authors acknowledge that this is a general issue beyond their paper.
Term 2 in the cost uses a vanilla KL divergence estimator which requires knowing the density of the model, but the authors circumvent this by using another estimator named "Kozachenko-Leonenko". Given that the authors already used a sample-based divergence in Term 1, wouldn't using another sample-based divergence in the Term 2 (e.g. the Sliced-Wasserstein again, or an MMD) between the model and reference distributions not add any further computational constraints on the model?
2. The evaluation procedure or goal remains unclear to me
The framework of minimizing a cost made up of a reconstruction and regularization term is well-studied in optimization. Once could imagine two natural evaluation procedures:
- how well the true sources are recovered? But the authors are clear that designing an identifiable probability model where the sources can be recovered (up to acceptable indeterminacies) is not the goal here.
- how efficient is their method on a new test dataset, as compared to other similar methods?
It seems to be that the authors answer the latter, but it is not clear to me what are the other methods to benchmark against.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors address the weaknesses part?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We address the reviewer's concerns below.
**W1: "The computational constraints on the model are heavy"**: The reviewer raises concerns about computational constraints on the model, which we clarify here for the two terms in the cost function:
First, the entropy/KL-divergence term (term 2) imposes negligible computational constraints on the model because it is computed on samples in parameter space, and not in data space. In particular, given samples $\theta_i$ from our source model, we only need to compute their nearest-neighbor distances $d_i$ to estimate the entropy (Eq. 8 of our submission). Thus, its computation is completely independent of the simulator or surrogate. Hence, this concern is not a limitation of the method.
In addition, while other distances (such as Sliced-Wasserstein or MMD, as suggested by the reviewer) can be used to regularize the source in the second term, we emphasize that the choice of entropy (KL-divergence wrt. uniform distribution) stems from the original motivation of the work (i.e., the theoretical justification in Prop 2.1.)
Second, we agree with the reviewer that requiring the simulator to be differentiable for term 1 is a computational constraint on our model. However, we emphasize that our approach is an improvement over prior work (in particular, Vandegar et al. (2020)) in this regard. As the reviewer identified, the requirement to train surrogates for non-differentiable models is a broader limitation of the field. However, typical works in the field require training a surrogate that has a tractable _and_ differentiable likelihood. We only require differentiability. We do not require tractable likelihoods, which reduces the computational constraints. This is a relative strength of our method relative to previous approaches, not a limitation.
[Vandegar et al.] - Neural Empirical Bayes: Source distribution estimation and its applications to simulation-based inference. In International Conference on Artificial Intelligence and Statistics, 2020.
Finally, in terms of computational cost, the choices we make for the two terms in the loss function are highly efficient. For the mismatch term (term 1), the Sliced-Wasserstein distance is linearithmic in the number of observations, and thus scales well to millions of observations. Furthermore, the distance computation of individual 1-D slices is embarrassingly parallelizable. Our use of Kozachenko-Leonenko estimators to estimate entropy (term 2) is also scalable, since computing the estimate requires solving the all-nearest-neighbors problem, which can also be solved in linearithmic time. We now perform a runtime comparison between NEB and Sourcerer for the benchmark tasks, and find that Sourcerer is significantly faster (Table R1).
**W2: "The evaluation procedure or goal remains unclear to me"**: We apologize for not making our evaluation procedure sufficiently clear. For the benchmark tasks as well as the SIR and Lotka-Volterra experiment, we always evaluate the match between simulations and observations on a second, unseen ‘test’ set of observations that was generated independently using samples from the ground truth source distribution. Thus, we already do the evaluation in the way the reviewer asks, and just failed to point this out in the manuscript. We will clarify in the revised version.
More generally, our evaluation is always based on two criteria:
1. How well do the simulations (generated by the estimated source) match the set of observations? We measure this match using Classifier Two Sample Tests (C2ST) and the SWD.
2. How high is the entropy of the estimated source? We measure this by estimating the differential entropy with samples from the source using the Kozachenko-Leonenko estimator.
We benchmark our method against Neural Empirical Bayes (NEB), a state-of-the-art approach for source distribution estimation. For the benchmark tasks, we showed that Sourcerer obtains sources that are comparable to NEB in terms of simulation fidelity (point 1), while having a higher entropy (point 2). We now also perform an additional comparison on the high-dimensional data tasks of SIR and LV (Fig. 2), and find that our method also significantly outperforms NEB in terms of simulation fidelity (point 1). On SIR, Sourcerer achieves a C2ST accuracy (lower is better) of 55% between data and predictive simulations of learned source distribution, compared to 76% for NEB; similarly, on Lotka-Volterra, Sourcer achieves a C2ST of 56%, compared to 62% for NEB.
---
Rebuttal Comment 1.1:
Title: Answer to authors
Comment: I thank the authors for their clarifications and am happy to raise my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. | Summary: This paper proposes an approach to estimating a maximum-entropy source distribution (akin to a prior distribution over simulator parameters) for a given set of observations and simulation model. Their method assumes a differentiable simulator that may be deterministic or stochastic, and it uses neural samplers to approximate the prior and a variational objective that encourages proximity of both the marginal likelihood to the true data distribution and the estimated source distribution to some known prior distribution (which may be uniform, corresponding to entropy regularisation). In particular, proximity to the true data distribution is measured with a sliced Wasserstein distance, due to its fast computation and differentiability (preserving the differentiable pipeline). The authors present experiments on four benchmark tasks, before extending to higher dimensional and more complex examples such as the single-compartment Hodgkin-Huxley model.
Strengths: *Originality*
The authors consider using a sample based loss to capture mismatch between the simulated and true data distributions, in contrast to using likelihood-based notions of distance. This is useful since simulation models often lack tractable likelihood functions.
*Quality*
The experimental section presents good and extensive empirical testing on a number of toy benchmark models, in addition to two additional more complex simulation models. Their approach is also well-motivated via Proposition 2.1.
*Clarity*
The clarity of writing is generally good. There are a few minor errors in the writing that made it not absolutely clear all the way through:
- The last sentence of the first paragraph of Section 1 isn't a full sentence (or, if it is, then it not well-written because I have read it multiple times and could not turn it into a full sentence in my head).
- There is a subscript $\phi$ missing in Equation 4.
*Significance*
I think this paper will be of some interest to the community and the techniques presented used by practitioners.
Weaknesses: My main concern is that the work presented looks like it is not a large or significant contribution beyond what is already present in the literature. From my perspective, it looks like someone could reasonably argue that the contribution is just to use a different sample-based loss function to capture the discrepancy between the data distribution and simulator's marginal likelihood. It also isn't quite clear to me how exactly the approach that uses the surrogate model (Sur. in Table 1) is different from what is presented in Vandegar et al. (2020).
Technical Quality: 3
Clarity: 3
Questions for Authors: Could the authors provide some more detail and clarification on how their approach differs/relates to/builds on previous works please? Ideally at least those papers already cited such as Vandegar et al. (2020), but I think some additional related literature is also missing, such as references [1] (relevant since it discusses variational approaches to targeting generalised Bayesian posteriors) and [2] (relevant to the problem of source estimation in complex simulation models) below. Is there anything that we lose by using something like the Sliced Wasserstein Distance instead of a likelihood-based discrepancy?
[1] _Knoblauch, J., Jewson, J., & Damoulas, T. (2022). An optimization-centric view on Bayes' rule: Reviewing and generalizing variational inference. Journal of Machine Learning Research, 23(132), 1-109._
[2] _Dyer, J., Quera-Bofarull, A., Bishop, N., Farmer, J. D., Calinescu, A., & Wooldridge, M. (2024, May). Population Synthesis as Scenario Generation for Simulation-based Planning under Uncertainty. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (pp. 490-498)._
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors provide a good discussion of the limitations towards the end of the paper. Perhaps something additional to discuss would be limitations associated with simulators that involve discrete randomness, which (as far as I can tell) aren't tested in this paper but which often appears in simulation modelling and presents an additional complication when the requirement is that sampling from the simulator is a differentiable operation (see e.g. [1] below). I also do not see any problems related to the broader social impact of this work.
[1] _Arya, G., Schauer, M., Schäfer, F., & Rackauckas, C. (2022). Automatic differentiation of programs with discrete randomness. Advances in Neural Information Processing Systems, 35, 10435-10447._
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and careful reading. We address the main concerns and questions raised by the reviewer below. Thank you for identifying typos, we will correct them.
**W1: "My main concern is that the work presented looks like it is not a large or significant contribution beyond what is already present in the literature"**: We summarize the two main contributions of our work:
First, we provide an approach to resolve the ill-posedness of the source estimation problem by using the maximum entropy principle, and show that it leads to a unique solution. Empirically, we demonstrate that our approach can estimate high-entropy sources without sacrificing the quality of the match between simulations (from the estimated source) and observations.
Second, our approach is fully sample-based. This provides significant computational advantages over previous methods, since our method can use differentiable simulators directly, without training surrogate models.
**W1.1: "Detailed comparison to Neural Empirical Bayes"**: Our work makes conceptual, theoretical, and empirical contributions that go beyond Neural Empirical Bayes (NEB, Vandegar et al. (2020)):
First, a limitation of NEB - as directly acknowledged in their publication (Section 2, third paragraph) - is that the source distribution estimation problem is ill-posed. This ill-posedness is not addressed by the NEB method. In our work, we propose to use the maximum entropy principle as a way to resolve this ill-posedness, and make the theoretical contribution of showing that the maximum entropy source distribution is unique (Prop. 2.1). Furthermore, we demonstrate empirically (Table 1, Figures 3,4,5) that our method finds higher entropy source distributions than NEB, without any decline in the predictive performance of the estimated source distributions.
Second, our approach is entirely sample-based, i.e. it does not require access to likelihoods (or surrogate models with likelihoods). We show the advantages of our sample-based approach over NEB in the context of high-dimensional, differentiable simulators. In this setting, our method (unlike NEB) does not require the training of surrogate models. We now additionally show (Fig. R2) that our sample-based objective is more robust than NEB even in the case where a surrogate is available for higher-dimensional tasks. For the SIR task, our method achieves a C2ST accuracy (lower is better) of 55% between data and predictive simulations of the learned source distribution, compared to 76% for NEB; similarly, on Lotka-Volterra, our method achieves a C2ST of 56%, compared to 62% for NEB. Thus, our method is demonstrably superior to NEB on these high-dimensional tasks.
**W1.2: "Comparison to additional works"**: The reviewer also asked us to contrast our approach with previous work.
Regarding Knoblauch et al. (2022) [1]: The idea of using other distance measures between distributions is the key approach of Generalized Bayesian Inference (GBI) methods, as discussed in our related work and in [1]. Our method shares similarities with GBI in that it uses non-likelihood-based objective functions. However, we emphasize that our work is concerned with source distribution estimation. This is a different problem than inference, in that we seek to identify a distribution of parameters that is consistent with a population of independent observations (many to many), rather than to explain a single or multiple observations from a single parameter (i.e., posterior inference, one to one or many to one). These are not technical differences, but rather that these methods simply address different goals.
Regarding Dyer et al. (2024) [2]: This problem setting is different from ours. In this paper, the authors seek to find a distribution that produces simulations “matching closely a target, user-specified scenario”. This is different from the source distribution estimation problem, where we try to find a distribution that reproduces observed data. In addition, the entropy regularization has a very different interpretation. In Dyer et al. (2024), the entropy regularization “exhibits the trade-off between (a) the diversity of synthesized populations and scenarios and (b) the ability to identify the most extreme manifestations of the target scenario”. In our work, we show theoretically (Prop. 2.1) and empirically (Table 1, Figures 3,4,5) that no such trade-off is necessary for source distribution estimation.
In summary, we believe that our work is a significant step upon previous work and hope that our response addresses the reviewer’s concern that our contribution is limited to the use of a sample-based loss.
**Q1: "Is there anything that we lose by using something like the Sliced Wasserstein Distance instead of a likelihood-based discrepancy?"**: We thank the reviewer for their question. We believe that the main drawback of using sample-based distances is in the low sample limit, where the number of observations $N$ in the true data is very small. Likelihood-based objectives may perform better in this setting, as the value of the likelihood itself contains more information than the sample alone.
We also evaluated Sourcerer’s performance in a low sample count regime ($N=100$ instead of $N=10000$) in a new experiment for the Two Moons task (Fig. R1a). We observe that our method, using the sample-based Sliced-Wasserstein distance, is still able to recover good sources in this case.
**Q2: "Perhaps something additional to discuss would be limitations associated with simulators that involve discrete randomness"**: We thank the reviewer for pointing out this limitation. While we believe that an investigation of discrete simulators is outside the scope of this work, we agree that it merits acknowledgement in the limitations section.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, please read the above rebuttal and evaluate whether it answers your concerns. If your evaluation remains unchanged, please at least acknowledge that you have read the author's response.
---
Rebuttal Comment 1.2:
Comment: Thanks for your detailed response – I'm happy with all of your points.
---
Reply to Comment 1.2.1:
Comment: We thank the reviewer for their response. | Summary: The authors propose to use the maximum entropy principle (possibly tempered with a prior) in order to reduce the ambiguity in solving the problem of source distribution estimation. They propose to use a sample-based technique that optimizes for the Sliced-Wasserstein distance measuring the discrepancy between the simulated data and the real data. They propose experiments on both synthetic and real-world models.
Strengths: - The paper is extremely clear, concepts are presented consequentially and the method and experiments are easy to follow
- The method is simple, but principled and effective on both synthetic and realistic data
- The paper represents an interesting contribution to the field of source estimation methods.
Weaknesses: I do not see any major weakness in the work. A minor one:
- W1: it would be interesting to see a more thorough study for models with even higher dimensionality, and how the proposed method's performance scales.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Q1: In some lines authors frame as the maximum entropy principle as something intrinsically beneficial. As they already discuss, this can be modulated by using a reference distribution. Maybe some statements that make the maximum entropy principle stand out as a sort of gold standard could be toned down a bit?
- Q2: Would it be possible for the authors to show what could happen with the choice of other or wrong distance types?
- Q3: An interesting case the authors could discuss is when multiple losses may be required for different subsets of parameters?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors already address the limitations, or the limitations are intrinsic to the field and cannot be attributed to the proposed methodology.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work. We address the reviewer’s questions below.
**W1: "It would be interesting to see a more thorough study for models with even higher dimensionality"**: We thank the reviewer for their suggestion to investigate the performance of our approach in high-dimensional parameter spaces. To do so, we perform an additional experiment on the Gaussian Mixture benchmark task. The model is identical to the previous Gaussian Mixture model described in our benchmark task (Appendix 1.4), but now the dimension has been increased to 25. We again find sources that reproduce the data well, and that when we apply entropy regularization, we obtain higher entropy source distributions without any degradation to the pushforward accuracy (Fig. R1b). We also observe that the threshold value of $\lambda$, at which the pushforward C2ST degrades is now lower than in the 2-dimensional case. We believe that this is due to the entropy of the higher dimensional sources varying on a larger scale, thus restricting our method to smaller values of $\lambda$ to avoid the entropy term dominating the mismatch term.
**Q1: "Maybe some statements that make the maximum entropy principle stand out as a sort of gold standard could be toned down a bit?"**: We agree with the reviewer that statements about maximum entropy being the "gold standard" need to be toned down, as this was not our intention. To clarify, we propose maximum entropy as one possible way to solve the ill-posedness of the source distribution estimation problem, which has a number of advantages such as uniqueness, and coverage of all feasible parameters for downstream inference tasks. These are not "intrinsically" optimal (and there might be specific cases where other regularizers would be more appropriate), and we will clarify the relevant statements in the revised version.
**Q2: "Would it be possible for the authors to show what could happen with the choice of other or wrong distance types?"**: We thank the reviewer for their suggestion to use distance metrics other than the Sliced-Wasserstein distance (SWD). Therefore, we repeat our experiments on the benchmark tasks, replacing the SWD with MMD using an RBF kernel and the median distance heuristic for selecting the kernel length scale (Fig. R3). We observe that the results are comparable in terms to the SWD results presented in our work.
The main considerations in choosing the SWD in our work were computational. We expect that any sample-based, differentiable, and computationally efficient distance metric to measure the mismatch in our objective function (Eqs. 3, 4) will provide reasonable performance. Our initial choice of using SWD was motivated by its simplicity and scalability - we did not need to choose kernels or other hyperparameters (except for the number of slices).
**Q3: "An interesting case the authors could discuss is when multiple losses may be required for different subsets of parameters?"**: We strongly agree with the reviewer that there are applications where a general mismatch function such as the SWD might be insufficient to measure the distance between distributions, and some combination of distance functions would be required for different subsets of the data. For example, for simulators that produce multimodal outputs, the SWD may not be an appropriate metric. Instead, we could consider designing a distance custom distance metric that combines the distances over the different modalities of the data. We believe that this is outside the scope of the work, but believe that this is an important point to add to the discussion in our submission.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I have read both the rebuttal and the other reviewers' concern. I do not think the lack of differentiability of the simulators is a concern, as indicated by the authors in the rebuttal. I also appreciate the comparison with NEB and additional experiments on convergence. Given the rebuttal addresses satisfactorily all the points I raised and I do not find the points raised by other reviewers concerning I maintain my positive assessment of the work.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for their response. | Rebuttal 1:
Rebuttal: We would like to thank the reviewers for their detailed engagement and constructive and positive feedback. Our paper introduced Sourcerer, a method for estimating maximum-entropy source distributions with sample-based distances. Reviewers found our approach to be “well-motivated” (J2pY) and “simple, but principled and effective” (DWPT). They commended our evaluation “[on] a wide variety of benchmarks” (8Zuo) to be “extensive” (J2pY) and “rigorous” (swSP). Finally, all reviewers found the presentation of our work to be good or excellent, with DWPT calling it “extremely clear”.
We have responded to each of the reviewers in detail to clarify their questions and address their concerns. We have performed several additional experiments (see supplement) to support our response. Below, we summarize the main points raised by the reviewers, in addition to the results of the new experiments. We hope that our responses address the reviewers’ concerns and allow them to recommend our work for acceptance.
**Clarifying the use of the maximum entropy principle (swSP, J2py, DWPT)**: The source distribution estimation problem is known to be ill-posed - we here proposed to use the maximum entropy principle to address this issue, and showed (Prop. 2.1) that the resulting source distribution is unique. Maximum entropy approaches are often motivated by the idea of “maximum ignorance” - i.e., finding a distribution that makes the least assumptions about the model while satisfying all the known information. A maximum entropy approach is also arguably desirable when source estimation is used to learn prior distributions, as it ensures that a wide range of possible parameters is considered. We showed empirically that our regularized approach is able to obtain a high entropy source distribution without sacrificing the quality of the match between simulations (from the estimated source) and observations.
Reviewer swSP raised a concern about the case where “the source distribution is highly concentrated in an area with high probability mass”. However, if the parameters outside this area are not consistent with the observed data, our method will result only in distributions that are concentrated in this area. Nevertheless (and as with any regularization term), there are cases where other properties are desirable and where there might be more appropriate objectives to select the source distribution (as also pointed out by DWPT). We will provide a balanced discussion of when entropy regularization might be desirable.
**Clarifying the requirement of differentiable simulators (JA3k, swSP, J2py)**: Another shared concern was the requirement that the simulators be differentiable. We emphasize that this requirement is less restrictive than the ones from previous work (Vandegar et al. (2020)), which required the simulator _and_ its (log-) likelihood to be differentiable. Furthermore, if the simulator is not differentiable, or if computing its gradients is too expensive, our method can still be used. In this case it is necessary to train a surrogate model of the simulator. For example, in our manuscript, we train a surrogate model in the Hodgkin-Huxley experiment because some of the summary statistics used are not differentiable.
[Vandegar et al.] - Neural Empirical Bayes: Source distribution estimation and its applications to simulation-based inference. AISTATS, 2020.
## New Experiments
For new figures (Fig. Rx) and Table R1, please see the **PDF**.
**1. Estimating source distributions with higher dimensionality and less observations (DWPT, swSP)**: Reviewers asked for additional experiments to study the performance of our method in two cases. We now study the robustness of our method to a small number of observations in the dataset (swSP). We repeat the benchmark Two Moons task with $N=100$ observations, and find that our method still identifies high entropy source distributions that reproduce the observed dataset very well (Fig. R1a). We also add a new experiment studying the robustness of our method in the high-dimensional source distribution limit (DWPT). We repeat the Gaussian Mixture task with the source distribution dimension increased to $D=25$ (Fig. R1b), which is almost twice the dimensionality of the highest-dimensional source estimated in our original submission ($D=13$ in the Hodgkin-Huxley example). Again, we find that our method identifies a high entropy distribution with excellent match to the observed data.
**2. Additional comparison against NEB baseline (8Zuo, swSP, JA3k)**: Reviewers requested comparisons to baseline models in terms of convergence speed. We now show empirically that Sourcerer is significantly faster than the Neural Empirical Bayes (NEB) baseline across all benchmark tasks (Table R1 in supplement). In addition, we now also compare our results for the high-dimensional tasks of the SIR and LV models to the NEB baseline (Fig. R2) and show that Sourcerer performs significantly better on these tasks.
**3. Sourcerer with MMD (swSP, DWPT, JA3k)**: Reviewers asked whether the choice of the Sliced-Wasserstein distance (SWD) to measure the mismatch between simulations from our source model and the data was the only appropriate choice. We have clarified the constraints that our approach requires of the distance function to satisfy, namely that the distance must be sample-based, differentiable, and fast to evaluate. We chose to use the SWD because it meets all of these requirements and also requires few hyperparameter choices. However, other sample-based distance metrics are also possible: We have now performed additional experiments on our benchmark tasks, replacing the SWD with the Maximum Mean Discrepancy (MMD). We observe similar quantitative performance as with SWD (Fig. R3).
Pdf: /pdf/10a76d07949483f8ea33feda3fe5c8dfdf90f853.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper introduces “Sourcerer,” a new method for source distribution estimation, focusing on maximum entropy distributions to effectively handle ill-posed problems common in simulating scientific phenomena. This approach leverages the Sliced-Wasserstein distance for sample-based evaluation, offering a significant advantage for simulators with intractable likelihoods, and is demonstrated to recover high-entropy source distributions without sacrificing simulation fidelity across various tasks.
Strengths: - Utilizing the Sliced-Wasserstein distance for evaluating distribution discrepancies enables the method to operate effectively with simulators that have intractable likelihoods.
- The approach is rigorously tested across multiple scenarios, including high-dimensional observation spaces and complex simulators. The results demonstrate that it not only maintains high fidelity in reproducing the empirical distributions but also achieves higher entropy in estimated source distributions compared to existing methods.
Weaknesses: - The proposed method only applies to differential simulators. I know a lot of these problems in computational biology have non differentiable simulators, as well as the Ising models. This is the drawback of the proposed method.
- I also wish there is more theoretical results in terms of the statistical consistency and error bounds of using the sliced Wasserstein distance. Because the sliced Wasserstein distance is approximated via empirical measures. The estimation error is highly depending on the sample size.
- I'm not sure if the maximum entropy approach is aligned with the intuition. What if the source distribution is highly concentrated in an area with high prob mass? I know $\lambda$ can be tuned to control it but will it affect the original problem a lot?
Technical Quality: 3
Clarity: 3
Questions for Authors: - what is the motivation of using the sliced Wasserstein distance? There are other types of distances can be used for this problem. For example, the maximum-mean discrepancy (MMD) and the kernelized Stein discrepancy (KSD). MMD and KSD also have their sliced versions.
- Is there any principle of how to choose the hyperparameter $\lambda$?
- I know the sliced Wasserstein distance is not really informative because it is estimated via samples thus the density information is a sort of missing. Is there any empirical result of the convergence speeds for the proposed method vs the benchmark NEB?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see the above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough feedback. The reviewer raises some questions and concerns about our approach, which we address below.
**W1: "The proposed method only applies to differential simulators"**: In the case where the simulator is neither differentiable nor provides explicit likelihood evaluations, our method can still be used, but then requires training a surrogate model. On our benchmark tasks (Table 1), we have shown that using a surrogate instead of the simulator led to comparable results in terms of simulation fidelity to data, and entropy. The results for the realistic Hodgkin-Huxley task (Fig. 5) were also obtained using a surrogate model. We emphasize that our method has less requirements than previous approaches: We do not require that the simulator have an explicit and differentiable likelihood, but only that the simulator be differentiable. Thus, the set of simulators to which we can apply our method _without_ training a surrogate is larger.
**W2: "I also wish there is more theoretical results in terms of the statistical consistency and error bounds of using the sliced Wasserstein distance"**: The Sliced Wasserstein distance (SWD) is well established and its theoretical properties are actively studied. In particular, the sample complexity of the convergence rate has been studied in several existing works. For example, Nadjahi et al. (2020) showed that the finite sample estimate of the SWD between two distributions converges with rate $\sqrt{N}$ in the number of samples $N$. Furthermore, Nadjahi et al. (2019) showed that generative models trained to minimize the SWD using finite sample size $N$ also converge to the true optimizer (in distribution) at rate $\sqrt{N}$. We will add a summary of these properties in our revisions.
We agree with the reviewer that our benchmark tasks would benefit from exploring how our method performs in the low data limit. Therefore, we provide an additional empirical result by repeating the Two Moons benchmark task with a smaller observed dataset of only 100 samples (Fig. R1a). We observe similar behavior to the baseline case presented in our work for 10000 observations.
[Nadjahi et al. (2020)] - Statistical and Topological Properties of Sliced Probability Divergences. NeurIPS, 2020.
[Nadjahi et al. (2019)] - Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance. NeurIPS, 2019.
**W3: "I'm not sure if the maximum entropy approach is aligned with the intuition"**: We apologize for not making the intuition for using the maximum entropy principle sufficiently clear. We provide further clarification here and will reflect this in our revisions.
The maximum entropy principle does not conflict with the case where “the source distribution is highly concentrated in an area with high probability mass”. If parameters outside this area are not consistent with the data observed, our method will result only in distributions that are concentrated in this area. We observe this empirically in the deterministic SIR experiment (Fig. 4), where the source distribution is concentrated and regularization does not lead to higher entropy sources.
Instead, our use of the maximum entropy principle can be viewed as resolving the non-uniqueness of the source distribution estimation problem. According to the maximum entropy principle, given a choice between two source distributions _that are consistent with the data_, one should conservatively choose the source distribution with the higher entropy (which is more dispersed). This is often desirable when source estimation is used to learn prior distributions, as it ensures that a wide range of possible parameters is considered.
**Q1: "What is the motivation of using the sliced Wasserstein distance?"**: We thank the reviewer for their question. We agree that other choices of sample-based, differentiable distances are possible. We now perform a new experiment on the benchmark tasks replacing SWD with MMD with an RBF kernel and the median distance heuristic for selecting the kernel length scale (Fig. R3). We observe that the results are comparable to the SWD results presented in our work in terms of simulation fidelity to data and entropy.
The constraints on the distance function D are computational; D should be differentiable, fast to compute, and not require probability densities. Our new empirical results show that indeed other distances that satisfy these constraints lead to good performance of our method. Our initial choice of using SWD was motivated by its simplicity and scalability - we did not need to choose kernels or other hyperparameters (except for the number of slices).
**Q2: "Is there any principle of how to choose the hyperparameter?"**: We empirically find that a small value of $\lambda$ (e.g. $\lambda=0.015$), together with our linear decay schedule, is sufficient to obtain substantially higher entropy source distributions in all our experiments. Thus, starting with a small $\lambda$ is likely to be sufficient. In addition, a run without any regularization can be performed to verify that the regularization does not negatively affect the quality of the estimated source. We agree with the reviewer that more robust methods for choosing $\lambda$ are an interesting question for future research, and will reflect this in our revisions.
**Q3: "Is there any empirical result of the convergence speeds for the proposed method vs the benchmark NEB?"**: We thank the reviewer for their question, and agree that our work can benefit from explicitly comparing the empirical convergence speed of Sourcerer to the baseline. We provide an empirical measurement of the convergence times of our method (with and without regularization) as compared to NEB, measured on the benchmark tasks (Table R1). The source model architectures $q_\phi$ are the same for all methods, as reported in Table 1 of our original submission. Sourcerer is faster than NEB on all benchmark tasks.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, please read the above rebuttal and evaluate whether it answers your concerns. If your evaluation remains unchanged, please at least acknowledge that you have read the author's response. | null | null | null | null | null | null |
Generalization of Hamiltonian algorithms | Accept (poster) | Summary: This work provides a novel approach to bound the generalization error (high probability bounds) of the Gibbs algorithm as an important example of Hamiltonian algorithms. The authors also applied their method to a different example of the Hamiltonian algorithm.
Strengths: Strengths:
1- A novel approach is proposed for bounding the generalization error of the Gibbs algorithm.
2- There are some improvements in comparison with other works.
3- The proof approach is well-discussed.
4- Section about Randomization of stable algorithms is interesting.
Weaknesses: 1- In the case of the Gibbs algorithm, the contribution of this work is incremental. In particular, the authors should elaborate on why their results for the Gibbs algorithm are novel.
2- For the Gibbs algorithm, the main challenge is studying the asymptotic regime, where $\beta$ ---> \infty. It would be useful to study this regime or mention it as limitation of this analysis.
Technical Quality: 3
Clarity: 3
Questions for Authors: Sea weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Sea weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please also see the general rebuttal for planned improvements to the paper.
1 - The improvements for the Gibbs algorithm go beyond constants. (4) is
smaller than the inequality in the next display by a factor of $1/\sqrt{\ln
\left( 1/\delta \right) }$. If the confidence parameter $\delta $ goes to
zero, the quotient of the next display to (4) goes to infinity. Similarly
(5) is better than the competing bounds by a factor of $1/\ln \sqrt{n}$. And
even if logarithmic factors are considered irrelevant: does a simpler and more
general method of proof, which in some special case gives only incrementally improved results, not provide
a clearer perspective on the underlying phenomenon?
2 - It is planned to use an extra page for a future-work/limitations
section. There will be reference to the challenge of the $\beta \rightarrow
\infty $ limit, and the $\beta >n$ regime. These problems seem to require
techniques different from those developed in the paper.
---
Rebuttal Comment 1.1:
Title: Feedback
Comment: I want to thank the authors for their response.
I hope that the asymptotic discussion will be included in the final version. | Summary: Summary
-------
The papers presents a method to bound the `generalization gap', i.e the difference between
the expected and empirical losses for a hypothesis h drawn from a probability class
returned by the stochastic learning algorithm. The authors present a general-purpose
method to bound the exponentiated difference, and apply this technique to Gibbs sampling
(and other applications).
As someone who is not an expert on the topic, but still theoretically inclined, I was
unable to appreciate the contributions of this paper. It didn't help that the paper's
presentation was not up to scratch either (see comments below). I have given a
middle-of-the-line score to reflect this but will wait to hear from expert reviewers.
Detailed comments
=================
Questions
- Is h a hypothesis or a loss function? In 9, 10 and equation (1), it is written as a loss
but in line 12 it is treated as a hypothesis?
- What is the motivation for bounding the difference between the expected loss of a
a *learned* hypothesis, and the error on the training set? In typical use cases, one is
interested in the test/generalization error after training.
- From what I could gather from the Gibbs sampling use case in 4.1, this can be used to
bound the rate of convergence of Gibbs sampling. Is this correct?
The presentation of the paper could improve significantly.
- It is customary (at least in CS venues), to include an outline of the results, describe
their significance of each result, and outline the key challenges and
techniques used in the paper.
- The paper would also benefit from a dedicated related work section, where the authors
explicitly state the most relevant work and how their results compare to them. The most
interesting comparison I saw was in section 4.1 where they compare their bound to
Rivasplata et al, and the improvement in the \sqrt{\log(1/\delta)} term. Even here, the
authors have not explained why this gap \Delta is significant. Does it lead to fast
convergence rates for Gibbs sampling?
Perhaps the authors could state this result up front to prime the reader for what they
can expect from this paper.
- On the same note, the authors should do a better job of highlihgting the key results in
this paper. Currently, there are several lemmas/theorems presented one after the other
and it is hard to appreciate which ones are the most significant, and why they are
significant. My recommendation would be to identify the ~2 main theorems of the paper
Strengths: See above
Weaknesses: See above
Technical Quality: 2
Clarity: 2
Questions for Authors: See above
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please also see the general rebuttal for planned improvements to the paper.
Q1: $\mathcal{H}$ is a loss-class and $h\in \mathcal{H}$ is a hypothesis
composed with a fixed loss function. Loss-classes are a notational
simplification frequently used in the analysis of generalization.
Q2: The bounds apply to the difference of expected and training error after
training.
Q3: The bounds apply to hypotheses sampled from the Gibbs distribution and
give convergence in the sample size $n$. They do not refer to the
convergence of Monte Carlo methods to the Gibbs distribution.
---
Rebuttal Comment 1.1:
Title: Increasing score
Comment: Thanks for your response. Based on the rebuttal and the other reviews, I have increased my score. | Summary: This submission is a purely theoretical work, whose main goal is to bound $\Delta(h,\mathbf{X})=\mathbb{E}[h(X)]-\frac{1}{n}\sum^n_{i=1}h(X_i)$, i.e. equation (1). After some preliminary results, the results that show this bound under different conditions are Theorem 3.4, Theorem 3.5 and Theorem 3.8. Some possible (theoretical) applications are also shown, in the form of the Gibbs algorithm, randomisation of stable algorithms and PAC-Bayes bounds with data-dependent priors.
Strengths: Unfortunately, I have not worked in any of the subfields that this paper is concerned with, so my judgment on its importance should be taken with a pinch of salt, but I am convinced of its significance and the contributions that this paper makes.
I have also tried to go through some of the maths in detail to check for its soundness, and barring a couple of very minor errors (see "Questions"), I think this paper is excellent in that regard.
I really liked the way this paper was written, straight to the point with minimum fuss, and my impression is overwhelmingly positive.
Weaknesses: I think the authors made a deliberate choice to be concise with the proofs so that the proof can be included in the main body of the paper. I really like this, but in some places, the proof is too short, so that the readers are asked to do a significant amount of algebra by themselves. If possible, it would be great if the authors could use the extra page to flesh out the proofs a little bit.
Technical Quality: 4
Clarity: 4
Questions for Authors: Displayed equation after L89: This seems to be a pointwise statement for both $h$ and $\mathbf{x}$, so should it be “for all $h\in\mathcal{H}$ and for all $\mathbf{x}\in\mathcal{X}^n$? Unless the statement should be “… is called a Hamiltonian for $Q_\mathbf{x}$” instead of “Hamiltonian for $Q$”?
Displayed equation after L162: Shouldn’t $2bc$ be $\frac{bc}{2}$, taking into account the factor of $\frac{1}{8}$? The $2nc^2$ in the numerator of the fraction should also be $\frac{nc^2}{2}$. It looks like this is corrected on L163, and working through the maths, I indeed got the claimed inequality in Theorem 3.4(ii).
L183: I think you are missing a $\ln$ in front of the exponential?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: L88: "A function $H:\mathcal{H}\times\mathcal{X}^n$" perhaps it is better to write “A function $H$ on $\mathcal{H}\times\mathcal{X}^n$? I leave this up to the authors.
Displayed equation after L153: The spacing is a bit strange on the left?
L185: an -> and
Displayed equation after L195: The comma should be a full stop.
L178, L389: “Assume that all” -> “Assume that for all”
L391: everz -> every
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Special thanks to you for the encouraging words and the careful reading,
which uncovered several typos and inaccuracies.
Please also see the general rebuttal for planned improvements to the paper.
Question 1: L89. Yes, it should be "for all $h\in \mathcal{H}$ and $\mathbf{x%
}\in \mathcal{X}^{n}$".
Question 2: Display after L162. You are absolutely right, thank you. No idea
how this got into the paper.
Question 3: L183. You are right again, thank you.
L88: Will be changed. The other typos will be corrected accordingly.
---
Rebuttal Comment 1.1:
Comment: Dear authors,
Thank you for your response. I have read them, also the other reviews, and I retain my positive evaluation of this submission. | Summary: This paper introduces a general method to bound the logarithm of the expecattion of the exponential of the generalization gap for stochastic learning algorithms. This method is applicable when the distribution of the algorithm concentrates exponentially around its mean, extending to cases where the Hamiltonian form satisfies a bounded difference condition or is sub-Gaussian. Further, the paper discuss applications to the gibbs algorithm, improving existing bounds, Lastly the paper extends generalization guarantees to hypotheses sampled once from stochastic kernels centered at the output of uniformly stable algorithms. This advancement strengthens previous results and improves the understanding of generalization properties for algorithms based on stochastic kernels.
Strengths: The paper is very-well written. it is crisp and clear about the notations, contributions and the context of the novel results with respect to the exsiting literature. The theoretical rigor is well-presented and the results are very well supported through coherent arguments. I specially appreciate the "self-contained" nature of the paper, where everything including the very fundamental markov inequality is written in the appendix for convenience of the reader so there is no need to refer to other papers to understand, verify and appreciate the contributions of the paper.
The paper makes non-trivial theoretical contributions. It provides generalization guarantees for bounded hypothesis (section 3.1) and unbounded hypothesis through subgaussianity (Section 3.2). The contributions include a general method for bounding generalization gaps for the specific case of Hamiltonian algorithms, thereby identifying an important subclass of problems to bound the notorious ln E_X E_h [exp \lambda \Delta] which is often not amenable for general problems. This contribution is important. Further, the authors specify precisely where the bounds improve upon exisiting results (e.g. dependence on the confidence parameter \delta is improved from previously known O(\sqrt(1/\delta) to O(1/\delta) for the gibbs algorithm), often with simpler proof techniques and cleaner constants/exponents.
Weaknesses: The paper is very notation-heavy, and likely not accessible to everyone. This is not really a critique of the paper tbh, but more of the nature of the result/paper and the short conference format. The authors have done a fair job of explaining the notations. But It is easy to get lost in keeping track. For example, the notion of canonical Hamiltonian H_Q and its usefulness (for example in the simple proof of Prop 3.1) is easy to overlook unless one really gets into the weeds. I would suggest adding a couple of sentences cementing the importance of H_Q in the analysis. Beyond helping through and simplifying some results, is there additional importance of H_Q ?
The conclusion and future directions/discussions/limitations section is severely lacking. Can the authors talk about limitations of the proposed method, and any further possible open problems that the reader might be interested in.
There is no empirical verification, even with toy setups. Can the authors comment on how the presented theory can be empirically demonstrated/verified?
Technical Quality: 4
Clarity: 4
Questions for Authors: In addition to the weaknesses:
In context of the paper, and the definition of \Delta(h,X), which we know is centered, I am not sure how Prop 3.1 is useful or applied in this paper. I understand it could be potentially useful where F and H_Q can be handled separately (assuming F being non-centered is still somehow amenable to this). But for this paper, this is not the case.
The notation S^k_X f(\mathbf{x}) used in prop 3.2 onwards is clunky at best, and badly overloaded at worst. May I suggest using f(S^k_X \mathbf{x}) ?
theorem 3.4 needs h(y’) \in [0,b] ? I think you mean \forall t \in \mathcal{X} h(t) \in [0,b] ?
“In practice Q is often defined by specifying some Hamiltonian H , so HQ (h, x) = H (h, x) − ln Z (x) in general” – I am not sure what this sentence means. Please expand. Are you saying the canonical H_Q can be trivially written like that? How is Q being defined through that?
L186 “generalization guarantee of rapid convergence” .. could you expand on this please? What order is considered as “rapid”?
Minor: In a couple of places some of the references seem to be hard-coded ? e.g. references to A.1 because those are not clickable links.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Partially.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Please also see the general rebuttal for planned improvements to the paper.
Experiments to demonstrate the theoretical results are planned for future
work. For real data the costly part is the repeated
sampling from the Gibbs distribution, because one has to
await the mixing time between Monte-Carlo-samples for approximate independence. Recording for each sample
the training error and a test error for independent data provides the
information to test the predictions on both the expected and un-expected
generalization gaps, and their dependence on the training error. Note that
the benefit of the un-expected bounds, once one is willing to accept their
correctness, is that only one sample is needed in the test.
Question 1: A separate treatment of $F$ and $H_{Q}$ is used in the proofs of
the Bernstein-type inequality Theorem 3.5 and Theorem 3.8 (ii). Proposition
3.1 is essential in both of these cases.
Question 2: The notation $S_{X}^{k}f\left( \mathbf{x}\right) $ has entered
the paper through some copy/paste process. It is essentially a typo for
which we apologize. As you say, $f\left( S_{X}^{k}\mathbf{x}\right) $ should
be used.
Question 3: In the statement of Theorem 3.4 the universal quantifier was
understood to apply to \textit{all} of $k,h,y,y^{\prime }$ and $\mathbf{x}$.
While this still seems correct, it is probably clearer to state the uniform
boundedness of the $h\in \mathcal{H}$ separately, as you propose.
Question 4: "In practice...". An example is the Gibbs-algorithm. One
specifies $H\left( h,\mathbf{x}\right) =-\left( \beta /n\right)
\sum_{i}h\left( x_{i}\right) $. Then the density is $Q_{\mathbf{x}}\left(
h\right) =Z\left( \mathbf{x}\right) ^{-1}\exp \left( H\left( h,\mathbf{x}%
\right) \right) =\exp \left( H\left( h,\mathbf{x}\right) -\ln Z\left(
\mathbf{x}\right) \right) =\exp \left( H_{Q}\left( \mathbf{x}\right) \right)
$.
Question 5: "rapid convergence". What was meant is that then $\Delta \left( h,%
\mathbf{X}\right) $ is approximately of order $1/n$, rather than $\sqrt{1/n}$
as in Theorem 3.4. But, as this also depends on the order which $c$ has in $%
n $, it really depends on the concrete application of Corollary 3.7. Since
"convergence" may also be confused with the convergence of a
Monte-Carlo-method to its limiting distribution, the text will be modified.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: The authors have clarified my minor doubts and I confirm my opinion that this is a solid theoretical contribution. Some experiments, even simulated ones, to validate the results will improve the quality of the paper even further. | Rebuttal 1:
Rebuttal: Many thanks to the reviewers, who provided many useful comments. Major
planned improvements are:
1. The revision will contain a section on future directions and
limitations. It will mention potential applications to iterated algorithms
and weakly dependent data. The limitations part will point to the remaining
challenges of the $\beta >n$ regime and the limit $\beta \rightarrow \infty $.
.
2. A glossary of notation in tabular form will be included at the
beginning of the appendix. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning | Accept (poster) | Summary: This paper introduces the Eye-gaze Guided Multi-modal Alignment (EGMA) framework, which leverages radiologists' eye-gaze data to enhance the alignment of medical visual and textual features. By using synchronously collected eye-gaze data during diagnostic evaluations, EGMA improves generalization and achieves state-of-the-art performance in image classification and image-text retrieval tasks across four medical datasets. The study also investigates the impact of varying amounts of eye-gaze data on model performance, demonstrating the feasibility and utility of integrating this auxiliary data into multi-modal alignment frameworks.
Strengths: Innovative Use of Eye-Gaze Data:
The introduction of the Eye-gaze Guided Multi-modal Alignment (EGMA) framework is a novel approach that leverages eye-gaze data from radiologists to improve the alignment of visual and textual features in medical data. This innovative use of auxiliary data opens new avenues for enhancing model performance and provides a unique perspective on incorporating human cognitive processes into machine learning models.
Empirical Validation and Generalization:
The EGMA framework is rigorously evaluated across four different medical datasets, demonstrating its ability to achieve state-of-the-art performance in image classification and image-text retrieval tasks. This comprehensive empirical validation underscores the robustness and effectiveness of the proposed method, highlighting its potential for broader application in various medical contexts.
Exploration of Eye-Gaze Data Utility:
The paper not only introduces a novel framework but also explores the impact of varying amounts of eye-gaze data on model performance. This detailed analysis provides valuable insights into the feasibility and utility of integrating eye-gaze data into multi-modal alignment frameworks, offering practical guidelines for future research and application in the field.
Weaknesses: Dependency on Eye-Gaze Data Collection:
Issue: The reliance on eye-gaze data from radiologists introduces a significant dependency that may not be feasible for all institutions due to the need for specialized equipment and the additional effort required to collect this data. Especially when the eye-gaze prior is not accurate.
Impact: This dependency could limit the scalability and widespread adoption of the EGMA framework, particularly in resource-constrained settings or where the collection of eye-gaze data is not practical.
Technical Quality: 3
Clarity: 3
Questions for Authors: Impact of Eye-Gaze Data Quality and Quantity:
Question: How does the quality and quantity of eye-gaze data affect the performance of the EGMA framework? Are there specific thresholds or optimal amounts of eye-gaze data required to achieve the best results?
Context: Detailed insights into the relationship between eye-gaze data quality/quantity and model performance would help in optimizing the framework and understanding its practical requirements.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W: "The reliance on eye-gaze data from...where the collection of eye-gaze data is not practical."**
We thank the reviewer for this great comment. In fact, when designing experiments for zero-shot classification and zero-shot image-text retrieval, EGMA took this issue into account. Experimental validation has shown that after multimodal training on datasets with eye-gaze data, the model performs well on other datasets without eye-gaze data (Tab. 1\&2\&3 in main paper). Furthermore, with advancements in eye-gaze data collection systems [1-4], acquiring multimodal data with eye-gaze has become increasingly feasible. Compared to having radiologists spend additional time and effort annotating refined labels, eye-tracking allows for data collection during their routine diagnostic work, minimizing disruption and reducing annotation burdens. Additionally, some studies have demonstrated [3,5,6] that eye-gaze data can achieve performance comparable to refined annotations. Therefore, we propose initially pre-training on scenarios where eye-gaze data is more easily collected and then fine-tuning on more challenging datasets. This approach is similar to current mainstream multimodal pre-training models, where a robust model is trained on a broadly available dataset and then fine-tuned on other datasets to perform downstream tasks. The performance of EGMA in this work supports the feasibility of this approach.
**Q1: "How does the quality and quantity of eye-gaze data affect the performance of the EGMA framework?"**
We thank the reviewer for this great question. The quality of eye-gaze data is crucial for model performance. In this work, we first filter out interfering data such as blinks and saccades by using only the fixation data from the eye-tracking recordings. We then generate a 2D attention heatmap from these fixation points to further mitigate the influence of interfering data. Figure R1 in the attached PDF shows an example of denoising in MIMIC-EYE dataset. Additionally, in the design of the EGMA framework, we use the EGM module to further filter out potentially problematic regions. For catastrophic errors could happened, the MIMIC-EYE dataset provides a robust solution by involving multiple expert radiologists in data collection. This approach ensures overall data quality and allows for the compensation of errors made by one or two radiologists with correct data from other radiologists. Figure R2 in the attached PDF shows an example of this compensation.
Regarding the issue of the quantity of eye-gaze data, we conducted extensive comparisons in the ablation study of this work (Tab. 4 in main paper). Specifically, we evaluated the model performance with 1\%, 5\%, 10\%, and 50\% of the eye-gaze data. We found that with a small amount of data, such as 1\% (approximately 37 samples), the model's performance improvement was limited, which is attributed to insufficient auxiliary information. As the proportion of eye-gaze data increased, the performance of EGMA improved progressively across the three datasets. This aligns with the expectation that providing more supervised information allows the model to learn better feature representations and consequently enhance performance on the datasets.
**Q2: "Are there specific thresholds or optimal amounts of eye-gaze data required to achieve the best results?"**
We thank the reviewer for this great question. For our EGMA framework, having more eye-gaze data is certainly beneficial, similar to other types of annotations. Additionally, EGMA is designed to handle situations where eye-gaze data may be limited, as it supports training with both eye-gaze and non-eye-gaze data. In our ablation experiments (Tab. 4 in main paper), we also investigated the impact of the proportion of eye-gaze data on model performance. Our goal was to demonstrate that even with limited eye-gaze data (e.g., using only 185 samples), EGMA's performance can still improve steadily, highlighting the framework's flexibility and effectiveness.
References:
[1] Stember, J.N., et al. (2020). Integrating eye tracking and speech recognition accurately annotates MR brain images for deep learning: proof of principle.
[2] Khosravan, N., et al. (2019). A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning.
[3] Ma, C., et al. (2023). Eye-gaze-guided vision transformer for rectifying shortcut learning.
[4] Men, Q., et al. (2023). Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning.
[5] Wang, S., et al. (2022). Follow my eye: Using gaze to supervise computer-aided diagnosis.
[6] Ma, C., et al. (2023). Rectify vit shortcut learning by visual saliency. | Summary: The paper proposes EGMA, a novel framework for medical multi-modal alignment integrating eye-gaze data into vision-language pre-training. EGMA outperforms existing methods in image classification and image-text retrieval tasks, demonstrating significant advancements and improved feature representation with even minimal eye-gaze data.
Strengths: Innovative approach that diverges from traditional reliance on annotated datasets, providing a fresh perspective on multi-modal learning in medical contexts.
Robust experimental design that includes comparisons with state-of-the-art methods, highlighting the efficacy of the proposed framework.
Comprehensive visualizations and tables that effectively communicate the results and support the paper's claims.
Provides a scalable solution that shows strong generalization across different datasets, indicating its broader applicability in various medical settings.
Weaknesses: The paper primarily focuses on classification and retrieval tasks. Including additional evaluations, such as lesion localization or segmentation tasks, could provide a more comprehensive assessment of the framework’s effectiveness.
The use of eye-gaze data raises potential privacy issues, as such data can inadvertently reveal sensitive information about the observers. Addressing these concerns through robust de-identification methods would strengthen the ethical considerations of the work.
Although the paper includes ablation studies, more detailed analysis on the impact of different components of the model, such as varying the amount of eye-gaze data or different types of medical images, could provide deeper insights into the robustness and adaptability of the proposed framework.
The model relies on multi-modal datasets like MIMIC-EYE, which simultaneously collect eye-gaze data, medical images, and diagnostic text. This dependency might limit the generalizability of the approach to other datasets that lack such rich multi-modal annotations.
Technical Quality: 4
Clarity: 3
Questions for Authors: The paper mentions that even a small amount of eye-gaze data can enhance model performance. Can you provide more quantitative results or analysis on how performance scales with different amounts of eye-gaze data?
Have you considered applying the EGMA framework to other types of medical imaging modalities (e.g., MRI, CT scans) or diagnostic tasks? If so, what challenges or modifications would be necessary?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors discuss privacy concerns associated with using eye-gaze data, suggesting the use of de-identification methods or releasing data in the form of heatmaps to mitigate these issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1:** We greatly appreciate the reviewer's suggestion. Supervision information used for localization and segmentation tasks, such as bounding boxes and masks, is stronger than the labels used for classification tasks. Additionally, the amount of refined manual annotation required for localization and segmentation is relatively limited. Therefore, we believe that if EGMA performs well on classification tasks with relatively weak supervision and across multiple classification datasets, it is likely to also excel in localization and segmentation tasks. Furthermore, the visualization results in our work already provide preliminary evidence of EGMA's localization capability. Finally, thank the reviewer again for this suggestion. Conducting additional validation experiments is indeed a promising direction for our future work.
**W2:** We thank the reviewer for this great comment. Privacy concerns in eye-gaze data are often overlooked by researchers. Studies [1,2] have shown that eye-gaze data can reveal private information such as the collector's personality traits, age, gender, and mental state, making privacy protection crucial. One approach [3] involves adding noise to the original eye-gaze data to obscure some private information, but this can compromise the quality of the data. Another commonly used method [4], which is employed in EGMA, is to convert the original eye-gaze data into a two-dimensional attention heatmap. This heatmap retains rich visual information while mitigating the risk of privacy leaks compared to raw eye-gaze data. Therefore, the EGMA framework protects the privacy of the participants during the data preprocessing phase.
**W4:** We thank the reviewer for this great comment. In fact, when designing experiments for zero-shot classification and zero-shot image-text retrieval, EGMA took this issue into account. Experimental results show that after multimodal training on datasets with eye-gaze data, the model performs well on other datasets without eye-gaze data. Furthermore, with the advancement of eye-gaze data collection systems [5-8], collecting multimodal data with eye-gaze has become increasingly feasible. Therefore, we propose a strategy where pre-training is conducted in scenarios where eye-gaze data is more easily collected, followed by fine-tuning on more challenging scenarios. This approach is similar to mainstream multimodal pre-training models, where a strong pre-trained model is first trained on a widely available dataset and then fine-tuned on other datasets for downstream tasks. The performance of EGMA in this work supports the feasibility of this approach.
**W3\&Q1:** We thank the reviewer for this great question. In fact, we have discussed this issue in the ablation study of our paper (Tab. 4 in main paper). Given the current scarcity of eye-gaze data, EGMA was designed to support training with both eye-gaze and non-eye-gaze data within the same batch. In Tab. 4, we conducted comparative experiments with four different proportions of eye-gaze data: 1\%, 5\%, 10\%, and 50\%, corresponding to approximately 37, 185, 370, and 1848 instances with eye-gaze data in the training set, respectively. It can be observed that with 1\% of eye-gaze data, the model shows only slight improvement on the CheXpert5x200 dataset due to limited auxiliary information. As the proportion of eye-gaze data increases, the performance of EGMA gradually improves across the three datasets. This aligns with the expectation that providing more auxiliary information allows the model to learn better feature representations and thus enhance its performance on the datasets.
**Q2:** We thank the reviewer for this great question. Our EGMA model is designed based on the CLIP architecture and supports the co-training of both eye-gaze and non-eye-gaze data, which provides it with significant flexibility. Recently, a study has seamlessly integrated eye-tracking technology into ultrasound imaging [8], where ultrasound images, eye-gaze data, and diagnostic texts were collected. EGMA can be easily adapted to this dataset with minimal modifications, primarily involving the replacement of the encoder with one pre-trained on ultrasound images.
References:
[1] Kröger, J.L., et al. (2020). What does your gaze reveal about you? On the privacy implications of eye tracking.
[2] Katsini, C., et al. (2020). The role of eye gaze in security and privacy applications: Survey and future HCI research directions.
[3] Steil, J., et al. (2019). Privacy-aware eye tracking using differential privacy.
[4] Liu, A., et al. (2019). Differential privacy for eye-tracking data.
[5] Stember, J.N., et al. (2020). Integrating eye tracking and speech recognition accurately annotates MR brain images for deep learning: proof of principle.
[6] Khosravan, N., et al. (2019). A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning.
[7] Ma, C., et al. (2023). Eye-gaze-guided vision transformer for rectifying shortcut learning.
[8] Men, Q., et al. (2023). Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning. | Summary: This paper proposes a cross-modal alignment method that can optionally learn from eye tracking data that is collected together with the speech of radiologists. The proposed Eye-gaze Guided Multi-modal Alignment (EGMA) system consists of losses that optionally incorporate alignment objectives between sentences and gaze regions (expressed as heatmaps) which allows the contrastive learning scheme to out-perform existing vision-language models for radiology, such as MedCLIP. This is in terms of linear evaluation (supervised classification), zero-shot classification, image-to-text retrieval and text-to-image retrieval. The EGMA method also shows highly interpretable cross-modal attention maps and well clustered embeddings (with respect to known ground-truth classes).
Strengths: A key challenge with papers that propose to incorporate eye gaze data into vision-language model training, is the lack of paired training samples. This paper proposes a very clever solution that is clever in two ways: (a) they propose fine-grained alignment between sentence tokens (originating from speech) and scanpath heatmaps corresponding to the sentences, and (b) design their contrastive objectives to allow for gaze supervision to be optional. This results in a rare demonstration of the benefit of gaze data for improving not only performance (as measured by classification or retrieval) but also interpretability.
The paper is also written very well, with sufficient figures and descriptive text - which is a plus.
The presented quantitative results are comprehensive, on several datasets, and consistently out-performing multiple existing state-of-the-arts. The ablation study, in particular, with regards to varying the amount of available gaze data is very interesting. It shows a gradual and (mostly) consistent increase in zero-shot classification performance with an increasing number of gaze-labeled data. The value of EGMA is further supported by the fact fine-tuning SotA models using EGMA typically results in performance improvements (Appendix F).
Weaknesses: Despite the authors’ best efforts, Fig. 2C and Fig. 2D (and the corresponding Sec. 3.2 and 3.3) are hard to understand. In particular, it is unclear to me why the Mean(Max()) reduces features are considered to be “fine-grained” (it rather seems to be an instance-level feature that allows for gaze-free supervision), and why the cross-modality mapping is necessary.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please clarify how the Mean(Max()) operation described in Sec. 3.2 enables fine-grained alignment.
I would also like to understand the motivation for the EGM module, in relation to existing prior art.
Thank you.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors are very forward and clear about the limitation of using or collecting data where gaze and speech are aligned. However, they also mention that medical practises are evolving to allow for this better. In this reviewer’s opinion, the framework proposed in this paper could be easily applied to non-medical areas as well as its design is domain-agnostic.
Privacy or societal concerns are also adequately mentioned.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1\&W1: "Please clarify how the Mean(Max())..." "Despite the authors’ best efforts, Fig. 2C and Fig. 2D...allows for gaze-free supervision)"**
We thank the reviewer for this great question and apologize for any confusion caused by casual description in our paper. First, to clarify the role of fine-grained alignment, we have rewritten the Mean(Max()) operation in Sec. 3.2 of main paper as follows:
$$
\hat{z}^I_k= \frac{1}{m} \sum^{m}_{j=1}[\underset{j}{max} {(x^{P2S}_k)} ], ~~ 1 \leqslant j \leqslant m ~~ (1)
$$
$$
\hat{z}^T_k= \frac{1}{n} \sum^{n}_{i=1}[\underset{i}{max} {(x^{S2P}_k)} ], ~~ 1 \leqslant i \leqslant n ~~ (2)
$$
In the above equations, $x_k^{P2S} \in \mathbb{R}^{n \times m}$ and $x_k^{S2P} \in \mathbb{R}^{m \times n}$ are similarity matrices between local features within a single instance, and these matrices have been optimized with eye-tracking data as shown in Eq. 4 in main paper. We know that CLIP's contrastive loss can align features well within a batch by computing the similarity between global features of the samples. To continue using this contrastive loss on top of instance-level local features, we need the global similarity between the corresponding text and image for each instance. In this work, instead of directly using the global features output by the encoder as CLIP does, we first calculate local feature similarities and then derive global similarity from the local similarity matrix. Specifically, to compute the global similarity between an image and a diagnostic text within a batch, we first calculate the similarity matrix $x_k^{P2S}$ between the local features of image patches and sentence tokens using Eq. 3 in main paper. Then, we take the maximum value of each column of this matrix, meaning each sentence token gets a similarity score with its most related image patch. Finally, we average these values to obtain the global similarity refined by local similarity. The calculation of the fine-grained global similarity from diagnostic text to image within a batch is similar. This discussion and results will be incorporated into the revised version of the paper.
**Q2\&W2: "I would also like to understand the motivation for the EGM module..." "why the cross-modality mapping is necessary."**
We thank the reviewer for this great question. In fact, the EGF and EGM modules proposed in our work are similar to the diagnostic process of radiologists. Many studies [1-5] have pointed out that radiologists perform a global search first and then conduct a detailed examination of the local areas when suspicious lesions are found. Firstly, the EGF module of EGMA corresponds to the global search by radiologists, aligning the local image patch features with the individual sentence features. Secondly, EGM uses the key values from the local similarity matrix computed in the first step as weights, focusing the alignment on certain important image patches and texts, akin to the detailed observation stage after identifying suspicious lesions. Thus, by mimicking the real diagnostic behavior of radiologists, our EGMA further enhances the model's multimodal processing capability and diagnostic accuracy. This discussion will be incorporated into the revised version of the paper.
**L: "In this reviewer’s opinion, the framework proposed in this paper could be easily applied to non-medical areas as well as its design is domain-agnostic."**
We greatly appreciate the reviewer's suggestion. In fact, thanks to the inherent flexibility of the EGMA model, it is well-suited for multimodal alignment tasks involving natural images, such as in human-robot interaction and control as well as for education/training purposes. Therefore, we believe this is a promising direction for future expansion. Once again, thank the reviewer for this valuable suggestion. This discussion will be incorporated into the revised version of the paper.
References:
[1] Nodine, C.F., et al. (1987). Using eye movements to study visual search and to improve tumor detection.
[2] Swensson, R.G.. (1980). A two-stage detection model applied to skilled visual search by radiologists.
[3] Kahneman, D.. (2003). A perspective on judgment and choice: Mapping bounded rationality.
[4] Kundel, H.L., et al. (2007). Holistic component of image perception in mammogram interpretation: gaze-tracking study.
[5] Drew, T., et al. (2013). Informatics in radiology: what can you see in a single glance and how might this guide visual search in medical images?
---
Rebuttal Comment 1.1:
Comment: Thank you very much for the detailed and kind answers.
I understand the EGF and EGM losses slightly better after reading your explanation. One follow-up question would be whether the EGF loss, as it expects fine-grained feature consistency (specifically, that the max value corresponds to the correct location of interest), would require that the image encoder is pre-trained prior to pre-training. I see in Sec. B.1 that you mention the use of SwinTransformer and BioClinicalBERT as the image and text encoders, respectively, but it is not clear to me whether you begin from the publicly available pre-trained weights.
I would think that the EGF loss may not work very well when training from scratch (as taking the max can lead to improper features being selected).
In any case, I will retain my rating as I believe the authors have addressed both mine and most other reviewers' concerns. The authors need not respond to me. | Summary: This paper proposes utilizing eye-gaze information to aid in learning representations from paired images and texts. The method was validated on four medical datasets, demonstrating the feasibility of integrating gaze information for multi-modal alignment.
Strengths: The idea of using additional information beyond the image-text pairs is interesting.
Weaknesses: First, if eye-gaze coordinates are noisy and contain errors, introducing eye-gazing into the learning process could negatively impact representation learning with image-text pairs. The image-text pairs are well-defined and reliable data resources. In contrast, the quality of eye-gaze information can be highly dependent on the operators, working conditions, and devices. Introducing potentially noisy and/or unreliable information into the learning process is risky and may lead to degraded performance.This paper does not mention and address this issue and the proposed method lacks for dealing with potentially noisy gaze information (could happen.)
Eye-gaze locations may fall on the cardiac areas in both normal and abnormal cases, but the critical consideration in learning is to develop a representation that can distinguish between these situations. Eye-gazes might provide little to no useful information for this learning process, as there is can be no significantly clear distinction between the eye-gaze coordinates in abnormal versus normal cases. Introducing eye-gaze data is not useful for learning to distinguish between these conditions. The only benefit might be identifying the cardiac area, which can be easily captured by conventional contrastive learning without the need for explicitly provided location information.
The writing is poor. The notations and formulas used in the method section are unprofessional. Notations such as COS, Mean, and Max are too casual and should be changed to be more rigorous and formal.
Technical Quality: 2
Clarity: 2
Questions for Authors: If the gaze information is not very accurate, for example, if the gazes occasionally shift to locations not exactly on the objects (e.g., cardiac), would using the gaze information harm the learning performance? If not, why, and if yes, how would you handle such an issue?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Please see weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1\&Q1:** We thank the reviewer for these great questions and apologize for any confusion caused by the lack of discussion in our paper. Indeed, noise and errors in eye-gaze data are as common as noise in images, and these can all affect the model's final performance. In this work, the errors in eye-gaze data primarily stem from two factors: involuntary saccades and microsaccades of the radiologists' eyes [1], and subjective fixation errors [2].
Human eye movements can be categorized into two main types: saccades and fixations [1]. Saccades are the rapid movements between different areas of the visual field and are generally considered noise data that do not involve cognitive processes. Fixations, on the other hand, are brief pauses of the eyes on a small area and are regarded as the primary cognitive behavior. Additionally, due to the structure of the eye, fixations are accompanied by microsaccades, causing the fixation point to drift within the fixation area. Thus, microsaccades are also a major source of noise in eye-gaze data. Fortunately, eye-tracking technology has evolved significantly over the past century, and noise reduction techniques for eye-gaze data have become highly advanced. Currently, all commercial eye-tracking devices come with preprocessing software that uses adaptive methods (e.g., [3]) to filter out noise, greatly facilitating the use of eye-tracking technology. In the attached PDF, Figure R1 shows an example of denoising process in MIMIC-EYE dataset.
Additionally, due to variations in radiologists' expertise and cognitive levels, fixation data may contain errors, such as the reviewer mentioned, "occasionally shift to locations not exactly on the objects."
For the minor shift error of fixations, where the duration of incorrect fixations is small compared to the overall duration, their values in the attention heatmap are low, thus having minimal impact on the mlce loss utilized in our work (Eq. 4 in main paper). In our EGMA, EGF and EGM modules (Sec. 3.2 and Sec. 3.3 in main paper) can also filter out this minor errors in the heatmap. For the significant fixation errors, MIMIC-EYE data has been cleaned to address this issue at the first time. Specifically, these data were first reviewed by professionals to exclude inconsistent diagnoses and fixation errors. Moreover, the MIMIC-EYE dataset includes eye-gaze data from six radiologists. Even if one or two radiologists' fixations are incorrect, the correct fixations from the other radiologists under the same semantic context can mitigate the adverse effects. In the attached PDF, Figure R2 shows an example of this compensation.
As the reviewer pointed out, effectively identifying and mitigating errors in radiologists' diagnostic behavior during model training is crucial, and this will be a focus of our future work. This discussion and results will be incorporated into the revised version of the paper.
**W2:** We thank the reviewer for this insightful question and apologize for any confusion caused by the lack of relevant discussion in our paper. In fact, numerous studies have investigated and proven the close relationship between radiologists' gaze behavior, image content, and diagnostic results. For example, studies [4-9] have found that radiologists fixate more on diseased areas than on healthy ones. Moreover, they discovered that novice radiologists, due to their lack of experience, repeatedly look at diseased areas, resulting in even more fixations in these areas than those of expert radiologists. Figure R3 in the attached PDF shows some comparison cases.
Additionally, the abnormal regions with more fixations have higher weights in the attention heatmap. Our EGMA model can leverage these weights, along with image content and text, to learn better disease diagnosis capabilities and feature representations. This discussion and results will be incorporated into the revised version of the paper.
**W3:** We are grateful for the reviewer’s attentive analysis and helpful feedback. We have revised the formulas as follows: (1) Change "COS" (Eq. 1 and Eq. 3 in main paper) to lowercase, as shown in Equations 1 and 2 below:
$$
s_{k,l}^{I2T} = cos(z_k^I, z_l^T), \ s_{k,l}^{T2I} = cos(z_k^T, z_l^I) \quad 1 \leqslant l \leqslant b ~~ (1) \\
$$
$$
x_k^{S2P} = cos(S_k^j, P_k^i), \ x_k^{P2S} = cos(P_k^i, S_k^j) ~~ (2)
$$
(2) Change "Mean/Max" (Sec. 3.2 in main paper) to the following Equations 3 and 4:
$$
\hat{z}^I_k= \frac{1}{m} \sum^{m}_{j=1}[\underset{j}{max} {(x^{P2S}_k)} ], ~~ 1 \leqslant j \leqslant m ~~ (3)
$$
$$
\hat{z}^T_k= \frac{1}{n} \sum^{n}_{i=1}[\underset{i}{max} {(x^{S2P}_k)} ], ~~ 1 \leqslant i \leqslant n ~~ (4)
$$
These revised formulas will be incorporated into the revised version of the paper.
References:
[1] Chamberlain, L.. (2007). Eye tracking methodology: theory and practice.
[2] Brady, A.P.. (2017). Error and discrepancy in radiology: inevitable or avoidable?
[3] Nyström, M. and Holmqvist, K.. (2010). An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.
[4] Nodine, C.F., et al. (1996). Nature of expertise in searching mammograms for breast masses.
[5] Mallett, S., et al. (2010). Tracking eye gaze during interpretation of endoluminal three-dimensional CT colonography: visual perception of experienced and inexperienced readers.
[6] Voisin, S., et al. (2013). Investigating the association of eye gaze pattern and diagnostic error in mammography.
[7] Giovinco, N. A., et al. (2015). A passing glance? Differences in eye tracking and gaze patterns between trainees and experts reading plain film bunion radiographs.
[8] Van der Gijp, A., et al. (2017). How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.
[9] Tourassi, G., et al. (2013). Investigating the link between radiologists' gaze, diagnostic decision, and image content.
---
Rebuttal Comment 1.1:
Comment: I have read the authors' rebuttal which has addressed my concerns to some degree. I have raised my rating accordingly. Thanks.
---
Reply to Comment 1.1.1:
Comment: We appreciate the timely response. We are pleased to know that we have successfully addressed your concerns.
Thanks for engaging with our work and adjusting your evaluation. | Rebuttal 1:
Rebuttal: We would like to thank all reviewers for their careful reading, valuable comments, and recognition of the contributions of our work. We have provided itemized responses to the questions and suggestions from the reviewers. We are also pleased to receive the positive feedbacks from reviewers, particularly:
1. Novel and interesting method [Reviewers hPhG, YZxW, V2ie, and n21y].
2. Thorough evaluations and ablation studies [Reviewers YZxW, V2ie, and n21y].
3. Potential for broader application [Reviewers V2ie and n21y].
4. Excellent and clear written [Reviewer YZxW].
We found that reviewers are particularly concerned about the general availability and quality of the eye-gaze data. Here are brief responses to these two concerns:
**Data availability issue [Reviewers V2ie and n21y].**
Multiple studies [1-5] have investigated the cognitive behavior in radiologists' eye-gaze data, demonstrating its close relationship with images and diagnostic results. This highlights the availability and value of eye-gaze data in the medical field. In addition, with the growing research and solution development in the practical application of eye-tracking systems in the medical field [6-13], we envision that both the quality and the availability of eye-gaze data will be greatly improved. The recent success of eye-tracking technology in commercial applications (e.g., Apple Vision Pro, eye-tracking in iOS 18) further highlights its core role as a next-generation human-computer interaction technology. In our work, the model's zero-shot capability on other datasets significantly improved after eye-gaze guidance. This demonstrates that the model has learned more generalizable feature representation through the use of eye-gaze data. Furthermoer, we conducted ablation experiments (Tab. 4 in the main paper) to verify that our model can benefit from very limited eye-gaze data. Therefore, the ability to efficiently utilize eye-gaze data can provide valuable assistance to the model both now and future. Detailed explanations are provided in response to the respective questions from the reviewers (**Reviewer V2ie**:W4, **Reviewer n21y**:W).
**Data quality issue [Reviewers hPhG and n21y].**
Eye-tracking technology has evolved significantly over the past century, and noise reduction techniques for eye-gaze data have become highly advanced [14-16]. We also included examples in the attached PDF (Figure R1 and Figure R2) to illustrate how our model can mitigate the impact of noise and errors in the eye-gaze data. Detailed explanations are provided in response to the respective questions from the reviewers (**Reviewer hPhG**:W1\&Q1, **Reviewer n21y**:Q1).
References:
[1] Nodine, C.F., et al. (1987). Using eye movements to study visual search and to improve tumor detection.
[2] Swensson, R.G.. (1980). A two-stage detection model applied to skilled visual search by radiologists.
[3] Kahneman, D.. (2003). A perspective on judgment and choice: Mapping bounded rationality.
[4] Kundel, H.L., et al. (2007). Holistic component of image perception in mammogram interpretation: gaze-tracking study.
[5] Drew, T., et al. (2013). Informatics in radiology: what can you see in a single glance and how might this guide visual search in medical images?
[6] Khosravan, N., et al. (2019). A collaborative computer aided diagnosis (C-CAD) system with eye-tracking, sparse attentional model, and deep learning.
[7] Stember, J.N., et al. (2020). Integrating eye tracking and speech recognition accurately annotates MR brain images for deep learning: proof of principle.
[8] Wang, S., et al. (2022). Follow my eye: Using gaze to supervise computer-aided diagnosis.
[9] Ma, C., et al. (2023). Eye-gaze-guided vision transformer for rectifying shortcut learning.
[10] Men, Q., et al. (2023). Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning.
[11] Ma, C., et al. (2023). Rectify vit shortcut learning by visual saliency.
[12] Ji, C., et al. (2023). Mammo-net: Integrating gaze supervision and interactive information in multi-view mammogram classification.
[13] Zhao, Z., et al. (2024). Mining gaze for contrastive learning toward computer-assisted diagnosis.
[14] Salvucci, D.D. and Goldberg, J.H.. (2000). Identifying fixations and saccades in eye-tracking protocols.
[15] Smeets, J.B. and Hooge, I.T.. (2003). Nature of variability in saccades.
[16] Nyström, M. and Holmqvist, K.. (2010). An adaptive algorithm for fixation, saccade, and glissade detection in eyetracking data.
Pdf: /pdf/44d9c83d05776cbd890da6a4070c0ad30c793754.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FlowLLM: Flow Matching for Material Generation with Large Language Models as Base Distributions | Accept (poster) | Summary: This paper proposes using a second-step Riemannian flow matching (RFM) model, separately trained, to improve the quality of crystal structures generated by a pre-trained large language model (LLM), closely following Gruver et al. [1].
Specifically, the authors first follow Gruver et al. to fine-tune an LLM to enable the generation of crystal structures, and then train an RFM model from scratch to improve the stability, or in other words, quality, of the generated crystals.
The motivation of this work is clear. The reason for introducing this kind of post-processing machine learning model is to address the issue that LLMs cannot deal with real values, while atom positions and lattice parameters in crystals are usually real values.
The experimental results show that introducing this second-step RFM model can improve the stability of generated crystals. However, it faces limitations, such as the inability to generate materials with specific properties, as mentioned in the limitations section. Additionally, there are other well-established pipelines to refine the generated crystal structures that may need to be compared and discussed.
References:
[1] Nate Gruver et al. “Fine-Tuned Language Models Generate Stable Inorganic Materials as Text”. In: arXiv preprint arXiv:2402.04379 (2024).
Strengths: ## Strengths:
1. Clear motivation: LLMs face challenges when dealing with real values, and crystal structures are usually represented by lattice parameters and atom fractional coordinates that are real values. Introducing a post-process module to refine the generated structures of LLMs is reasonable.
2. Good performance when compared with models without such second-step refinement modules. The experimental results show that introducing such a second-step RFM model will increase the overall stability of generated crystals.
Weaknesses: ## Weaknesses:
1. Missing discussions with other second-stage refinement strategies that are well-established. The generated crystal structures from LLMs can also be refined by machine learning force fields (MLFF), such as M3GNet, CHGNet (which the authors have used when calculating stability), or MACE-MP. For these MLFFs, one can directly use them without training a separate RFM model from scratch to refine the generated crystal structures.
2. The contribution may be a little limited. Given that there are well-established methods for the proposed issue in this paper, the contribution of introducing an RFM model to increase stability is somewhat limited.
3. The stability rate drops significantly when removing the duplicates (to SUN rate), up to more than three times (from 17.8 to 4.92). This means a majority of the generated stable crystals are similar or the same as each other.
4. Inability to generate materials with specific properties as mentioned in the limitations section. If the ability of this model is limited to generating stable crystal structures, there are other computationally cheaper models like DiffCSP that the authors have compared with for this task. Thus, it would be beneficial to add more discussions.
Technical Quality: 3
Clarity: 3
Questions for Authors: ## Questions:
Most of my concerns are listed above in the weaknesses. Addressing the weaknesses points and the following questions may impact the final score.
1. Why not just use MLFFs like CHGNet to increase the stability of generated crystals from LLMs?
2. Are a majority of the generated stable crystals similar or the same as each other? If not, why does the stability rate drop significantly when removing the duplicates (to SUN rate), up to more than three times (from 17.8 to 4.92)?
3. Why only compare with methods without this kind of second-step refinement module?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Limitations are discussed in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## > “Why not just use MLFFs like CHGNet to increase the stability of generated crystals from LLMs?”, “Why only compare with methods without this kind of second-step refinement module?”
We appreciate the reviewer’s thorough knowledge on the matter of refinement, but believe a clarification is necessary. All models evaluated in our paper, including the baselines, utilize a second-step refinement process involving relaxation with a machine learning force field (specifically, CHGNet) followed by DFT relaxation. Therefore, the comparison presented in the paper is fair and focuses on the effectiveness of different generative models in producing high-quality structures that lead to stable materials after refinement.
The superior stability and SUN rates we obtained for FlowLLM suggest that **our flow matching based refinement provides significant benefits BEYOND those provided by second-stage refinement with an MLFF.**
One explanation for this superior performance is that MLFF relaxations only find a local energy minimum close to the generated structure, while conditional generation using riemannian flow matching does not have such a limitation. Indeed, on visual inspection of the generated structures, we found many cases where the flow matching refinement produced significantly different structures than what they started with.
## > The contribution may be a little limited. Given that there are well-established methods for the proposed issue in this paper, the contribution of introducing an RFM model to increase stability is somewhat limited.
As mentioned above, the RFM based refinement goes beyond what an MLFF provides, and our superior results compared to other models + MLFF relaxation demonstrates this. We also address novelty in the global rebuttal above.
## Inability to generate materials with specific properties
The reviewer points out the limitation of FlowLLM in generating materials with specific properties. We acknowledge this limitation in the paper, and leave extending our model to property optimization for future work.
However, it's crucial to recognize that the generation of stable materials itself is a significant and challenging problem in materials science. The vast majority of theoretically possible materials are unstable and therefore not synthesizable. The ability to efficiently generate a high proportion of stable materials, as demonstrated by FlowLLM, is a substantial contribution in its own right. It dramatically reduces the search space and computational cost associated with identifying promising candidates for further investigation and potential synthesis. The focus on stability serves as a crucial filter, ensuring that the generated materials have a higher likelihood of practical relevance and applicability.
We also want to point out that the LLM aspect of our proposed model leaves open the possibility of conditioning to produce specific properties in a way that is unheard of for other diffusion-based approaches: asking the LLM to produce materials with those properties. Trying to generate materials with specific properties using LLM prompting is a very interesting, albeit distinct and significant *further* contribution that we leave for future work.
## Stability vs SUN rate
To compute SUN rates, we remove duplicates, and also any generated structures that are close to a structure in the training or validation data.
For our best model (τ=0.7, p=0.9), 1,782 of the 10,000 generated structures are stable, out of which 926 were close to training data samples. Out of the remaining 856 structures, 492 were unique structures. Therefore, the majority of the difference between stability in SUN rates is explained by the fact that our model generates many structures close to structures in the training data. This is expected behavior for a generative model that has been trained to accurately capture the distribution of the training data. Note that the most important aspect is that the SUN rate is high, not whether the non-SUN materials are in the training data or not.
We believe this offers great opportunity for future work where we can use the flexibility of LLMs for complex conditioning, while retraining the ability of our network to accurately predict stable structures (Crystal Structure Prediction) due to FlowMM’s strong performance.
## Cheaper methods
While methods such as FlowMM or DiffCSP may not require tuning a LLM, they also produce far fewer SUN structures. In this dimension, FlowLLM is unequivocally better as its SUN rate is ~50% higher than the runner up.
You raise an important point that one should carefully consider cost tradeoffs in this process, but they are difficult to quantify.
1. Inference costs have been addressed in the global rebuttal. The generation time for FlowLLM is comparable to FlowMM, which is significantly cheaper than DiffCSP, as it requires fewer integration steps.
2. There is no doubt that an LLM with FlowMM will have a higher training cost, but it results in much stronger performance that cannot be replicated by merely applying existing MLFF (or DFT) relaxations to generations from the LLM. The two-step procedure makes a quantitative difference in SUN materials with higher training costs, but still low inference costs compared to DiffCSP.
3. Finally, the majority of the cost in a material discovery pipeline lies in the DFT relaxation that is run to check if the generated material is stable or not. Since FlowLLM generates more SUN structures, and these are generally closer to their ground state structures (see “RMSD to ground truth structures” section in the rebuttal to Reviewer sJw2), FlowLLM requires much less compute for DFT relaxations than competing methods like FlowMM. On average, structures generated by FlowLLM only need 7.08 ionic steps of DFT relaxation, while those generated by FlowMM require 14.35 steps. Therefore, **FlowLLM is able to cut the DFT cost by a factor of 2.**
---
Rebuttal 2:
Title: Thanks for the rebuttal
Comment: Thank you for your responses. In summary, my concerns about following points have been addressed in a proper way.
1. My concern about “why not apply post process MLFF to other baselines” has been addressed, due to the fact that when calculating SUN scores, results from all baseline methods have been further relaxed using MLFF.
2. My concern about the contribution has been addressed, since further applying a flow model as a post process refining module is indeed helpful.
My questions about the drop from stability rate to SUN rate, and the ability to do property conditioned generation have been answered, and these points seem to be valid limitations of the current proposed method.
Overall, this seems to be an interesting paper to be accepted. And I increased score from 5 to 6. | Summary: The paper proposed the FlowLLM, a hybrid approach for material generation that combines LLMs and RFMs, effectively leveraging their complementary strengths. Namely, this method generates samples from the parametric distribution using a two-step procedure, using 1) LLM and 2) RFM. By adopting this hybrid approach, experiments demonstrated that it exhibited better generation capabilities especially in terms of SUN and stability compared to the existing baseline.
Strengths: 1. Since the LLM has been trained on material data, the proposed method can greatly simplify the denoising process.
2. The approach of utilizing both LLM and RFM, two generative model approaches, to complement each other’s weaknesses is intriguing.
Weaknesses: 1. Since the output of the LLM is used as the starting point for the RFM denoising process, the generation time is longer compared to using a simple data distribution if there is no prior sampling of a large number ($N_{tr}$) of examples from the base distribution in advance. I would appreciate a comparison of the generation times between using 1) RFM with a simple base distribution and 2) FlowLLM without prior sampling from the LLM. Additionally, since the fewer required integration steps in the structural validity-number of integration steps plot could be attributed to the benefits of RFM, it would enhance credibility to compare this with FlowMM.
2. Using the trained LLM as the base distribution for the RFM necessitates additional LLM training costs.
3. The Preliminaries section of the paper overlaps significantly with that of the FlowMM paper. The structure and content of the Preliminaries section are very similar, which raises some questions.
4. It is insufficient to verify the performance of the proposed method on only the MP-20 dataset. I believe it would be fairer to cover all the datasets addressed in other baselines.
5. This appears to simply combine the models of CrystalLLM and FlowMM, making the novelty seem somewhat weak. If there are contributions beyond this combination, please let me know.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there any experiments supporting the claim that LLMs are weak with continuous type data and denoising models are weak with discrete type data?
2. In FlowMM, bit representation was applied to atom types. Is there a specific reason for not using this approach?
3. Approximately what is the rate of generating invalid crystals?
4. Could you explain the process of how $\lambda_f$ and $\lambda_s$ were determined?
5. Could you please explain in more detail about the invariance of flat tori and fractional coordinates?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Please refer to the Weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Generation Time
Considering the overall generation “cost” is a good point. Could you please clarify the following?
> I would appreciate a comparison of the generation times between using 1) RFM with a simple base distribution and 2) FlowLLM without prior sampling from the LLM.
We interpret this to mean: What is the time comparison between FlowLLM and FlowMM to generate a sample? If so, we have added these to the global rebuttal.
## Training Time of LLMs
While we acknowledge the additional cost of training the LLM, we believe that the significant performance improvements justify this investment. Additionally, since we fine-tune LLAMA-2 models, the actual cost of fine-tuning these LLMs is comparable to that of DiffCSP. An additional benefit of fine-tuning LLMs is that as better pre-trained LLMs become available, the fine-tuning time and generation quality of these models will keep improving.
## Comparing structural validity vs NFE
We have updated the plot comparing structural validity versus number of integration steps for FlowMM and FlowLLM (see attached fig). FlowLLM converges significantly faster than FlowMM.
## The Preliminaries section of the paper overlaps significantly with that of the FlowMM paper.
We have made some revisions to the Preliminaries section to reduce the overlap, and plan to make a larger revision in the final version. Here we will outline the intended format of our preliminaries section so that you can consider it:
1. Introduction of crystals
a. the representation in math (similar)
b. the representation in an LLM (new)
2. Symmetry of crystals (shorter, focusing merely on what symmetries exist)
3. FlowMM (one or two paragraph summary about how FlowMM works)
4.CrystalLLM (one or two paragraph summary about how CrystalLLM works)
The format is similar to the current paper, but we will reduce emphasis on the sections that build up flow matching for materials and instead focus on summarizing the contributions from FlowMM and CrystalLLM. We will expand the representation section so the reader understands exactly what goes into the algorithm.
We will use the extra space to showcase new results requested by reviewers (e.g. energy/rmsd from predicted to ground state, rejection rates, more datasets) and motivate the ways in which LLMs and Flow Matching methods synergize well.
## More datasets
Including other datasets is certainly a good idea, but it is unfortunately not possible to do so within the rebuttal period given the time constraints.
As a reminder, typically used datasets are Perov, Carbon and MPTS-52. Perov and Carbon are not stable structures and therefore not relevant to unconditional generation. MPTS-52 is indeed interesting due to its chronological data split and we will focus on that for the final version of the paper.
We want to emphasize that CrystalLLM was only tested on MP-20, and FlowMM and DiffCSP were only tested on MP-20 for De Novo Generation, which is the only task we focus on. While we agree that more datasets are better, there is significant precedent for using only MP-20.
## “LLMs are weak with continuous data and denoising models are weak with discrete data”
*Denoising models are weak with discrete data:* This can be seen from the compositional validity metric, where denoising models significantly lag language models. Compositional validity measures the fraction of crystals that are charge-neutral (which is a function of the atom types). This implies that the denoising models are generating many structures with invalid atom type combinations.
*LLMs are weak with continuous data:* Effectively training LLMs on materials data requires using low precision representations of atom positions [Gruver et al. 2024], which limits the range of structures that can be generated by LLMs compared to denoising models.
This can be seen from the coverage recall metric, which measures the portion of the test distribution that is “covered” by generated structures. Crystal LLM obtains a lower recall than the denoising models (Table 1).
## Bit representation for atom types from FlowMM
Our motivation was based on the following observations: 1) the atom types generated by the LLM are superior to those learned by the FlowMM using bit representations; 2) the FlowMM excelled at generating structures conditioned on the atom types (“CSP” task), significantly outperforming prior methods.
These observations suggested to us that we should fix the atom types generated by the LLM and only use FlowMM for updating the atom positions and lattice parameters (similar to the CSP task). This allowed us to leverage the LLM’s strong comprehension of discrete data types AND FlowMM’s strong “conditional” generation capabilities.
It would, however, be interesting to try updating the atom types during denoising in future work.
## How were 𝜆𝑓 and 𝜆𝑠 obtained?
We fixed $\lambda_l = 1$, and swept over $\lambda_f \in $ {100, 200, 300, 400} (added to the manuscript).
## Invariance of flat tori and fractional coordinates
Could you please clarify this question? There are a number of invariances that could be relevant.
The idea of using fractional coordinates is to represent cartesian positions $x$ decomposed into a natural representation of the unit cell $l$ and the fractional coordinates $f$. Due to the fact that there are symmetries of the distribution of materials in cartesian space, some of those symmetries are inherited by the fractional coordinates. An important one is: translating all atoms by a fixed vector does not change the energy of a crystal, and therefore does not change its position in the thermodynamic competition for stability. That means we want to represent a distribution that is invariant to translation in fractional coordinates. We do this using our neural network for flow matching, like in FlowMM. In principle, the LLM part is not translation invariant, although the work by [Gruver, et. al. 24] shows that the LLM is approximately translation invariant.
---
Rebuttal Comment 1.1:
Comment: Thank you again for your valuable feedback! Since we are close to the end of the discussion period, we would greatly appreciate your feedback on our response. We would be happy to provide more details, or answer any additional questions. | Summary: This paper presents a generative model for crystals that uses a LLM as a base distribution and a flow matching model to refine the 3D structure. Both the LLM and flow matching parts are mostly borrowed from prior studies. However, the authors demonstrate a significant improvement in the stability of generated materials by using LLM generated crystal as the prior for the flow matching model.
Strengths: The main strength of the paper is the strong empirical result that shows FlowLLM significantly improves the percentage of stable crystals by replacing the prior distribution used in FlowMM with a LLM prior. The results are validated with DFT calculations and thus convincing.
Ablation study is conducted to isolate the contribution of LLM prior. They show that the atom type is the most important factor that contributes to higher stability.
Weaknesses: Despite the significant improvement in empirical results, the method presented is a straightforward combination of 2 prior works. The method is also unprincipled in some cases. For example, the LLM prior doesn’t always generate valid crystals in the defined domain. It may also generate lattices with angles outside the 60-120 range. The authors say that they simply reject invalid samples from LLM. It would be nice to report the rejection rate.
In Table 1, the SUN rate is significantly lower than the stability rate for FlowLLM. It means either the model is generating non-novel or non-unique crystals. Can the authors also report the novel and unique rates?
The wdist for density is significantly higher than DiffCSP and FlowMM. It indicates that the density of the generated crystals is less accurate.
Technical Quality: 3
Clarity: 4
Questions for Authors: Have the authors looked at the RMSD of the generated crystals to their equilibrium structures? The wdist for density is high for FlowLLM. It might indicate that the model is generating structures far away from equilibrium
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors noted the key limitation of FlowLLM is that it lacks end-to-end differentiability. However, the authors didn’t discuss potential negative societal impact of the model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Novelty
Addressed in the global rebuttal.
## Rejection Rate of the LLM
Addressed in the global rebuttal.
## Novel and Unique Rates
The reviewer's suggestion to report the novel and unique rates separately is well-taken. For our best model, 48% of the generated structures are stable and novel, of which 58% are also unique. Novel structures are defined as those that are not close (as determined by StructureMatcher) to any structure in the training or validation sets. We have added these novel and unique rates to the revised manuscript (section 5.3).
Note that it is possible to obtain more diverse outputs by increasing the softmax temperature of the LLM. The softmax temperature controls the randomness of the sampling process – a higher temperature leads to more diverse and creative outputs increasing the novel and unique rates, but hurts stability rates. This can be seen from our results (Table 1) where different sampling parameters lead to different stability rates, but roughly similar SUN rates. Because we can select different trade-offs between stability, novelty, and uniqueness by tweaking this parameter, we chose to focus on the SUN rate which combines all of them.
## wdist for density
The higher wdist for density in FlowLLM compared to FlowMM suggests that FlowLLM's generated structures might deviate more from the test set distribution in terms of density. However, this doesn't necessarily imply less accurate crystal structures, as FlowLLM could be learning a slightly different distribution of densities that is still physically valid. The fact that FlowLLM produces significantly more stable, unique, and novel materials than FlowMM after relaxation further supports this notion. The difference in density distribution could stem from the distinct base distributions used in each model. FlowMM employs a carefully designed base distribution for the unit cell, which has been shown to improve performance. It's possible that the LLM-learned base distribution in FlowLLM is inherently more challenging to transform to the target distribution, potentially due to multimodality or disconnectedness. Investigating the impact of different base distributions and their interaction with the flow matching process is an interesting avenue for future research.
Given that FlowLLM produces many more SUN materials after relaxation, while maintaining a high coverage, this may not be a major concern.
## RMSD to ground state structure
Checking the RMSD to the equilibrium (relaxed) structure is a nice suggestion. Using methods in `pymatgen`, this requires that the relaxed structure and the generated structure “match” according to `StructureMatcher`. In addition, we also compute the average difference in energy between the initial and relaxed structures, and the average number of steps to relax the structures. All of these were computed for the CHGNet relaxation.
The results are as follows:
* FlowLLM: RMSD = 0.023, Match Rate = 94.9%, Delta Energy = 0.0898 eV/atom, Num steps = 37.97
* FlowMM: RMSD = 0.096, Match Rate = 74.3%, Delta Energy = 0.3031 eV/atom, Num steps = 191.98
Here, match rate is the fraction of cases where the generated and relaxed structures match. **These results show that FlowLLM actually produces structures much closer to the ground state.**
We mentioned in the global rebuttal that inference time depends on training, generation, and (pre-)relaxation. This result shows that FlowLLM is making big improvements by reducing the number of pre-relaxation steps to find the local minimum for an atomic arrangement. We will include this in the paper as another strong result of our method.
## Potential negative societal impact
We thank the reviewer for pointing this out. We have updated the broader impact section to include the following:
*“The quality of technologies, e.g. batteries, depend directly on the materials that comprise them. By upgrading the underlying materials, we could improve the technology by making it more effective, cheaper, or more biodegradable. This is a double edge sword because advanced, cheap technology can both provide strong benefits to society, but can also lead to more waste or other problems. An example case is plastic. Plastic provides untold benefits to humans in convenience and preventing the spread of disease; however, it also produces a huge amount of waste that has a significant environmental impact.*
*One strong point of consideration is that the materials generated by FlowLLM only exist in a computational form. It is expensive and complicated to go from a computational representation of a material with useful predicted properties to an actual technology available to the public. For this reason, we believe that the potential negative impact from computational tools for material discovery to be quite limited.”*
---
Rebuttal Comment 1.1:
Title: Thanks for your response
Comment: Thanks the authors for detailed response and additional results. They've address most of my concerns, so I'd like to increase my score from 5 to 6.
The improved RMSD is a nice result. It would be helpful if the authors can provide additional insights on the source of improvements. Can it be attributed to the better prior or the better flow matching process?
Although I understand that authors use CHGNet to get a fast result on RMSD, DFT result is preferred for the final version of the paper.
I also appreciate the authors adding a paragraph about potential negative societal impact. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their detailed reviews and insightful comments. We are proud to say that all reviewers are recommending acceptance at this time. However, we aim to address any remaining critiques to improve the paper even further!
We summarize the perspective of the reviewers as follows: *Our method, FlowLLM, represents a significant improvement in empirical results when compared to existing work such as FlowMM and CrystalLLM. These results are achieved by proposing a novel and intriguing combination of LLM and Flow Matching in a way that the two methods complement and support one another. The motivation is therefore clear: take the best of the two methods and combine them. At the same time, this combination is somehow simple and therefore may have limited novelty.*
We appreciate the acknowledgement from reviewers that our method is a significant empirical improvement from previous methods. We address the critique of limited novelty below, but want to emphasize here that simplicity and novelty are not the same thing. We believe our proposal may be simple, but one that has heretofore not been explored in the literature and is therefore novel. Also, we view the simplicity of the approach as an asset rather than a hindrance.
## RMSD to ground state
Based on suggestions from one of the reviewers, we added a new set of results to the paper (Appendix D), comparing the generated structures to their ground state. We found that **FlowLLM was generating structures much closer to the ground state** than FlowMM. We believe this to be a very strong result, and re-emphasizes the efficacy of the FlowLLM method. See RMSD to ground state structure section in rebuttal to Reviewer sJw2.
## Novelty
We agree that our method combines two existing techniques. However, we respectfully disagree with the assessment that the combination is straightforward. The novelty of our contribution lies in the innovative combination of LLMs and Flow Matching:
1. **Unique Combination of LLMs & RFM:** To the best of our knowledge, no prior work has integrated the strengths of LLMs and Flow Matching together in one model for materials generation. This unique combination allows us to leverage the strengths of both methods, resulting in a substantial improvement in the rate and cost of generating stable materials.
2. **Exploiting Flow Matching's Flexibility:** FlowMM is the only prior work that uses flow matching for generating materials. The distinctive feature of Flow Matching, its ability to map to arbitrary base distributions, sets it apart from diffusion models. We are the first to recognize and leverage this property to take the output of an LLM and transform it with Flow Matching. It also opens the possibility of future work that is only possible with the flexibility of an LLM such as prompting for certain material properties.
3. **Bridging Disparate Philosophies:** The conventional wisdom in the field often views methods with inductive biases, like FlowMM, as an exclusive alternative to scalable methods like LLMs. Our work challenges this notion by successfully integrating these two seemingly disparate philosophies. The impact of such integration may be applicable beyond material generation to other domains such as proteins or small molecules. Those domains also apply strong, domain-specific inductive biases.
The substantial improvement in performance achieved by FlowLLM, as evidenced by our empirical results and ablation study, is very novel. Furthermore, the LLM's contribution extends beyond atom type prediction, serving as a learned base distribution that significantly enhances the learned target distribution. This combination and the resulting performance gains highlight the innovative nature of our work.
## Rejection Rate of the LLM
We acknowledge the limitations of the LLM prior in generating invalid crystals or lattices with angles outside the defined range. In our experiments, we observed a rejection rate of less than 0.5% due to invalid samples from the LLM (with a softmax temperature of 0.7). Note that since we can identify and reject these invalid cases, the inputs to flow matching are always valid materials. We have included this rejection rate in the revised manuscript (section 4.2).
In future work, we can explore strategies to address these limitations, such as incorporating constraints within the LLM.
## Inference Time
To compare generation speed between FlowLLM and FlowMM, we ran inference with both methods on an A100 GPU to generate 10000 samples. The time to generate 10000 samples breaks down as follows:
`InferenceTime(FlowLLM) = generate_text_time + rfm_time(250 steps) = 89.6 min`
`InferenceTime(FlowMM) = rfm_time(750 steps) = 65.1 min`
Since what we really care about is generating novel materials, a better metric is to compute the time required to generate a SUN material, which is:
`TimePerSUN(FlowLLM) = 10.9 sec / sun material`
`TimePerSUN(FlowMM) = 16.14 sec / sun material`
These results show that the inference times for the two methods are comparable, while FlowLLM generates new SUN materials faster than FlowMM.
Pdf: /pdf/fbcf65844336c006bfde10896b765f0b1c67b849.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fourier Amplitude and Correlation Loss: Beyond Using L2 Loss for Skillful Precipitation Nowcasting | Accept (poster) | Summary: The paper introduces novel loss functions - the Fourier Amplitude Loss (FAL), Fourier Correlation Loss (FCL) and a Regional Histogram Divergence (RHD) - to improve the realism of the predictions of precipitation nowcasting models without the use of generative models. The loss is applied to established precipitation nowcasting methods and is seen to significantly improve the sharpness and overall realism of the predictions.
Strengths: The paper presents a few interesting loss functions, and in particular the RTD is an interesting approach to improving the representation of precipitation quantiles while not overly penalizing small spatial shifts. The paper also shows comparisons of various key metrics with standard model architectures trained with MSE and with the new losses, showing the advantages of the new losses with
Weaknesses: The authors sensibly use MSE loss as a baseline when evaluating their model. However, there is quite a long history of developing new losses for making precipitation nowcasting produce sharper images and improve the representation of extreme precipitation. As such, I would have wanted to see a paper introducing a new loss function do some comparisons to earlier attempts to improve on the MSE loss if it wants to claim state-of-the-art performance. See e.g.:
https://proceedings.neurips.cc/paper/2017/hash/a6db4ed04f1621a119799fd3d7545d3d-Abstract.html
https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019EA000812
https://www.mdpi.com/2073-4433/10/5/244
Technical Quality: 3
Clarity: 3
Questions for Authors: You mention that you introduce FCL because the FAL doesn't fully constrain the spatial positions in the image. A seemingly simpler solution would be to use the FAL in combination with MSE. Have you tried this approach and if so, do you see advantages with using FCL instead?
Also, you are using a loss where you randomize between FAL and FCL and the probability of using FAL increases over time. You motivate this (lines 210-214) as it being tricky to find the correct weighting of FAL and FCL for a linear combination, but I don't quite understand, how is this not functionally equivalent to using a linear combination with the weight of FAL increasing over time?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses some limitations of the approach, but one point that is not mentioned is that unlike generative approaches to producing realistic nowcasting predictions, the model trained with the new loss function cannot produce multiple outputs for the same input, and is thus unable to provide stochastic predictions where multiple outputs from the model are used to characterize the probability distribution of precipitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer’s feedback and suggestion. Here is our response to each of the reviewer bJuK's concern.
---
> The authors sensibly use MSE loss as a baseline when evaluating their model...
Among the papers the reviewer suggested, BMSE [1] performs linear scaling at different pixel ranges, Multisigmoid loss (SSL) [2] preprocesses the images with linear transformations and nonlinear sigmoid function before applying MSE, [3] tested SSIM and MS-SSIM and recommended MSE+SSIM to be the loss function.
Due to limited time and resources, we present the results by adopting these losses with ConvLSTM trained on Stochastic Moving MNIST. For SSL, we follow the paper and pick $i \in (\frac{20}{70}, \frac{30}{70})$ and $c = 20$. The visualization is reported in the appended PDF.
Stochastic Moving MNIST:
|Model|Loss|MAE|SSIM|LPIPS|FVD|FSS|RHD|
|---|---|---|---|---|---|---|---|
|ConvLSTM|MSE|196.42|0.6975|0.2538|451.54|0.6148|1.1504|
|ConvLSTM|SSL[2]|175.17|0.7553|0.1906|348.18|0.7225|0.9840|
|ConvLSTM|MSE+SSIM[3]|184.10|0.7488|0.2573|529.71|0.3514|0.7921|
|ConvLSTM|FACL|180.10|0.7463|0.1092|82.28|0.8172|0.3391|
HKO-7:
|Model|Loss|MAE|SSIM|LPIPS|FVD|CSI-m|CSI4-m|CSI16-m|FSS|RHD|
|---|---|---|---|---|---|---|---|---|---|---|
|ConvLSTM|MSE|30.43|0.6664|0.3057|791.3|0.2772|0.2282|0.1702|0.2653|1.2453
|ConvLSTM|BMSE[1]|45.03|0.5537|0.3804|901.9|0.3484|0.3670|0.3354|0.3999|1.7918
|ConvLSTM|FACL|29.72|0.7168|0.2962|569.1|0.3054|0.3040|0.3351|0.7916|0.4045|0.7916
**We would like to direct the reviewer to view Figures 2 and 3 in the global rebuttal PDF.** We conclude the following summary from the above tables and appended figures, together with some of our previous experience:
1. Weighted MSE (such as BMSE) only “tilts” the focus. BMSE severely over-predicts in exchange for an improvement in CSI, sacrificing all other metrics such as MAE, SSIM, LPIPS, FVD, FSS and RHD.
2. SSL improves the model performance in general, but still cannot generate clear output under stochastic motion.
3. Losses integrating SSIM (and also L1) “dissolves” the prediction to zero over time under uncertainty. Such effect is especially significant for weaker models.
---
> You mention that you introduce FCL because the FAL doesn't fully constrain the spatial positions in the image. A seemingly simpler solution would be to use the FAL in combination with MSE. Have you tried this approach and if so, do you see advantages with using FCL instead?
Our research indeed started with FAL in combination with MSE in the early stage. We replaced MSE with FCL later due to a few concerns.
**Theoretical concern**:
In Appendix D, we showed that FAL can be broken down into three terms: $L_2(X, \hat{X})$ (that is, MSE in the **image space**), $\Sigma 2X\hat{X}$, and $-\Sigma 2F\hat{F}$. The difference $\Sigma 2X\hat{X} - \Sigma 2F\hat{F}$ corresponds to the translation invariant factor and the L2 term corresponds to the correctness of values. When the image is translated (Figure 4 in paper), the two terms almost cancel out, resulting in a perfect invariance to translation.
If we form the new loss as a linear combination of MSE and FAL, the weighting of the L2 term increases, breaking the balance between the two terms. With this intuition, we believe the resulting loss will not generate better results if we mix MSE with FAL.
**Practical concern**:
Despite the theoretical concern, we did test FAL + MSE. Besides, we also tested FAL + SSIM loss since SSIM is also considered a substitute for MSE.
The models are trained on SEVIR:
|Model|Loss|MAE|SSIM|LPIPS|FVD|CSI-m|CSI4-m|CSI16-m|FSS|RHD|
|-|-|-|-|-|-|-|-|-|-|-|
|ConvLSTM|FAL+MSE|33.3|0.7243|0.3611|351.6|0.3391|0.3728|0.4696|0.3463|1.4143|
|ConvLSTM|FAL+SSIM|36.7|0.7247|0.3762|395.3|0.3125|0.3433|0.4405|0.3150|1.6056|
|ConvLSTM|FACL|27.6|0.7402|0.3492|281.8|0.3633|0.3915|0.4838|0.4813|1.3445|
Empirically, FAL+MSE and FAL+SSIM work but FACL outperforms them by some margin. We will append this information to the paper if the reviewer finds it helpful.
---
> Also, you are using a loss where you randomize between FAL and FCL ... how is this not functionally equivalent to using a linear combination with the weight of FAL increasing over time?
A linear combination of FAL and FCL with increasing weighting on FAL is functionally equivalent with the current FACL, but the hyper-parameter tuning is different. We attempted to formulate $FACL = w(t) \text{FAL}(X, \hat{X}) + \text{FCL}(X, \hat{X})$ instead of Equation (6). Choosing a good $w(t)$ includes its curve type and maximum value, which can be challenging. Since FFT is orthonormalized (normalized by the image size $\sqrt{HW}$), we set the maximum value of $w(t)$ to $HW$ to generalize FACL to different datasets and image sizes. For HKO-7, $HW=230400$, which may cause numerical instability when multiplying to the FAL term. To avoid the hyper-parameters going too large, we find switching to a linear decay of selection probability between [0, 1] results in a more stable and neat solution. Since the adjustable range of $\alpha$ is small, it is also easier to control the tradeoff between accuracy and realisticity.
---
> limitation
We think the suggested limitation is built on the premise that the model itself cannot generate multiple forecasts. For example, if a ConvLSTM trained with MSE cannot have ensemble outputs, then the same model trained with FACL cannot do it either. However, if another model trained with MSE can generate diverse outputs, replacing MSE with FACL here can also yield multiple sharper outputs. | Summary: This paper propose a new Fourier Amplitude and Correlation Loss (FACL) to replace the traditional L_2 losses in precipitation nowcasting task. They evaluated the FACL on one synthetic dataset and three radar echo datasets, which demonstrates their method improve perceptual metrics and meteorology skill scores. Besides, they propose Regional Histogram Divergence (RHD) to improve the error margin in meteorological skill scores.
Strengths: 1. The FACL increases the quality of predictions compared to MSE. The model trained with FACL has more small patterns than MSE model, especially for longer lead time. Besides, FACL doesn’t import additional time cost according to Appendix G.
2. Regional Histogram Divergence (RHD), a variation of FSS, replaces the “hit” and “miss” of FSS with regional pixel distribution. It cancels the need of choosing threshold for FSS, alleviating the effect of threshold when evaluating models.
Weaknesses: 1. The evaluation is incomplete and unconvincing. The csi-m of EarthFormer is much lower than the results in [1] (0.3982/0.44). Besides, it could include more methods, such as NowcastNet[1] and DiffCast[2].
2. In figure9, compared to generative methods such as LDCast and MCVD, the FACL methods still lack predictions for extreme value and the distribution of predictions given by FACL is slightly different to the ground truth.
3. The intuition of FACL has no strong connection with precipitation nowcasting.
4. The coefficients of FAL and FCL needs to be selected manually, which may hinder the applications of FACL in other datasets and tasks.
Minor:
Models adopt the “prediction and refine” may help improve the related work section.
[1] Skillful nowcasting of extreme precipitation with NowcastNet
[2] Cascast: Skillful high-resolution precipitation nowcasting via cascaded modelling
Technical Quality: 3
Clarity: 2
Questions for Authors: Why does the author measure the CSI of SEVIR and MeteoNet with the same threshold? The data in SEVIR are VIL images, while the ones in MeteoNet are radar reflectivity (dbz).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The evaluation is incomplete and unconvincing;lack predictions for extreme value;The intuition of FACL has no strong connection with precipitation nowcasting
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer’s feedback and suggestions. Here is our response to each of the reviewer 7mGa's concerns.
---
> The evaluation is incomplete and unconvincing. The csi-m of EarthFormer is much lower than the results in [1] (0.3982/0.44).
We adopted the official Earthfomer model and SEVIR dataloader. However, the training script is re-implemented to ensure a fair test (by only replacing MSE loss with FACL while keeping other factors constant). This causes us to change multiple hyper-parameters, potentially changing the model performance. For example, due to the scheduling for FACL random selection, we should not apply early-stop to the models. Due to resource limitations, the batch size is set to 32 instead of 64, etc. These tiny factors may aggregate and change the overall performance. For example, in DiffCast, their 5-in-20-out setting in SEVIR causes Earthformer to have a CSI-m of **0.2513**, showing that these factors can result in a huge difference in the ultimate performance.
Despite the difference, we do not think it invalidates our evaluation results. Our evaluation does not intend to convince the readers that an arbitrary model with FACL peaks any SOTA model in their optimal state. Instead, with a set of fair tests, we show that a model with FACL is better than the same model with MSE in precipitation nowcasting by its forecasts being more realistic (better LPIPS, FVD) and more skillful (better CSI, FSS, RHD). Such observation is consistently reflected by multiple tables and figures in the paper, both quantitatively and qualitatively. We hope the reviewer can at least agree with such a conclusion.
---
> Besides, it could include more methods, such as NowcastNet[1] and DiffCast[2].
DiffCast natively does not support 13-in-12-out sequences. Moreover, since 384x384 is too computationally heavy for inference, DiffCast downsamples the SEVIR dataset to 128x128. This causes unfair evaluation especially for LPIPS and pooled CSI which are sensitive to the image size.
We adopted the official checkpoint of DiffCast and upsample its forecasts to 384x384 on SEVIR:
|Model|Loss|MAE|SSIM|LPIPS|FVD|CSI-m|CSI_4-m|CSI_16-m|FSS|RHD|
|-|-|-|-|-|-|-|-|-|-|-|
|DiffCast(PhyDNet)|-|31.25|0.7568|0.3505|221.0|0.3588|0.3892|0.4874|0.5074|1.4270|
To sum up, DiffCast only has slightly better LPIPS and FVD, while other VP models with FACL in general have better pixel-wise accuracy and skill scores. (The performance of the other models can be found in Table 2 in the paper; the visualizations can be found in Figure 4 in the global rebuttal PDF.)
Again, we would like to reiterate that our core argument is that VP models trained with FACL achieve more realistic and skillful forecasts than those with MSE. The inclusion of generative models serves as a reference to the readers on how well generative models perform under the same setting. We believe further including more references has limited help in delivering our core argument.
---
> In figure9, ..., the FACL methods still lack predictions for extreme value...
To address the reviewer’s concern, let us present the individual scores with high thresholds: (160, 181, 219) for different pooling sizes 1, 4, and 16 on SEVIR.
|Model|Loss|CSI-160|CSI-181|CSI-219|CSI_4-160|CSI_4-181|CSI_4-219|CSI_16-160|CSI_16-181|CSI_16-219|
|-|-|-|-|-|-|-|-|-|-|-|
|ConvLSTM|FACL|0.2755|0.2418|0.1166|0.3082|0.2735|0.1522|0.4110|0.3682|0.2324|
|SimVP|FACL|0.2848|0.2522|0.1528|0.3133|0.2802|0.1791|0.4188|0.3706|0.2530|
|LDCast|-|0.1772|0.1460|0.0797|0.2210|0.1842|0.1063|0.3511|0.2986|0.1870|
|MCVD|-|0.2366|0.2032|0.1174|0.2807|0.2443|0.1504|0.4161|0.3692|0.2579|
From the table, FACL still exhibits better performance in most thresholds, except under a large pooling size. Although generative models attempt to generate more extreme values, the predicted points are distant from the actual location, forming false positives and hence lowering the CSI score. On the other hand, models trained with FACL, despite not predicting extreme values a lot, hit when they do. After considering everything in the intersection-over-union fashion, FACL can usually outperform the reference generative models in predicting extreme values unless with exceptionally large allowance.
---
> The intuition of FACL has no strong connection with precipitation nowcasting.
The intuition of FACL has the following connection with precipitation nowcasting:
(1) FACL is proposed to solve **blurriness caused by (random) motion**. Previous video prediction models keeps improving on deterministic data by looking into pixel-wise metrics. FACL utilizes the translation invariant property of FAL by treating precipitation events as stochastic data.
(2) FACL is proposed to work on **signal-based data**. We assume there is no cohesiveness between channels such as RGB raw image and medium-range atmospheric forecasting, most preferably having only 1 channel with its range bounded. Among the data types, radar reflectivity and its derivation products such as VIL are good candidates with such properties.
---
> The coefficients of FAL and FCL needs to be selected manually...
We refer the reviewer to the extra experiments for reviewer w5PB, which shows $\alpha$ is consistently stable in the suggested range (0.1 - 0.4).
---
> Minor: Models adopt the “prediction and refine” may help improve the related work section.
Thank you for the suggestion. We will include a few more recent models such as CasCast and DGDM to extend the introduction to the “prediction and refine” type of work.
---
> CSI thresholds for MeteoNet
Thank you for the feedback. The overall idea of choosing the CSI threshold was to select values that span the low, medium, and high range, so we conveniently followed the set used by SEVIR. We will update the CSI thresholds for MeteoNet following the DiffCast paper, by setting them to (12, 18, 24, 32) over 70 dbZ. A preview is reported in the comment.
---
Rebuttal 2:
Title: The updated table for MeteoNet with new CSI thresholds (12, 18, 24, 32)
Comment: Here we present the updated table for MeteoNet with new CSI thresholds (12, 18, 24, 32) based on the reviewer's feedback. Models trained with FACL still largely outperform those trained with MSE. With the increase of the CSI pooling radius, the performance of the diffusion models also drastically increases.
| Model | Loss | CSI-m | CSI_4-m | CSI_16-m |
| --- | --- | --- | --- | --- |
| ConvLSTM | MSE | 0.4388 | 0.3989 | 0.3904 |
| ConvLSTM | FACL | 0.4161 | 0.4876 | 0.6041 |
| SimVP | MSE | 0.4221 | 0.3748 | 0.3627 |
| SimVP | FACL | 0.4008 | 0.4513 | 0.5772 |
| Earthformer | MSE | 0.4004 | 0.3327 | 0.2946 |
| Earthformer | FACL | 0.3594 | 0.4038 | 0.5250 |
| LDcast | - | 0.2353 | 0.3188 | 0.4804 |
| MCVD | - | 0.3645 | 0.4559 | 0.6148 |
---
Rebuttal Comment 2.1:
Title: The updated commonts
Comment: Thanks the authors for the detailed responses. I agree that the proposed FACL is a good alternative to MSE, but I still want to know if the proposed FACL can improve the performance of the SOTA model or what are the benefits of applying it to the sota model
---
Reply to Comment 2.1.1:
Title: Regarding new SOTA models
Comment: We thank the reviewer for the follow-up responses. FACL is proposed to enforce sharpened forecasts for **video prediction** (VP; in contrast to video generation) models. The models we can apply FACL to are mostly **SOTA video prediction models**. The 3 most highlighted models in the paper: ConvLSTM, SimVP and Earthformer, correspond to three types of common model architectures in learning the spatiotemporal patterns: RNNs, CNNs and ViTs. In this category, we are able to provide results on more SOTA models / strong models, such as SimVP v2.
Stochastic Moving-MNIST:
| Model | Loss | MAE | SSIM | LPIPS | FSS | RHD |
| --- | --- | --- | --- | --- | --- | --- |
| SimVP v2 (gSTA) | MSE | 171.58 | 0.7625 | 0.1846 | 0.7464 | 0.9479 |
| SimVP v2 (gSTA) | FACL | 178.55 | 0.7622 | 0.0959 | 0.8210 | 0.3581 |
The SOTA models the reviewer previously suggested were **video generation models**. In Appendix I, we showed that replacing the MSE reconstruction loss with FACL in GAN/VAE improves different metrics to a considerable degree, but replacing diffusion loss with FACL does not influence the results significantly. Moreover, video generation models usually impose constraints to the data such as the image resolution and sequence length, causing difficulties to align with conventional video prediction models.
---
To sum up,
1. Applying FACL to **SOTA video prediction models** is the most preferred way of using FACL, which brings huge improvement in both perceptual metrics and skill scores.
2. Applying FACL to **SOTA VAE/GAN-based models** improves the models.
3. Applying FACL to **SOTA diffusion-based models** has almost no effect.
4. We are unable to include some generative models due to lack of implementation source and discrepancy in experimental settings. It is too computationally demanding to train some models (e.g. DiffCast) on the same datasets in full resolution. | Summary: This paper proposes the FACL loss function and provides theoretical and empiral proofs on how it boosts clarity and structure for images. The paper also shows how the loss behaves with generative setups and additionally proposes a new metric that is tolerant to deformations.
Strengths: - Clearly demonstrates why a naive fourier loss has no benefit over MSE
- Clearly demonstrates how FAL is translation invariant
- Clearly demonstrates why FCL provides global information to predicted pixels instead of pixel-level local information
Weaknesses: - Instead of the stochastic modification to Moving-MNIST, [1] already introduced a chaotic yet deterministic N-Body MNIST to mimic the complexity of Earth system interactions, would have been interesting to see the performance on this dataset
- FVD and LPIPS calculated on the basis of models trained on natural images, difficult to see why it would extend to scientific data
[1] Gao, Z., Shi, X., Wang, H., Zhu, Y., Wang, Y. B., Li, M., & Yeung, D. Y. (2022). Earthformer: Exploring space-time transformers for earth system forecasting. Advances in Neural Information Processing Systems, 35, 25390-25403.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Does the alpha parameter swing wildly between datasets? Any good strategy to figure this out?
- What are the benefits of RHD over a simple wasserstein distance between predicted and true distributions or their quantile error?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer’s feedback and suggestion. Here is our response to each of the reviewer w5PB's concern.
---
> Instead of the stochastic modification to Moving-MNIST, [1] already introduced a chaotic yet deterministic N-Body MNIST to mimic the complexity of Earth system interactions, would have been interesting to see the performance on this dataset.
We are pleased to also present the result of N-Body-MNIST. Due to limited time and resources, we trained ConvLSTM models with MSE and FACL on N-Body-MNIST for 100 epochs, with the results shown below:
| Model | Loss | MAE | SSIM | LPIPS | FVD | FSS | RHD |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ConvLSTM | MSE | 57.22 | 0.8946 | 0.1264 | 178.57 | 0.7601 | 0.2301 |
| ConvLSTM | FACL | 43.11 | 0.9385 | 0.0533 | 80.83 | 0.9198 | 0.1586 |
**For visualization, see Figure 1 in the global rebuttal PDF**
Side notes regarding N-Body-MNIST:
As the reviewer suggested, N-Body-MNIST is deterministic in a closed system. However, we argue that the current research gap (blurry pattern for VP model) is the randomness causing VP models to predict poor quality forecasts. Such randomness cannot be simply solved by “improving the DNN models” as attempted in previous works. Strong models like Earthformer bring huge improvement to N-body-MNIST due to their increase of representation capability. However, upon non-deterministic motion (like Stochastic Moving-MNIST, and actual precipitation events), these strong models’ benefit is limited and they still suffer from blurry predictions. With such intuition, we point the culprit at the MSE loss, which mixes uncertainty into pixel value. By suggesting Stochastic Moving-MNIST as a non-deterministic system, we showed that a tiny stochastic factor (~1 pixel shift per frame) is sufficient to cause severe blurriness for the VP models trained with MSE **regardless of their representation capability** (as visualized in Figure 8 in the paper). FACL, on the other hand, enables the model to learn the expected trajectory and forms a clear output.
---
> FVD and LPIPS calculated on the basis of models trained on natural images, difficult to see why it would extend to scientific data
Due to the blurriness issue, despite DNN models’ remarkable accuracy, meteorologists find that the model forecasts do not “look like” actual observations. The intuition of using FVD and LPIPS is to show how close the forecasts “look like” actual ones, as a quantified approximation to qualitative studies by humans. For example, the DiffCast [1] paper also reported LPIPS to represent perceptual similarity.
We agree with the reviewer that FVD and LPIPS only do not extend well to scientific data. This is also one of our motivations for proposing RHD, to provide a systematic way of measuring the forecast with a closer approximation to human perception. However, we still include the perceptual metrics to provide readers with a comprehensive study in all three aspects: accuracy (MAE, SSIM), visual realisticity (LPIPS, FVD), and skillfulness (CSI, FSS).
[1] D. Yu, X. Li, Y. Ye, B. Zhang, C. Luo, K. Dai, R. Wang, and X. Chen, “DiffCast: A unified framework via residual diffusion for precipitation nowcasting,” in CVPR, 2024.
---
> Does the alpha parameter swing wildly between datasets? Any good strategy to figure this out?
We also conducted experiments on $\alpha$ using other datasets such as SEVIR.
| Model | $\alpha$ | MAE | SSIM | LPIPS | FVD | CSI-m | CSI4-m | CSI16-m | FSS | RHD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConvLSTM | 0.0 | 26.15 | 0.7814 | 0.3502 | 391.37 | 0.4195 | 0.4339 | 0.4710 | 0.5727 | 1.3924 |
| ConvLSTM | 0.1 | 27.60 | 0.7624 | 0.3508 | 289.49 | 0.3984 | 0.4295 | 0.5073 | 0.5640 | 1.2087 |
| ConvLSTM | 0.3 | 27.80 | 0.7587 | 0.3312 | 258.24 | 0.3953 | 0.4288 | 0.5242 | 0.5453 | 1.1710 |
| ConvLSTM | 0.5 | 30.45 | 0.7402 | 0.3492 | 281.82 | 0.3633 | 0.3915 | 0.4838 | 0.4813 | 1.3445 |
Similar to the observation in Table 4, with the increase of $\alpha$ from 0 to 0.3, pixel-wise performance gradually drops and perceptual metrics gradually improve. At around 0.4-0.5, where the model may not fully converge in FCL, the performance starts to decay. Therefore, $\alpha = 0.1$ or $0.2$ as a good default value still holds.
The rule of thumb here is only exposing strong FAL to models that have been fully converged in FCL. (e.g. $\alpha < 0.5$, or have a long training time overall). Once the models converge in FCL, the effect of the proportion of FAL (0.1 - 0.4) is not very significant but a small tradeoff between pixel-wise accuracy and sharpness.
---
> What are the benefits of RHD over a simple wasserstein distance between predicted and true distributions or their quantile error?
One key component of RHD is to divide the forecast map into smaller patches. With this operation, the measured intensity distribution is limited to a local region, rather than the whole map.
As for the part of KL divergence, we believe it plays a similar role as Wasserstein distance. In other words, a similar measurement can be achieved if we keep the patching component and replace the KL-divergence with the EM distance.
---
Rebuttal Comment 1.1:
Comment: Thank you for the thorough response. I particularly appreciate the comments on robustness over the alpha parameters, and am also pleased to see the performance on the additional dataset. I will increase my original score. | null | null | Rebuttal 1:
Rebuttal: To AC and reviewers,
We sincerely appreciate all the constructive reviews from the reviewers. We have summarized the weaknesses the reviewers were concerned the most with, as well as our responses.
---
### Inclusion of more datasets, more previous losses, and more generative models.
Reviewer w5PB suggested also evaluating the models on N-body-MNIST. We performed a simplistic study on N-body-MNIST and the results are similar to that of Stochastic Moving-MNIST, except that more models fail in Stochastic Moving-MNIST due to its random nature. We will also include such a study in the paper.
Reviewer bJuK suggested including other loss functions proposed to improve the model performance. We extended the experiments by also comparing with BMSE, SSL and MSE+SSIM. Although some of them improve the model performance to a certain degree, none could produce a sharp prediction under stochastic data.
Reviewer 7mGa suggested including more generative models such as DiffCast and NowcastNet. We are unable to reproduce NowcastNet promptly due to the lack of implementation of its loss function, but we reported the performance of pre-trained DiffCast under our setting on SEVIR. However, we believe the pool of generative models should not be the most prioritized focus, compared with other aspects such as the generality of FACL and ablation studies.
---
### Picking $\alpha$
Both reviewers w5PB and 7mGa questioned the stability of the choice of $\alpha$. As explained in the rebuttal, $\alpha \in [0.1, 0.4]$ is generally stable. The underlying rule for the selection of $\alpha$ is to ensure the models are fully converged in FCL first before exposure to FAL. As long as the models converge in FCL, the effect of $\alpha$ is not sensitive but a little tradeoff between pixel-wise accuracy and sharpness. Since there is no absolute telling on which type of metrics is superior, we believe the freedom to control the tradeoff should be considered a benefit rather than a limitation.
---
### Other potential formulations for FACL
Reviewer bJuK inquired about other potential formulations for FACL, such as a linear combination of FAL and FCL, and replacing FCL with MSE, etc. We showed that we avoided linear combination due to concerns of bad hyper-parameter causing numerical instability, and our presented formulation of FACL is the most performant among the tested variations.
---
## The appended PDF for figures.
We present 4 additional figures to better illustrate our points to the reviewers.
- To reviewer w5PB: Figure 1 visualizes the output frames of the models on N-body-MNIST.
- To reviewer bJuK: Figures 2 and 3 show the output frames for models with different losses, including BMSE, SSL and MSE+SSIM, trained on Stochastic Moving-MNIST and HKO-7 respectively.
- To reviewer 7mGa: Figure 4 visualizes the output of the DiffCast model for the same test sample as Figure 9 in the paper.
Pdf: /pdf/6233357ba7ec8acfe9d3771b219d71ceb56e9dc2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Graph Neural Networks and Arithmetic Circuits | Accept (poster) | Summary: The paper establishes a series of expressiveness/expressive efficiency equivalence between graph neural networks (GNNs) whose aggregation/combine operations can be computed by arithmetic circuits, and various sub-classes of constant-depth arithmetic circuits. The main challenge consists of showing expressiveness equivalences in the case of the GNN or the aggregation/combine operations being equipped with some non-linearities. In principle, one could then leverage these expressive equivalence results to show limitations of what GNNs can compute through the language of arithmetic circuits.
Strengths: I believe the paper brings a substantial theoretical contribution to the problem of understanding the expressiveness of GNNs, since it proposes using a special class of arithmetic circuits to express the computations of GNNs (and vice versa). By doing so, any limitation shown over this class of arithmetic circuits can be translated to GNNs as well.
The proofs of the theorems are briefly discussed in the main text, and then they are fully presented in the appendix. I believe the proofs in the main text do quite a good job in giving an outline about the full proof. Most of the notation and terminology is very well-explained formally, even though in just a few cases they are not (see weaknesses).
Weaknesses: While I do not have a background about showing expressiveness results of GNNs, I have some background in doing so for arithmetic circuits. The great flexibility of arithmetic circuits when equipped with non-linearities (as the ones showed in the paper) suggests it might be very difficult showing their limitations, and therefore limitations for the GNNs in this way. Therefore, I think the class of arithmetic circuits considered might limit the impact of the contribution to being important but not exceptional. I am happy to reconsider my score if proved otherwise (see also my questions).
I think there are some weaknesses regarding the presentation at multiple points in the paper (hence the 2/4 score), which I list below:
- The introduction is too long (~3 pages) and I was often confused about which content in it was actually relevant for the paper. For example, the first paragraph (L9-16) is not relevant, as also mentioned in L17. Also, L26-50 discuss relationships with fragments of first-order logics in the literature, but this sounds far from the actual contributions of the paper. I have noticed there is no related work section in the paper. I would suggest moving some content of the introduction to a related work section just before the conclusion, if it is really something related to the paper. This would make the contributions appear sooner.
- The contributions of the paper are listed at multiple points of the introduction, and some concepts are often repeated. For instance, L68-72, L106-109, L114-116 look all similar sentences. I think it is sufficient to list the contribution and motivate them (e.g., L129-134) only once.
- The authors should check if all concepts are explained in the introduction before using them, such as "existential theory of real numbers" and the class $\mathrm{FAC}_R^0$ (L86).
- I was quite confused about the meaning of L253, since projection is used also as a gate in the arithmetic circuit. Here it is instead using to extract one element of the pair returned by the circuit GNN. Also, I was expecting $\sigma^{(i)}$ to be a non-linearity (e.g., ReLU), but it is instead the projection here. I think L252-L254 can be rephrased as to make the notation more clear.
- L202 uses the star symbol. I suppose this is the Kleene star operator, but it seems to not be defined.
- Definition 3.15 contains an upward half arrow labelled with $S$. What is the semantic of this symbol? Is this defined somewhere? If not, it is better to define it.
- Since there is a high number of definitions appearing way before the theorems using them, I was wondering if you could remove some definitions. For example, do you need $\mathrm{FSIZE-DEPTH}_{\mathbb{R}^k}(s,d)[\mathcal{A}]$ as a concept in other parts of the paper? Isn't it sufficient to just introduce Definition 2.7 without using it? It looks like the paper only deals with constant-depth circuits anyway.
I strongly recommend all these issues to be eventually fixed in the camera ready of the paper, which should require modest effort.
Technical Quality: 4
Clarity: 2
Questions for Authors: I list my questions below:
1. It seems the paper does not show a theoretical application of the expressive equivalences shown. Do the authors have some insight about which kind of limitations of GNNs can be shown through the language of arithmetic circuits? The arithmetic circuits equipped with non-linear gates appears quite powerful. Are there already some works on showing limitations of what they can compute?
2. The authors introduce the concept of arithmetic circuits computing tail-symmetric functions. What would be a systematic way to build tail-symmetric circuits in practice?
3. The proof of Lemma 3.9 shows how to decompose an arithmetic circuit whose maximum out-degree of nodes is > 1 into another one whose out-degree of nodes is instead 1. Do the authors agree that the space complexity of such decomposition is polynomial in the size of the original circuit only because we are considering constant-depth circuits? If the circuits were not constant-depth, then the complexity of the decomposition would be exponential w.r.t. the depth, right? If this is true, then the authors should make this more clear in the proof.
4. It is not clear to me at which point of the proof of Theorem 3.17 we need the activation functions to be countably injective. Would it be fine if they were simply bijective? Can you please explain why do we need this constraint?
5. The authors mention multiple times that their results are "uniform", but it seems it is not properly defined anywhere. What is a formal definition of "uniformity" in general? Why is it important?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: The only limitation I can think of is related to the theoretical results is that in many cases they require using arithmetic circuits equipped with non-linear gates, for which showing limitations in what they can compute sounds quite hard. See my question 1 above for details.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Weaknesses
1\., 2\. & 3\. We will carefully revise our introduction to eliminate repetition and to ensure that all concepts are sufficiently treated. Currently our introduction starts with a subsection "Background and Related Work", we will split this section into two parts and collect relevant related work discussion to a separate "Related Work" subsection to the end of the introduction. We strongly believe that our work should be put into its context in the introduction by discussing related work there in a sufficient manner.
4\. & 5\. Thanks for pointing this out. The star is indeed the Kleene star and the upward half arrow defines the restriction of a function. We will address this in the paper.
6\. We agree that there is high number of definitions, but our aim is to have a general framework to be able to reason about the complexity of GNNs. We therefore defined the C-GNNs to be as general as possible not just restricting ourselves to a version with only circuits of constant depth.
## Questions
1. Once we fix the circuit and the activation function we get the connection to the specific circuit class this circuit belongs to and are able to study it. Concerning the limits of what can be computed by arithmetic circuits using activation functions, we would like to mention that this is related to the question of which functions can be computed using other functions, or in other words, which functions are "more complex" than others. We are only aware of very little work in this area. Results include an upper bound for arithmetic circuit complexity classes with the addition of the sign function shown by Cucker (Felipe Cucker. PR != NCR . J. Complexity, 8(3):230–238, 1992]).
2. We are not entirely sure if we understand this question correctly. Are you asking if there exists a syntactic characterization of the notion of tail-symmetry, since this notion is purely semantical? If so, this is indeed a very good question that would be interesting to look into in the future.
3. This is indeed true, thanks for pointing this out, we will make this more clear in our proof.
4. We need the countable injectivity in activation functions for simulating circuits by using C-GNNs, since we encode the gate numbers of the circuit we are trying to simulate in the preimage of our activation functions. This way, we retrieve the gate number whenever we apply the activation function at the end of the computation of a node in our C-GNN. In general since countable injectivitiy is the weaker requirement bijectivity would also suffice.
5. Any fixed arithmetic circuit has a fixed number of inputs. This means that any circuit can only work on inputs of a fixed length. This is why we defined families of such circuits for our results. Computational models with this property, i.e., where essentially a "different machine" is needed for every input length are generally called non-uniform. If one can by computational means obtain the circuits of a given family, this family is then called uniform, as there exists an algorithm to compute on every input length. In Boolean complexity theory, oftentimes a Turing machine outputting the nth circuit on input n is used as a witness of a circuit family's uniformity.
Uniformity is important, as it provides us with an algorithm that can work on inputs of arbitrary length, whereas the existence of e.g. a nonuniform circuit family that computes a particular function cannot really be seen as an implementable algorithm.
From the constructive nature of our proofs, it follows that we will have some notion of uniformity, since they essentially define an algorithm to obtain the nth circuit or C-GNN. Defining that notion precisely will still require some work.
---
Rebuttal 2:
Comment: >Are you asking if there exists a syntactic characterization of the notion of tail-symmetry, since this notion is purely semantical?
Yes, or better, I am asking what are the possible ways to construct tail-symmetric functions. Can you also provide some examples?
>Defining that notion precisely will still require some work.
Thank you for the very clear explanation.
I still suggest the authors to try to give an outline of what "uniform" means in the paper.
I have updated my score as to reflect the answers given by the authors.
---
Rebuttal Comment 2.1:
Comment: *Regarding tail symmetric functions:* One very simple way of constructing tail-symmetric functions would be by defining a circuit that computes a symmetric function in arbitrarily many inputs $f(x_1, ..., x_n)$ (e.g. the sum of all inputs) and one that computes a binary function $g(x_1, x_2)$ (e.g. a linear function in two variables). If one then takes the result of the symmetric function as the second input to the binary function, i.e., $h(x_1, ..., x_n) := g(x_1, f(x_2, ..., x_(n+1)))$, then the resulting functin $h$ is tail-symmetric. This particular example would yield one way to define aggregate-combine GNNs as per Barceló et al. [Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, Juan Pablo Silva. The Logical Expressiveness of Graph Neural Networks. ICLR 2020.], but of course any kind of such functions are possible.
It should be noted, that this is by no means a complete characterization of such functions and that investigating functions of that kind would be an interesting avenue for further research. In particular, we deem it interesting to study circuits which compute tail-symmetric functions and ideally obtain a normal form of such circuits, i.e., a syntactical characterization for all functions of that kind.
*Regarding uniformity:* We will add some comments about uniformity to the main body of the paper. | Summary: The paper investigates the computational power of GNNs by demonstrating that the expressiveness of GNNs with different activation functions is equivalent to the capabilities of arithmetic circuits over real numbers. The authors introduce a new GNN variant called C-GNNs, which are equipped with constant-depth arithmetic circuits, and prove that these can compute the same functions as constant-depth arithmetic circuits. This result holds uniformly for all common activation functions and provides insights into the inherent computational limitations of GNNs, suggesting that enhancing GNN expressivity requires more complex functions or different architectures beyond mere scaling.
Strengths: The paper offers a novel perspective on the expressiveness of GNNs, which is a valuable addition to the existing body of research on GNN expressiveness. This could potentially inspire future work in GNN design.
Weaknesses: 1. The paper's presentation may not align well with the expectations of a NeurIPS audience. Even for readers with a background in TCS and combinatorics, the contributions of the paper could be made more apparent. An improved presentation could potentially result in a higher evaluation score. (Questions 1, 2, 3)
2. The paper would benefit from a more thorough discussion and comparison with existing literature within the main body of the text. This would help to contextualize the paper's contributions within the broader field. (Questions 3, 4)
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In Line 115-116, you mention that the results are "not only an upper bound for neural networks." However, existing studies often establish the *equivalence* between GNNs and classic algorithms, implying both upper and lower bounds. Moreover, the exact correspondence seems to be established only for C-GNNs and not AC-GNNs. Could you clarify this distinction?
2. You state that the "result holds for all commonly used activation functions." Yet, the results appear to hinge solely on the injectivity of the activation function, without more detailed results. Since commonly adopted activation functions are injective, it's challenging to discern the expressiveness of these functions. Could you provide more nuanced insights into this matter?
3. The paper does not discuss applications to deep learning, which may limit its appeal to the NeurIPS audience.
- Could you elaborate on potential practical problems that the FAC-circuit family can solve or not solve?
- Additionally, how do FAC-circuits relate to traditional GNN expressiveness results, such as the WL test and first-order logic? Can these classic algorithms be simulated by FAC-circuits?
4. An important related work is missed [1]. This study discusses the equivalence between GNNs and a family of distributed algorithms, as well as the impact on aggregation functions. Furthermore, VVc-GNN appears similar to the C-GNN proposed in your paper. A discussion on this would be beneficial.
[1] Sato, Ryoma, Makoto Yamada, and Hisashi Kashima. "Approximation ratios of graph neural networks for combinatorial problems." NeurIPS 2019.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Regarding the mentioned weaknesses:**
Thank you for pointing out these issues, we will definitely keep these in mind when revising the paper. We strife to appeal to the general NeurIPS audience, and for this purpose focussed also to presenting a thorough general introduction to the paper. This is though a bit of a balancing act given the theoretic nature of the result.
**Regarding the questions:**
1. That is certainly correct. Our intention was not to imply that there aren't other precise characterisation results, instead our intent was to emphasize that our results produces a matching lower and upper bound. Furthermore, the direct correspondence we establish is indeed only achieved for C-GNNs. For AC-GNNs, we merely show an upper bound.
2. We wanted to emphasize that our results cover all commonly used activation functions. The proof technique does rely only on the injectivity of the activation function, and therefore holds for many more functions than just the ones mentioned. The main take away from this should be that if an activation function satisfies a very easy requirement (injectivity), then we get a correspondence between circuits and C-GNNs that have access to the same activation functions.
3. Questions about what different FAC-families can and cannot compute in practice still require further research. We believe that this work produces an application motivating the research on this topic for people working in circuit complexity. Since we are only concerned with real-valued computations, connections to classical FO do at least not seem obvious. Connections to FO over metafinite R-structures, however, seem plausible, and are worth investigating, since there is a close connection between real valued circuits and FO over R-structures.
4. Thank you, this is indeed an important paper as it is a precursor for the logical characterisations of Barcelo et al. (ICLR 2020) and Grohe (LICS 2023). We do not think that VVc-GNNs are directly related to our C-GNNs; in our work the aggregation-combine operations are computed by a circuit from some circuit class that delineates the complexity of the aggregation-combine method. In VVc-GNNs the focus is on the format of the message passing, and does not limit complexity of the aggregation-combine method. However, it is an interesting topic of future work to study whether by relating our C-GNNs to variants of VVc-GNNs we may drop the assumption of tail-symmetry. Finally, if the results of Sato et al. are combined with logical characterisations of
"Hella, Jarvisalo, Kuusisto, Laurinharju, Lempiainen, Luosto, Suomela, Virtema. Weak models of distributed computing, with connections to modal logic. PODC 2012"
the model of distributed computing that the paper by Sato et al. take inspiration from, one obtains a weaker form of the logical characterisation later proved directly to GNNs by Barcelo. We will add a comment about this to the paper.
---
Rebuttal 2:
Comment: I appreciate the authors' rebuttal, which has addressed many of my concerns, prompting me to increase my score to 6. While the practical impact of the paper remains unclear to me, I am convinced that it has the potential to inspire subsequent research in the field. I would suggest that the authors highlight the takeaways and proofs that could spark future work. Additionally, I would like to remind the authors to **include an official comment on the point raised in item 4 of the rebuttal** as soon as possible.
---
Rebuttal Comment 2.1:
Title: Response by authors
Comment: Thank you for your kind comments. We added an official comment that highlights the issues raised in point 4. | Summary: In this article, the authors present new contributions on the understanding on the computational framework provided by GNNs.
In particular, they draw a connection between Arithmetic Circuits and GNNs.
Based on a new definition of GNN using arithmetic circuits, they show that the function computed by a GNN on each node can be thought of a function that an (tail-symmetric) arithmetic circuit computes, by stacking the adjacency matrix of the graph and the features vector. In other words, by using arithmetic circuits as comb and aggregation functions inside a message passing framework, the overall procedure can still be thought of as an arithmetic circuit (and vice-versa).
Strengths: - The paper reads well, in particular the introduction about related work is useful. (although the authors may want to include some additional references for completeness, as detailed below in the Questions section).
- The authors propose a new way to think about GNNs, as a correspondence between a given iteration number, and two family circuits (which used to be the COMB and AGG functions).
- I find the several contributions interesting.
Weaknesses: - The main weakness that I can see, is that the definition of the C-GNNs makes the connection between AC-GNN (or close variants, which is what is implemented and used in practice) and circuits. Since the correspondence between GNNs and tail-symmetric arithmetic circuit is established in the following sense:
$$ \text{AC-GNN} \subsetneq (\text{ C-GNNs} = FAC^{0}_{\mathbb{R}^{k}}[\mathcal{A}] )$$
There is potentially a huge gap between AC-GNNs and C-GNNs, that is partially discussed in Remark 3.4. The obscure part of this correspondance seems to arise mainly because of the aggregation functions: i.e., if we have the sum, max and product as aggregation functions, what part of the C-GNNs can we simulate (with polynomial reductions)?
- Also, although I personally find the results interesting, they are not very deep or insightful: if one provide arithmetic circuit power on each node, and arithmetic circuit power as aggregation, then we obtain at most arithmetic circuits.
Technical Quality: 4
Clarity: 4
Questions for Authors: - The authors emphasize (and I think it is a good thing) that the computational equivalence they obtain with Arithmetic circuits is something that applies to the message passing framework (the essence of GNNs), regardless of the computational power on the nodes. But in their work, the authors provide an arithmetic circuit as computational power to the nodes. In other words, do the authors believe there is an arithmetic circuit that cannot be computed by an AC-GNN?
- The authors define standard notations like $[n]$, notation for multisets, but not for $(\mathbb{R}^{k})^{*}$. This probably refers to all possible n-tuples of $R^{k}$ for every n. However, it becomes confusing at several locations in the paper.
- From the abstract, the authors mention that their result hold uniformly and non uniformly. This is not mentioned again nor explained in the paper (uniform vs. non uniform) (except if I missed it?)
- The definition of an AC-GNN, as an C-GNN does not coincide with the standard one: a function \sigma^{i} is applied after the combination. This does not align for instance with [2020, Barcelo & Al.].
- Definition 3.3, is unclear: The authors may want to elaborate why the proj_1 and proj_2 operations are made consecutively, why is proj_2 needed for $\sigma^{(i)}$ ? I suppose that this is in order to have access to two different arithmetic consecutively.
- The authors may want to consider in their introduction, in order to compare their result (in particular the independence w.r.t. the activation functions) the references:
- Impact of aggregation function: Eran Rosenbluth, Jan Tönshoff, Martin Grohe ``Some might say all you need is sum``
Martin Grohe, Eran Rosenbluth ``Are Targeted Messages More Effective?``
- Impact of activation function on GNN expressivity: Sammy Khalife ``Graph Neural Networks with polynomial activations cannot express all GC2 queries``
- Sammy Khalife, Amitabh Basu ``On the power of graph neural networks and the role of the activation function.``
- In proof of Theorem 3.11 is used Definition A.7, and mentioned (``via the injective function'', but this definition allows several functions (as long as different depth implies different values taken by the function). Can the authors confirm and/or clarify this?
- I had a look at the proof of Theorem 3.11 in details (I find this result the most interesting). Can the authors confirm: the essence of the proof is to group the input of the initial circuits (vector of reals) and the circuit as a new graph input to the C-GNN? The gates and structure of the initial circuit is then processed by the C-GNN, in order to compensate for the excess (addition and multiplication shown in Algorithm 1). The division performed in Algorithm 1 is allowed as it is the same as multiplying by the inverse of a constant (this number is indeed a constant as it relates to the structure of the circuit, not the input).
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors are transparent in the conclusion of their work about the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## Regarding the mentioned weaknesses:
The nature of the relationship between AC-GNNs and C-GNNs lies in the definitions. In Remark 3.4 we assume the aggregation functions to be computable by $\text{FAC}^0_{R^k}$-circuit families, which the max function is not. So to be able to express an AC-GNN that uses max as the aggregation function, a different class of C-GNNs would be needed. The other way around (AC-CNNs simulating C-GNNs) this would boil down to the question which kind of circuit class the circuits used in the C-GNN are part of. For $\text{FAC}^0_{R^k}$ (note: no extra gates) we would intuitively say the AC-GNNs could simulate C-GNNs where every circuit has a restricted depth as the sum or product over n different inputs only takes one gate to compute in a $\text{FAC}^0_{R^k}$ circuit, assuming the circuits are minimal.
While the results may not seem too surprising at first glance, the proofs still involve a lot of technicalities, some mainly attributed to the use of real numbers and functions over those, while others arise from the fact that this essentially shows that message passing does not increase power in this kind of setting.
## Regarding the questions:
1. In their standard definition, arithmetic circuits have access to addition and multiplication gates.
However, functions which make use of arbitrary multiplication in a meaningful way cannot in general be computed by AC-GNNs of classical definition with constant depth, i.e., with sum as its aggregation and a weighted sum as its combine function.
If we were to use different aggregation and/or combine functions in the AC-GNN, depending on the chosen functions, we might very well be able to simulate arithmetic circuits using AC-GNNs as well.
2. You are correct, we will address this in the revised version.
3. From the constructive nature of our proofs, it follows that we will have some notion of uniformity. Defining that notion precisely and proving a formal statement will still require some work.
4. Apart from (not) fixing the combination function, our definition aligns with the one mentioned, as $\sigma^{(i)}$ corresponds to the function f that is part of the combination in Barceló et al. We changed the notation here because we refer to this function very frequently as the activation function and therefore wanted it to be written out separately from the combination function.
5. In our current definition, $\mathcal{N}(i)$ is a function pointing to a CGNN basis for every layer, which in turn is a tuple $(\mathcal{S}, \mathcal{A})$ containing a set of functions computed by a circuit family $\mathcal{S}$ and a set of activation functions $\mathcal{A}$.
We needed the projections to refer to the individual functions $\mathcal{C}^{(i)}$ and $\sigma^{(i)}$ for layer $i$. We will improve the somewhat poor presentation of this.
6. Thank you, we will look into these.
7. This is indeed a misleading formulation. Any numbering satisfying the condition of Definition A.7 would suffice, we will clarify this.
8. Yes, this is indeed the essence of the proof. The details certainly require a bit of technical work, but this portrays the general idea well.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanations. I think this paper will be of interest for the GNN community and researchers at the interface with circuit complexity. I strongly encourage the authors to include the changes discussed and will maintain my score as is. | Summary: This paper provides a new lens through which the expressive power of GNNs is analyzed via arithmetic circuits and derives expressiveness limits for general GNNs.
Strengths: - this paper is indeed modular, well-written, and cleanly organized
- it introduces an interesting perspective for analyzing GNN expressiveness
Weaknesses: - I think this paper would benefit from more detailed discussion on the relationship with WL tests.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Is there any numerical experiments you can run to demonstrate your theoretical findings?
Confidence: 2
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The limitations have been sufficiently addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **I think this paper would benefit from more detailed discussion on the relationship with WL tests.**
The Weisfeiler Lehman tests (and its variants) usually refer to the problem of graph isomorphism, and give insight to the question whether two graphs can be distinguished by some GNN model. Since we are concerned with comparing the classes of functions over the reals that can be computed (using distributed computation/message passing in the case of C-GNNs and AC-GNNs), the WL test does not precisely relate to our main focus. We of course mention WL tests as a well understood technique to compare GNN expressivity.
**Is there any numerical experiments you can run to demonstrate your theoretical findings?**
As it currently stands, there does not seem to be any meaningful experiments that could be run for this model of computation. Essentially, the problem is that not much is known about the computational power of the circuit classes we utilize. Further research into functions computable by practically implementable arithmetic circuits of different sizes and depths would be required.
This could e.g. be done by investigating the Boolean parts of different complexity classes defined by families of arithmetic circuits, i.e., the classes when restricted to Boolean inputs. This research would pinpoint properties that would be provably outside of the capabilities of particular GNN models, whose learnability could be then tested in practice. We believe that our results will motivate research for these circuit classes.
---
Rebuttal 2:
Comment: Many thanks for your rebuttal! | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study | Accept (poster) | Summary: The core question raised in the paper is whether LLMs can also learn through teaching (LbT). The authors demonstrate that the idea of LbT can be easily integrated into existing LLM training/prompting processes and propose three methods, each mimicking three levels of human LbT: observing feedback, learning from feedback, and iterative learning. The authors found that: (1) LbT can induce a progression from weak to strong generalization, where a strong model can improve itself by teaching other weak models; (2) Diversity among students is important, as teaching multiple students may be better than teaching just one student or the teacher itself.
Corresponding solutions:
1)M1 aims to improve the answer quality of LLMs by directly leveraging students' feedback (L1). 2) M2 aims to enhance the intrinsic abilities of LLMs by learning from students' feedback (L2). 3) M3 aims to improve the answer quality of LLMs by iteratively learning from students' feedback (L3).
Strengths: The paper provides a clear and detailed description of the problem and the proposed solutions, supported by ample experimentation.
Weakness:
The idea presented in the article is a well-known idea, but it lacks sufficient assumptions and counterexample analysis, making it too intuitive. See:
[1]Black-box Generalization of Machine Teaching, Xiaofeng Cao, Yaming Guo, Ivor W. Tsang, James T. Kwok, https://arxiv.org/abs/2206.15205v2.
Employing teaching feedback to guide a white-box or black-box learner has been mentioned earlier.
In machine learning, such an idea is the most basic one, especially in machine teaching. See:
[2] Liu, W., Dai, B., Humayun, A., Tay, C., Yu, C., Smith, L. B., ... & Song, L. (2017, July). Iterative machine teaching. In International Conference on Machine Learning (pp. 2149-2158). PMLR.
[3] Zhang, C., Cao, X., Liu, W., Tsang, I., & Kwok, J. (2023, July). Nonparametric iterative machine teaching. In International Conference on Machine Learning (pp. 40851-40870). PMLR.
There are also works for teaching multiple learners: [4] Yeo, T., Kamalaruban, P., Singla, A., Merchant, A., Asselborn, T., Faucon, L., ... & Cevher, V. (2019, July). Iterative classroom teaching. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 5684-5692).
[5] Zhang, C., Cao, X., Liu, W., Tsang, I., & Kwok, J. (2024). Nonparametric teaching for multiple learners. Advances in Neural Information Processing Systems, 36.
The authors lack a survey of the machine learning community regarding teaching.
The author exaggerates their contribution or should strengthen the underlying assumptions of their contribution. Teaching weak learners is related to their variance.
Teaching weak students is challenging unless there is a proper configuration. Otherwise, the improvement in students' learning and reception abilities is limited, as already verified in knowledge distillation (KD).
5)We have only seen separate presentations of the M1-M3 approaches. It is unclear how M1-M3 collectively contribute to the overall logic and objectives of the paper.
6)M3 mentioned in the paper is not sufficient to be considered a core contribution.
7)Would these methods still be effective if there is a significant difference between the student model and the teacher model?
Overall, this draft studies the teaching in the current LLM settings. While the assumptions and conclusions have issues based on current presentation. I suggest the authors tune down their statements.
Strengths: Strengths:
The paper provides a clear and detailed description of the problem and the proposed solutions, supported by ample experimentation.
Weaknesses: Weakness:
The idea presented in the article is a well-known idea, but it lacks sufficient assumptions and counterexample analysis, making it too intuitive. See:
[1]Black-box Generalization of Machine Teaching, Xiaofeng Cao, Yaming Guo, Ivor W. Tsang, James T. Kwok, https://arxiv.org/abs/2206.15205v2.
Employing teaching feedback to guide a white-box or black-box learner has been mentioned earlier.
In machine learning, such an idea is the most basic one, especially in machine teaching. See:
[2] Liu, W., Dai, B., Humayun, A., Tay, C., Yu, C., Smith, L. B., ... & Song, L. (2017, July). Iterative machine teaching. In International Conference on Machine Learning (pp. 2149-2158). PMLR.
[3] Zhang, C., Cao, X., Liu, W., Tsang, I., & Kwok, J. (2023, July). Nonparametric iterative machine teaching. In International Conference on Machine Learning (pp. 40851-40870). PMLR.
There are also works for teaching multiple learners: [4] Yeo, T., Kamalaruban, P., Singla, A., Merchant, A., Asselborn, T., Faucon, L., ... & Cevher, V. (2019, July). Iterative classroom teaching. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 5684-5692).
[5] Zhang, C., Cao, X., Liu, W., Tsang, I., & Kwok, J. (2024). Nonparametric teaching for multiple learners. Advances in Neural Information Processing Systems, 36.
The authors lack a survey of the machine learning community regarding teaching.
The author exaggerates their contribution or should strengthen the underlying assumptions of their contribution. Teaching weak learners is related to their variance.
Teaching weak students is challenging unless there is a proper configuration. Otherwise, the improvement in students' learning and reception abilities is limited, as already verified in knowledge distillation (KD).
5)We have only seen separate presentations of the M1-M3 approaches. It is unclear how M1-M3 collectively contribute to the overall logic and objectives of the paper.
6)M3 mentioned in the paper is not sufficient to be considered a core contribution.
7)Would these methods still be effective if there is a significant difference between the student model and the teacher model?
Overall, this draft studies the teaching in the current LLM settings. While the assumptions and conclusions have issues based on current presentation. I suggest the authors tone down their statements.
Technical Quality: 3
Clarity: 3
Questions for Authors: a) Would these methods still be effective if there is a significant difference between the student model and the teacher model?
b) Teaching weak students is challenging unless there is a proper configuration. Otherwise, the improvement in students' learning and reception abilities is limited, as already verified in knowledge distillation (KD).
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Incomplete analysis from the teaching and irregular contribution statements.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The idea presented in the article is a well-known idea.**
Thank you for suggesting relevant references from the machine teaching literature. We will cite and discuss them in our revised version. While these papers have similarities in terms of how the teacher should organize teaching materials, our work differs from prior studies in two key aspects:
1. As stated in our introduction, "teaching" has been extensively studied in machine learning, with knowledge distillation as a prominent example. While both machine teaching and knowledge distillation aim to **improve the student**, our focus is on whether using "teaching" to get students' feedback can help **improve the teacher**—especially in the context of weak-to-strong generalization (i.e., continuously evolve *stronger* teacher by teaching *weaker* students). This distinction sets our paper apart from existing work.
2. Instead of solving small-scaled tasks using simple analytical models, such as linear or kernel models, as is common in much of the machine teaching literature, our work focuses on advancing the **reasoning capabilities of contemporary LLMs** with tens of billions of parameters. As LLMs get stronger and stronger and even show human-like behaviors nowadays, our work aims to evaluate whether the LbT methodology that has proven effective in providing intrinsic supervision for accurate knowledge building and reasoning in human learning can similarly benefit LLMs.
To summarize, as far as we are aware, this is the first attempt at migrating the LbT idea from the learning sciences [3, 4, 5, 6, 7, 8, 9, 10] to enhance LLMs. We will discuss these key differences, including the motivation, the model scale, and the targeting tasks, between our work and provided references from the machine teaching community in the revision.
**Q2: It lacks sufficient assumptions and counterexample analysis, making it too intuitive.**
Thanks for this comment. We'd like to share our perspectives on this issue. We acknowledge that since our study focuses on a complex and powerful LLM solving complex reasoning problems, it is challenging to derive comprehensive theoretical frameworks with precise assumptions and counterexamples. As LLMs are becoming stronger and even show human-like behaviors nowadays, we believe that advancing algorithms at both ends of the spectrum can help push the field forward:
- Developing methods and models grounded with theoretical guarantees to solve simple or synthetic tasks.
- Building intuitive methods for use with state-of-the-art models to address real-world complex tasks. These methods, inspired by human problem-solving and learning processes, are often empirically validated through experimental analyses rather than being grounded in theoretical frameworks.
There are many other studies that also draw inspiration from human problem-solving techniques to enhance their capabilities, which have created enormous impact on the LLM community. For instance, techniques such as Chain-of-Thought [24] is motivated by the way humans reason through problems step-by-step. Self-Refine [25] is inspired by the human practice of refining written text. Similarly, Relexion [26] emulates how humans iteratively learn and adapt when tackling complex tasks. These works, together with ours, lie in the second category of the spectrum.
**Q3: Teaching weak learners is related to their variance.**
We are not very sure what "variance" refers to in this context. Could you please provide more details on this comment? We are eager to understand your perspective better and address any concerns you might have.
**Q4: Teaching weak students is challenging unless there is a proper configuration. Otherwise, the improvement in students' learning and reception abilities is limited, as already verified in knowledge distillation (KD).**
**Q7: Would these methods still be effective if there is a significant difference between the student model and the teacher model?**
Thank you for the great question. About why teachers can benefit from teaching weak students, our intuition is as follows. To teach a weak student, the teacher needs to organize detailed and high-quality teaching materials that can be digested even by a weak student. This is a high bar for the teachers and the teachers should be able to learn a lot from material preparation and the feedback from the students. However, the students should not be too weak. Otherwise, they may lack the ability to learn anything from the teacher (e.g. through in-context learning), and therefore cannot provide useful feedback.
Indeed, the effectiveness of LbT varies as the teacher-student configuration changes. As demonstrated by our experiments, certain configurations are less effective than others. For example, as shown in Table 2, when ChatGPT-3.5 is the teacher, it is more effective to use LLaMA3-8B as the student than Mistral-7B.
Despite this, our method proves effective even with a significant disparity between the teacher and the student. For example, in M1 and M3, we have shown that LLaMA3-70B (70.16% on 181 MATH) can benefit from teaching LLaMA3-8B (45.85% on 181 MATH). In M1, we demonstrated that GPT-3.5 (59.11% on 181 MATH) can benefit from teaching Mistral-7B (19.88% on 181 MATH). These examples involve models from different families with a large capacity gap.
To further demonstrate the robustness of our method across various configurations, we conducted an additional experiment where LLaMA3-70B teaches Mistral-7B. Please refer to the additional results in the provided PDF for more details.
Regarding the comment "the improvement in students' learning and reception abilities is limited", we are not sure whether we understand this comment correctly, but we aim at improving the performance of the **teacher** rather than the **student**. Please do not hesitate to correct our understanding if necessary.
---
Rebuttal 2:
Title: Additional rebuttal 1
Comment: **Q5: We have only seen separate presentations of the M1-M3 approaches. It is unclear how M1-M3 collectively contribute to the overall logic and objectives of the paper.**
The relationship between M1-M3: M2 is built on top of M1, where we derive the LbT scores from M1 and use them to fine-tune the teacher in M2. The combination of M3 with M1 is discussed in lines 268-277 as a near-term extension.
How M1-M3 collectively contribute to the overall logic and objective: Our aim is to study whether the general idea of LbT can help improve the crucial reasoning ability of LLMs. We summarize LbT in human learning into three levels (which is backed up by observations from the learning sciences [3-12]) and then demonstrate its effectiveness through three case studies, each corresponding to one of the three LbT levels. The exact objectives of each concrete method design are also summarized in Table 1.
**Q6: M3 mentioned in the paper is not sufficient to be considered a core contribution.**
We acknowledge that the idea of organizing teaching materials shares similarities with existing literature, particularly within the machine teaching community, but as discussed in the previous response, our focus on improving the ability of the **teacher** using LLMs is distinct from previous studies.
Furthermore, our aim is not to design brand-new pipelines, but to illustrate how the LbT idea can be integrated into existing LLM training and prompting pipelines. This integration offers exciting opportunities for models to evolve by teaching other (potentially weaker) models. The connection between our methods and existing pipelines is discussed in Section 2.
---
Rebuttal 3:
Title: Please tone donw the statements
Comment: ----"The findings are rather encouraging. For example, similar to LbT in human, we see that: (1) LbT can induce weak-to-strong generalization: strong models can improve themselves by teaching other weak models; (2) Diversity in students is important: teaching multiple students could be better than teaching one student or the teacher itself. "
----"However, the students should not be too weak. Otherwise, they may lack the ability to learn anything from the teacher (e.g. through in-context learning), and therefore cannot provide useful feedback."
Those statements have issues, but the authors don't tone down them. For teaching, "teaching a black-box learner" is feasible, and it is a special track in the machine learning community. "Teaching multiple students could be better than teaching one student" is also wrong. The students' variance in learning ability affects the iterative teaching performance. There are also topics where weak teachers teaching heterogeneous students, i.e., classroom teaching, is applicable. I suggest the authors read more works from Jerryzhu and Sanjoy Dasgupta.
Considering the authors present too many inaccurate statements in the draft and don't realize the proposed issues, I have to tune down my score.
Although the author was working on an engineering project, they could not deviate from the basic theoretical statements to avoid exaggeration and false demonstration of certain issues.
---
Rebuttal Comment 3.1:
Title: We appreciate the reviewer for the prompt reply. Here are the follow-up discussions (part I).
Comment: We appreciate the reviewer for the prompt reply. It seems like the remaining concern lies in the tone of the statements. **We are more than willing to revise statements that may have tone issues.** Please allow us to provide more discussions regarding this concern.
---
> **General Comment: Those statements have issues, but the authors don't tone down them.
Considering the authors present too many inaccurate statements in the draft and don't realize the proposed issues, I have to tune down my score.
Although the author was working on an engineering project, they could not deviate from the basic theoretical statements to avoid exaggeration and false demonstration of certain issues.**
We definitely agree that any research work should avoid exaggeration. In the paper as well as the rebuttal, we carefully stated that our scope is to conduct preliminary exploration of **LbT** in **LLMs for reasoning tasks**, and our conclusions are confined to the empirical results of our experiments. We did not mean to exaggerate that these observations are held generally in other domains. For example, in the title and abstract, we stated the scope clearly "Can LLMs also learn by teaching (LbT)? ... In this paper, we provide a preliminary exploration of this ambitious agenda." before presenting the findings; when describing the results, we use "could/can" when possible. We went through the paper again to double-check our statements. We will revise the tone of two short summary sentences according to the reviewer's suggestion (see the reply to Follow-up Comment 1), and we did not spot other issues beyond that.
Regarding the deviation from the theoretical statements and citations the reviewer brought up, we want to emphasize that their context and settings are quite different from ours. Thus, their results do not necessarily generalize to our problem setting, and their different results are not sufficient to claim that our results are wrong (see the reply to Comment 4).
BTW, we are happy to provide the code for reproducing all empirical results described by our statements. According to this year's Neurips policy, authors are not allowed to post links without reviewers' request. If needed, we will provide an anonymous code link if the reviewer could submit a code request comment.
> **Follow-up Comment 1 -- A quote of two statements in the abstract.**
Regarding the statement "Diversity in students is important: teaching multiple students could be better than teaching one student or the teacher itself". We have chosen the word "could" in the statement "... could be better ..." to describe the empirical results. We will further tone down the statement from "Diversity in students is important" to "Diversity in students might help" according to the suggestion.
Regarding the statement "LbT can induce weak-to-strong generalization: strong models can improve themselves by teaching other weak models". We think the statement is correct as it already uses "can" and describes the empirical results under our settings. We can further tone the summary down from "LbT can induce weak-to-strong generalization" to "LbT might help with weak-to-strong generalization".
We have checked our paper again. Except for these two sentences (which have corresponding explanations with proper tones), we think all sentences are in the proper tones. If there are other statements that could lead to potential misunderstandings, please let us know and we will revise them.
> **Follow-up Comment 2: For teaching, teaching a black-box learner is feasible, and it is a special track in the machine learning community.**
We guess that this comment is about this statement in our original rebuttal: "However, the students should not be too weak. Otherwise, they may lack the ability to learn anything from the teacher (e.g. through in-context learning), and therefore cannot provide useful feedback".
Does the reviewer mean that "teaching a black-box student is feasible, even if it is weak"? If so (if not, please let us know), we'd like to emphasize that this statement in the rebuttal describes the findings under our settings, i.e., LLM for reasoning task, with in-context learning as the "teaching method". The detailed logic is as follows: We employ in-context learning to let the student learn from the teacher's output (using the TP and teacher's TR as the in-context exemplar). If the student is too weak to follow the in-context learning exemplars to solve Exam Problems, then its score on the Exam Problems cannot reflect the quality of the teaching material, i.e., failing to provide useful feedback to further guide the teacher's generation or training.
If we have any misunderstandings about this comment, please let us know!
---
Rebuttal 4:
Title: We appreciate the reviewer for the prompt reply. Here are the follow-up discussions (part II).
Comment: > Follow-up Comment 3&4: "Teaching multiple students could be better than teaching one student" is also wrong. -- with two supporting comments.
Again, this statement was accurately describing our empirical findings in M1 and M3. We discuss the two supporting comments one as follows:
> Comment 3: The students' variance in learning ability affects the iterative teaching performance.
Based on this comment, **we think the misunderstanding might come from the definition of the term "teaching performance"**. For clarification, in machine teaching literature, "teaching performance" refers to the **student's accuracy**, whereas "teacher performance" in our work refers to the **teacher's accuracy**.
In our experiments, under the settings of teaching multiple students and teaching one student, we both report the **teacher's final generation accuracy** with M1/M3. Based on the previous clarification, we think making this claim "Teaching multiple students could be better than teaching one student" is direct and valid from this empirical comparison.
It might be possible that we still do not fully understand what your comment means. We have also asked for clarification in our original rebuttal: "We are not very sure what variance refers to in this context. Could you please provide more details on this comment? We are eager to understand your perspective better and address any concerns you might have". Could the reviewer explain it in more detail? We will be happy to discuss it further.
> Comment 4: There are also topics where weak teachers teaching heterogeneous students, i.e., classroom teaching, is applicable. I suggest the authors read more works from Jerryzhu and Sanjoy Dasgupta.
Thanks again for recommending the work from these researchers. We have already gone through all the literature recommended in the original review. We have discussed the connections between the prior works and our paper in the response to **Q1**. While we find these works from the machine teaching community to be highly inspiring, there are key differences between their focus and ours:
- Machine teaching is defined as *"an inverse problem of machine learning, that is, finding the optimal teaching examples if the teacher already knows the learning parameters."* [27, 28] and aims to improve the performance of the **student** with a minimal set of labeled examples. In contrast, our work focuses on enhancing the performance of the **teacher**, as measured by the **teacher's accuracy** on a test set.
- Machine teaching typically focuses on **small-scaled tasks using analytical models**, whereas we focus on the **reasoning capabilities of contemporary LLMs**.
Therefore, we respectfully argue that it is not appropriate to claim our statements are wrong based on literature and results from a significantly different context.
That being said, we agree that acknowledging these works in our paper will provide valuable context. Thank the reviewer for pointing them out. As we promised in our rebuttal, "We will discuss these key differences, including the motivation, the model scale, and the targeting tasks, between our work and provided references from the machine teaching community in the revision."
---
Thank the reviewer again. As criticisms and discussions are very helpful for enhancing the quality of our paper, we are happy to discuss further!
[27] An Overview of Machine Teaching. arXiv, 2018.
[28] Black-box Generalization of Machine Teaching. arXiv, 2022.
---
Rebuttal Comment 4.1:
Title: Lack of rigor and distinctive contributions
Comment: It is beneficial that the authors are able to approach the concept of teaching from a theoretical perspective, as this can provide a more rigorous logic for the paper. Teaching multiple students is challenging, as the variance among students can affect overall performance, even with effective communication skills.
LLM is a new topic in the community, so introducing teaching as a concept is a reasonable approach. However, from an optimization standpoint, the contributions related to using teaching feedback to improve performance are not novel. As mentioned in teaching theory, this is a well-known paradigm. Similarly, in knowledge distillation (application of teaching), this concept is also not new. Therefore, it is difficult to identify the unique contributions of the draft.
I hope the authors can enhance the quality of their ideas by seeking more stringent assumptions. At the very least, the paper should address issues relevant to the machine learning community.
Academic rigor should be prioritized over incremental improvements in results.
I will maintain my initial score of Borderline Accept. Thank you for the positive discussion.
---
Reply to Comment 4.1.1:
Comment: Thank you for raising the score. We will ensure that our revision includes a discussion on the differences between our work from the machine teaching literature. | Summary: In this paper the authors investigate whether the principles of 'Learning by Teaching' (LbT) in humans can be applied and used in LLMs. To investigate this they propose 3 techniques and map them different to LbT levels.
The first technique M1 aims at improving answer quality by developing a scoring function to rank answers generated by LLMs. For the purpose of this method a strong LLM is made to generate Teaching Rationale (TR) and Teaching Answer (TA) pairs for a given teaching problem (TP). The TR-TA pairs are used as ICL exemplars within student models presented with an Exam Problem (EP) which is of similar type as the TP. The student models then produce Exam Rationale (ER) and Exam Answers (EA) and receive an accuracy score or LbT score based on the correctness of EA. The TR-TA pairs are then selected based on the highest score and the highest sum of scores. The authors demonstrate that this scoring strategy outperforms self-consistency scoring/greedy scoring in a variety of teacher-student model settings and a variety of tasks (e.g. Math, Coding)
The second technique M2 aims to improve the ability of LLMs by leveraging the student feedback. For this DPO is used to finetune the teacher model on the TR and the corresponding LbT scores obtained using M1 along with correctness scores. The authors show that this technique leads to better performance than by simply doing DPO with the correctness scores.
The third technique M3 is based on iteratively improving the exemplars generated for the students by reflecting on the mistakes made by the students. The authors demonstrate that this technique helps improve the performance and also benefits when multiple student LLMs are used to provide feedback.
In summary the authors conclude by suggesting that strong teacher models can improve even when teaching weak students(weak-to-strong generalization) and teaching other students/multiple students works better than teaching itself.
Strengths: The paper is well written and provides several interesting and novel results which could be of significant interest to the research community (particularly alignment). The experimental analysis and discussion of results is sound and well supported by evidence. Along with the introduction of LbT the paper provides concrete methods (M1, M2, M3) to instantiate different aspects of LbT in LLMs. The paper also presents some encouraging findings such as 1) LbT can be used to improve answer quality and model capability, 2) LbT exhibits weak-to-strong generalization and 3) diversity of student models helps.
Weaknesses: The LbT score seems to be reliant on having the final answer being verifiable. It would be interesting to see how this can be translated to cases where the answer produced by the student models is in free-form text. In this regard the current use-cases demonstrated have been in improving performance on tasks such as math or coding. More extensive evaluation on diverse tasks may be needed to assess the generalizability of LbT. A more detailed error analysis on what types of errors the student models make and which of these errors provide the most helpful feedback to the teacher model could be interesting.
Technical Quality: 3
Clarity: 4
Questions for Authors: For the claim that 'Improvements do not saturate as number of TR-TA pairs increase' has there been any analysis done to identify the upper bound or the optimal number of TR-TA pairs needed (cost vs performance)?
General Comments:
There is a typo on line 53.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors acknowledge that the technique requires the strategies to solve the EPs be similar to the ones used for the TPs and that currently the EPs were selected based on human provided information.
As the authors acknowledge the proposed technique also leads to additional inference cost.
Furthermore it remains to be seen how this technique can be extended to tasks wherein the answers generated by student models are not easily verifiable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The LbT score seems to be reliant on having the final answer being verifiable. ... More extensive evaluation on diverse tasks may be needed to assess the generalizability of LbT.**
Thanks for this valuable question. LbT can indeed be extended to open-ended problems, such as dialogue, writing, and open-ended math problems. A natural extension could involve using a teacher LLM to evaluate a student's answer, which would then serve as the LbT score. This approach mirrors how a teacher assesses a student in human learning and is promising due to the strong evaluation capabilities of LLMs [19, 20]. A similar idea has been employed to generalize Self-Consistency (which only applies to verifiable problems) [21] to open-form problems [22].
Besides, as demonstrated in literature from the learning sciences, teachers may directly benefit from providing assessments [8] or reviews [23], and students can provide valuable free-form feedback beyond taking tests or exams, such as peer recommendations [11] and satisfaction questionnaires [16, 17]. We believe that an open-ended evaluation process could potentially offer additional feedback to the teacher LLM, which can be utilized to further enhance its performance. We plan to explore these extensions in future studies.
**Q2: A more detailed error analysis on what types of errors the student models make and which of these errors provide the most helpful feedback to the teacher model could be interesting.**
Thanks for this valuable suggestion. For math reasoning (M2), we have included some analyses in Section 4.3 and presented the examples in Appendix B.2. For competition-level code synthesis (M1), we have included analyses in Section 3.3.2 and presented the examples in Appendix A.2.2.
For math reasoning and code synthesis, students can make both logical and non-logical errors. The non-logical errors, such as computation errors (Math), missing imports (Code), miswritten variable names (Code), and incorrect usage of library functions (Code), are mainly related to the knowledge required by specific EQs and the robustness of the student model, rather than reflecting the quality of the TR-TAs. Thus, they do not provide helpful feedback to the teacher. To address this, for code synthesis, we apply self-debugging to correct non-logical bugs, making the students' exam V-score a more accurate indicator of TR quality and thus more helpful to the teacher.
In response to your suggestion, we will add more detailed analyses and examples to the appendix. For example, logical errors in code synthesis can be further classified into three types: (1) Code with generally correct logic but incorrect handling of boundary conditions, which fails in some hard cases; (2) Code with incorrect logic, such as a wrong DP recursion formula, which typically fails most cases; and (3) Code with correct logic but complexity issues, such as using recursion instead of DP or cached-recursion, leading to time or memory errors on large cases. Our experiments show that when students follow TR-TA in making these three types of logical errors, it provides valuable feedback for the teacher.
For textual reasoning tasks (M3), according to your suggestion, we add more error analysis in the attached PDF. More specifically, we further analyzed M3's behavior in identifying false generalizations [18] with Llama-3-70B as the teacher and Llama-3-8B as the student. Some causes of errors identified by the teacher from student mistakes are: (a) "Lack of examples within the context of multiple speakers or dialogue"; (b)"Insufficient context for understanding the argument"; (c) "Difficulty in handling nuances of everyday language and humor". First, we found that the errors that the teacher identifies from student mistakes are **also applicable to teacher's mistakes**, with 45.2%, 37.1%, and 44.6% of teacher mistakes caused by these three reasons. Second, after the teacher improves the ICL examples by learning from the students, the teacher's mistakes caused by these reasons are reduced by 6.0%, 11.6%. and 13.3%. Finally, mistakes of different students lead to complementary causes of errors that are also very relevant to teacher mistakes.
**Q3: For the claim that 'Improvements do not saturate as number of TR-TA pairs increase' has there been any analysis done to identify the upper bound or the optimal number of TR-TA pairs needed (cost vs performance)?**
Thanks for raising this valuable question. In the rebuttal, we have extended our M1 Math experiments to include 256 TR-TA pairs. Please refer to the updated Figure 4 (Left) in the provided PDF. While the curve does become flatter with the increase in the number of TR-TA pairs, we still observe a non-negligible slope even with 256 TR-TA pairs.
For precision, we will revise this claim to: "The relative improvement over SC increases as the number of TR-TA pairs increases within the range of TR-TA pairs in our experiments".
---
Rebuttal Comment 1.1:
Title: Acknowledgment of rebuttal
Comment: I thank the authors for the clarifications. I am keeping my score
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your great questions and support! | Summary: This paper the use of learning by teaching methods in the context of LLMs.
Strengths: The paper is well written and methodologically sound.
Weaknesses: The concise results should be briefly and systematically stated in the final Conclusion chapter, which is missing.
Technical Quality: 3
Clarity: 4
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The concise results should be briefly and systematically stated in the final Conclusion chapter, which is missing.**
Thank you for the suggestion. We will add a conclusion section and summarize the concise numbers together with the general conclusion there. | Summary: This paper "Can LLMs Learn by Teaching? A Preliminary Study" presents a novel approach towards LLM learning by teaching with three methods: observing student feedback, learning from student feedback, and learning iteratively. The contribute two key findings: teaching student models are an effective way to improve model performance (with fine-tuning), and student models must be diverse (teaching multiple students are better than teaching one student).
Strengths: - The paper is exceedingly well-written and structured, making it easy to follow for a reader. The diagrams are informative and helpful.
- The idea is simple and brilliant -- it could be could be very effective for LLM finetuning and is executed well.
- Particularly M3 is reminiscent of ideas from OpenAI (AlphaFold). This is a very interesting exploration.
- The experiments are plentiful and convincing.
Weaknesses: Figure 4, Table 2, Table 3, and Table 4 must include at least standard error or ideally 95% CI to prove statistical significance of the results.
With the number of acronyms in table 3 metrics, it's hard to understand what exactly numbers mean. Please find some way to include more understandable question categorizations.
The paper is missing literature from the learning sciences backing up the authors' strategy of implementing feedback and their premise of learning-by-teaching. Including a discussion on this would be important for motivating their work. Please examine literature on feedback (i.e. Power of Feedback, Hattie et al. or from the recent EDM, AIED, LAK communities) to strengthen your discussion and motivation. Additionally, it would be useful to examine ways to evaluate teaching quality also from the same communities.
Technical Quality: 4
Clarity: 4
Questions for Authors: What are the interpretability implications of teaching diverse students at different knowledge levels? A discussion on this might be useful in the paper.
Why choose the game theory + math reasoning datasets? The motivation for the dataset choice seems to be a bit missing in the paper.
Can this be extended to open-ended math problems? How and what would need to change in the architecture?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations are discussed mostly in terms of extensions. A discussion of bias perpetuation along the student / teacher pipeline i.e. Wambsganss et al. "Bias at a Second Glance" (COLING 2022) and global interpretability implications would be helpful here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: Figure 4, Table 2, Table 3, and Table 4 must include at least standard error or ideally 95% CI to prove statistical significance.**
Thanks for this valuable suggestion. Due to the high cost of running experiments with LLMs, we reported the standard errors using the "bootstrapping" method [1, 2] in Figure 4 (Left) and Table 6 (Appendix A.1.3). We will clarify this in the revision. Specifically, in Figure 4, standard errors were calculated for K < 128 (number of TR-TA pairs) by selecting K pairs from the total 128 TR-TA pairs. Similarly, standard errors are presented for K = 12 in Table 6, showing that M1 with K = 12 can outperform SC with K=128.
Following the suggestion, we extend the M1 Math experiments in Table 2 and Figure 4 to include up to 256 TR-TA pairs, enabling us to compute standard errors for the original 128-pair setting. We have also included additional experiments for M2 with 3 repeated runs. Please refer to the updated results in the provided PDF.
Nevertheless, we cannot obtain other standard errors through repeated runs during the rebuttal period due to the resource and time constraints. For instance, the experiments in Table 3 are primarily constrained by the rate limit of the LeetCode server.
**Q2: With the number of acronyms in table 3 metrics, it's hard to understand what exactly numbers mean. Please find some way to include more understandable question categorizations.**
Thanks for pointing this out. There is a typo in our original caption stating that each acronym represents a question category. In fact, each acronym in the M1-Code experiments, such as SG-1 or SG-2, represents an individual question. All results in Table 3 are from the Game Theory category. More broadly, we experimented with three general question categories: Game Theory, Bitmasking, and General-1D, and summarized their results in Tables 3/9, Tables 10/11, and Tables 12/13, respectively.
We will correct the typo in the caption and clarify the settings in the revision. Please do not hesitate to let us know if other clarification is needed for Table 3.
**Q3: The paper is missing literature from the learning sciences backing up the authors' strategy of implementing feedback and their premise of LbT. Including a discussion on this would be important for motivating their work. Please examine literature on feedback to strengthen your discussion and motivation. Additionally, it would be useful to examine ways to evaluate teaching quality also from the same communities.**
Thanks for this valuable suggestion. We agree that discussing more literature from the learning sciences will not only strengthen the motivation of our current implementation but also provide valuable insights for further improving the methods. We will add the discussions to our revision.
To back up our concrete implementation of the LbT idea, we review several studies within the field [3, 4, 5, 6, 7, 8, 9, 10] that discuss how students' feedback can enhance a teacher's capabilities. The benefits of such feedback can be attributed to:
- Reflection [7, 11, 12]: Teachers monitor and reflect on how well their ideas are understood by students and this reflection aids in evaluating their own understanding of domain concepts.
- Knowledge-building [4, 5, 6]: Through interactions with students, such as questioning, teachers reflect upon their own expertise and comprehension, and become aware of their own misconceptions, and then attempt to repair them.
Reflection directly supports the design of M1 and M3, where teachers improves their answer quality by observing how well a student answers similar problems (in M1) or reflecting on failure cases from multiple students (in M3). Additionally, knowledge-building backs up the implementation of M2, where students' feedback helps a teacher identify misconceptions, which are then addressed through DPO.
For the roadmap of further improving the LbT implementation, Section 6.3 currently discusses our perspective on the general LbT pipeline, including potentially useful strategies for teaching material design and educational pipeline design. We will extend this section and discuss relevant literature from the learning sciences, based on the framework in Figure 7, focusing on the following points:
- "cooperative learning" [13] implies dividing a difficult topic (as mentioned in "Task-oriented collaborative learning", Section 6.3) into several specific topics so that multiple agents can learn jointly.
- "teachable agents" [7, 9, 10] indicate that appropriately configuring a student's knowledge level enables it to provide more useful feedback to the teacher.
- Regarding the suggestion to "examine ways to evaluate teaching quality", we note that feedback can take many forms [14, 15]. Besides taking tests or exams [4, 5, 6], students can provide valuable feedback through their perception, such as peer recommendations [11] and satisfaction questionnaires [16, 17]. This implies that students could offer free-form feedback, which may be especially beneficial for open-ended problems.
---
Rebuttal 2:
Title: Additional rebuttal 1
Comment: **Q4: What are the interpretability implications of teaching diverse students at different knowledge levels?**
Thanks for bringing up this interesting topic. If our understanding of "interpretability" below does not align with your thoughts, we are happy to provide further clarifications during the discussion phase.
We observed that teaching diverse students at different knowledge levels contributes positively (Table 2, 3, 5).
For M3, the teacher makes verbalized reflections on why the current in-context learning examples are causing students' mistakes. During the rebuttal period, we conducted further analyses on the task of identifying false generalization fallacies [18] to verify that these natural language reflections could help interpret (1) how student diversity helps and (2) in-context learning behaviors.
For (1), the teacher identifies **diverse and complementary causes** (numbered a,b,c) of errors from mistakes of different students (numbered 1,2,3), as listed in Table 3 of the attached PDF, which **help interpret why having diverse students is better**.
We verified that the causes of students' mistakes (1a, 1b, ...) are indeed also causes of teachers' mistakes. Specifically, for each mistake the teacher makes on the test set, we prompt an LLaMa-3-70B to judge which cause categories (1a, 1b,...) does this mistake falls into. Note that one mistake can be caused by multiple causes simultaneously. Then, we report the percentage of teacher mistakes of that cause in Table 3's "% teacher mistakes of the same cause" column. By choosing a student model different from the teacher model, we identify more types of valid causes of teacher mistakes.
Finally, these causes in Table 3 indeed help the teacher improve the ICL examples. After the teacher revises the ICL examples by learning from student 1, the teacher's mistakes caused by 1a, 1b & 1c are reduced by 6.0%, 11.6%, and 13.3%. Based on this, we conjecture that LbT methods like M3 could lead to a more interpretable way to understand the flaws of in-context learning models.
**Q5: Why choose the game theory + math reasoning datasets?**
We regard the ability of **accurate knowledge and reasoning** as the most crucial for advancing the capabilities and broad applications of LLMs. Drawing from human learning experiences, the LbT methodology has proven effective in providing intrinsic supervision for accurate knowledge building and reasoning. Our work aims to evaluate whether LbT can similarly benefit contemporary LLMs.
Therefore, we choose math reasoning and competition-level code synthesis (including game theory, bitmasking, and general-1D) because they require **accurate knowledge and reasoning** and cannot be effectively solved with vague logic or reciting. Besides, these tasks are both popular and challenging, attracting considerable attention from the community. Successfully applying our approach to these tasks would demonstrate its effectiveness and highlight its potential for a wide-ranging impact.
That being said, LbT could potentially benefit other tasks and datasets as well. We leave it to future work.
**Q6: Can this be extended to open-ended math problems? How and what would need to change in the architecture?**
Thanks for this valuable question. LbT can indeed be extended to open-ended problems, such as dialogue, writing, and open-ended math problems. A natural extension could involve using a teacher LLM to evaluate a student's answer, which would then serve as the LbT score. This approach mirrors how a teacher assesses a student in human learning and is promising due to the strong evaluation capabilities of LLMs [19, 20]. A similar idea has been employed to generalize Self-Consistency (which only applies to verifiable problems) [21] to open-form problems [22].
Besides, as demonstrated in literature from the learning sciences, teachers may directly benefit from providing assessments [8] or reviews [23], and students can provide valuable free-form feedback beyond taking tests or exams, such as peer recommendations [11] and satisfaction questionnaires [16, 17]. We believe that an open-ended evaluation process could potentially offer additional feedback to the teacher LLM, which can be utilized to further enhance its performance. We plan to explore these extensions in future studies.
---
Rebuttal 3:
Title: Additional rebuttal 2
Comment: **Q7: A discussion of bias perpetuation along the student / teacher pipeline i.e. Wambsganss et al. "Bias at a Second Glance" (COLING 2022) and global interpretability implications would be helpful here.**
Thanks for raising this worth-discussing topic. In open-domain problems where no ground truth judgment exists (and LLM-based judgment might be needed), it is possible that teaching materials that are "well accepted and learned" by students may not necessarily be more accurate or closer to the truth, but may instead align with the existing biases of teachers or students. This poses a risk of the teacher perpetuating their own biases or indirectly learning the students' biases.
Moreover, while our work primarily focuses on leveraging LbT for mathematical and code reasoning abilities, the importance of addressing these biases is even greater in domains where societal bias and fairness are significant concerns.
We will add a discussion in the revision as this topic should be carefully considered in future work.
---
Rebuttal Comment 3.1:
Comment: Hello authors,
Thank you for your rebuttal! I would like to point out one concern: the rebuttal is strictly supposed to be 6000 characters. Adding not one but two additional comments to address concerns is disrespectful to both my time as a reviewer but more importantly disrespectful to other authors who worked very diligently in reducing their rebuttal to the word limit. I would like the PCs and Senior ACs to suggest a policy of what to do in this situation.
Thank you for the additional experiments. I would request that in Table 3, you could produce std devs. by choosing a smaller sample size or simply bootstrap sampling.
For Q3-Q7, could you suggest exactly what parts of the rebuttal discussion will be included in the paper and in what sections? I feel all these points are important to touch upon.
---
Reply to Comment 3.1.1:
Title: Thanks for your reply and further response from the authors
Comment: Dear reviewer,
Thanks for the reply and follow-up questions. We answer them as follows:
> Adding not one but two additional comments to address concerns is disrespectful.
Thanks for bringing up this concern to us directly. We are really sorry for any feelings of disrespect you have experienced. Our intention was never to violate any rules or show disrespect. We are used to using multiple comments as in previous conferences, which makes us overlook the need to shorten the question quotation and answers. Setting intentions aside, it is our bad that raises your concern, and we fully respect and accept any decisions of you and PCs/SACs.
> I would request that in Table 3, you could produce std devs. by choosing a smaller sample size or simply bootstrap sampling.
Following the suggestion, we use bootstrap sampling (choose 4 TR-TAs from 8 TR-TAs, 20 sets are sampled) to produce a table with standard deviation as follows:
| Models | Metrics | SG-1 | SG-2 | SG-3 | SG-4 | PW |
| - | - | - | - | - | - | - |
| | Avg. | 0.198±0.125 | 0.004±0.002 | 0.209±0.068 | 0.564±0.084 | 0.613±0.128
| T=LLaMA3-8B | M1(Max) | 0.539±0.162 | 0.004±0.004 | 0.220±0.116 | 0.690±0.233 | 0.576±0.346
| S=LLaMA3-8B | Avg.(V=1) | 1.000±0.000 | - | - | 0.647±0.294 | 0.847±0.113
| | M1(Max)(V=1) | 1.000±0.000 | - | - | 0.724±0.332 | 0.939±0.124
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | Avg. | 0.335±0.142 | 0.005±0.002 | 0.292±0.108 | 0.591±0.106 | 0.695±0.076
| T=LLaMA3-8B | M1(Max) | 0.382±0.242 | 0.009±0.004 | 0.503±0.130 | 0.728±0.154 | 0.723±0.121
| S=LLaMA3-8B | Avg.(V=1) | 0.827±0.147 | - | - | 0.653±0.318 | 0.890±0.104
| (Self-debug)| M1(Max)(V=1) | 0.928±0.196 | - | - | 0.815±0.346 | 0.941±0.072
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | Avg. | 0.582±0.128 | 0.007±0.002 | 0.432±0.177 | 1.000±0.000 | 0.643±0.132
| T=GPT-3.5 | M1(Max) | 0.827±0.178 | 0.010±0.002 | 0.631±0.193 | 1.000±0.000 | 0.774±0.129
| S=GPT-3.5 | Avg.(V=1) | 0.993±0.006 | - | 0.746±0.299 | 1.000±0.000 | 0.914±0.082
| | M1(Max)(V=1) | 1.000±0.000 | - | 0.593±0.432 | 1.000±0.000 | 0.962±0.049
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | Avg. | 0.723±0.162 | 0.096±0.119 | 0.586±0.136 | 1.000±0.000 | 0.841±0.070
| T=GPT-3.5 | M1(Max) | 1.000±0.000 | 0.255±0.368 | 0.655±0.323 | 1.000±0.000 | 0.911±0.104
| S=GPT-3.5 | Avg.(V=1) | 0.996±0.004 | 1.000±0.000 | 0.666±0.338 | 1.000±0.000 | 0.897±0.088
| (Self-debug)| M1(Max)(V=1) | 1.000±0.000 | 1.000±0.000 | 0.666±0.338 | 1.000±0.000 | 0.931±0.075
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| | Avg. | 0.838±0.119 | 0.008±0.001 | 0.677±0.135 | 1.000±0.000 | 0.597±0.102
| T=LLaMA3-70B | M1(Max) | 0.900±0.200 | 0.007±0.002 | 0.787±0.398 | 1.000±0.000 | 0.671±0.112
| S=LLaMA3-8B | Avg.(V=1) | 1.000±0.000 | - | 1.000±0.000 | 1.000±0.000 | 0.918±0.127
| | M1(Max)(V=1) | 1.000±0.000 | - | 1.000±0.000 | 1.000±0.000 | 0.965±0.112
> For Q3-Q7, could you suggest exactly what parts of the rebuttal discussion will be included in the paper and in what sections? I feel all these points are important to touch upon.
Q3: We will merge the discussion into the current Section 6.3 ("Borrowing Education Strategies to Improve LLMs"), including both the literature supporting the design and the literature supporting our prospects in Figure 7.
Q4: We will add a subsection under Appendix C ("M3") to provide this detailed interpretation of M3's working process.
Q5: We will add the discussion on this important experimental design choice to Appendix A.1.2 ("Additional Experimental Setups"), and refer to it at the end of Section 3.1.
Q6: We will add these two sentences "LbT can potentially be extended to open-ended problems, such as dialogue, writing, and open-ended math problems. A natural extension could involve using a teacher LLM to evaluate a student's answer, which would then serve as the LbT score" to Section 6.1 ("Limitations and Near-Term Extensions").
Q7: We will add this important discussion to a standalone section in Section 6, positioned between the current Section 6.1 and 6.2. The title will be "Potential Risk of Bias Perpetuation".
Best,
Authors | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their valuable time and effort in reviewing our paper. We are encouraged that the reviewers recognize our paper as novel and interesting (UmKq, mNjM); see its potential impact on the LLM community (UmKq, mNjM); note the abundance of experiments included (UmKq, mNjM, E9eU); and find it well-written and easy to follow (UmKq, pWvs, mNjM). We also appreciate the thoughtful concerns and suggestions, which are both inspiring and valuable for further discussion.
In response to the comments, we have added several new experiments which can be found in the attached PDF:
- We have extended the number of TR-TA pairs in M1 Math to 256 (Table 1 in the PDF).
- We have included additional experiments for M1 Math with LLaMA3-70B teaching Mistral-7B and LLaMA3-70B teaching LLaMA3-8B + Mistral-7B (Table 1 and Table 2 in the PDF).
- We have updated the plot showing the relative improvements of M1 over SC with up to 256 TR-TA pairs (Figure 1 in the PDF).
- We have calculated standard errors for M1 Math with K < 256 (number of TR-TA pairs), by selecting K pairs from the total 256 TR-TA pairs (Table 2 and Figure 1 in the PDF).
- We have calculated standard errors for M2 with 3 repeated runs (Table 3 in the PDF).
- We have added an analysis of the causes of errors identified by the teacher (LLaMa3-70B) in M3 (Table 4 in the PDF).
**References**
[1] Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations. ACL, 2024.
[2] Improve mathematical reasoning in language models by automated process supervision. arXiv, 2024.
[3] Children teach children: Learning by teaching. Harper & Row, 1971.
[4] The influence of the tutee in learning by peer tutoring. Proceedings of the Annual Meeting of the Cognitive Science Society, 2004.
[5] Understanding tutor learning: Knowledge-building and Knowledge-telling in Peer Tutors' Explanations and Questions. Review of
Educational Research, 2007.
[6] Tutor learning: The Role of Explaining and Responding to Questions. Instructional Science, 2008.
[7] Learning by teaching: A New Agent Paradigm for Educational Software. Applied Artificial Intelligence, 2005.
[8] Learning-by-teaching. Evidence and implications as a pedagogical mechanism. Innovations in Education and Teaching International,
2017.
[9] Can you clarify what you said?: Studying the Impact of Tutee Agents' Follow-up Questions on Tutors' Learning. AIED, 2021.
[10] Teach AI how to code: Using Large Language Models as Teachable Agents for Programming Education. CHI, 2024.
[11] Building a metacognitive model of reflection. Higher Eduction, 1999.
[12] Learning from human tutoring. Cognitive Science, 2001.
[13] Evaluation of Jigsaw, a cooperative learning technique. Contemporary Educational Psychology, 1985.
[14] The power of feedback. Review of Educational Research, 2007.
[15] The power of feedback revisited: A Meta-analysis of Educational Feedback Research. Frontiers in Psychology, 2020.
[16] Quantifying quality: The Importance of Student Feedback. Quality In Higher Education, 2001.
[17] Instruments for obtaining student feedback: A Review of the Literature. Assessment & Evaluation in Higher Education, 2005.
[18] Logical fallacy detection. EMNLP Findings, 2022.
[19] Judging LLM-as-a-judge with MT-Bench and chatbot arena. NeurIPS, 2023.
[20] From crowdsourced data to high-quality benchmarks: Arena-Hard and BenchBuilder Pipeline. arXiv, 2024.
[21] Self-consistency improves chain of thought reasoning in language models. ICLR, 2023.
[22] Universal self-consistency for large language model generation. arXiv, 2023.
[23] Learning by reviewing. Journal of Educational Psychology, 2011.
[24] Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, 2022.
[25] Self-refine: Iterative Refinement with Self-Feedback. NeurIPS, 2023.
[26] Reflexion: Language Agents with Verbal Reinforcement Learning. NeurIPS, 2023.
Pdf: /pdf/3be7200dac3b6d53359b7fd0f6046f08210af5c2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Road Less Scheduled | Accept (oral) | Summary: This paper proposes a new optimization procedure for deep learning that yields good any-time performance instead of requiring learning rate schedules that decay to zero. The method can be performed on top of standard optimizers, making it easy to apply to existing networks. The paper theoretically describes the method, proving desirable properties. The method is evaluated across a broad range of tasks, including various deep learning tasks (supervised learning) and simpler convex optimization tasks, typically roughly matching or slightly exceeding the performance of standard baselines. The any-time performance of the method in a single run is comparable to the pareto frontier of standard methods when using different schedule lengths, a very large advantage of the proposed method.
Strengths: - The problem studied is of significant relevance to the community and likely to have a large impact. The use of learning rate schedules is almost universal and this has the potential to change that. Strong anytime performance can significantly simplify experimental procedures.
- Good theoretical contribution.
- Strong empirical evaluation across a large number of tasks with sufficiently strong baselines with very good results.
- Significant novelty as far as I know, I have not seen any works that can provide this type of any-time performance before.
- Paper is relatively easy to follow with clear figures.
- Experimental validation is done over multiple seeds.
Weaknesses: - Deep learning evaluation could be improved slightly, especially with respect to hyperparameter tuning and transferability of the hyperparameter values. The effects of different hyperparameters and how they change between settings is not discussed. The full ranges of the hyperparameter tuning of the method and baseline is not provided.
Technical Quality: 3
Clarity: 3
Questions for Authors: Minor Suggestions:
- L20: Specify whether g_t is evaluated at x or z
- L29: Maybe clarify that this “T in advance” applies to many popular schedules. There are many examples of schedules that don’t have this, e.g. trapezoidal schedules / warmup-stable-decay / reduce-on-plateau.
- L41: No additional hyperparameters over SGD *with momentum* or Adam
- Maybe include equation numbers, they make it much easier for people to refer to the equations e.g. when discussing the paper even if you do not use these references in the manuscript. Labeling equation 1 as such seems odd since nothing else is labeled.
- L178: Why is Lookahead like EMA? I can see the resemblance to PA but not necessarily EMA.
- L290: It would be nice to show some results for different types of weighing when using a warmup
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, no concerns here.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for providing line-by-line comments we will update the camera ready and arxiv to reflect all of your suggestions. We also address them below
L20: Specify whether g_t is evaluated at x or z
- Good point, we will update.
L29: Maybe clarify that this “T in advance” applies to many popular schedules. There are many examples of schedules that don’t have this, e.g. trapezoidal schedules / warmup-stable-decay / reduce-on-plateau.
- Yes, we will state that there does exist schedules that instead require a cool-down period to be chosen, which can be chosen during the course of the run, and that the historically popular reduce-on-plateau schedule also doesn't require a stopping time (with the downside that it is significantly out-performed by modern cosine/linear schedules).
L41: No additional hyperparameters over SGD with momentum or Adam
- This should specify SGD+momentum not just SGD, we will fix.
Maybe include equation numbers, they make it much easier for people to refer to the equations e.g. when discussing the paper even if you do not use these references in the manuscript. Labeling equation 1 as such seems odd since nothing else is labeled.
- We will add equation numbers through the whole paper. Thanks!
L178: Why is Lookahead like EMA? I can see the resemblance to PA but not necessarily EMA.
- By EMA here we mean that the x sequence is an EMA, since it's updated with a fixed interpolation weight $\alpha$ rather than a decreasing weight sequence that would given an equal-weighted average. This can be seen by rearranging the equation for $x$ in the more classical EMA form: $x_{t}=\left(1-\alpha\right)x_{t-1}+\alpha z_{t,k}$. It's not strictly an EMA of the z sequence just of the last z from each inner loop. We will clarify this in the paper.
L290: It would be nice to show some results for different types of weighing when using a warmup.
This is a good suggestion. During the early development of the algorithm we did a sweep of weighting value powers and found that a power 2 works the best. We have not performed a large scale ablation of this value though. We will add this to the camera ready.
We are very glad to see your enthusiasm for our method! We want to note that a submission to the AlgoPerf competition that used our method has achieved a first-place result in the final standings, providing further independent evidence of the practicality of our method. We hope you will consider increasing your score in light of this.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and the additional hyperparameter sensitivity sweeps. I think the additional experiments and proposed changes improve the paper further, and the AlgoPerf results are very impressive as well. Overall I think this is a probably a top 10 paper at this conference, I will raise my score to reflect this. | Summary: The paper proposes a scheduler free method for training, analyzes it theoretically to show that it matches the theoretical benefits of Polyak averaging while recovering the performance of standard cosine decay used in practice. Their approach can be interpreted as being an interpolation of Polyak averaging and Primal averaging (which is equivalent to momentum). Thus it gets the benefits of acceleration while maintaining the low variance of Polyak averaging.
Strengths: 1. The empirical comparison to cosine decay scheduler is done in a thorough manner.
2. Another strength of this work is that alternative approaches such as Polyak averaging or EMA require maintaining one extra copy of the weights.
Weaknesses: Since the focus of the paper is to match theoretical guarantees of Polyak averaging while matching empirical performance of schedulers they also need to compare to Polyak averaging and EMA variants[1]. The paper does not do this thoroughly.
1. Line 22 states “Despite their theoretical optimality, PR averages give much worse results in practice than using the last-iterate of SGD with well-chosen learning rate sequences (Figure 2a)” but Figure 2a states the Polyak-averaging diverges. Clearly this could be fixed with a slightly smaller learning rate since cosine decay did not destabilize?
2. The paper does not compare to EMA which is another method which can empirically reduces variance while not using schedules.
Technical Quality: 3
Clarity: 4
Questions for Authors: Could the authors add comparison to (tuned) EMA? I would happy to increase my rating further if the authors do this.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Weaknesses
1.
This is a really good point. It is possible to get Polyak averaging to converge by using a smaller LR value. For the IWSLT14 illustrative plot we did not include a full LR sweep (as we did with all experiments in the experiments section) when we should have. We have ran this LR sweep and included the updated plot in the global rebuttal PDF. It shows that both Polyak and Primal averaging still greatly under-perform compared to both a cosine schedule and the Schedule-Free approach.
We currently do include Polyak and Primal averaging in all convex-case experiments but we can also run these for the deep learning experiments for completeness. We will run these additional experiments for the camera ready. After reflecting on your question, we realize that there hasn't been any published experiments extensively comparing Polyak or Primal averaging to schedules in the non-convex setting, and their sub-optimality is more a folk-law result. These additional experiments could be useful to the community.
2) See experimental results below.
# Questions
*Could the authors add comparison to (tuned) EMA?*
This turned out to be an interesting question. We ran an experiment on CIFAR-10 using SGD with momentum 0.9, using the same experimental setup as in the paper, and with warmup-then-flat learning rate schedule.
We found that with a careful sweep of the exponential model averaging parameter, we are able to achieve essentially the same test accuracy as we get with Schedule-Free. See below a list of EMA values and the associated test accuracies (single seed), sorted from best to worst:
0.9998: 96.02
0.9996 95.92
0.9999: 95.83
0.99996: 95.743
0.998: 95.66
0.999: 95.60
0.99998: 95.343
0.996: 95.313
0.99: 94.852
0.99999: 92.768
This is an interesting result! It suggests that the averaging in Schedule-Free may have a similar effect to exponential weight averaging, but without the requirement to tune an additional parameter, and also without the additional memory cost of EWA. Thank you for the suggestion that we investigate this. This link definitely warrants further investigation and we will run additional experiments for the camera ready that directly compare to EMA.
We also tried running exponential model averaging using a warmup+cosine schedule for the SGD+M method. This was significantly worse across the board than the warmup+fixed schedule, which is surprising as existing papers usually stack EMA with a schedule. See the results below:
0.998 95.48
0.99996 95.45
0.9996 95.38
0.9998 95.38
0.95 95.33
0.98 95.30
0.999 95.292
0.996 95.24
0.99 95.16
0.9999 95.04
When combining with a schedule there is less sensitivity to the choice of $\beta$.
We hope these experiments help answer your questions. Please let us know if you have any further questions or comments.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for these experiments. Since the experiments with EMA indicate that EMA can potentially match the performance of ScheduleFree I will maintain my score instead of increasing it.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thanks for the quick response during the discussion period, we appreciate it. This EMA approach is interesting, and we want to note a few things:
- The idea of using EMA as a replacement for a schedule has not appeared in the literature before as far as we are aware, and when used with a decreasing schedule as is normally done, we see that the EMA doesn't match Schedule-Free performance.
- The EMA requires precisely tuned momentum of 0.9998, it underperforms with similar momentum values such as 0.9996 and 0.9999. The momentum value also depends on the length of training, so does not maintain the nice properties of Schedule-Free learning. This extreme hyper-parameter sensitivity would make it very difficult to use in practice.
- The EMA sequence requires an additional memory buffer over Schedule-Free as the base optimizer must also use it's own momentum to prevent divergence. This is not necessary with Schedule-Free learning as the interpolation operation stabilizes the iterate sequence.
So overall, EMA doesn't have the same low-memory and low-tuning properties of Schedule-Free learning. Based on those notes, we hope you will reconsider raising your score. | Summary: The authors propose a method for training neural nets without needing to know the total training time T in advance. This contrasts with a standard training setup where one chooses a learning rate schedule in advance, and the schedule must include an a priori chosen stopping time T (e.g. cosine schedule or linear decay). If the method really works as advertised it therefore simplifies practical training setups and reduces the cost of training, by avoiding the need to run multiple training runs with different values of stopping time T.
The method works by tracking two sequences: one is the "noisy" sequence z that integrates noisy gradient updates, the second is the "smoothed" sequence x that tracks a uniform average over all past noisy iterates z. Gradients are evaluated at an interpolation between the current noisy z and smoothed x sequence, and added back to the noisy z sequence which is then averaged into the smoothed x sequence.
Authors present theoretical results about their technique for convex optimisation, and evaluate the method across a set of training benchmarks, some that involve tuning and some that don't. The results look promising.
Strengths: - the idea is legitimately really cool and clever and seems very original to me. People know that weight averaging on top of standard training can boost performance, so it's really cool to ask "is there a way to fold this back into the training loop" as the authors do.
- my broad understanding of the technique is that it's doing noise filtering in a clever way, and I think that this idea can inspire others to explore this direction, which is great
- the paper is very clearly written and I like the mix of the intuitive description of the technique, the formal results, and then the experimental evaluation
- it's great that the authors engaged with the MLCommons Algorithmic Efficiency benchmark, and from what I understand the results are very promising.
Weaknesses: Okay as I've mentioned, I think the idea is cool and original and can inspire followup work, which is all we really want of a paper. So I'm going to focus on giving you feedback which is intended to be constructive and I hope can help you generally improve the quality of the work or followup work that you do. I'm not going to hold the paper hostage, but if I feel like you engage meaningfully with my critiques I'm willing to upgrade my score. All of my critiques can be addressed either by making minor amendments to text or by adding a limitations section at the end of the paper *or* by running more experiments and addressing them, but I won't insist on which and leave this up to the authors.
### **Paper may be over claiming slightly on its results**
The paper makes claims to be **"one of the largest and most comprehensive machine learning optimization algorithm evaluations”**. Can you quantify in what sense this is true? To me the evaluation doesn't feel that comprehensive---for instance, there is only one plot on hyperparameter sensitivity, while a broad swathe of the community has started to explore this question as standard in evaluations papers. The authors also state their method **"has no notable memory, computation or performance limitations compared to scheduling approaches"**. In my opinion, the truth value of this statement is essentially unknowable without more thorough evaluation. It would be fine to amend the statement by saying "WE BELIEVE THAT our method has no notable memory, computation or performance limitations compared to scheduling approaches."
### **Hyperparameter sensitivity is not thoroughly investigated**
In my opinion, the main potential limitation of the work is that tuning hyperparameters (learning rate, weight decay, interpolation constant $\beta$) may implicitly be tuning an LR schedule. This is especially the case for this technique since the role of these hyperparameters is quite subtle given the unusual form of the update sequences. Now the authors may argue that they address this with their MLCommons Algorithmic Efficiency experiments that reportedly use the same set of hyperparameter across all tasks, but in my opinion this could be a fluke relating to all the MLCommons tasks involving similar training times for instance. What I would want to see to convince me otherwise is to see experiments that sweep across beta for different training times, and check if the optimal beta is invariant to training time, say.
### **Paper applies a potentially ill-suited theoretical framework to deep learning, and as such there are gaps**
The paper comes from a part of the community that uses convex optimization frameworks to do algorithm design, and then extends the methods to deep network training. Sometimes the extension can feel a bit forced: e.g. let's switch to Adam instead of SGD as our base optimiser because it works better, let's use learning rate warmup because it works better, etc. The paper uses theorems within this convex framework to support its significance, but as far as I can tell these theorems don't answer basic questions a practitioner would have about the technique: e.g. Theorems 1 and 2 provided no guidance on setting the interpolation parameter $\beta$ since they hold uniformly for all $\beta \in [0,1]$.
I want to point out that another part of the community is trying to build up understanding and frameworks for deep learning fundamentals that involve less of a "jump" from convex optimisation to deep learning. To point out two examples:
- https://arxiv.org/abs/2103.00065 "Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability"
- https://arxiv.org/abs/2405.14813 "Scalable Optimization in the Modular Norm" (concurrent work)
I think it would be great to at least read these works and potentially engage with them---not in this paper, but in the future
### **Final note**
I sometimes worry about giving frank feedback as I think I can sometimes have a blunt style. I want to re-emphasise that I really like this paper. I think the idea is clever and creative, and I really encourage you to pursue these directions further. The feedback is intended constructively. I am currently giving the paper a score of 5 because a score of 6 requires "no major concerns with respect to evaluation" and I am not there yet. I can get there if I become confident that you've engaged with my review.
Technical Quality: 2
Clarity: 3
Questions for Authors: - do you know if beta is sensitive to e.g. training time? Fine if you don't know, but consider adding a limitation saying future work could investigate this closer
- which sequence is used to do inference x, or y, or z? It might be worth clearly flagging this in this paper. Sorry if I just missed it.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: "Note that we excluded one benchmark problem, ResNet-50 training, as neither AdamW nor NAdamW can hit the target accuracy on that task." ---- this feels artificial to me. I still want to know how schedule free AdamW does!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Strengths
We are glad that you find our work interesting! We think that this approach has wide applicability and we are doing our best to spur adoption by doing an open source release in both PyTorch and Jax. An entry using our method was entered into the AlgoPerf competition earlier this year, and the final results were just released this week, showing that Schedule-Free AdamW ranked **first** in the self-tuning division, beating out all other entries including the AdamW+Cosine implementation tuned by the organizers. We hope this independent verification will help you assess the potential impact of our work.
# Weaknesses
## Paper may be over claiming slightly on its results
*"one of the largest and most comprehensive machine learning optimization algorithm evaluations”. Can you quantify in what sense this is true?*
We will reword this phrase as we agree it is not precise enough. We wanted to convey that we run experiments on a larger set of test problems than any prior paper on deep learning optimization. As far as we are aware, there is no paper out there that runs as many full-training (not fine-tuning) experiments as we do on deep learning problems. The are larger comparisons on logistic regression and other convex problems in past literature, as well as model fine-tunings, so we need to be careful to adjust our wording to clarify.
We have ran additional hyper-parameter sensitivity experiments, please see the section below.
*"has no notable memory, computation or performance limitations compared to scheduling approaches"*
- We will cut this phrase from the conclusion as we agree it is overreaching at the moment.
## Hyper-parameter sensitivity is not thoroughly investigated
This is a good point, we don't currently investigate hyper-parameter sensitivity thoroughly. We ran a series of additional experiments and have included these results in the PDF attached to the global rebuttal. We ran as many experiments as we were able to in the 1 week rebuttal period, so they are not completely conclusive. We will run additional experiments for the camera ready.
### Learning Rate
We ran a series of learning rate sweeps for the smaller problems in our test bench (these sweeps are time consuming to run), CIFAR10, CIFAR100 and SVHN. We see that the sensitivity to the optimal learning rate is very similar to SGD with momentum, with some variability observed. For CIFAR10/100 the curves look essentially the same, and for SVHN the curve has a somewhat tighter peak.
## Momentum
For our momentum sweep, we see a similar result. For both problems that we ran, neither Schedule-Free or the SGD+Momentum baseline show clearly more or less sensitivity to the hyper-parameter, and there isn't a clear pattern.
### Beta v.s. Training Time
See the Questions section below.
## Paper applies a potentially ill-suited theoretical framework to deep learning, and as such there are gaps
The analysis framework to use here is a question we constantly struggle with as researchers in this area. The convex framework is usually considered ill-suited to describe deep learning problems, but it leads to results that seem to work well in practice (this paper's method is a great example). Averaging of iterates for general non-convex problems can't be analyzed as far as we are aware, since you can end up averaging between winding valleys of the parameter space and the resulting points don't necessarily have low loss.
In terms of the optimal values of $\beta$, we are investigating this as followup work. So far we have a result that for stochastic quadratic problems $\beta=0.5$ is optimal. A similar result holds for strongly convex problems although it may not be strictly optimal in that case. In other convex settings, larger $\beta$ values than 0.5 are potentially better, but there is a dependence on the local quadratic-ness of the problem, and the more quadratic, the closer to 0.5 the value should be.
## Questions:
1) *do you know if beta is sensitive to e.g. training time?*
We have run an additional experiment to understand this dependence for the ImageNet test problem, it is included in the global rebuttal PDF. We ran training out to 200 epochs instead of 100, to see if larger $\beta$ values given improvements at the very late-stages of training.
We find that at the early stages of training, the value of $0.9$ gives the highest test accuracy, but at approximately epoch 50, the larger value of $0.95$ starts to dominate (we didn't run $0.9$ in our sweep in our paper, so this result is an improvement over the previous results). This $0.95$ value dominates for the remainder of training, and the larger value of $0.98$ is far behind in terms of test accuracy. The beta=0.75 run doesn't do better at the beginning, so it's not the case generally that smaller beta is always better at the beginning.
So to summarize, smaller $\beta$ values can perform better for shorter duration training runs, but this dependence on training time for the optimal beta seems very mild.
2) *which sequence is used to do inference x, or y, or z?*
This a good suggestion, we don't currently clearly indicate which sequence is used. We will update the paper. The sequence used for inference should be the $x$ iterate.
If you have any further questions please don't hesitate to ask.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I'm just responding with my thoughts to provide a chance for more discussion:
**"Schedule-Free AdamW ranked first in the self-tuning division"** Congrats on this. I'm not 100% sure how competitive this track was (5 submissions, 4 of them struggling). But still it's really impressive you beat the baseline set by the organizers, so fair play! And again I already mentioned in my review I wasn't sure what the diversity of benchmarks was in terms of problem scale.
**"We ran as many experiments as we were able to in the 1 week rebuttal period"** I appreciate that it's going to be difficult to get a lot done in one week, so thank you for running these extra experiments. Still these experiments are not quite what I had in mind. **I think the paper would strongly benefit from running these sensitivity sweeps at a variety of problem scales, e.g. transformer training on 10k, 100k, 1M, 1B tokens and check whether the minimum of each sweep lines up or not.** Sweeping training time on a log grid here I think is pretty crucial. Also varying number of tokens instead of number of ImageNet epochs avoids confounding factors from doing multiple passes over the same data. But thanks for running the ImageNet one---and I suppose it is showing some dependence of optimal beta on training time.
**"The convex framework is usually considered ill-suited to describe deep learning problems, but it leads to results that seem to work well in practice"** Again I'll be a little blunt, but please understand I'm just trying to convey my opinion and then we can discuss. From my perspective, it seems like the convex opt stuff is a great source of inspiration in your work, driving you to test novel algorithms that no one else is thinking of. This is amazing. However, I'm not recalling any evidence in your paper that shows that the theory you develop has any connection to how your method actually works in practice. I'm a bit suspicious about this aspect.
---
Reply to Comment 1.1.1:
Title: Response
Comment: This is a good point. There is a significant difference in scale between the problems in AlgoPerf, ranging from solving in minutes to days, but it's not at the level of modern large-scale training runs, as the entire benchmark suite is designed to run beginning-to-end in under a week on 8 V100 GPUS (2 generations old!).
We are also very interested in evaluations at larger scale as this is crucial for wide adoption of our method. We plan to run these evaluations over the coming months. These evaluations are time-consuming as the optimal LR values, momentum and decay for Schedule-Free differ from schedule-based runs, and so we need to run large log-spaced parameter sweeps.
*"The convex framework is usually considered ill-suited to describe deep learning problems, but it leads to results that seem to work well in practice"*
Most researchers in this area take a very different approach than we do. We want to elaborate on this further as it's central to the development of this method.
It's clear that non-convex deep learning problems don't inherit many of the nice-properties of convex problems. For example, it's often the case that methods that rely on estimating smoothness fail when extended naively to the deep learning setting. However, there is a growing belief in the community that the *online* convex optimization framework, which only assumes bounded gradient rather than smoothness, can accurately model the behavior of non-convex learning. This was captured in a recent COLT Open Problem submission: https://www.ehazan.com/open_problem.pdf. Any progress on this open problem, developing a theoretical black-box reduction between the two settings, would justify our method.
So as you say, we don't directly answer the question of why our method works in the setting of our deep learning experiments. We don't know how to answer this question concretely yet. It's wild that it works - averaging of iterates makes little sense on non-convex problems, and so some level of local convexity must be present, and seemingly far more than we would have believed before we started this line of research! We are actively researching this ourselves, and we hope that our work also spurs others to look into this further. | Summary: This paper proposed an optimization style for stochastic optimization, termed schedule-free optimization, which is free of manually selected/tuned learning rate schedulers. The proposed method enjoys both the worst-case optimal last-iterate convergence and promising empirical performance. The authors also introduced a general online-to-batch analysis framework to analyze the proposed method. The empirical results show that the proposed method outperforms the state-of-the-art methods in various tasks, including training large language models (LLMs) and image classification tasks.
Strengths: 1. The mismatch between the theoretically optimal learning rate schedule and the practice is an important problem. This paper re-introduces and re-emphasizes this issue in a well-educated manner.
2. The proposed online-to-batch analysis framework is general and insightful.
3. The positive result in the empirical verification of equation (9) introduces an interesting new open problem that is worth further investigation: why could well-known convex/non-convex optimization problems as such an "bounded-regret" property under gradient descent?
4. The proposed schedule-free method enjoys both the worst-case optimal last-iterate convergence and promising empirical performance.
Weaknesses: 1. In the update rule of $z_{t+1}$, using $y_t$ instead of $z_t$ could incur additional forward passes during the optimization process (of neural networks through backprop), which especially undermines the efficiency and applicability of the proposed method in the scenario of model parallelism.
2. The adaptation from schedule-free SGD to schedule-free AdamW is still in a heuristic way.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Why don't you report any empirical results that combining schedule-free methods and parameter-free methods? By the way, certain parameter-free literature is cited in the Appendix for the use of proof. The authors are encouraged to elaborate on the connection between this citation and the current work.
2. Any conjecture/explanation on the empirical observation on Line 158?
3. How do you deal with the "decoupled weight decay" issue in adapting the most popular optimizer for LLMs (i.e., AdamW) to the schedule-free style?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No major societal limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Title: Reviewer 1 Rebuttal
Comment: Thank you for the detailed review, we appreciate it. We are very glad you see the potential of our method. We would like to start by addressing each of your concerns separately:
Weaknesses:
1) *additional forward passes* This is a good point! In our PyTorch and Jax implementations, we are able to avoid any additional forward or backward passes over regular AdamW, except when BatchNorm is used, where the extra forward passes are needed to warmup the BN running_mean/var buffers before a test loss evaluation. These extra forward passes have less than 2% runtime overhead. Given this low overhead, and the fact that batch-norm is quickly being replaced by layer-norm and other alternatives in modern architectures, we don't think this is a major issue.
2) *heuristic adaptation for AdamW* As you say, our Schedule-Free AdamW version is heuristically adapted from standard AdamW. We will add a section to discuss this to the paper. AdamW doesn't have a concrete regret bound, but related methods that "fix" AdamW's theory, such as AMSgrad, can also be used. Since our main Theorem shows how any method with a proven regret bound can be used adapted to be Schedule-Free, it is then no longer heuristic. Our theorem handles arbitrary learning rates including warmup, and any weighting sequence, so this bound directly applies to the methods as practically implemented.
Questions:
1) *LR Adaptation* We have done some initial investigation into integrating Schedule-Free with various recently described LR adaptation methods. This integration is surprising subtle as the theory doesn't apply directly for the technical reason that D-Adaptation and DoG methods show bounds on the stochastic loss, NOT the regret, whereas our method specifically requires a regret bound. This difference is crucial to those methods behavior from the theory point of view. We so far have not seen good results when using Schedule-Free in combination with D-Adaptation or Prodigy. We are planning to next investigate integrating with regret-bound methods such as Mechanic and Coin-Betting approaches in the future, and we hope to see good results there.
2) *Line 158 - Momentum allows larger LR values* In the quadratic setting, for large beta values, our method becomes convergent for larger learning rates then you would normally be able to use. This is likely related to the similarity between our method and momentum which provides acceleration in the quadratic case. However, we don't believe this result is indicative of the actual behavior of the method on deep learning problems as it only holds in the quadratic setting. We have further theoretical results in this direction that didn't make it into the paper by the submission deadline, and we see in the stochastic strongly convex case, our method provides provably better convergence rate bounds with beta \in (0,1) than is achieved with Polyak or Primal averaging.
3) *decoupled weight decay* Integration of optimization methods with decoupled weight decay is a surprisingly subtle issue. Currently we have a switch in the open source code that supports weight decay calculated either at $y$ (the default used in our experiments) and $x$. We find empirically that calculating it at $y$ performs a tiny bit better on some problems but it largely doesn't matter. From a theory point of view, you can make arguments for both forms, which is why we support both. We will add additional clarification around this difference in the paper.
We hope that you will consider raising your score given our comments above. We believe this work has potential to become the default training method in the future given it's strong advantages over classical schedules and it's ease of use. An entry using our method was recently announced as the first-place winner in the self-tuning track of the AlgoPerf competition, a further indication of the potential of our method. It significantly beat all other entries including a tuned AdamW.
---
Rebuttal Comment 1.1:
Comment: The authors largely covered my questions, with an interesting one left. I reiterate that point here, which I mentioned in the original review.
> By the way, certain parameter-free literature is cited in the Appendix for the use of proof. The authors are encouraged to elaborate on the connection between this citation and the current work.
I mean line 620 ("Recall from D-Adaptation ...") in the current version of the paper.
---
Reply to Comment 1.1.1:
Title: Response
Comment: Thank you for the quick response!
To answer your question, we have adopted the analysis framework developed recently by Defazio and Mishchenko as it makes the proofs of many results that link stochastic optimization and online optimization shorter and simpler. This framework is actually generic and can be applied to most methods for online optimization, not just parameter-free methods such as D-Adaptation. Our theory doesn't directly use any results that relate to parameter-freeness, just the basic online-learning analysis inequality they develop. It makes the dependence on the norm of $s$ more explicit in the bounds compared to classical inequalities such as used in Orabona's Online Learning monograph. | Rebuttal 1:
Rebuttal: Several reviewers requested additional plots to examine the hyper-parameter sensitivity of our method. We have ran as many experiments as time allowed in the rebuttal period, covering the sensitivity to learning rate, momentum and the duration of training.
Pdf: /pdf/e6bdfecf9b532a9f137faf26ab1e4280f0346317.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning-Augmented Algorithms with Explicit Predictors | Accept (poster) | Summary: The paper introduces a new framework for learning-augmented online algorithms. Learning-augmented algorithms (aka algorithms with predictions) are a very active subfield of beyond worst-case algorithm analysis. For online algorithms, which cope with uncertainty on their input, it gives an algorithm an addition information in form of a prediction. This prediction can be the (unknown) online instance or any other form of additional information. A learning-augmented algorithm is usually analyzed w.r.t. the quality of the given prediction.
In the most commonly used setup for online algorithm, the prediction is assumed to be generated upfront from some black-box predictor, or arrives online together with requests. There are also models where multiple predictions are initially available to an algorithm.
The authors of this paper introduce a new framework which integrates both the predictor and the learning-augmented algorithm.
The main motivation is that a predictor might update its prediction while the online instance is being revealed, and, thus, increase the prediction's quality over time.
In their model, they assume that there are given a set of hypotheses, which can be thought of different characteristics which the coming input could have.
The predictor can compute at any time a new prediction using these hypotheses and the input which has been revealed up to that time.
The authors distinguish between two settings: the easier "realizable" setting, which assumes that the hypothesis which corresponds to the actual input is contained in the hypotheses class, and the harder "agnostic" setting, which does not make this assumption.
The authors then apply this framework to three well-studied online problems in the learning-augmented area: caching, load balancing and non-clairvoyant scheduling. Some of their new bounds improve over previous work.
Strengths: - The paper introduces an interesting generalization of the traditional learning-augmented framework. I think that this new view on learning-augmented algorithm can have an impact on this rapidly evolving field, since it untangles concepts which have already been used partially. I also appreciate the clear structure of the framework into predictor and algorithm.
- The framework is conceptually well motivated and the authors show that it can be applied to at least three different and well-studied problems, which proves that it is practicable and can be used for relevant problems.
- The authors present improved guarantees for previously well-studied online problems.
Weaknesses: - A minor weakness can be the algorithmic novelty for the concrete applications. It seems that the algorithms follow in many cases the predictions given by the predictor. The predictors are tailored to the specific applications and are also rather simple.
- Another minor weakness is the close relation to the established 'multiple prediction' setting. From my understanding, one can see the hypothesis class the set of given predictions, and the goal (in the agnostic setting) is to guarantee a bound w.r.t. the best one.
Technical Quality: 3
Clarity: 3
Questions for Authors: - As far as I understand it one can model the traditional black-box setting via the agnostic setting by using a singleton hypothesis class and the 'traditional' prediction error as loss. Is this true? I am not sure if I missed this, but I think this could also be an interesting insight.
Further comments:
- Line 1140: there is a comma before the second inequality.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As far as I see, all limitations have been properly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for highlighting the positive aspects of our work and pointing out a typo.
> As far as I understand it one can model the traditional black-box setting via the agnostic setting by using a singleton hypothesis class and the 'traditional' prediction error as loss. Is this true?
Yes, the classical setting of learning-augmented algorithms is indeed subsumed by the case where the hypothesis class contains only a single hypothesis. We will make sure to clarify this in the final version of our manuscript.
> A minor weakness can be the algorithmic novelty for the concrete applications.
In our framework, we achieve better bounds than the works on multiple black-box predictions using simpler algorithms (e.g. see our results for non-clairvoyant scheduling and caching).
This can be viewed as a strength rather than a weakness. Moreover, the idea of separating the learning and algorithmic parts is novel.
> Another minor weakness is the close relation to the established 'multiple prediction' setting.
Our setting is able to model the established 'multiple prediction' setting. However, our setting is more general and flexible. As can be seen from our results, it naturally leads to clean and simple algorithms which we find important.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I thank the authors for their rebuttal, and for answering my question about the connection to the traditional setting.
My opinion about the paper and my score remain unchanged. | Summary: This paper proposes a new framework to use machine learned predictions to improve online algorithms. Recent work that has done this (in a slightly different framework) goes under the name learning-augmented algorithms (LA algorithms) or algorithms with predictions. The main difference in this work compared to the usual LA algorithms is that these authors propose integrating the learning problem into the algorithmic problem by allowing the algorithm to adaptively update its prediction based on the new information it receives. Additionally, instead of maintaining a single prediction, this framework maintains a hypothesis class of information about the instance. The paper exemplifies this framework on caching, online load balancing, and online non-clairvoyant scheduling. For all 3 problems they study a realizable and agnostic setting, where in the realizable setting, the hypothesis class contains h(I) for I the actual underlying offline instance, and in the agnostic setting, the information corresponding to the true offline instance is not necessarily in the class.
Strengths: - This framework could indeed be a better way to study algorithms with predictions. In combining the learning process with the algorithmic procedure we get the benefit of (1) demystifying how the prediction is obtained and (2) allowing ourselves the flexibility of updating the prediction as we receive new information about the input.
- Using this more all-encompassing approach, the framework is able to improve upon some bounds from LA algorithms.
- The authors did a good job in the related work differentiating their model from that on algorithms with a prediction pf portfolios, as well as data driven algorithms.
Weaknesses: The algorithms proposed are augmenting simple procedures with this richer, more flexible hypothesis class, and the analyses involved are very straightforward. This isn’t inherently a bad thing. But there is a lot of work in the LA algorithms scene lately, and the work that stands out to me the most is the work that highlights some interesting technicality of the problem that wasn’t understood through classic worst-case analysis approaches. I’m afraid the problems that this model has been applied on did not result in technical components that stood out, and thus could lead to this paper blending into an already noisy scene. I will leave more specific related questions that may address this fear under “questions”.
Technical Quality: 4
Clarity: 3
Questions for Authors: - How do you see this framework allowing us to understand the complexity of problems (from a beyond-worst case analysis perspective) in a better way than the standard LA algorithms framework or other beyond worst-case analysis frameworks? In other words, how does the ability to (1) have a hypothesis class of information for the prediction and (2) update the prediction allow us to further surpass worst case lower bounds?
- It’s unclear to me how large the hypothesis class should be. You get this trade off in the agnostic setting between having a large \ell should lead to smaller \mu^* indoor bounds, or a small \ell might imply larger \mu^*. Have you thought at all for the problems you study about how you can balance these?
- A few times, you mention something like “each hypothesis h(I) provided sufficient information about the instance in order to determine an offline optimal solution OPT(I)”. This statement cannot be literally true, as you study a problem whose offline version is NP hard. I think you mean something like, h(I) provides sufficient information about the instance in order to find an offline solution \alpha OPT(I) for alpha the apx factor, and then alpha comes into your completive ratio or regret bound.
- Something seems weird in the appendix, section B.1 and C.1 have the exact text repeated. It seems B.1 is maybe a totally unnecessary section.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Most (if not all) works on learning augmented algorithms solve some learning problem in an ad-hoc manner by assuming a black-box access to some predicitions. In our framework, we "open the box" and define the learning problem explicitly. This allows for a deeper and methodical integration of results from learning theory in learning-augmented algorithms, and as we demonstrate, gives rise to simple and clean algorithms with better performance bounds than previous works. We hope that our "white-box" perspective will inspire further work.
> how does the ability to (1) have a hypothesis class of information for the prediction and (2) update the prediction allow us to further surpass worst case lower bounds?
We use hypothesis classes to model the learning problem (as is standard in learning theory). Thus, when the input is compatible with one of the hypothesis in the class, our algorithms can outperform algorithms in the standard learning augmented framework. This can be illustrated by a simple example of a bi-modal input: there are only two possible scenarios, and there is no single set of pre-computed predictions which works well for both. In our framework, we can learn the actual input sequence while processing it and then apply the approriate set of predicitons. As illustrated in the paper, this idea naturally extends to more than just two inputs and as well as to noisy scenarios.
Similar situations happen in other frameworks of beyond-worst case analysis. For example, the input may be stochastic in the first scenario
while possessing some specific pattern in the second scenario.
One might argue that the argument above does not apply to problems like caching, where previous works did consider predictions that are generated over time. As noted by reviewer qEgX, in such a setting, the predictor may adapt its predictions based on the part of the input seen so far. Our framework captures this in a principled way which allows
theoretical analysis of such adaptability without committing to specific ad-hoc predictions.
> It’s unclear to me how large the hypothesis class should be.
In principle, the hypothesis class reflects prior knowledge the algorithm design has on the learning problem. In practice the class can be determined e.g. using collected statistics of past inputs.
Our framework is effective when one can indeed define a restricted hypothesis class which can capture the regularities in the inputs. (Otherwise we are back in the classical worst-case setting where any input is possible.)
Our framework allows one to try and reduce the size of the hypothesis class by ``clustering'' the hypotheses. Indeed this is likely to increase the distance of the actual input to the class. We specify the performance as a function of the size of the class and the maximum distance of an input to the class. Studying the relation between these parameters for specific applications is a nice direction for further research.
> A few times, you mention something like “each hypothesis h(I) provided sufficient information about the instance in order to determine an offline optimal solution OPT(I)”. This statement cannot be literally true, as you study a problem whose offline version is NP hard.
We mean that $h(I)$ contains enough information to determine an offline optimal solution. We do not require that this could be done in polynomial time nor use it to compute an optimal solution: e.g. our algorithm for load balancing uses $h(I)$ to compute an approximate solution in polynomial time.
Thank you for pointing out that B.1 and C.1 are repeated. We will fix this.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
The ability to combine the learning problem with the algorithmic one is obviously desirable. I think where my initial criticism came from is that I'm not totally sure the paper's proposed way of doing this is "the right way". On further thought, this is an unrealistic expectation to have. It may take several tries for researchers on LA algorithms to build consensus on the right model to open up the black box of the learning problem. I'm not sure this is the right model, but it's a good start, and it shows promise given the results they obtain. From this lens, I think the paper is making an important contribution, and I will be raising my score. I support acceptance. | Summary: This paper considers a new formulation for learning-augmented algorithms/algorithms with predictions and applies it to the fundamental problems of online caching, online load balancing, and non-clairvoyant scheduling. In the new formulation, the predictions/predictor is made more explicit and is a part of the solution process. The predictions are given by a hypothesis class ${\cal H}$ of size $\ell$, which contains information about the "plausible instances" we are trying to solve. One can think of ${\cal H}$ as coming from past data and the goal is to use this information to get improved algorithms. This is done in two settings: the realizable and agnostic settings.
In the realizable setting, the true instance is realized in ${\cal H}$, so we can compare to the optimal cost in hindsight. For this, the paper provides algorithms with the following bounds:
- Caching: ${\rm OPT}+ k \log \ell$
- Load Balancing: $O(\log \ell \cdot {\rm OPT})$
- Non-clairvoyant scheduling: ${\rm OPT} + \ell \sqrt{ 2 \rm OPT}$
In the agnostic setting, the true instance may not be realized in ${\cal H}$, so instead we compare to the best hypothesis in ${\cal H}$. For this, the paper provides algorithms with the following bounds:
- Caching: ${\rm OPT} + O(\mu^* + k \log \ell)$
- Load Balancing: $O( \mu^* \cdot \log \ell \cdot {\rm OPT})$
- Non-clairvoyant scheduling: ${\rm OPT} + \mu^* + O(n^{5/3} \log \ell)$
where in each, $\mu^*$ is a problem-dependent notion of distance from ${\cal H}$ to the true instance.
Some lower bounds are given showing tightness or almost-tightness for these results.
Strengths: This paper gives a very different approach to learning-augmented algorithms that is interesting and has the potential to generate more ideas. While the algorithms once given the predictions are simple (they are all essentially some notion of follow-the-prediction), this work shows that these simple algorithms can work well if fed carefully constructed predictions (given by the prediction algorithms designed in this paper). Lower bounds are given showing that the results are somewhat tight.
Weaknesses: - I think it would help the presentation to give examples of hypothesis classes to make the main conceptual idea more clear and provide a stronger connection to practice.
- There is no experimental evaluation. While some works in this area lack an experimental evaluation, there is a standard setup that has been used for the caching problem by Lykouris and Vassilvitskii [32] as well as Antoniadis et al. [4]. Providing such an evaluation could elucidate the construction of hypothesis classes and provide further evidence in favor of this paper's approach.
- The writing has some awkward phrases and grammar throughout, e.g. the following (among others):
- line 62 "... will be much smaller then ..." -> "... will be much smaller than ..."
- lines 107-108 "... using arguably simpler approach ..." -> "... using an arguably simpler approach ..."
- lines 306-307 "... under certain assumptions about input." -> "... under certain assumptions about the input."
- lines 374-375 "In offline setting, ..." -> "In the offline setting, ..."
Technical Quality: 3
Clarity: 2
Questions for Authors: - To what extent can these techniques be generalized? E.g., what can be said about $k$-server?
- Only the case of finite ${\cal H}$ is considered. Can anything be said about infinite but "low complexity" ${\cal H}$?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: Limitations and assumptions have been made clear. Potential negative societal impacts from this work are very unlikely.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive words and for their useful suggestions
to improve our manuscript.
We respond the questions of the reviewer explicitly.
> To what extent can these techniques be generalized? E.g., what can be said about -server?
Our framework can be extended to $k$-server, even to any Metrical Task System (MTS).
In the lines of our results on caching, we can obtain the following bounds.
In the realizable case, where one of the hypothesis is the input sequence itself,
we can achieve regret $O(D\ln \ell)$, where $D$ is the diameter of the metric space.
In the agnostic case, we can obtain regret $O(D\mu^* + D\ln \ell)$,
where $\mu^*$ is the number of mistakes in the best hypothesis, where a mistake
is considered each time the predicted request (or cost function) is not
exactly the same as the real one.
Our paper is already quite dense and adding MTS and k-server requires several additional definitions. We tried to illustrate the framework without making it too hard to read.
> Only the case of finite $H$ is considered. Can anything be said about infinite but "low complexity" $H$?
The case of infinite hypothesis classes with low complexity of some kind is very interesting. This is a great direction for future work.
> There is no experimental evaluation.
Our work is proposing a new theoretical framework for modeling of learning augmented algorithms. We have decided to focus on theoretical analysis. Experimenting with our framework is an interesting direction for future research.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response, my evaluation is unchanged. | Summary: The paper studies three online algorithms in the learning augmented model: caching, load balancing, and non-clairvoyant scheduling. The goal seems to be to 'online learning' flavored learning-augmented algorithms. Rather than having a single predictor that synthesizes or predict something about the online input, the authors propose a framework where we have access to a large class of hypothesis such as the set of all past inputs. The goal is to be competitive with respect to the best hypothesis for the online input in hindsight. For example in the caching problem, the hypothesis are a large set of input sequences, and the goal is to get close to OPT in the case that the online sequence is actually one of these inputs in our set of hypothesis. In the cases where the online input is not part of the set of hypothesis, the authors obtain error guarantees depending on the hamming distance of the online input and the closest one in the hypothesis.
For the problems studied, the authors match or improve upon (in some parameter regimes) prior works which can be interpreted to be in their framework.
Strengths: - The motivation of the formulation is natural: having access say to the entire history of past inputs could allow one to directly learn the best predictor from data
- The framework matches the best bounds for load balancing, and improves the caching result.
Weaknesses: - The $n^(5/3)$ additive error for non-clairvoyant scheduling seems to be a meaningless guarantee, since there are $n$ jobs which are assumed to have maximum job length $1$. So any scheduling guarantees that the jobs will finish in $n$ time.
- The algorithms, for example for caching, must iterate over the entire set of hypothesis every time, which seems quite inefficient.
- I'm also not sure if the formulation exactly captures the motivation of directly learning a good predictor from data. For example in caching, the performance seems to be 'bottle-necked' by a single past instance in the set of hypothesis, in the sense that we can only ever expect to do as good as the 'most similar' input in our set of past inputs. Rather, it would be more natural if one would learn from all of them simultaneously (so for ex 'learn' something from the first half of one past input and the second half of another). This is not the fault of their algorithms (which already achieve close to optimal performance), but rather the formulation itself. This is why I still think the standard prediction framework maybe more natural since a predictor can learn to synthesize information and implicitly combine different aspects of different past inputs seen, rather than relying on a single past input alone.
- There are no experiments, even simple synthetic ones. It would be interesting to see even in synthetic case if the framework can actually be carried out.
- There are no algorithms in the main body and it is hard to judge algorithmic improvements.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can the authors clarify the n^(5/3) issue?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: No societal consequences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback.
However, their claim about our performance bound for non-clairvoyant scheduling being meaningless is incorrect. We study non-clairvoyant scheduling with the classical objective of
minimizing the *sum* of completion times which has a different magnitude than the length of the schedule (or makespan).
In particular, if all jobs have size 1 and the length of the schedule is $n$,
as the reviewer suggests, then the sum of completion times is about $n^2$.
Our regret bound is of order $n^{5/3}$, i.e. it is sublinear.
> The algorithms, for example for caching, must iterate over the entire set of hypothesis every time.
>
> It would be more natural if one would learn from all of them [hypotheses] simultaneously (so for ex 'learn' something from the first half of one past input and the second half of another).
Our paper describes a basic framework, and we focused mainly on the approximation guarantees and not on other (important) resources such as running time. However, our framework does allow for optimization of other desired resources. For example, to avoid the need of iterating over all hypotheses in caching, we can replace HEDGE in our predictor by a suitable algorithm for the bandit setting like EXP4 (which considers a single hypothesis in each time-step). Similarly, if we want to capture inputs partitioned into several intervals,
each resembling different hypothesis, we can use, for example, the classical SHARE algorithm
instead. Furthermore, our analysis readily implies how the regret guarantees of these algorithms (e.g. EXP4 or SHARE) translate to approximation guarantees of the implied learning-augmented online algorithm.
While in some scenarios it might be the case that the black-box predictor satisfies the two properties pointed out by the reviewer, in other scenarios it might not. Our framework allows to optimize and analyze the predictor explicitly towards achieving these properties and other desired properties.
> There are no experiments, even simple synthetic ones. It would be interesting to see even in synthetic case if the framework can actually be carried out.
Our work is proposing a new theoretical framework for modeling of learning augmented algorithms. We have decided to focus on theoretical analysis. Experimenting with our framework is an interesting direction for future research.
> There are no algorithms in the main body and it is hard to judge algorithmic improvements.
We have tried to informally convey the main ideas behind our algorithms in the main body. However we will try to further clarify these in subsequent versions (and welcome any suggestions toward this end). | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ScaleKD: Strong Vision Transformers Could Be Excellent Teachers | Accept (poster) | Summary: This paper introduces a novel knowledge distillation method called ScaleKD. The method aims to leverage well pre-trained vision transformer models as teacher models for a variety of student model architectures.the authors first adopt a cross attention projector to align student features with the teacher's. Then, a dual-view feature mimicking module and a teacher parameter perception module are used to achieve better knowledge transfer. Extensive experiments demonstrate the effectiveness of ScaleKD across various tasks and model types.
Strengths: 1. The paper is well-written and well-structured.
2. The authors provide extensive experimental results and analyses that demonstrate the effectiveness of the proposed method.
Weaknesses: 1. The paper's motivation could be strengthened. As highlighted in previous research [1], a stronger teacher model does not always equate to a better teacher. The necessity of adapting a ViT teacher for training a CNN student needs further justification. Including a comparison between ViT teachers and CNN teachers would provide better support.
2. The teacher parameter perception (TPP) module's cross-architecture KD paradigm is similar to techniques used in existing research [2,3]. While this does not diminish the novelty of other aspects of the paper, these related works should be properly discussed.
3. The authors critique the focus on "evaluation on small datasets with non-mainstream student models" in existing works. Some relevant papers about these issues should also be discussed, such as [4,5].
[1] Mirzadeh, Seyed Iman, et al. "Improved knowledge distillation via teacher assistant." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 04. 2020.
[2] Wang, Jiabao, et al. "CrossKD: Cross-head knowledge distillation for object detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Bai, Haoli, et al. "Few shot network compression via cross distillation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.
[4] Hao, Zhiwei, et al. "Vanillakd: Revisit the power of vanilla knowledge distillation from small scale to large scale." arXiv preprint arXiv:2305.15781 (2023).
[5] Stanton, Samuel, et al. "Does knowledge distillation really work?." Advances in Neural Information Processing Systems 34 (2021): 6906-6919.
Technical Quality: 3
Clarity: 3
Questions for Authors: In the dual-view feature mimicking module, the direct component is omitted during the alternative feature mimicking process. However, the remaining non-direct features are duplicated in the normal matching process (as shown in the upper path in Figure 2b). How does the removal of the non-direct component in the upper path affect performance?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the recognition of our work and the constructive comments.
**1. To your comments regarding strengthening the motivation of our work,**
>**Our responses: (1)** Yes, in previous KD works like [1] using CNNs as the teachers, a stronger model is not always a better teacher. Following your suggestion, with ResNet-50 as the student, we additionally use CovNeXt-XL (the current top-performing CNN) as the teacher (adopted in recent work [4]) besides Swin-L and BEiT-L. From the results shown below, we can observe that: i) ViT teachers are superior to CNN teachers for our method, e.g., a notable 0.3%|0.62 top-1 gain is achieved when the teacher is changed from ConNeXt-XL to Swin-L|BEiT-L; ii) Given ConNeXt-XL as the teacher, our method also outperforms [4] with a clear margin. These results validate the necessity of adapting large pre-trained ViTs as teachers for training a CNN student; **(2)** Besides, we would like to emphasize the motivation of our work. Along with the evolution of network architectures and model learning paradigms, ViTs show notably improved performance when scaling up the model size and the pre-training data size. The motivation of our work is to connect cross architecture KD research with well pre-trained ViT teacher models that stand out for their remarkable scalable properties, exploring an effective way (ScaleKD) to transfer teachers' scalable properties to student models (CNNs/MLPs/heterogeneous ViTs), given that the pre-training data is invisible to student models.
|Teacher|Student|Method|Top-1(%)|
|--|--|:--:|:--:|
|ConNeXt-XL(86.97)|ResNet-50(79.80)|VanillaKD|81.10
|ConNeXt-XL(86.97)|ResNet-50(78.64)|ScaleKD|81.72
|Swin-L(86.24)|ResNet-50(78.64)|ScaleKD|82.02
|BEiT-L(88.58)|ResNet-50(78.64) |ScaleKD|82.34
**2. To your comments regarding the discussion of our TPP module with some existing research [2,3],**
>**Our responses: (1)** Thank you for mentioning these two works [2,3] that are related to our TPP module. Actually, we discussed some similar related works [32,59,60,61] in Line#329-333 of the manuscript, which are in the same line of research as [2,3]. Generally, TPP differs from these works in motivation and designs; **(2)** The motivation of why previous works construct the proxy path is to avoid contradictory supervision signals from the annotations and the teacher’s predictions [2], or to avoid accumulated estimation errors [3]. In contrast, the key motivation of TPP is to align the teacher’s and student’s parameter spaces, which is beneficial for pursuing the pre-training knowledge implicitly contained in the teacher's parameter space; **(3)** As the motivations are not the same, our TPP has some special modifications. On the one hand, unlike previous works that reuse a proportion of detection head [2] or a single layer [3], TPP uses the last whole stage of the teacher to have an integrated perception of teachers' parameter space. On the other hand, to have a better feature alignment, the outputs of the proxy path are provided as input-dependent queries of our CAP module.
**3. To your comments regarding the discussion of our method with [4,5],**
>**Our responses: (1)** Thank you for mentioning two related works [4,5], which aim for a better understanding of vanilla KD based on logits ([14] in our manuscript). Specifically, the early work [5] studies how to improve fidelity (measuring the predication consistency between teacher and student models) instead of the prevailing generalization ability (model accuracy), and the later work [4] explores how advanced training recipes (e.g., larger datasets, stronger data augmentations/optimizers, significantly longer training epochs (4800 epochs)) affect the performance gap of vanilla KD to its variants. Generally, [4,5] do not present any new KD methods; **(2)** We already discussed a related work FuncMatch [49] in our manuscript, which has similar statements as [4]; **(3)** In contrast, our work presents a new cross architecture KD method, showing that well pre-trained ViT models could be used as teachers and their scalable properties could be transferred to CNN/MLP/heterogeneous ViT students;**(4)** Due to the orthogonal focuses, our method may get improved performance using advanced training recipes in [4]. We leave it for future work.
**4. To your question about the effect of removing the non-direct components in the first path of DFM,**
> **Our responses: (1)** Following your comments, we remove all non-direct components in the first path of DFM and perform an experiment under the same settings as Table 9(b) in our manuscript; **(2)** For the results shown below, DFM(Dir + Alt) indicates your mentioned setting, while DFM(All+ Alt) is the original design. We can observe that removing the non-direct components in the first path will slightly decrease the effectiveness of DFM, but its performance is still obviously higher than CAP. As we discussed in Line#109-113, the first path of DFM aims to capture the teacher's global features, where the subtle non-direct components are also indispensable parts. Directly removing non-direct components in the first path will break the integrity of the original feature space (ScaleKD is not conducted in the frequency space), thus lowering the efficacy of DFM.
|Method|Top-1(%)|$\Delta$Top-1(%)|
|--|:--:|:--:|
Baseline|76.55|-|
CAP|77.87|+1.32
DFM(Dir+Alt)|78.23|+1.68
DFM(All+Alt)|78.51|+1.96
>[1] S.I., Mirzadeh, et al. "Improved...teacher assistant", AAAI 2020.
[2] J Wang, et al. "CrossKD...for object detection", CVPR-W 2024.
[3] H Bai, et al. "Few shot...cross distillation", AAAI 2020.
[4] Z Hao, et al. "VanillaKD... to large scale", arXiv 2023.
[5] S Stanton, et al. "Does...really work?", NeurIPS 2021.
**Finally, we will update the manuscript based on the above responses**. Regarding more experiments and discussions that we have made, you are referred to our responses to the other reviewers and in "Author Rebuttal by Authors".
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. Most of my concerns have been well addressed. I will maintain my initial score.
---
Reply to Comment 1.1.1:
Title: Thanks for the Recognition of Our Rebuttal
Comment: Thank you so much for the recognition of our responses. We are glad to see that you tend to accept our paper.
We will make more efforts to improve our paper further.
Many thanks for your constructive comments, time and patience.
---
Rebuttal Comment 1.2:
Comment: Thanks for the author's response, I would like to increase my score
---
Reply to Comment 1.2.1:
Title: Thanks for the Recognition of Our Rebuttal
Comment: Thank you so much for the recognition of our responses. We are glad to see that you have raised your score from 6 to 7.
We will make more efforts to improve our paper further.
Many thanks for your constructive comments, time and patience. | Summary: This paper presents a new knowledge distillation method, named ScaleKD. Previous works mostly use CNNs to distill vision transformers. However, how to use vision transformers to distill CNNs is less explored. This paper shows that pretrained vision transformers are good teachers for other types of student models. The cores of the proposed method are three main components, including the cross-attention projector, dual view feature mimicking, and teacher parameter perception.
The motivation of this paper is clear and the proposed method is also interesting. The authors propose to do model distillation from three aspects, which have been proven useful in the experiment section.
Strengths: - The idea of this paper is interesting and the novelty is significant. Unlike previous KD methods, this paper proposes a new way to do knowledge distillation.
- The presentation of this paper is also good. It seems that the proposed method is easy to follow.
- Experimental results are good. Compared to previous knowledge distillation methods, the results shown in this paper improve them. In addition, the authors also provide detailed analysis on the importance of each component.
Weaknesses: - Reading this paper is too tedious. The paragraph is too long in the introduction section. It is difficult to capture the important content.
- The proposed method consists of three parts, which make it look complicated.
- From Table 2, it seems that when the teacher models' scale increases, there is improvement. However, according to my knowledge, previous KD methods mostly fail to do this. Can the authors explain why the proposed method can achieve this? I think this is important for the KD community to design better KD methods.
- Though exploring how to use vision transformers as teachers is important, I am also curious about another thing. Have the authors used CNNs as teacher models? This is what most previous works did.
Technical Quality: 3
Clarity: 3
Questions for Authors: I think the authors should elaborate more on why scaled teacher models help in the proposed method. I would like to see some analysis on this.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have included limitations in the main paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the recognition of our work and the constructive comments.
**1. To your concern regarding the presentation of the Introduction section,**
>**Our response: (1)** Indeed, paragraphs in the Introduction section of our original manuscript are not short. Although the key messages are organized in a relatively dense manner, we think they are presented logically. In paragraph#1, we introduce the background on the evolution of network architectures and model learning paradigms and the problems. In paragraph#2, we first analyze the existing KD research and point out the weaknesses, then propose the motivation of our work, and finally elucidate three technical barriers for KD under our settings. In paragraph#3, we first describe the core ideas of our three components to address above barriers, then clarify how our method is formulated with the proposed components; **(2)** Following your suggestion, we will improve the presentation of the Introduction section, split long paragraph#2|#3 into two short paragraphs, making each paragraph be easy to capture the key content.
**2. To your concern regarding the proposed method looks complicated,**
>**Our response: (1)** Our method consists of three components, CAP, DFM and TPP, which are closely coupled to each other to address the differences in feature computing paradigm, model scale, and knowledge density. Our base component CAP is a neat cross-attention projector to transform CNN/MLP features into ViT-like features first, then DFM promotes feature alignment in both the original and frequency-aware feature spaces, inspired by two insightful observations (clarified in Line#98-105). Built upon CAP and DFM, TPP further establishes a proxy path to exploit the use of the pre-trained teacher's parameters to align the knowledge density discrepancy. In the ablation study, we validated the necessity of each component design (Table 9); **(2)** Thanks to the coupled design properties of CAP, DFM and TPP, the overall formulation of our method is simple, as discussed in Section 2.4.
**3. To your comments regarding why our method could make pre-trained ViTs become good teachers,**
>**Our response: (1)** Yes, many previous KD methods, such as [1-3], claim that large models are not always good teachers, usually using CNNs as the teachers; **(2)** In the ViT era, ViT shows extraordinary scalability on model size and pre-training data size: i) larger model size provides ViT with stronger model learning capability; ii) the progress of pre-training makes large ViT learn better knowledge from more data. Thus, large ViTs are pre-trained on massive datasets, like IN-22K or LAION-2B. As a result, the capability gap between large-size models and small-size models in this era is larger than before, not only just suggested as the increasing model scale differences, but also as knowledge differences from pre-training data gaps; **(3)** Based on the above analysis, our work questions whether the student could inherit ViT teachers' scalable properties and underlines three important differences between teachers and students that lead to KD barriers. They are feature computing paradigm differences, model scale differences, and knowledge density differences. Accordingly, we design CAP, DFM and TPP to tackle the problems altogether. Fundamentally, we design CAP to directly align different computing paradigms across different architectures by employing patchfy stem and learnable queries. Progressively, we notice the other two types of differences are intertwined under the prevalent pre-training and fine-tuning paradigm, and are finally encoded in both teacher and student models’ feature space and parameter space. In light of this, DFM and TPP address the issues of model scale and knowledge density differences via considering feature distillation in the perspectives of feature space and parameter space. In feature space, DFM promotes feature alignment in both the original and frequency-aware feature spaces, giving more attention to subtle alternative components to promote feature space alignment. In parameter space, built upon CAP and DFM, TPP further establishes a proxy path to exploit the use of the pre-trained teacher's parameters to align the knowledge density discrepancy. As the three problems are well addressed, students can break down the barriers and inherit ViT teachers' capabilities. Thus, we can see the desired scalable performance trends in Table 2/3/6.
>[1] S.I., Mirzadeh, et al. "Improved...teacher assistant", AAAI 2020.
[2] W. Son, et al. “Densely...multiple teacher assistants”, CVPR 2021.
[3] T. Huang, et al. “Knowledge...stronger teacher”, NeurISP 2022.
**4. To your comments on using CNN teachers,**
>**Our response: (1)** We did not use CNNs as teacher models in the original manuscript since our work focuses on exploring the scalable properties of pre-trained ViT teachers to advance cross architecture KD research, as clarified in the Abstract and Introduction section; **(2)** To your request, with ResNet-50 as the student, we additionally use CovNeXt-XL as the teacher (adopted in the recent work [4]) besides Swin-L and BEiT-L. From the results shown below, we can observe that: i) ViT teachers show superiority to CNN teachers, e.g., ScaleKD achieves a notable 0.62% top-1 gain when the teacher is changed from ConNeXt-XL to BEiT-L; ii) Given ConNeXt-XL as the teacher, ScaleKD also outperforms [4] with a clear margin.
|Teacher|Student|Method|Top-1(%)|
|--|--|--|--|
|ConNeXt-XL(86.97)|ResNet-50(79.80)|VanillaKD|81.10
|ConNeXt-XL(86.97)|ResNet-50(78.64)|ScaleKD| 81.72
|Swin-L(86.24)|ResNet-50(78.64)|ScaleKD|82.02
|BEiT-L(88.58)|ResNet-50(78.64)|ScaleKD|82.34
>[4] Z Hao, et al. "VanillaKD... to large scale", arXiv 2023.
**Finally, we will update the manuscript based on the above responses**. Regarding more experiments and discussions that we have made, you are referred to our responses to the other reviewers and in "Author Rebuttal by Authors".
---
Rebuttal Comment 1.1:
Title: Final rating
Comment: Thanks for the reponses to my concerns. Basically, the authors have solved my concerns. In addition, all the other reviewers also recognize the contributions this paper made. I would like to keep my original rating unchanged.
---
Reply to Comment 1.1.1:
Title: Thanks for the Recognition of Our Rebuttal
Comment: Thank you so much for the recognition of our responses. We are glad to see that you tend to accept our paper.
We will make more efforts to improve our paper further.
Many thanks for your constructive comments, time and patience. | Summary: This paper focuses on whether the pre-trained vision transformer models could be used as teachers to distilling knowledges to heterogeneous neural network architectures. The proposed ScaleKD aims to solve three problems including 1) feature computing paradigm different, 2) model scale differences, and 3) knowledge density differences. Extensive experiments are conducted with different neural networks, including CNN, ViT, and MLP on image classification task. This paper also shows that when scaling up the size of teacher models or their pre-training datasets, ScaleKD showcases larger gains to the student models.
Strengths: 1. The difficulties of transferring knowledge between different architectures are well summarized, including the differences in feature computing paradigm, differences in model scale, and differences in knowledge density.
2. All the figures, as well as the writing are clear and easy to follow.
3. The extensive experiments show the effectiveness of the proposed ScaleKD.
Weaknesses: 1. The related work section should be improved to discuss the differences between the proposed ScaleKD and existing works. The CAP, DFM, and TPP should be compared with existing works to show the originality and novelty. The references include:
[1] ViTKD: Feature-based Knowledge Distillation for Vision Transformers, CVPR 2024
[2] Prefallkd: Pre-impact fall detection via cnn-vit knowledge distillation, ICASSP 2023
[3] Distilling efficient vision transformers from cnns for semantic segmentation, Arxiv 2023
[4] A good student is cooperative and reliable: CNN-transformer collaborative learning for semantic segmentation, ICCV 2023
2. The experiments are conducted with ViT, SwinT, ResNet, MobileNet, ConvNeXt, MLP-Mixer. There are several types of CNN, however the claims in Abstract mentions that "Our method can train student backbones that span across a variety of CNN, MLP and ViT." More backbones should be included, or the description in abstract is over-claimed.
3. A question is the ScaleKD includes the MLP and ViT layers in CAP, does it mean the improved KD performance comes from such architecture update? Is the proposed method achieve KD with adding such layers to augment the student model? This is the main concern, please give more discussion.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weakness.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for the recognition of our work and the constructive comments.
**1. To your comments about discussing more related works [1-4],**
>**Our responses: (1)** Thanks for pointing out these four existing works which address KD between two transformers [1] or between the transformer and CNN [2-4]; **(2) Generally, our ScaleKD differs from these works in application focus, motivation, and method formulation. Firstly**, the application focuses of [2-4] are obviously different from ours. Our paper explores cross-architecture KD in the context of training backbone, fitting both classification and various downstream tasks, while [3-4] are KD methods specialized for semantic segmentation, and [2] is a KD application for fall detection (a task that aims to detect people's falls for avoiding injuries). **Secondly**, our motivation moves further from [1-4]. Specifically, they follow the training regime of traditional KD settings: i) the teachers are always small-to-medium-sized without pre-training (sharing the same dataset as the student); ii) the model size difference between teacher and student is not large. In sharp contrast, our ScaleKD explores how to help heterogeneous students approximate medium-to-large-size ViTs through new motivations: **how to connect KD research with state-of-the-art ViTs, aiming to help students mimic teachers' behaviors from the higher model capacity and the massive pre-training**, which shows clear differences to [1-4]; **Thirdly**, the formulation of ScaleKD is different from [1-4]. Basically, [1] uses distinct feature distillation strategies on shallow and deep layers, and [2] only conducts vanilla logits distillation between ViT teacher and CNN student. Differently, [3-4] have deeper insights and design methods specialized for semantic segmentation. To avoid heterogeneous features, [3] utilizes an MHSA layer to model global interdependencies on CNN features, and a channel-attention layer to build linguistic features, while [4] moves from offline KD to online collaborative learning and conducted feature distillation in early layers based on the similar mechanism as [3]. For model capacity gaps, [3] decouples the pixel-wise distillation by categories, while [4] employs a selective mechanism to ensure reliable distillation areas. In contrast, although our method also discusses related problems, the design ideas and method formulation have clear differences from [3,4]. In the context of learning from large pre-trained ViTs, our method underlines differences affecting the efficacy of KD from three aspects, feature computing paradigm, model scale and knowledge density. We first design CAP to tactfully align different computing paradigms across different architectures by employing patchfy stem and learnable queries. Progressively, we notice the other two types of differences are intertwined under the pre-training and fine-tuning paradigm, and are finally encoded in both teacher and student models’ feature space and parameter space. In light of this, DFM and TPP address the differences in model scale and knowledge density via considering feature distillation in the perspectives of feature space and parameter space.
>[1] Z Yang, et al. "ViTKD...transformers", CVPR-W 2024.
[2] TH Chi, et al. "Prefallkd...distillation", ICASSP 2023.
[3] X Zheng, et al. "Distilling...segmentation", arXiv 2023.
[4] J Zhu, et al. "A good student...segmentation", ICCV 2023.
**2. To your comments on experiments with more ViT and MLP backbones,**
>**Our response: (1)** Thank you for kind suggestion. Accordingly, we conducted extra experiments on more ViT and MLP students. Due to the time limitation in the rebuttal phase, we performed experiments on two other typical models, ResMLP-S12 [5] and PVT-S [6], following the same setting as Table 3 in the manuscript. From the results in the below table, we can see that both on ResMLP-S12 and PVT-S backbones, our method obtains promising top-1 gains, further verifying its good generalization ability to ViT and MLP backbones; **(2) Actually, the students in Table 3 are elaborately selected.** Specifically, we consider i) heterogeneous and homogeneous network pairs, ii) network pairs with different model capacity gaps, iii) the variety of the teacher and the student, and iv) the popularity of the networks. We initially selected 10 student models across 6 model types based on the above 4 principles. Now, we have 12 student models across 8 model types, which are more comprehensive. **The updated Table 3 is added to the PDF file attached in "Author Rebuttal by Authors"**.
| Teacher|Student|Top-1(%)| $\Delta$ Top-1(%)|
|--|--|:--:|:--:|
|Swin-L(86.24)|ResMLP-S12(76.51)|80.54|+4.03
|Swin-L(86.24)|PVT-S(79.80)|83.72|+3.92
>[5] H Touvron, et al. "ResMLP...Training", TPAMI 2023.
[6] W Wang, et al. "Pyramid ...Without Convolutions", ICCV 2021.
**3. To your comments about whether the improvements come from the architecture updates,**
>**Our response: (1)** Our ScaleKD does not alter the student network architecture as CAPs are connected to the student as auxiliary paths for feature distillation similar to conventional feature projectors. After the distillation training stage, all three components of ScaleKD, namely CAP, DFM and TPP will be removed, introducing no additional cost in the inference stage; **(2)** We guess it is Fig 1, especially Fig 1(c), that may result in this misunderstanding, where CAP seems to be involved in the student's network. For simplicity, in Fig 1, we only show the feature-mimicking process, not including the inference process; **(3)** Actually, we clarified this in Line#123-125 in the manuscript. We will revise Fig 1 and its caption to avoid this potential misunderstanding.
**Finally, we will update the manuscript based on the above responses**. Regarding more experiments and discussions that we have made, you are referred to our responses to the other reviewers and in "Author Rebuttal by Authors".
---
Rebuttal Comment 1.1:
Comment: I will set my final rate as "Accept" to support this paper
---
Reply to Comment 1.1.1:
Title: Thanks for Your Support of Our Paper
Comment: Thank you so much for recognizing our rebuttal and setting the final rate as "Accept" to support our paper.
We will make more efforts to improve our paper further.
Many thanks for your constructive comments, time and patience. | Summary: This paper concentrates on the distillation of knowledge from a large-scale, pre-trained, ViT-based teacher model to heterogeneous architectures. It incorporates three distinct designs: a) a Cross Attention Projector (CAP), which serves as the fundamental design that bridges the structural disparity between a non-ViT model and the ViT teacher; b) a Dual-View Feature Mimicking and a Teacher Parameter Perception module, both of which are constructed on top of the CAP to facilitate the distillation process. The effectiveness of the proposed methodology is validated through extensive experimentation.
Strengths: - The proposed distillation technique is effective for heterogeneous architectures with a ViT-based teacher.
- The conducted experiments are comprehensive, providing solid validation for the effectiveness of the method.
- The proposed CAP structure is a good contribution which is feasible for various student architectures.
Weaknesses: __Presentation__: The connection between DFM and TPP and their respective motivations is unclear. Specifically, it’s unclear how the modifications of DFM and TPP specifically address the issues of model scale and knowledge density.
__Effects of CAP__: While the introduced CAP appears to be a promising design, its effectiveness is only demonstrated within the context of the ScaleKD framework. It would be interesting to see if this architecture could be integrated with other heterogeneous architecture methods, such as OFA, to replace traditional projectors.
__Effects of DFM__: The ablation study of DFM merely indicates that the filtering operation is beneficial to CAP-based distillation. It raises the question of whether this strategy is also compatible with other feature alignment architectures, such as linear head or convolutional head?
__Effects of TPP__: The design of TPP seems to be at odds with its stated purpose. Given that different network architectures, such as CNNs or ViTs, are known to have different knowledge preferences, it’s questionable whether aligning the student’s parameters with the teacher’s under a heterogeneous architecture is an effective approach. Furthermore, it’s unclear why using the last layer of the teacher as an additional head function for distillation would aid in alignment in the parameter space. Empirical evidence (such as CKA visualization) or theoretical justification is needed to demonstrate whether TPP can encourage different network architectures to exhibit similar learning behavior.
__Experiments__: Some details of the experiments, such as the teacher information in Tables 6-8, are missing from the presented tables.
Technical Quality: 3
Clarity: 2
Questions for Authors: please refer to the weakness part.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The limitations are properly discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for the recognition of our work and the constructive comments.
**1. To your comments about the presentation of DFM and TPP,**
> **Our responses: (1)** The basic motivation of our DFM and TPP components is to align model scale and knowledge density differences in a joint but not separate manner, **under the premise that**: our basic component CAP (a novel cross-attention projector containing positional embedding, patchify stem and trainable queries) has already aligned feature computing paradigm differences between the ViT teacher and the CNN/MLP/heterogeneous ViT student and made the student have the same tokens-based feature modeling in terms of semantic units and spatial resolution as the teacher to perform feature distillation; **(2)** According to our analysis (Line#68-81), model scale differences make teacher and student models have different learning capacities, and knowledge density differences are mainly due to the pre-training data which is supposed to be visible only for the teacher in our work. As a result, these differences are intertwined under the prevalent pre-training and fine-tuning paradigm and are finally encoded in both teacher and student models’ feature space and parameter space. In light of this, DFM and TPP address the issues of model scale and knowledge density differences via considering feature distillation in the perspectives of feature space and parameter space; **(3)** In principle, i) DFM relies on an insightful observation that shows the frequency feature distributions of pre-trained ViTs are extremely imbalanced (dominated by the direct component). Inspired by this, DFM uses a novel dual-view feature mimicking formulation to promote feature alignment in the original and frequency-aware feature spaces; ii) TPP relies on another critical observation that the pre-training knowledge is only gained by the ViT teacher as the pre-training data is invisible to the student. Based on this, TPP forms a cross-network proxy parameter alignment path via bridging the early stages (projected by CAP) of the student to the later stages of the ViT teacher, exploiting the use of the pre-trained teacher's parameters to further reduce the knowledge density discrepancy; **(4)** Generally, CAP, DFM and TPP are progressively designed, and they are closely connected to each other as discussed in (Line#51-127, Section 2) and validated in Table 9/13, Fig 2/3/5/6.
**2. To your comments about the effects of CAP to other heterogenous KD methods,**
>**Our responses:** Following your suggestion, we added an ablation by applying CAP to OFA. According to the results shown below, our CAP brings 0.27% top-1 gain to OFA with traditional projectors, validating its superiority.
|Teacher|Student|Method|Top-1(%)|
|--|--|--|:--:|
|DeiT-T(72.17)|ResNet-18(69.85)|OFA|71.33
|DeiT-T(72.17)|ResNet-18(69.85)|OFA+CAP|71.60
**3. To your comments about the effects of DFM on other feature projectors,**
>**Our response: (1)** Yes, our DFM would be compatible with existing feature projectors such as Linear and Conv projectors, as it stems from the observation of the teacher's feature distribution; **(2)** To study this compatibility, we added an ablation, following your suggestion. From the results shown below, we can see: i) DFM brings 0.35%|0.45% top-1 gain to Linear|Conv projector, validating the compatibility of DFM with them; ii) Comparatively, the top-1 gain of DFM to CAP is much higher, indicating the coupled design property of DFM and CAP.
|Method|Top-1(%)|$\Delta$ Top-1(%)|
|--|:--:|:--:|
Baseline|76.55|-|
Linear|77.43|+0.88
Linear+DFM|77.78|+1.23
Conv|77.52|+0.97
Conv+DFM|77.97|+1.42
CAP|77.87|+1.32
CAP+DFM|78.51|+1.96
**4. To your comments about the effects of TPP,**
> **Our responses: (1)** We first hope this concern is alleviated after you read our responses to your first concern. Given heterogeneous teacher and student network architectures, it is indeed not reasonable to use TPP directly, as connecting the student’s early-stage parameters with the teacher’s later-stage parameters seems to be a conflict due to different feature computing paradigms (lead to different knowledge preferences); **(2)** However, in our method, TPP is built upon CAP which well aligns feature computing paradigm differences and DFM which jointly considers feature distributions in the original and frequency-aware feature spaces, paving a strong base to exploit the use of the pre-trained teacher's parameters to reduce the knowledge density discrepancy. For simplicity, in implementation, we use the last stage of the teacher to construct the proxy parameter alignment path, which already attains promising performance (Table 9) yet it may be not optimal (e.g., using the last two stages of the teacher is slightly better, bringing ~0.15% extra top-1 gain); **(3)** Following your suggestion, we add CKA visualizations to illustrate the effects of TPP. As shown in Fig 1 of the PDF file in "Author Rebuttal by Authors", ScaleKD encourages the student to have similar behaviors as the teacher at the last stage where TPP is applied.
**5. To your comments about teachers' information to experiments in Tables 6-8,**
>**Our responses: (1)** There are no teachers for the experiments on downstream tasks in Table 7-8. As described in Line#187-189, the experiments were performed based on the student model trained by ScaleKD on IN-1K to verify whether the performance gain could be well preserved to downstream tasks in the standard transfer learning regime; **(2)** In Table 6, when comparing ScaleKD with supervised and self-supervised methods, our teacher is Swin-L pre-trained on IN-22K. When comparing ScaleKD with the other two pre-training paradigms, our teacher is BEiT-L pre-trained by EVA.
**Finally, we will update the manuscript based on the above responses**. Regarding more experiments and discussions that we have made, you are referred to our responses to the other reviewers and in "Author Rebuttal by Authors".
---
Rebuttal Comment 1.1:
Title: Post rebuttal comment
Comment: Thanks for the response, which addresses my concerns. I will increase my rating accordingly. However, I still think the statement of 'parameter space' is somewhat ambiguous. I suggest the authors include more clarification in the final version.
---
Rebuttal 2:
Title: Thanks for the Recognition of Our Rebuttal
Comment: Thank you so much for the recognition of our responses. We are glad to see that you have raised your score.
We will improve the statement and clarification related to 'parameter space' regarding our TPP component and continue to make more efforts to improve our paper further.
Many thanks for your constructive comments, time, and patience. | Rebuttal 1:
Rebuttal: Dear Reviewers, Area Chairs, Senior Area Chairs and Program Chairs,
We sincerely thank all four reviewers for their thorough and constructive comments. We are glad that the novelty, method component designs, validation pipeline and performance of our work have been mostly recognized by all four reviewers.
In the past week, we carefully improved the experiments (using all computational resources we have), the clarifications and the discussions of our work to address the concerns, the questions and the requests by all four reviewers. **Summarily, we made the following improvements:**
> (1) To have a better understanding of specific component designs in our ScaleKD framework, we follow the constructive comments/requests by Reviewer k9Ls and Reviewer mfZF, and add several ablation studies and analytical experiments, including: i) An ablation study on the compatibility of DFM with other types of feature alignment architectures; ii) An ablation study on verifying the necessity of non-zero frequency features in the first mimicking path of DFM; iii) An ablation study on the compatibility of CAP with other cross-architecture KD methods; iv) Analytical experiments on how TPP affects the student's learning behaviors in our ScaleKD framework.
(2) To further strengthen the motivation of our work, we follow the constructive comments/requests by Reviewer 3qQD, Reviewer rjnT, and Reviewer mfZF, and provide: i) Experiments that compare ViT teachers and CNN teachers; ii) Experiments on more teacher-student network pairs; iii) Detailed analysis on why ScaleKD shows scalability in terms of the pre-trained ViT teacher's capability; iv) Comparisons with more related works.
(3) We also provide detailed responses to the other concerns/questions/requests raised by each reviewer one by one.
Finally, in the attached one-page PDF file, all the aforementioned experimental results are summarized in different Tables. We will include the above experiments and discussions in our final paper. We hope our detailed responses are helpful to address the concerns, the questions and the requests of all four reviewers.
Pdf: /pdf/cdc8d6435ea902c4a522fe4e1bd5c1a648961afd.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.